Demand for network automation and orchestration continues to rise as organizations reap the business and technical benefits it brings to their operations, including significant improvements in productivity, cost reduction and efficiency.
As a result, many organizations are now looking to the next wave of network orchestration: orchestration across technology domains, more commonly known as Multi-Domain Service Orchestration (MDSO).
Early adopters have learned that effectively leveraging automation and orchestration at the domain level doesn’t necessarily translate to the MDSO layer due to the different capabilities required to effectively coordinate and communicate across different technologies. While the potential benefits of MDSO are high, there are unique challenges in multidomain deployments that organizations must tackle.
The most obvious difference when orchestrating across domains versus within specific domains is the need to design around the direction your network data will travel.
Within a single domain, activity flows primarily north to south and back again: instructions are sent to the domain controller, which executes the changes on the network functions. This makes single-domain orchestration relatively straightforward.
When you start orchestrating across domains, however, things get a little more complex. Now you need to account for both north/south activities and also for a large…
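To make the north/south versus east/west distinction concrete, here is a minimal, purely hypothetical sketch: each domain controller accepts instructions from above (north/south), while the orchestrator passes each domain's outputs sideways into the next domain's parameters (east/west). All class, domain and service names here are invented for illustration.

```python
# Hypothetical sketch of multidomain orchestration. A domain controller
# handles north/south traffic (instructions in, domain changes out); the
# MDSO layer also shuttles state east/west between domains.

class DomainController:
    """North/south interface: receives instructions, configures its domain."""
    def __init__(self, name):
        self.name = name
        self.configured = {}

    def apply(self, service_id, params):
        self.configured[service_id] = params
        # Return domain-level facts that other domains may need (east/west data).
        return {f"{self.name}_endpoint": f"{self.name}:{service_id}"}

class MultiDomainOrchestrator:
    """Coordinates a service that spans several technology domains."""
    def __init__(self, controllers):
        self.controllers = controllers

    def provision(self, service_id):
        shared = {}  # east/west context handed from one domain to the next
        for ctrl in self.controllers:
            shared.update(ctrl.apply(service_id, dict(shared)))
        return shared

optical = DomainController("optical")
ip = DomainController("ip")
mdso = MultiDomainOrchestrator([optical, ip])
result = mdso.provision("svc-42")
print(result)
```

In this toy, the IP domain only learns the optical domain's endpoint because the orchestrator carried it across; that east/west hand-off is exactly what single-domain tooling never had to do.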
As your infrastructure scales and you start to get more traffic, it's important to make sure everything works as expected. This is most commonly done through testing, with load testing being the best way to verify the resilience of your services.
Traditionally, load testing has been accomplished via standalone clients, like JMeter. However, as infrastructure has modernized and organizations have adopted tools like Kubernetes, it's important to have a modern toolset as well.
With traditional load testing, you’ll commonly run into one of three major issues:
Scripting load tests takes a lot of time
Load tests typically run in large, complex, end-to-end environments that are difficult to provision and expensive at production scale
Data and realistic use cases are impossible to mirror one-to-one unless you have production data
A more modern approach is to integrate your load-testing tools directly into your infrastructure. If you’re using Kubernetes, that can be accomplished via something like an
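The core of any load test, concurrent requests plus latency measurement, fits in a few lines of plain Python. This is only a toy sketch: it spins up a throwaway local HTTP server so the example is self-contained, whereas a real deployment would point a dedicated tool (k6, JMeter or similar) at your actual service.

```python
# Minimal load-test sketch: fire 50 concurrent requests at a local stub
# server and report a rough p95 latency. The stub server exists only to
# make the example runnable; it stands in for a real target service.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def hit(_):
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
        return resp.status, time.perf_counter() - start

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(hit, range(50)))

statuses = [status for status, _ in results]
latencies = sorted(duration for _, duration in results)
p95 = latencies[int(len(latencies) * 0.95) - 1]  # rough percentile
print(f"{len(results)} requests, p95 latency {p95 * 1000:.1f} ms")
server.shutdown()
```

Running this inside the cluster as a Job, rather than from a laptop, is what makes the results reflect the network path your users actually take.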
Three years ago, we posed the question of whether Apache Kafka was emerging as the default go-to publish/subscribe messaging engine for the cloud era. Since then, Confluent has IPO'ed, and Pulsar has emerged as a competing project, but is it game over? Hyperscalers are offering alternatives like Azure Event Hub, and AWS co-markets Confluent Cloud. Jay Kreps evangelized streaming using electricity as the metaphor, positioning streaming as pivotal to the next wave of apps in chicken-and-egg terms. That is, when electricity…
How Idit Levine’s Athletic Past Fueled Solo.io’s Startup
“I was basically going to compete with all my international friends for two minutes without parents, without anything,” Levine said. “I think it made me who I am today. It’s really giving you a lot of confidence to teach you how to handle situations … stay calm and still focus.”
Developing that calm and focus proved an asset during Levine’s subsequent career in professional basketball in Israel, and when she later started her own company. In this episode of The Tech Founder Odyssey podcast series, Levine, founder and CEO of Solo.io, spoke with Colleen Coll and Heather Joslyn of The New Stack.
After finishing school and service in the Israeli Army, Levine was still unsure of what she wanted to do. She noticed her brother and sister’s fascination with computers. Soon enough, she recalled, “I picked up a book to teach myself how to program.”
When Patrick Markert started AmeriSave, only general guidelines for determining rates were available online. “At that time, finance was very old-school, with lots of paper and face-to-face visits,” said
William is the co-founder and CEO of Buoyant, the creator of the open source service mesh project Linkerd. Prior to Buoyant, he was an infrastructure engineer at Twitter, where he helped move Twitter from a failing monolithic Ruby on Rails app to a highly distributed, fault-tolerant microservice architecture. He was a software engineer at Powerset, Microsoft, and Adap.tv, a research scientist at MITRE, and holds an MS in computer science from Stanford University.
eBPF is a hot topic in the Kubernetes world, and the idea of using it to build a “sidecar-free service mesh” has generated recent buzz. Proponents of this idea claim that eBPF lets them reduce service mesh complexity by removing sidecars. What’s left unsaid is that this model simply replaces sidecar proxies with multitenant per-host proxies — a significant step backward for both security and operability that increases, not decreases, complexity.
The sidecar model represents a tremendous advancement for the industry. Sidecars allow the dynamic injection of functionality into the application at runtime, while — critically — retaining all the isolation guarantees achieved by containers. Moving from sidecars back to multitenant, shared proxies loses this critical isolation and results in significant regressions in security…
Keith McClellan is director of partner solutions engineering at Cockroach Labs.
These days, most application architecture is distributed by default: connected microservices running in containers in a cloud environment. Organizations large and small now deploy thousands of containers every day — a complexity of scale that is almost incomprehensible. The vast majority of organizations depend upon Kubernetes (K8s) to orchestrate, automate and manage all these workloads.
So what happens, then, when something goes wrong with Kubernetes?
A fault domain is the area of a distributed system that suffers the impact when a critical piece of infrastructure or network service experiences problems. Has Kubernetes become the next fault domain?
Contemplating the disaster of a Kubernetes-related application failure is the stuff of DevOps nightmares. But in disaster, there is also opportunity: Kubernetes has the potential to help us have a common operating experience across data centers, cloud regions and even clouds by becoming the fault domain we design our high availability (HA) applications to survive.
Kubernetes as Common Operating System
Many distributed applications need to be distributed as close to users as possible, so let’s say we want to build a three-region cluster.
Without Kubernetes, even in a single cloud, that means…
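The arithmetic behind surviving the loss of an entire region, with each regional Kubernetes cluster treated as a fault domain, is simple majority-quorum math. This sketch is illustrative only; real placement logic in a distributed database is far more involved.

```python
# Quorum sketch: spread replicas across regions so that a majority
# survives the failure of any single region (the fault domain).

def survives_region_loss(replicas_per_region):
    """True if a majority quorum survives losing any one region."""
    total = sum(replicas_per_region)
    quorum = total // 2 + 1
    return all(total - lost >= quorum for lost in replicas_per_region)

print(survives_region_loss([1, 1, 1]))  # 3 replicas over 3 regions -> True
print(survives_region_loss([2, 1]))     # 3 replicas over 2 regions -> False
```

This is why a three-region cluster is the minimum shape for this kind of high availability: with only two regions, losing the larger one always destroys the quorum.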
Web Application Firewalls (WAFs) first emerged in the late 1990s as attacks on web servers became more common. Today, in the context of cloud native technologies, there’s an ongoing rethinking of how a WAF should be applied.
No longer is it solely static applications sitting behind a WAF, said Tigera CEO
Van is a technical product marketing manager for Consul at HashiCorp. He has been in the infrastructure space for most of his career and loves learning about new technologies and getting his hands dirty. When not staring at his computer screen, he's sharing pictures of food, to his wife's dismay.
Even as service mesh adoption continues to grow, some organizations are still trying to understand the full extent of what a service mesh can and can’t do.
They may not realize that a service mesh is not just another single-purpose tool, but one that addresses a wide variety of networking needs. A service mesh may actually help consolidate multiple existing tools to help reduce management toil and costs.
Just take a look at these two multicloud network architectures.
Automating and offloading network services and security-related capabilities onto a cloud-agnostic service mesh can help simplify management in multicloud environments.
Multicloud architecture using cloud-vendor-specific networking solutions:
Using a cloud-agnostic service mesh:
Many service mesh products include service discovery, zero trust networking and load-balancing capabilities, while some other service mesh products extend even further to provide multicloud/multiruntime connectivity, network automation and north-south traffic control. Let’s take a look at the capabilities…
I’m sure that, like me, you welcomed HTTP/3 becoming an IETF (Internet Engineering Task Force) standard. No, of course you didn’t — the web just works, so why worry about it? But if you are vaguely intrigued about why the change is happening, here is a short breakdown of the history behind it. Then we will get into the reasons why you should adopt it for your company.
HTTP/3 is the third version of the Hypertext Transfer Protocol (HTTP) and was previously known as HTTP-over-QUIC. It runs over QUIC, a transport protocol initially developed by Google, and is the successor to HTTP/2. Companies such as Google and Facebook already use QUIC to speed up the web.
A Very Short History of HTTP
Back in the day, there were two internet protocols that you could choose to work with. Even before the web, we still had to squirt packets of information (or datagrams) from one machine to another across the internet. For a games developer, the important protocol was UDP (User Datagram Protocol). This was the quick, fire-and-forget standard: you threw a packet across the network…
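That fire-and-forget behavior is easy to demonstrate in a few lines of Python: the sender performs no handshake and receives no acknowledgment that the datagram arrived.

```python
# UDP fire-and-forget: send a datagram with no connection, no retry,
# and no delivery guarantee. Loopback makes this reliable in practice,
# but UDP itself promises nothing.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))  # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"game state update", addr)  # no handshake, no ACK

data, _ = receiver.recvfrom(1024)  # delivery happened to work this time
print(data.decode())
```

QUIC's trick, in essence, is to build reliability and multiplexed streams on top of exactly this kind of bare datagram.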
It seems like we’ve been hearing about 5G for years now, and how when it’s here, it will revolutionize connectivity as we know it.
Steve is a director in the MongoDB Industry Solutions team, where he focuses on how MongoDB technology can be leveraged to solve challenges faced by organizations working in the telecommunications industry. Prior to this role, Steve held numerous leadership roles with MongoDB’s professional services team in EMEA.
Well, 5G is here, but beyond faster or more reliable cell service, few companies have begun to tap into the potential 5G holds for both business-to-business and business-to-consumer innovation.
In fact, this potential extends beyond the telecommunications industry into nearly all sectors that rely on connectivity, like the manufacturing, automotive and even agricultural industries, among others. By using the power of 5G networks and pairing that with intelligent software, enterprises can embrace the next generation of industry by launching IoT solutions and enabling enhanced data collection at the edge.
This article will explore key questions around the slow move toward 5G innovation and how mobile edge computing can accelerate the push to near-instantaneous network connectivity.
What’s Standing in the Way of Innovation?
When COVID-19 hit, numerous companies…
OpenSSL’s CVE-2022-2274 didn’t reach Heartbleed levels of ick, but it was more than bad enough.
What happened was that the OpenSSL 3.0.4 release introduced a serious RSA bug on x86-64 CPUs supporting the AVX512 IFMA instructions. This set of single instruction, multiple data (SIMD) instructions for floating-point operations was introduced in 2018. You’ll find it in pretty much every serious processor, from Intel’s Skylake to AMD’s forthcoming Zen 4. In other words, it’s probably in every server you’re currently running.
Is that great news or what?
The problem is that RSA 2048-bit private key implementations fail on this chip architecture. Adding insult to injury, memory corruption results during the computation. The last straw? An attacker can use this memory corruption to trigger a remote code execution (RCE) on the machine.
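Because the bug was introduced in OpenSSL 3.0.4 and fixed in 3.0.5, a version check is the quickest triage step. The parsing helper below is purely illustrative, not an official OpenSSL API:

```python
# Flag a build affected by CVE-2022-2274: only the 3.0.4 release
# carried the broken RSA code path. Illustrative helper only.
import ssl

def affected_by_cve_2022_2274(version_text):
    """Return True if the version string names the broken 3.0.4 release."""
    for token in version_text.split():
        parts = token.split(".")
        if len(parts) == 3 and all(p.isdigit() for p in parts):
            return tuple(map(int, parts)) == (3, 0, 4)
    return False

print(affected_by_cve_2022_2274("OpenSSL 3.0.4 21 Jun 2022"))  # True
print(affected_by_cve_2022_2274(ssl.OPENSSL_VERSION))  # your interpreter's build
```

Note that version strings like "1.1.1q" fall through the numeric check, which is correct: the 1.1.1 series was never affected.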
Exploiting it might not be easy, but it is doable. And, even if an attack isn’t that reliable, if it’s used to hit a server that constantly respawns, say a web server, it…
If data is the lifeblood of enterprise applications, networks are the arteries.
Wayne is vice president of engineering at Couchbase. Before Couchbase, Wayne spent seven years at Oracle as the architect responsible for driving mobile innovation within the CRM and SaaS product lines. He has 10 patents and patents pending from his work there.
Networks are so vital because they enable business, human and mission-critical processes by connecting organizations with customers, employees and partners, increasing efficiency, powering automation, driving engagement and accelerating productivity. Networks are the glue that knits modern applications together.
But apps can only be as available and fast as the network that underpins them. Achieving high levels of reliability and speed is key to success. Network disruptions and slowness are a daily reality that leads to downtime.
Dancing with the Stars
Here a security hole, there a security hole, everywhere a security hole. One of the latest obnoxious ones is Apache HTTP Server‘s CVE-2022-23943, a memory corruption vulnerability in mod_sed. This out-of-bounds write vulnerability enabled attackers to overwrite heap memory. When you say, “overwrite heap memory,” you know it’s bad news. It impacted Apache HTTP Server 2.4 versions 2.4.52 and earlier.
It was quickly fixed. But the JFrog Security Research team’s tech lead worried that while the…
Tailscale has introduced Tailscale SSH, which simplifies authentication and authorization by replacing SSH keys with the Tailscale identity of any machine.
A Secure Shell (SSH) key is an access credential in the SSH protocol, as SSH.COM defines it.
Tailscale gives each server and user device its own identity and node key, which it uses to authenticate and encrypt the Tailscale network connection, and it authorizes connections with access control lists defined in code. That makes it a natural extension for Tailscale to now manage access for SSH connections in your network.
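The "access control lists defined in code" idea can be sketched as a tiny policy evaluator. This is loosely inspired by the shape of Tailscale's JSON policy files but is not the real format; all rules, users and tags below are invented for illustration.

```python
# Toy ACL evaluator: rules are plain data, so access policy can be
# reviewed, diffed and version-controlled like any other code.
acls = [
    {"action": "accept", "src": ["group:dev"], "dst": ["tag:staging:22"]},
    {"action": "accept", "src": ["alice@example.com"], "dst": ["tag:prod:22"]},
]

def ssh_allowed(identity, target, port, rules=acls):
    """Allow the connection only if some accept rule matches it."""
    wanted = f"{target}:{port}"
    return any(
        rule["action"] == "accept"
        and identity in rule["src"]
        and wanted in rule["dst"]
        for rule in rules
    )

print(ssh_allowed("group:dev", "tag:staging", 22))  # True
print(ssh_allowed("group:dev", "tag:prod", 22))     # False
```

The point of the pattern is that there is no per-server `authorized_keys` file to drift out of date: the policy is evaluated centrally against machine and user identities.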
Removes the Pain
“SSH is an everyday tool for developers, but managing SSH keys for a server isn’t so simple or secure,” said Tailscale Product Manager
According to the vArmour senior vice president: “This enables organizations to not have to in essence boil the ocean and try and adopt unilateral controls too quickly, but instead lock down their crown jewels and understand the relationships those assets have to address resilience planning in a phased approach.”
One of the biggest mistakes we see when implementing zero trust is insufficient investment in visibility, observability and analytics across the organization, Kuehn added. “Without visibility, companies are limited…
Zero Trust Architecture (ZTA) builds on the foundational principles of zero trust security as defined by the National Institute of Standards and Technology (NIST). Ansible, Puppet and CrowdStrike offer products that cover the entire spectrum of detecting and protecting endpoints within a corporate network, including everything from antivirus and antimalware to abnormal network activity monitoring. Microsoft, Trend Micro and SentinelOne offer similar capabilities and made Gartner’s upper quadrant in its 2021 Endpoint Protection report.
Wrapping Up Zero Trust Architecture
The real answer to the question of what zero trust architecture is depends on your most important corporate assets. Any network design should also include consideration of the humans with access to those critical assets. “Trust but verify” applies to corporate employees as well as to geopolitical relationships.
Choosing the right vendors and partners to meet your specific objectives will help you implement a solid Zero Trust Architecture. Once implemented, it comes down to diligence and persistence. New threats pop up regularly and must be met with an adaptive security posture. Those who don’t adapt and change will be doomed to failure.
The post What Is Zero Trust Architecture? appeared first on The New Stack.
Rivalries sometimes last a lifetime… and then turn to mist. While implementing a zero trust architecture and achieving microsegmentation may seem like different pursuits, the end goal of protecting systems and information from increasingly sophisticated attacks is the priority for both.
Zero Trust as a Model for Security Strategy