Cisco got SASE; Nutanix Q3 revenue jumped 11%; and Docker cozied up to Microsoft Azure.
I’ve been getting into the Twitter poll thing lately, and it’s producing some interesting results. Yesterday I asked: are you eager to go back to the office? With 129 responses, the sample is large enough to draw some tentative conclusions. What conclusions might we draw from this survey? Reminder: Twitter […]
The post Who wants to work in an office? appeared first on EtherealMind.
Using these APIs the vendor claims to be able to more efficiently correlate traces, identify root...
"We are confident that the demand for our products and services will be strong as we emerge from...
Securing workloads across an entire environment is the fundamental goal of a policy. But workloads come in a variety of form factors: virtual machines, containers, and bare metal servers. In order to protect every workload, experts recommend isolating enforcement wherever possible — avoiding dependency on the host operating system and its firewall. Relying on the host firewall means a host is left to defend itself.
Securing virtual workloads is a task best handled by the hypervisor. Inspecting traffic at the virtual network interfaces of the workload delivers the desired security while keeping enforcement isolated from the workload itself. Bare metal servers, by contrast, come in many form factors, with a variety of means to achieve policy enforcement.
Bare metal servers are still deployed for a variety of reasons, and securing them remains a necessary task in today’s virtualized data center. Reasons we still use bare metal servers:
If your household is anything like mine, your Internet connection has experienced a significant increase in usage over these last several months. We’re streaming more and more media each day, and we’re on seemingly endless hours of videoconferences for work and for school. While all of that streaming media consumes downstream capacity, those videoconferences can generate a significant amount of upstream traffic. I’m fortunate enough to have fiber-based broadband connectivity that can easily handle this traffic, but I know others aren’t as lucky. They’re stuck with copper-based connections or satellite links that struggle to deliver streaming media or video calls with any sort of viewable quality.
Across this spectrum of “last mile” Internet connections, I looked at the impact from both a provider and user perspective. What kind of traffic growth have last mile network providers experienced? What steps have these providers taken to ensure they have sufficient capacity? And most importantly for end users, how has increased traffic impacted last mile connection speeds?
The network connections from customer- and subscriber-facing Internet service providers are often referred to as last mile networks. These are Internet services delivered over a notional distance – the “last mile” – to subscriber premises.
For any network that provides routing services to customers, it is important to segregate those customers into separate virtual topologies that don’t interfere with each other.
This post is not about NFV, but it is important to understand …
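That kind of per-customer segregation can be sketched with Linux VRFs, where each customer gets its own routing table. The interface names and prefixes below are illustrative assumptions, not from the original post:

```shell
# Create one VRF (separate routing table) per customer so their routes
# cannot interfere with each other. Interface names and prefixes are
# illustrative assumptions.
ip link add vrf-custA type vrf table 100
ip link add vrf-custB type vrf table 200
ip link set vrf-custA up
ip link set vrf-custB up

# Bind each customer-facing interface to its customer's VRF
ip link set eth1 master vrf-custA
ip link set eth2 master vrf-custB

# A route installed in one customer's table is invisible to the other
ip route add 10.1.0.0/16 via 192.0.2.1 vrf vrf-custA
```

Even overlapping customer address space is fine here, because lookups on a VRF-enslaved interface only consult that VRF’s table.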
It’s amazing how people fond of sharing their opinions and buzzwords on social media sometimes can’t answer simple questions. Today’s blog post is based on a true story… a “senior network architect” fully engaged in a recent hype cycle couldn’t answer a simple question:
Why exactly would you need VXLAN and EVPN?
We could spend a day (or a week) discussing the nuances of that simple question, but all I have at the moment is a single web page, so here we go…
New applications and workloads are constantly being added to Kubernetes clusters. Those same apps need to securely communicate with resources outside the cluster behind a firewall or other control point. Firewalls require a consistent source IP, but routable IPs are a limited resource that can quickly be depleted if one is assigned to every service.
With the Calico Egress Gateway, a new feature in Calico Enterprise 3.0, existing firewalls and control points can now be used to securely manage access to infrastructure and services outside of the cluster. In addition, IT teams are now able to identify an application/workload in a Kubernetes namespace via the source IP.
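A rough way to picture the underlying idea of an egress gateway — source-NATing cluster traffic to one well-known IP that a firewall can then allow — is a single SNAT rule. This is a generic sketch with illustrative addresses, not the Calico Enterprise implementation:

```shell
# SNAT traffic from the pod network (illustrative: 10.244.0.0/16) that is
# leaving the cluster, rewriting its source to one fixed, routable IP
# (illustrative: 198.51.100.10) that the external firewall can permit.
iptables -t nat -A POSTROUTING \
  -s 10.244.0.0/16 ! -d 10.244.0.0/16 \
  -j SNAT --to-source 198.51.100.10
```

With one egress IP per namespace or application, the firewall can distinguish workloads without every service consuming a routable address.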
As organizations progress on their Kubernetes journey from pilot to production, they begin migrating existing applications into the cluster, integrating Kubernetes with the greater IT environment. For the platform teams involved, this creates challenges, because these apps will need to communicate with services outside of the cluster.
Wound-licking Cisco targeted the next cloud computing wave; Nokia cried foul on rivals' 800G...
If your idea for interconnection and migration to the private cloud involves using NSX and L2VPN so that you can “stretch the VLAN” between your NSX private farm and the one in the cloud, you are doing it wrong.
No matter if you are using VXLAN as a transport or any other technology, if your plan involves layer 2 extension you are doing it wrong.
Not every application should be migrated to the public cloud, and most definitely you should not migrate something that relies on a layer 2 adjacency to work.
If layer 2 extension is a way to allow IP mobility, then again, it’s just a lazy design. There are better ways to provide same-subnet IP mobility that don’t require layer 2 (see LISP or BGP-EVPN Type 5 routing, for example).
Even if it works in PowerPoint or in a small demo, you really should NOT.
In this episode of the Hedge, Stephane Bortzmeyer joins Alvaro Retana and Russ White to discuss draft-ietf-dprive-rfc7626-bis, which “describes the privacy issues associated with the use of the DNS by Internet users.” Not many network engineers think about the privacy implications of DNS, an important part of the infrastructure we all rely on to make the Internet work.
Quarkus is a Kubernetes native Java framework that allows the programming language to be more...
“It’s not a secret that we did not do as well as we wanted to in the first phase of the web...
The Toronto-based telecom claims the service will provide customers with tools to support both...
The network integrator needed a services management platform that would allow it to configure...
On 15 May, the Telegraph reported that The Five Eyes intelligence alliance planned to meet to explore legal options to block plans to implement end-to-end encryption on Facebook Messenger. According to the UK-based newspaper, the discussions between the governments of the United States, the United Kingdom, Australia, Canada, and New Zealand would focus on how the “duty of care,” a basic concept found in tort law, could be stretched to force online platforms to remove or refrain from implementing end-to-end encryption. (A duty of care is the legal responsibility of a person or organization to avoid any behaviors or omissions that could reasonably be foreseen to cause harm to others.)
It’s easy to predict what such a strategy might look like – the playbook is familiar. In this case, if duty of care becomes the rationale for banning end-to-end encryption, it could be used as a framework to ban future deployments. Additionally, as with other legislation, including the UK’s Online Harms proposals, there will be an argument that social media companies have a special duty of care to protect vulnerable groups.