If your household is anything like mine, your Internet connection has experienced a significant increase in usage over these last several months. We’re streaming more and more media each day, and we’re on seemingly endless hours of videoconferences for work and for school. While all of that streaming media consumes downstream capacity, those videoconferences can generate a significant amount of upstream traffic. I’m fortunate enough to have fiber-based broadband connectivity that can easily handle this traffic, but I know others aren’t as lucky. They’re stuck with copper-based connections or satellite links that struggle to deliver streaming media or video calls with any sort of viewable quality.
Across this spectrum of “last mile” Internet connections, I looked at the impact from both a provider and user perspective. What kind of traffic growth have last mile network providers experienced? What steps have these providers taken to ensure they have sufficient capacity? And most importantly for end users, how has increased traffic impacted last mile connection speeds?
The network connections from customer- and subscriber-facing Internet service providers are often referred to as last mile networks. These are Internet services delivered over a notional distance – the “last mile” – to subscriber premises, such Continue reading
For any network that provides routing services to customers, it is important to segregate those customers into separate virtual topologies that don't interfere with one another.
This post is not about NFV, but it is important to understand …
It’s amazing how people who are fond of sharing their opinions and buzzwords on social media sometimes can’t answer simple questions. Today’s blog post is based on a true story… a “senior network architect” fully engaged in a recent hype cycle couldn’t answer a simple question:
Why exactly would you need VXLAN and EVPN?
We could spend a day (or a week) discussing the nuances of that simple question, but all I have at the moment is a single web page, so here we go…
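To ground the discussion, it helps to see what the combination looks like in practice: VXLAN provides the data-plane encapsulation, while EVPN supplies a BGP-based control plane that replaces flood-and-learn MAC learning. A minimal FRR-style sketch, where the ASN and neighbor address are illustrative rather than a recommendation:

```
! FRR-style sketch: BGP EVPN as the control plane for VXLAN
! (ASN and neighbor address are illustrative)
router bgp 65001
 neighbor 10.0.0.2 remote-as 65001
 !
 address-family l2vpn evpn
  neighbor 10.0.0.2 activate
  advertise-all-vni   ! advertise locally defined VXLAN VNIs to EVPN peers
 exit-address-family
```

Whether you actually need any of this is, of course, exactly the question the post goes on to explore.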
New applications and workloads are constantly being added to Kubernetes clusters. Those same apps need to securely communicate with resources outside the cluster behind a firewall or other control point. Firewalls require a consistent IP, but routable IPs are a limited resource that can be quickly depleted if applied to every service.
With the Calico Egress Gateway, a new feature in Calico Enterprise 3.0, existing firewalls and control points can now be used to securely manage access to infrastructure and services outside of the cluster. In addition, IT teams are now able to identify an application/workload in a Kubernetes namespace via the source IP.
As organizations progress on their Kubernetes journey from pilot to production, they begin migrating existing applications into the cluster, which must now integrate with the broader IT environment. For the platform teams involved, this creates challenges, because those apps still need to communicate with services outside the cluster.
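As a rough sketch of how an egress gateway is wired up, a namespace can be annotated so its outbound traffic leaves the cluster through gateway pods with a fixed, routable IP. The annotation key below follows Calico Enterprise conventions, but the namespace name and label value are hypothetical; consult the Calico Enterprise documentation for the exact fields:

```yaml
# Sketch: route a namespace's outbound traffic through an egress gateway
# so an external firewall sees a stable, identifiable source IP
apiVersion: v1
kind: Namespace
metadata:
  name: billing          # hypothetical namespace
  annotations:
    # select the egress gateway pods whose routable IP the
    # external firewall will see for this namespace's traffic
    egress.projectcalico.org/selector: egress-gateway == 'billing-gw'
```

The firewall rule then only needs to allow the gateway's IP, rather than a routable IP per service.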
Wound-licking Cisco targeted the next cloud computing wave; Nokia cried foul on rivals' 800G...
If your idea for interconnection and migration to the private cloud involves using NSX and L2VPN so that you can “stretch the VLAN” between your on-premises NSX farm and the one in the cloud, you are doing it wrong.
Whether you are using VXLAN as a transport or any other technology, if your plan involves layer 2 extension, you are doing it wrong.
Not every application should be migrated to the public cloud, and you most definitely should not migrate anything that relies on layer 2 adjacency to work.
If layer 2 extension is just a way to allow IP mobility, then again, it’s a lazy design. There are better ways to provide same-subnet IP mobility that don’t require layer 2 (see LISP or BGP EVPN Type 5 routing, for example).
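As a hedged illustration of the EVPN Type 5 alternative, IP prefixes can be advertised per tenant VRF over the EVPN address family, so endpoints keep their addresses without any layer 2 stretch. An FRR-style sketch, where the VRF name, VNI, and ASN are illustrative and the exact syntax depends on your platform:

```
! Sketch: advertise IP prefixes via EVPN Type 5 routes
! instead of stretching layer 2 between sites
vrf tenant-a
 vni 4001               ! L3 VNI used for routed (Type 5) traffic
exit-vrf
!
router bgp 65001 vrf tenant-a
 address-family l2vpn evpn
  advertise ipv4 unicast   ! export the VRF's routes as EVPN Type 5
 exit-address-family
```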
Even if it works in PowerPoint or in a small demo, you really should NOT.
In this episode of the Hedge, Stephane Bortzmeyer joins Alvaro Retana and Russ White to discuss draft-ietf-dprive-rfc7626-bis, which “describes the privacy issues associated with the use of the DNS by Internet users.” Not many network engineers think about the privacy implications of DNS, an important part of the infrastructure we all rely on to make the Internet work.
Quarkus is a Kubernetes native Java framework that allows the programming language to be more...
“It’s not a secret that we did not do as well as we wanted to in the first phase of the web...
The Toronto-based telecom claims the service will provide customers with tools to support both...
The network integrator needed a services management platform that would allow it to configure...
On 15 May, the Telegraph reported that The Five Eyes intelligence alliance planned to meet to explore legal options to block plans to implement end-to-end encryption on Facebook Messenger. According to the UK-based newspaper, the discussions between the governments of the United States, the United Kingdom, Australia, Canada, and New Zealand would focus on how the “duty of care,” a basic concept found in tort law, could be stretched to force online platforms to remove or refrain from implementing end-to-end encryption. (A duty of care is the legal responsibility of a person or organization to avoid any behaviors or omissions that could reasonably be foreseen to cause harm to others.)
It’s easy to predict what such a strategy might look like – the playbook is familiar. In this case, if duty of care becomes the rationale for banning end-to-end encryption, it could be used as a framework to ban future deployments. Additionally, similar to other legislation, including the Online Harms, there will be an argument that social media companies have a special duty of care to protect vulnerable groups. This is nothing more Continue reading
Today's Day Two Cloud podcast gets into the nerdy details of how an infrastructure professional can use GitHub Actions. Actions lets you chain together steps or instructions and trigger them to run as a workflow. Our guest and guide to GitHub Actions is Chris Wahl.
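As a minimal illustration of that chaining, a workflow file such as the following (stored under .github/workflows/ in the repository; the job and step names are illustrative) defines steps that run in order whenever the trigger fires:

```yaml
# Minimal workflow sketch: steps chained in a job, triggered on push
name: ci
on: [push]
jobs:
  lint-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2    # pull the repo into the runner
      - name: Run tests
        run: make test               # any shell command can be a step
```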
The post Day Two Cloud 050: Nerding Out On GitHub Actions With Chris Wahl appeared first on Packet Pushers.
To help customers migrate easily from NSX for vSphere to NSX-T with minimal downtime, the latest release, VMware NSX-T 3.0, introduces Maintenance Mode in the NSX-T Migration Coordinator (a tool that has been built into NSX-T since the 2.4 release). The Migration Coordinator is designed to run in place on the same hardware that is running NSX for vSphere and to swap out the NSX for vSphere bits for NSX-T.
This blog post is a follow-up to the previous blog, Migration from VMware NSX for vSphere to NSX-T, which covers Migration Coordinator; for more details on the migration process, please check out that post. This blog focuses on the Maintenance Mode feature, which is part of the NSX-T 3.0 release.
Migration Coordinator is a tool that runs on NSX-T Manager. It’s disabled by default, since migrating from NSX for vSphere to NSX-T should only be a one-time task.
To enable Migration Coordinator, simply log in to NSX Manager via SSH and run the command “start service migration-coordinator”.
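A sketch of that session follows; the hostname is a placeholder, and the status check assumes the standard NSX-T CLI `get service` verb:

```shell
# On the NSX Manager appliance CLI
ssh admin@<nsx-manager>
start service migration-coordinator
get service migration-coordinator    # confirm the service is running
```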
Note: This command is also Continue reading
If you are a young entrepreneur with a bit of money to invest in tech businesses for sale, here are the top five types of tech businesses you may want to consider buying.
With so many people these days looking for ways to work online, being a blogging expert is an advantage. That’s why buying a blog consulting business may be a great option for you. Blog consulting is one of the tech businesses for sale, and it is much easier to buy an existing business than to start one from scratch.
As a blog consultant, you may help different people set up and run their blogs. As your business grows, you can also farm out blog-related projects to others and take a percentage of the proceeds for each job.
A social media consulting business may be a great business to buy, especially if the business is already up and running and has several great consultants on staff. Social media consulting is big business these days and can cover everything from social media marketing to advising people Continue reading
When Cloudflare first launched in 2010, network security still relied heavily on physical security. To connect to a private network, most users simply needed to be inside the walls of the office. Once on that network, users could connect to corporate applications and infrastructure.
When users left the office, a Virtual Private Network (VPN) became a band-aid that let users connect back into that office network. Administrators poked holes in their firewalls to allow traffic to route back through headquarters. The backhaul degraded the user experience, and organizations had no visibility into patterns and events that occurred once users were on the network.
Cloudflare Access launched two years ago to replace that model with an identity-based solution built on Cloudflare’s global network. Instead of a private network, teams secure applications with Cloudflare’s network. Cloudflare checks every request to those applications for identity, rather than IP ranges, and accelerates those connections using the same network that powers some of the world’s largest web properties.
In this zero-trust model, Cloudflare Access checks identity on every request - not just the initial login to a VPN client. Administrators build rules that Cloudflare’s network continuously enforces. Each request is evaluated for permission and logged for Continue reading