One of the many takeaways I got from Future:Net last week was the desire for networks to do more. The presenters were talking about their hypothesized networks being able to make intelligent decisions based on intent and other factors. I say “hypothesized” because almost everyone admitted that we aren’t quite there. Yet. But the more I thought about it, the more I realized that perhaps the timeline for these mythical networks is a bit skewed in favor of refresh cycles that are shorter than we expect.
SDN has changed the way we look at things. Yes, it’s a lot of hype. Yes, it’s an overloaded term. But it’s also the promise of getting devices to do much more than we had ever dreamed. It’s about automation and programmability and, now, deriving intent from plain language. It’s everything we could ever want a simple box of ASICs to do for us and more.
But why are we asking so much? Why do we now believe that the network is capable of so much more than it was just five years ago? Is it because we’ve developed a revolutionary new method for making chips that are ten times Continue reading
I discussed BGP Route Reflector design, Settlement Free Peering, Transit Operator choice, Internet Gateways and the Route Reflector connections, the MPLS deployment option at the Internet Edge, and many other things with an Operator from the Maldives. The operator's name is Dhiraagu; its Autonomous System Number is 7642. An engineer from the ISP Core team, who is […]
The post Discussion with Maldivian Operator Dhiraagu (AS7642) appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.
A look at the advantages NGFWs have over traditional network firewalls.
In June 2017, we concluded the Building Next Generation Data Center online course with a roundtable discussion with Andrew Lerner, Research Vice President, Networking, and Simon Richard, Research Director, Data Center Networking @ Gartner.
During the first 45 minutes, we covered a lot of topics including:
Read more ...

For quite a long time, installation and deployment have been deemed major barriers to OpenStack adoption. The classic “install everything manually” approach could only work in small production or lab environments, and the ever-increasing number of projects under the “Big Tent” made service-by-service installation infeasible. This led to the rise of automated installers that, over time, evolved from simple collections of scripts to container management systems.
The first generation of automated installers were simple utilities that tied together a collection of Puppet/Chef/Ansible scripts. Some of these tools could do bare-metal server provisioning through Cobbler or Ironic (Fuel, Compass), and some relied on the server operating system being pre-installed (Devstack, Packstack). In either case, the packages were pulled from the Internet or a local repository every time the installer ran.
The biggest problem with the above approach is the time it takes to re-deploy, upgrade, or scale an existing environment. Even for relatively small environments, it can take hours before all packages are downloaded, installed, and configured. One way to tackle this is to pre-build an operating system image with all the necessary packages and only use Puppet/Chef/Ansible to change configuration files and Continue reading
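The pre-built-image idea can be sketched with plain Docker. Everything here is hypothetical and illustrative only: the image name, base image, package list, and configuration path are assumptions, chosen just to show packages being baked in at build time and configuration being injected at deploy time:

```shell
# Hypothetical sketch: bake the packages into an image once, at build time...
docker build -t openstack-base - <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y nova-api nova-conductor
EOF

# ...so that every subsequent re-deploy only injects fresh configuration
# files (bind-mounted read-only) instead of re-downloading every package.
docker run -d --name nova-api \
    -v /etc/kolla/nova-api:/etc/nova:ro \
    openstack-base
```

The time saving comes from moving package installation out of the deploy loop: rebuilding the image happens rarely, while config-only redeploys take seconds rather than hours.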
I’m returning to my OpenStack SDN series to explore some of the new platform features like service function chaining, network service orchestration, intent-based networking and dynamic WAN routing. To kick things off I’m going to demonstrate my new fully-containerized OpenStack Lab that I’ve built using an OpenStack project called Kolla.
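For context, a Kolla-Ansible all-in-one deployment follows roughly this shape. Treat it as a hedged sketch rather than a verbatim recipe: the subcommands are from the kolla-ansible CLI, but exact package versions and inventory paths vary by OpenStack release:

```shell
# Install the deployment tooling (versions and paths vary by release)
pip install kolla-ansible

# Use the sample all-in-one inventory shipped with kolla-ansible
INVENTORY=all-in-one

kolla-ansible -i "$INVENTORY" bootstrap-servers   # prepare the host (Docker, deps)
kolla-ansible -i "$INVENTORY" prechecks           # sanity-check the environment
kolla-ansible -i "$INVENTORY" deploy              # pull and start the service containers
```

Because every OpenStack service runs as a container, tearing the lab down and redeploying it is fast, which is what makes this setup practical for experimenting with features like service chaining and intent-based networking.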
Continue reading

AWS and Microsoft Azure already offer direct connections to their clouds.
Over the last two days, Cloudflare observed two events that had effects on global Internet traffic levels. Cloudflare handles approximately 10% of all Internet requests, so we have significant visibility into traffic from countries and networks across the world.
On Tuesday, September 5th, the government of Togo decided to restrict Internet access in the country following political protests. The government blocked social networks and rate-limited traffic, which had an impact on Cloudflare.
This adds Togo to the list of countries, such as Syria (twice), Iraq, Turkey, Libya, and Tunisia, that have restricted or revoked Internet access.
The second event happened on Wednesday, September 6th, when a category 5 hurricane ravaged the Caribbean Islands.
The affected countries at the moment are:
Most network cables are buried underground or lying at the bottom of the ocean, but the hardware, which relies on electricity, is the first thing to go down.
Cell towers sometimes have their own power source, allowing local phone calls, but without a backbone no outside Continue reading
The post Worth Reading: Cloud data storage data planes appeared first on rule 11 reader.
Several months ago I created a simple GNS3 network topology for practicing my networking skills. What first began as a simple lab later grew into a real-world enterprise network consisting of campus, data center, DMZ network blocks, and ISPs. Over the next several weeks I added new devices to the topology, struggling to find time due to complicated family circumstances. In March 2017 I completely stopped working on this project. Luckily, I had finished configuring all the devices, and I had written several articles describing my progress. Now, almost half a year later, I am ready to share my experience with the blog's readers and publish the articles. Below is the list of articles. I hope you find them useful.
Enterprise Network on GNS3 - Part 1 - Introduction
Enterprise Network on GNS3 - Part 2 - Access Layer
Enterprise Network on GNS3 - Part 3 - Distribution and Core Layers
Enterprise Network on GNS3 - Part 4 - Cisco ASAv-I
Enterprise Network on GNS3 - Part 5 - Data Center
Enterprise Network on GNS3 - Part 6 - Edge Router and ISPs
Enterprise Network on GNS3 - Part 7 - Continue reading
Containers vs. hypervisors: the battle is ongoing, but the two technologies don’t need to be pitted against one another—in fact, they each offer benefits that are more suitable for certain workloads than others.
Containers are considered resilient, in part, because they can be deployed both as classic monolithic applications and as highly composable microservices. They are portable, and can be scaled up or down and deleted when no longer needed. Among many other benefits, containers pack more applications into a single physical server than virtual machines (VMs) can, which makes them superior when you need the maximum number of applications on a bare minimum number of servers.
When it comes to hypervisors in the current technology climate, their value seems to be slowly diminishing, while containers continue to enjoy a steady increase in popularity. Part of the VM's decline is due to resource allocation: VMs use a lot of system resources, requiring a full copy of the OS and a virtual copy of the hardware that the OS needs to run, while containers only need the supporting libraries required to run a specific program.
Furthermore, VMs don't provide the same level of portability, consistency, or speed that Continue reading
Today marks the one-year anniversary of the Dell and EMC merger.
The update includes engineering support for operating systems from Arista, Cisco, Juniper, and Open vSwitch.