In this week's IPv6 Buzz episode, Ed, Scott, and Tom chat with John Burns, a lead architect at Wells Fargo, about the relatively early adoption of IPv6 at the company. The discussion also covers adoption trends in the financial sector as a whole, along with the key challenges and opportunities of the protocol.
The COVID-19 pandemic has taught us once and for all that broadband access is critical infrastructure. Without it, communities cannot work, learn, or earn online – a necessity during stay-at-home orders. And policymakers are taking notice. In the past few months, trillions of dollars have been proposed by the House, Senate, and White House for […]
The edge's ability to aggregate, process, and analyze data locally opens up new opportunities for entrepreneurial ventures outside major hubs of commerce.
Chris Wahl explains how pipelines, sometimes thought of as a developer-only tool, can be used by IT infrastructure professionals delivering infrastructure as code (IaC). Topics include event triggers, automation, and testing.
For years, the public cloud has tempted enterprises with the promise of much-needed agility and scalability from an elastic IT environment, and of cost savings from not having to invest heavily in infrastructure upfront, adopting instead more flexible consumption models that let organizations pay only for what they use. …
Some time ago I was looking at a hot section in our code and I saw this:
if (debug) {
log("...");
}
This got me thinking. This code is in a performance-critical loop and it looks like a waste: we never run with the "debug" flag enabled[1]. Is it OK to have if clauses that will basically never run? Surely, there must be some performance cost to that...
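If the per-iteration check ever does show up in a profile, one common mitigation is to hoist it out of the loop, for example by binding the logger to a no-op once up front. A minimal Python sketch of the idea (the function and data here are hypothetical, not from the codebase in question):

```python
def process(items, debug=False):
    # Resolve the debug check once, outside the hot loop,
    # instead of testing `if debug:` on every iteration.
    log = print if debug else (lambda msg: None)
    total = 0
    for x in items:
        log(f"processing {x}")  # a no-op unless debug=True
        total += x
    return total

print(process(range(10)))  # the branch is gone from the loop body
```

Whether this is worth doing depends entirely on measurement; as the rest of this post explores, a fully predicted branch is usually close to free.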
Just how bad is peppering the code with avoidable if statements?
Back in the day, the general rule was: a fully predictable branch has close to zero CPU cost.
To what extent is this true? If one branch is fine, then how about ten? A hundred? A thousand? When does adding one more if statement become a bad idea?
At some point the negligible cost of simple branch instructions surely adds up to a significant amount. As another example, a colleague of mine found this snippet in our production code:
create an LSP from R1 to R6; the primary path should have a bandwidth constraint (e.g. 500 Mbit/s)
describe the bandwidth reservation process
examine signaling with the cspf and no-cspf options
examine the opaque LSA
check the maximum bandwidth, reservable bandwidth, and unreserved bandwidth fields
Any changes after LSP signaling?
change the path bandwidth and check the opaque LSA again, paying particular attention to the Age and Sequence fields. What problem can occur if we have an unstable network and many LSPs with bandwidth constraints?
How can we decrease the amount of LSA flooding?
configure Threshold-Triggered IGP TE Updates and examine how it works
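For the threshold-triggered updates task, a Junos-style configuration sketch might look like the following; the interface name and threshold value are illustrative assumptions:

```
set protocols rsvp interface ge-0/0/1.0 update-threshold 10
```

With a setting like this, the router re-floods its TE opaque LSA only when reserved bandwidth on the interface changes by more than the configured percentage, rather than on every reservation change.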
Bandwidth Reservation Styles
configure the LSP to_R6 with a primary "totally loose" path (bandwidth 200 Mbit/s) and a standby secondary "totally loose" path (bandwidth 300 Mbit/s)
find a shared link
examine TED
What is unreserved bandwidth?
What is the default Bandwidth Reservation Style?
change Bandwidth Reservation Style and examine TED again
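The tasks above might translate into Junos-style configuration roughly as follows; the path name, destination address, and the use of `adaptive` to switch from the default fixed-filter (FF) style to shared-explicit (SE) are assumptions for illustration:

```
set protocols mpls path loose-path
set protocols mpls label-switched-path to_R6 to 10.0.0.6
set protocols mpls label-switched-path to_R6 primary loose-path bandwidth 200m
set protocols mpls label-switched-path to_R6 secondary loose-path bandwidth 300m
set protocols mpls label-switched-path to_R6 secondary loose-path standby
set protocols mpls label-switched-path to_R6 adaptive
```

An empty path definition leaves every hop loose ("totally loose"), and with SE-style reservations the primary and standby secondary can share bandwidth on any link they have in common, which should be visible in the TED's unreserved bandwidth fields.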
Big Blue got out of the chip foundry business when it sold off its IBM Microelectronics division to GlobalFoundries, itself a spinout of AMD, in 2014. …
I love hearing real-life “how did I start my automation journey” stories. Here’s what one of ipSpace.net subscribers sent me:
Make peace with your network engineering soul and mind and open up to the possibility that the world has moved on to something else when it comes to consuming apps and software. Back in 2017, this was very hard on me :)
If data governance stakeholders can plan for common challenges and address them at the outset of a new initiative, their projects will have a much greater chance for success.
Join host Peter McKee and Python wizard Michael Kennedy for a warts-and-all demo of how to Dockerize a Python app using FastAPI, a popular Python framework. Kennedy is a developer and entrepreneur, and the founder and host of two successful Python podcasts — Talk Python To Me and Python Bytes. He’s also a Python Software Foundation Fellow.
With some skillful back-seat driving by McKee, Kennedy shows how to build a bare-bones web API — in this case one that allows you to ask questions and get answers about movies (director, release date, etc.) — by mashing together a movie service and FastAPI. Next, he shows how to put it into a Docker container, create an app and run it, finally sharing the image on GitHub.
If you’re looking for a scripted, flawless, pre-recorded demo, this is not the one for you! McKee and Kennedy iterate and troubleshoot their way through the process — which makes this a great place to start if you’re new to Dockerizing Python apps. Install scripts, libraries, automation, security, best practices, and a pinch of Python zen — it’s all here. (Duration 1 hour, 10 mins.)
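As a rough sketch of the end state a demo like this arrives at, a Dockerfile for a FastAPI app typically looks something like the following; the file names and port are assumptions, not details taken from the video:

```
# Slim base image keeps the final container small
FROM python:3.9-slim
WORKDIR /app
# Copy and install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Serve the FastAPI app (a main.py defining `app`) with uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

From there, `docker build` and `docker run -p 8000:8000` are all it takes to have the API answering locally.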
IBM has taken the wraps off a version of its Cloud Pak for Security that aims to help customers looking to deploy zero-trust security facilities for enterprise resource protection. IBM Cloud Paks are bundles of Red Hat's Kubernetes-based OpenShift Container Platform along with Red Hat Linux and a variety of connecting technologies that let enterprise customers deploy and manage containers on their choice of private or public infrastructure, including AWS, Microsoft Azure, Google Cloud Platform, Alibaba, and IBM Cloud.
Containerlab is a new open-source network emulator that quickly builds network test environments in a DevOps-style workflow. It provides a command-line interface for orchestrating and managing container-based networking labs and supports containerized router images available from the major networking vendors.
More interestingly, Containerlab supports any open-source network operating system that is published as a container image, such as the Free Range Routing (FRR) router. This post will review how Containerlab works with the FRR open-source router.
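To give a flavor of what such a lab definition looks like, a minimal Containerlab topology file for two FRR nodes might resemble the following; the lab name, image tag, and interface names are illustrative assumptions:

```yaml
# frr01.clab.yml: two FRR containers joined by one point-to-point link
name: frr01
topology:
  nodes:
    router1:
      kind: linux
      image: frrouting/frr:latest
    router2:
      kind: linux
      image: frrouting/frr:latest
  links:
    - endpoints: ["router1:eth1", "router2:eth1"]
```

Deploying the lab is then a single command along the lines of `containerlab deploy --topo frr01.clab.yml`, after which each node is reachable as an ordinary container.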
While working through this example, you will learn about most of Containerlab’s container-based features. Containerlab also supports VM-based network devices so users may run commercial router disk images in network emulation scenarios. I’ll write about building and running VM-based labs in a future post.
While it was initially developed by Nokia engineers, Containerlab is intended to be a vendor-neutral network emulator and, since its first release, the project has accepted contributions from other individuals and companies.
The Containerlab project provides excellent documentation, so I don't need to write a tutorial. But Containerlab does not yet document all the steps required to build an open-source router lab that starts in a pre-defined state. This post covers that scenario, so I hope it adds something of value.
Today's Day Two Cloud is a wide-ranging discussion about the value of public cloud, a response to the growing backlash toward cloud cost and complexity, and techniques to better meld automation with application and infrastructure delivery. Our guest is Chris Wahl, Senior Principal at Slalom.
The top brass at FPGA maker Xilinx are not hosting calls with Wall Street because of the pending $35 billion acquisition of the company by AMD, so we are left to get our own insight out of the financial report and accompanying statement that Xilinx has released for its latest quarterly results. …
Mike Hicks
Mike is a principal solutions analyst at ThousandEyes, a part of Cisco, and a recognized expert with more than 30 years of experience in network and application performance.
In the olden days, users were in offices and all apps lived in on-premises data centers. The WAN (wide area network) was what connected all of them. Today, with the adoption of SaaS apps and associated dependencies such as cloud services and third-party API endpoints, the WAN is getting stretched beyond recognition. In its place, the internet is directly and exclusively carrying a large — if not majority — share of all enterprise traffic flows.
Enterprises are increasingly moving away from legacy WANs in favor of internet-centric, software-defined WANs (SD-WANs). Architected for interconnection with cloud and external services, SD-WANs can play a critical role in making enterprise networks cloud-ready, more cost-efficient, and better suited to delivering quality digital experiences to customers and employees at all locations. But the transformation brings new visibility needs, and ensuring that SD-WAN delivers on expectations requires a new approach to monitoring that addresses network visibility and application performance equally.
WAN in the Light of …