After (hopefully) agreeing on what routing, bridging, and switching are, let’s focus on the first important topic in this area: how do we get a packet across the network? Yet again, there are three fundamentally different technologies:
Source node knows the full path (source routing)
Source node opens a path (virtual circuit) to the destination node and uses that path to send traffic
The network performs hop-by-hop destination-address-based packet forwarding.
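To make the difference between these approaches concrete, here is a toy Python sketch (the topology, node names, and forwarding tables are invented for illustration) contrasting hop-by-hop destination-based forwarding, where every node does its own lookup, with source routing, where the packet carries the full path. A virtual circuit would sit in between: per-node label tables are installed when the path is opened, and packets then follow those labels.

```python
# Toy illustration of two forwarding paradigms; not a real forwarding plane.
# Per-node forwarding tables: destination -> next hop (hop-by-hop forwarding).
FIB = {
    "A": {"D": "B"},
    "B": {"D": "C"},
    "C": {"D": "D"},
}

def forward_hop_by_hop(src, dst):
    """Each node independently looks up the next hop for the destination address."""
    path, node = [src], src
    while node != dst:
        node = FIB[node][dst]        # lookup is local to the current node
        path.append(node)
    return path

def forward_source_routed(path):
    """The source pre-computes the path; transit nodes just pop the next hop."""
    hops = list(path)                # the packet header carries the full path
    visited = [hops.pop(0)]
    while hops:
        visited.append(hops.pop(0))  # no per-node destination lookup needed
    return visited

print(forward_hop_by_hop("A", "D"))                 # ['A', 'B', 'C', 'D']
print(forward_source_routed(["A", "B", "C", "D"]))  # same path, carried by the source
```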
By instrumenting the network and using additional data sources, IT can maintain high-quality access to critical applications and create a positive end-user experience.
After its purchase of cloud storage automation specialist Spot for $450 million this past June, NetApp is releasing its first new product under the brand. Called Spot Storage, it's a "storageless" solution that's designed to enable automated administration of cloud-native, container-based applications. NetApp describes Spot Storage as a cloud-based, serverless offering for application-driven architectures that run microservices-based applications in Kubernetes containers. "Serverless computing" is a bit of a misnomer. Your application and data still reside on servers, but they're not tied to one particular physical location. Just like the cloud means never using the same physical box twice, a serverless storage service means the cloud provider runs the server and dynamically manages the allocation of machine resources.
We have been waiting for years to see the first discrete Xe GPU from Intel that is aimed at the datacenter, and as it turns out, the first one is not the heavy compute engine we have been anticipating, but rather a souped-up version of the Iris Xe LP and Iris Max Xe LP graphics cards that were launched at the end of October, which themselves are essentially the GPU extracted from the hybrid CPU-GPU “Tiger Lake” Core i9 processors for PC clients. …
As we have been implementing rate limiting on Docker Hub for free anonymous and authenticated image pulls, we’ve heard a lot of questions from our users about how this will affect them. And we’ve also heard a number of statements that are inaccurate or misleading about the potential impacts of the change. We want to provide some answers here to help Docker users clearly understand the changes, quantify what is involved, and help developers choose the right Docker subscription for their needs.
First, let’s look at the realities of rate limiting and quantify what is still available for free to authenticated Docker users. Anyone can use a meaningful number of Docker Hub images for free. Anonymous, unauthenticated Docker users get 100 container pull requests per six hours. And when a user signs up for a free Docker ID, they get 2X the quantity of pulls. At 200 pulls per six hours, that is approximately 24,000 container image pulls per month per free Docker ID. This egress level is adequate for the bulk of the most common Docker Hub usage by developers. (Docker users can check their usage levels at any time through the command line. Docker developer …
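For reference, here is a quick back-of-the-envelope check of the quoted numbers (this assumes a simple fixed six-hour window and a 30-day month; Docker's actual enforcement details may differ):

```python
# Sanity-check the figures quoted above for a free Docker ID.
PULLS_PER_WINDOW_FREE_ID = 200   # pulls allowed per window with a free Docker ID
WINDOW_HOURS = 6                 # length of the rate-limit window

windows_per_day = 24 / WINDOW_HOURS                            # 4 windows per day
pulls_per_day = PULLS_PER_WINDOW_FREE_ID * windows_per_day     # 800 pulls per day
pulls_per_month = pulls_per_day * 30                           # ~24,000 per 30-day month

print(f"{pulls_per_month:,.0f} pulls per month per free Docker ID")
```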
The video discusses telemetry and requirements for network automation, provides an overview of the sFlow measurement architecture, covers the recently added packet drop monitoring functionality, and ends with a live demonstration of GPU compute cluster analytics. The slides from the video are available here.
The video is part of a recent talk, Using Advanced Telemetry to Correlate GPU and Network Performance Issues [A21870], presented at the NVIDIA GTC conference.
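If you want to poke at this kind of telemetry yourself, here is a hedged Python sketch that polls an sFlow analyzer's REST interface for per-interface counters. The default port, URL layout, and field names below are assumptions based on typical sFlow-RT deployments, not something documented in the talk, so check your collector's documentation before relying on them.

```python
# Hedged sketch: poll an sFlow-RT-style REST interface for interface counters.
# The address, URL path, and JSON field names are assumptions; verify locally.
import requests

SFLOW_RT = "http://localhost:8008"   # assumed default sFlow-RT address and port

def current_metric(metric="ifinoctets", agents="ALL"):
    # Assumed endpoint: /metric/<agents>/<metric>/json returns the current
    # value of the metric for each agent/data source.
    url = f"{SFLOW_RT}/metric/{agents}/{metric}/json"
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for entry in current_metric():
        print(entry.get("agent"), entry.get("dataSource"), entry.get("metricValue"))
```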
In August 2019, the Internet Society supported the Mutually Agreed Norms for Routing Security (MANRS) initiative by creating a platform to visualize its members’ routing security data from around the globe. The MANRS Observatory’s interactive dashboard allows networks to check their progress in improving their routing security.
Last week, we updated some key features of the MANRS Observatory, guided by member feedback. Below we share a summary of those changes.
Please note, detailed statistics and reports for specific networks are only available to MANRS participants. Your organization can become a MANRS member for free, and join a global group of people committed to making the Internet safer for us all. Find out how.
MANRS Observatory 3.0.1: Latest updates
Shorter reporting cycle
Improved favorite functionality
Access to RIPEstat widget (see the query sketch below)
Change to how we round numbers
1. Shorter reporting cycle
Previously, the MANRS Observatory provided status report updates up to 31 days after members had added their latest figures. While this wasn’t a real problem when looking at general trends, it was an issue for network operators who use the platform to check their network conformance. It was also an issue for the MANRS team, as we …
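Related to the RIPEstat widget mentioned in the update list above: the same public routing data can be pulled directly from the RIPEstat Data API. The endpoint name and response fields in this sketch are assumptions based on the published API layout rather than anything documented by the MANRS team, so verify them against the RIPEstat documentation before use.

```python
# Hedged sketch: query the public RIPEstat Data API for the prefixes an AS announces.
# Endpoint name ("announced-prefixes") and response fields are assumptions.
import requests

def announced_prefixes(asn):
    url = "https://stat.ripe.net/data/announced-prefixes/data.json"
    resp = requests.get(url, params={"resource": asn}, timeout=10)
    resp.raise_for_status()
    return [p["prefix"] for p in resp.json()["data"]["prefixes"]]

if __name__ == "__main__":
    # Example: list prefixes originated by a well-known ASN.
    print(announced_prefixes("AS15169"))
```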
The network has never been more vulnerable. Covid-19 has flung users out from the data center to home offices—where they are accessing critical systems, applications, and other users from unsecured devices and WiFi connections. As a result, it’s all hands on deck for IT, with network engineers deputized as IT support staff in a mad rush to give remote users fast and reliable, yet secure, access to the tools and information they need.
But what of the regular duties of these engineers? They are being pushed back in favor of new priorities—stretching network engineering resources, already spread thin, to the breaking point.
Enter network automation. VMware NSX-T allows organizations to automate and simplify operations in the age of Covid. Tasks that were once performed manually through the UI or CLI can now be automated with the NSX API, creating the foundation for dynamic, flexible and responsive network architectures that can support a world where users, devices, applications and data connect across private, public and hybrid cloud environments.
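To give a concrete sense of what "automated with the NSX API" can look like, here is a minimal Python sketch that creates a segment through the NSX-T Policy API. The manager address and credentials are placeholders, and the URL path and payload fields are assumptions based on common NSX-T Policy API patterns; check the API reference for your NSX-T version before relying on them.

```python
# Hedged sketch: create (or update) an NSX-T segment via the Policy API
# instead of clicking through the UI. Paths and fields are assumptions.
import requests
from requests.auth import HTTPBasicAuth

NSX_MANAGER = "https://nsx-manager.example.com"   # hypothetical manager address
AUTH = HTTPBasicAuth("admin", "********")          # replace with real credentials/tokens

def create_segment(segment_id, gateway_cidr):
    """Idempotently create or update a logical segment with a single PUT."""
    url = f"{NSX_MANAGER}/policy/api/v1/infra/segments/{segment_id}"
    body = {
        "display_name": segment_id,
        "subnets": [{"gateway_address": gateway_cidr}],
    }
    # Certificate verification is disabled only for this lab-style sketch.
    resp = requests.put(url, json=body, auth=AUTH, verify=False, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    create_segment("web-tier", "10.10.10.1/24")
```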
Networking professionals who want to learn more about how to automate operations should check out the following on-demand sessions from VMworld:
Although hardware gets all the attention during Supercomputing week, much has been happening behind the scenes to make all the software run on the latest, fastest systems. …
In February 2019, I started my journey at Cloudflare. Back then, we lived in a COVID-19 free world and I was lucky enough, as part of the employee onboarding program, to visit our San Francisco HQ. As I took my first steps into the office, I was greeted by a beautiful bouquet of Protea flowers at the reception desk. Being from South Africa, seeing our national flower instantly made me feel at home and welcomed to the Cloudflare family - this memory will always be with me.
Later that day, I learnt it was Black History Month in the US. This celebration included African food for lunch, highlights of Black History icons on Cloudflare’s TV screens, and African drummers. At Cloudflare, Black History Month is coordinated and run by Afroflare, one of many Employee Resource Groups (ERGs) that celebrate diversity and inclusion. The excellent delivery of Black History Month demonstrated to me how seriously Cloudflare takes Black History Month and ERGs.
Today, I am one of the Afroflare leads in the London office and led this year’s UK Black History Month celebration. 2020 has been a year of historical events, which made this celebration uniquely significant. George Floyd’s murder …
With the COVID-19 pandemic showing no signs of abating, migration to the cloud is expected to accelerate as enterprises choose to let someone else worry about their server gear. In its global IT outlook for 2021 and beyond, IDC predicts the continued migration of enterprise IT equipment out of on-premises data centers and into data centers operated by cloud service providers (such as AWS and Microsoft) and colocation specialists (such as Equinix and Digital Realty). The research firm expects that by the end of 2021, 80% of enterprises will put a mechanism in place to shift to cloud-centric infrastructure and applications twice as fast as before the pandemic. CIOs must accelerate the transition to a cloud-centric IT model to maintain competitive parity and to make the organization more digitally resilient, the firm said.
A long while ago I found a great article explaining TLS 1.3 and its migration woes on the Cloudflare blog. While I would strongly recommend you read it just to get familiar with TLS 1.3, the real fun starts when the author discusses migration problems, kludges you have to use trying to fix them, less-than-compliant implementations breaking those kludges, and options that were supposed to be dynamic, but turn out to be static (rusted shut) due to middleboxes that implemented protocols as-seen-in-the-wild not as-described-in-RFCs.
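If you want to see where your own servers (or the middleboxes in front of them) stand, here is a small Python sketch that reports the TLS version a server actually negotiates; it assumes Python 3.7 or later built against OpenSSL 1.1.1+, so that TLS 1.3 is available on the client side.

```python
# Small probe: report the TLS version actually negotiated with a server.
# Whether you get TLS 1.3 depends on the server and anything in the path.
import socket
import ssl

def negotiated_tls_version(host, port=443):
    ctx = ssl.create_default_context()
    # Allow anything from TLS 1.2 up; the far end decides what gets negotiated.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

if __name__ == "__main__":
    print(negotiated_tls_version("blog.cloudflare.com"))   # e.g. 'TLSv1.3'
```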
Palo Alto is rolling out a cloud service that promises to protect the highly distributed data in contemporary enterprises. The cloud service, Enterprise Data Loss Prevention (DLP), will help prevent data breaches by automatically identifying confidential intellectual property and personally identifiable information across the enterprise, Palo Alto stated. Data breaches are a huge and growing problem worldwide, but most of the current DLP systems were only designed to help global-scale organizations that have huge data protection budgets and staffs. Legacy and point solutions are not accessible, appropriate or effective for many of the companies that need them, said Anand Oswal, senior vice president and general manager with Palo Alto Networks.