Hertzbleed explained

You may have heard a bit about the Hertzbleed attack that was recently disclosed. Fortunately, one of the student researchers who was part of the team that discovered this vulnerability and developed the attack is spending this summer with Cloudflare Research and can help us understand it better.

The first thing to note is that Hertzbleed is a new type of side-channel attack that relies on changes in CPU frequency. Hertzbleed is a real, and practical, threat to the security of cryptographic software.

Should I be worried?

From the Hertzbleed website,

“If you are an ordinary user and not a cryptography engineer, probably not: you don’t need to apply a patch or change any configurations right now. If you are a cryptography engineer, read on. Also, if you are running a SIKE decapsulation server, make sure to deploy the mitigation described below.”

Notice: As of today, there is no known attack that uses Hertzbleed to target conventional and standardized cryptography, such as the encryption used in Cloudflare products and services. Having said that, let’s get into the details of processor frequency scaling to understand the core of this vulnerability.

In short, the Hertzbleed attack shows that, under certain Continue reading
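
For intuition, here is a minimal Python sketch of the effect being exploited. It is purely illustrative (not the published attack), and the operand choices, loop sizes, and any measurable difference are assumptions that depend entirely on the CPU and its power-management settings. The idea is that dynamic frequency scaling reacts to data-dependent power draw, turning a power difference into an observable timing difference.

    # Purely illustrative: under sustained load, data-dependent power draw can
    # change the CPU's boost frequency, which shows up as a data-dependent
    # difference in wall-clock time. This is NOT the published attack code;
    # whether any gap is measurable depends on the CPU and power settings.
    import time

    def busy_multiply(operand: int, iterations: int = 5_000_000) -> float:
        """Run a fixed amount of work whose power draw depends on the operand."""
        acc = 1
        start = time.perf_counter()
        for _ in range(iterations):
            acc = (acc * operand) & 0xFFFFFFFFFFFFFFFF  # keep the value 64-bit
        return time.perf_counter() - start

    if __name__ == "__main__":
        low_activity = 0x0000000000000001    # few bits set: less switching activity
        high_activity = 0xFFFFFFFFFFFFFFFF   # many bits set: more switching activity
        print(f"low-Hamming-weight operand:  {busy_multiply(low_activity):.3f} s")
        print(f"high-Hamming-weight operand: {busy_multiply(high_activity):.3f} s")
        # Any systematic gap between the two timings is the kind of
        # frequency-induced signal that Hertzbleed measures remotely.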

The Faster The Switch, The Cheaper Bit Flits

It may have taken a while for the transition to 200 Gb/sec and 400 Gb/sec networking to take off in the datacenter, but this higher gear of switching is finally kicking in and delivering unprecedented bang for the buck in networks, and in fairly short order, at least compared to the sluggish pace at which 100 Gb/sec Ethernet made its way into the datacenter.

The Faster The Switch, The Cheaper Bit Flits was written by Timothy Prickett Morgan at The Next Platform.

Ansible For Network Automation Lesson 3: Ansible Modules Overview – Video

In lesson 3 of this course about Ansible for network automation, Josh VanDeraa covers the lab environment used in this course, reviews the Ansible Network Modules documentation page, and looks at the parameters of an Ansible module to know what’s required and what the response will be. Josh has created a GitHub repo to store […]

The post Ansible For Network Automation Lesson 3: Ansible Modules Overview – Video appeared first on Packet Pushers.

Ansible For Network Automation Lesson 4: Gathering Device Information – Video

In this installment of the series on Ansible and network automation, Josh VanDeraa looks at how to update an Ansible config file, gather data from various devices using command modules including IOS, and use ios_facts to get IOS-specific information from IOS devices. Josh has created a GitHub repo to store additional material, including links and […]

The post Ansible For Network Automation Lesson 4: Gathering Device Information – Video appeared first on Packet Pushers.

New PCI Express spec doubles the bit rate

The latest PCI Express (PCIe) specification again doubles the data rate over the previous spec. PCI Express 7.0 calls for a data rate of 128 gigatransfers per second (GT/s) and up to 512 GB/s bi-directionally via an x16 data-lane slot (not every PCI Express slot in a PC or server uses 16 transfer lanes), according to PCI-SIG, the industry group that maintains and develops the specification. The slower, previous spec, PCI Express 6.0, has yet to come to market, and doubling the rate with each version has become the norm. To read this article in full, please click here
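
As a rough sanity check of those headline numbers (raw signaling only, ignoring FLIT encoding, FEC, and protocol overhead, so treat this as a back-of-the-envelope sketch rather than the spec's own accounting), the arithmetic looks like this:

    # Back-of-the-envelope check of the PCIe 7.0 headline numbers.
    # Raw signaling only: FLIT encoding, FEC, and protocol overhead are ignored.
    GT_PER_SEC = 128          # PCIe 7.0 transfer rate per lane (gigatransfers/s)
    LANES = 16                # an x16 slot
    BITS_PER_TRANSFER = 1     # one bit moved per transfer per lane

    raw_gbps_one_way = GT_PER_SEC * LANES * BITS_PER_TRANSFER   # 2048 Gb/s
    gbytes_one_way = raw_gbps_one_way / 8                       # 256 GB/s
    gbytes_both_ways = gbytes_one_way * 2                       # 512 GB/s

    print(f"x16, one direction:   ~{gbytes_one_way:.0f} GB/s")
    print(f"x16, both directions: ~{gbytes_both_ways:.0f} GB/s")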

What’s new: network automation with ansible.netcommon 3.0.0

libssh blog

With the recent release of the ansible.netcommon Collection version 3.0.0, we have promoted two features as standard: libssh transport and import_modules. These two features, available since July 2020 (libssh) and March 2021 (import_modules), provide increased performance and resiliency for your network automation and are now turned on by default for playbooks running the latest version of the netcommon Collection, so let's take a look at what makes these changes so exciting!

 

The road to libssh

Libssh support was formally announced in November 2020 for FIPS mode compatibility and speed. That blog post goes into great detail about why we started this change, and it's worth a read if you want to know more about how we use paramiko or libssh in the network_cli connection plugin. I'm going to try not to rehash everything from that post, but I do want to take a little time to revisit security and speed to show what libssh brings to the experience of using Ansible with network devices.
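
If you want to confirm what your control node can actually use before relying on the new default, one quick, informal check is whether the ansible-pylibssh bindings import cleanly. The import path below is taken from the ansible-pylibssh project and this script is our own sanity-check sketch, not part of netcommon itself:

    # Informal check: are the libssh bindings (ansible-pylibssh) importable on
    # this control node? This is a local sanity check, not part of netcommon.
    def detect_ssh_transport() -> str:
        try:
            from pylibsshext.session import Session  # noqa: F401  (ansible-pylibssh)
            return "libssh"
        except ImportError:
            try:
                import paramiko  # noqa: F401  (the legacy transport)
                return "paramiko"
            except ImportError:
                return "neither -- install ansible-pylibssh or paramiko"

    if __name__ == "__main__":
        print(f"SSH library available for network_cli: {detect_ssh_transport()}")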

 

FIPS Mode

One of the earliest issues we identified with paramiko, our earlier SSH transport plugin, is that it was not FIPS 140 compliant. This meant that environments Continue reading

New partner program for SMB agencies & hosting partners now in closed beta

A fundamental principle here at Cloudflare has always been that we want to serve everyone - from individual developers to small businesses to large corporations. In the earliest days, we provided services to hosting partners and resellers around the globe, who helped bring Cloudflare to thousands of domains with free caching and DDoS protection for shared infrastructures.

Today, we want to reinforce our commitment to our hosting ecosystem and small business partners that leverage Cloudflare to help bring a better Internet experience to their customers. We've been building a robust multi-tenant partner platform that we will begin to open up to everyone searching for a faster, safer, and better Internet experience. This platform will come in the form of a Self Serve Partner program that will allow SMB agencies & hosting partners to create accounts for all their customers under one dashboard, consolidate billing, and provide discounted plans to our partners.

Deprecation of our legacy APIs

To make way for the new, we first must discuss the end-of-life of some of Cloudflare’s earliest APIs. Built and launched in 2011, our Hosting and Optimized Partner Programs allowed our initial CDN and DDoS solutions to expand to brand-new audiences around the Continue reading

Network performance update: Cloudflare One Week June 2022

In September 2021, we shared extensive benchmarking results of 1,000 networks all around the world. The results showed that on a range of tests (TCP connection time, time to first byte, time to last byte), and on different measures (p95, mean), Cloudflare was the fastest provider in 49% of the top 1,000 networks around the world.

Since then, we’ve expanded our testing to cover not just 1,000 but 3,000 networks, and we’ve worked to continuously improve performance, with the ultimate goal of being the fastest everywhere and an intermediate goal of growing the number of networks where we’re the fastest by at least 10% every Innovation Week. We met that goal during Platform Week (May 2022), and we’re carrying the work over to Cloudflare One Week (June 2022).

We’re excited to share that Cloudflare was the fastest provider in 1,290 of the top 3,000 most reported networks, up from 1,280 just one month ago during Platform Week.

Measuring what matters

To quantify global network performance, we have to get enough data from around the world, across all manner of different networks, comparing ourselves with other providers. We use Real User Measurements (RUM) to fetch a 100kB file from different providers. Continue reading
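
As a concrete illustration of how such a ranking can be derived (our own sketch with invented numbers, not Cloudflare's actual measurement pipeline), here is how the mean and p95 of raw RUM samples might be computed per provider for a given network:

    # Illustrative only: per-provider mean and p95 from raw RUM samples
    # (milliseconds to fetch a 100kB file). The sample values are invented.
    from statistics import mean, quantiles

    samples_ms = {
        "provider_a": [90, 92, 95, 91, 93, 94, 90, 92, 260, 96],
        "provider_b": [120, 118, 119, 121, 117, 125, 122, 119, 118, 124],
    }

    def p95(values):
        # quantiles(n=20) yields 19 cut points; the last one is the 95th percentile
        return quantiles(values, n=20)[18]

    for provider, values in samples_ms.items():
        print(f"{provider}: mean={mean(values):.1f} ms  p95={p95(values):.1f} ms")

    fastest = min(samples_ms, key=lambda p: p95(samples_ms[p]))
    print(f"fastest on this network by p95: {fastest}")

With these invented numbers, provider_a looks faster on the mean but loses on p95 because of its long tail, which is why the benchmarks report both measures rather than one.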

Identifying content gaps in our documentation

If you’ve tuned into this blog for long enough, you’ll notice that we’re pretty big on using and stress-testing our own products (“dogfooding”) at Cloudflare.

That applies to our security team, product teams, and – as my colleague Kristian just blogged about – even our documentation team. We’re incredibly excited to be on the Pages platform, both because of the performance and workflow improvements and the opportunity to help the platform develop.

What you probably haven’t heard about is how our docs team uses dogfooding – and data – to improve our documentation.

Dogfooding for docs

As a technical writer, it’s pretty common to do the thing you’re documenting. After all, it’s really hard to write step-by-step instructions if you haven’t been through those steps. It’s also a great opportunity to provide feedback to our product teams.

What’s not as common for a writer, however, is actually using the thing you’re documenting. And it’s totally understandable why. You’re already accountable to your deadlines and product managers, so you might not have the time. You might not have the technical background. And then there’s the whole problem of a real-world use case. If you’re really dedicated, you can set Continue reading

Repost: Buffers, Congestion, Jitter, and Shapers

Béla Várkonyi left a great comment on a blog post discussing (among other things) whether we need large buffers on spine switches. I don’t know how many people read the comments; this one is too valuable to be lost somewhere below the fold.


You might want to add another consideration. If you have a lot of traffic aggregation even when the ingress and egress port are roughly at the same speed or when the egress port has more capacity, you could still have congestion. Then you have two strategies, buffer and suffer jitter and delay, or drop and hope that the upper layers will detect it and reduce the sending by shaping.
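To make the trade-off concrete, here is a toy sketch with made-up numbers: a burst arrives at an egress port faster than the port can drain it, and the buffer depth decides whether the cost shows up as queueing delay and jitter or as drops that the upper layers must recover from.

    # Toy model with invented numbers: a burst of packets hits an egress port
    # faster than the port can drain it. A deep buffer trades drops for
    # queueing delay (and jitter); a shallow buffer trades delay for drops.
    def run_queue(burst_packets: int, service_time_ms: float, buffer_size: int):
        queued, dropped, worst_delay_ms = 0, 0, 0.0
        for _ in range(burst_packets):
            if queued < buffer_size:
                worst_delay_ms = queued * service_time_ms  # waits behind everything ahead of it
                queued += 1
            else:
                dropped += 1
        return worst_delay_ms, dropped

    for buffer_size in (16, 64, 1024):
        delay, dropped = run_queue(burst_packets=512, service_time_ms=0.05, buffer_size=buffer_size)
        print(f"buffer={buffer_size:4d} pkts  worst queueing delay={delay:6.2f} ms  dropped={dropped}")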

Automation 14. Deep dive into Building CI/CD for Network Automation and Software Development with GitHub Actions

Hello my friend,

We had been planning to write this blogpost for a few weeks, if not months, but for various reasons it was delayed. We are delighted to finally post it, so that you can get some useful ideas about how to build your own CI/CD pipeline with GitHub, probably the most popular platform for collaborative software development.

No part of this blogpost could be reproduced, stored in a
retrieval system, or transmitted in any form or by any
means, electronic, mechanical or photocopying, recording,
or otherwise, for commercial purposes without the
prior permission of the author.

I learned a lot of lessons about building CI/CD pipelines, and about the importance of unit testing and linting checks, from a colleague of mine, Leigh Anderson, to whom I’m very grateful.

CI/CD Overview

CI/CD is an approach that is very often used in software development and is increasingly discussed outside of that area. It stands for:

  • CI (Continuous Integration) is the process in which the created software (for the sake of simplicity, any piece of code) is automatically built and tested so that it is ready to be deployed (a minimal sketch of such a check follows this list).
  • CD (Continuous Deployment) is the process in which the software that is ready for deployment is actually deployed Continue reading
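
As a minimal illustration of the CI half (our own sketch, not the pipeline described in this post; render_hostname_config is a hypothetical stand-in for whatever templating logic your repository actually contains), the kind of check a CI job would run on every push is simply an automated test of the code that produces device configuration:

    # Minimal example of a unit test a CI job could run on every push.
    # render_hostname_config() is a hypothetical stand-in for the templating
    # logic a network automation repository would actually contain.
    def render_hostname_config(hostname: str) -> str:
        """Render a one-line hostname stanza for a network device."""
        if not hostname or " " in hostname:
            raise ValueError(f"invalid hostname: {hostname!r}")
        return f"hostname {hostname}"

    def test_renders_valid_hostname():
        assert render_hostname_config("leaf-01") == "hostname leaf-01"

    def test_rejects_hostname_with_spaces():
        try:
            render_hostname_config("bad name")
        except ValueError:
            pass
        else:
            raise AssertionError("expected a ValueError for an invalid hostname")

In a real pipeline, the GitHub Actions workflow would install the repository's dependencies and run something like pytest on every push or pull request, failing the build whenever an assertion breaks, so a broken change never reaches the deployment stage.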

Optics Are More Important Than Your Switches At 400G

This post originally appeared on the Packet Pushers’ Ignition site on January 9, 2020.   This slide from the Cisco Live BRKOPT-2006 presentation on “Preparing for 400 GbE” jumped out at me. I recommend you download the whole presentation and keep it for future reference. It’s an excellent resource with lots of useful information. Optics […]

The post Optics Are More Important Than Your Switches At 400G appeared first on Packet Pushers.

Worth Reading: Smart Highways or Smart Cars?

I stumbled upon an interesting article in one of my RSS feeds: should we build smart highways or smart cars?

The article eloquently explains how ridiculous and expensive it would be to put the smarts in the infrastructure, and why most everyone is focused on building smart cars. The same concepts should be applied to networking, but of course the networking vendors furiously disagree – the network should be as complex, irreplaceable, and expensive as possible. I collected a few examples seven years ago, and nothing has changed in the meantime.

Six Coaching Principles That Took Me Years to Learn

This post is overdue. Perhaps by a few years. Finally, earlier this week, I saw a few posts on Reddit that made me thumb through stacks of papers to find my initial draft. What follows is, at best, merely personal experience. I would call the lessons “established rules” if I had enough scientific […]

The post Six Coaching Principles That Took Me Years to Learn appeared first on Packet Pushers.

AMD Needs To Complete The Datacenter Set With Switching

In the past several decades, data processing and storage systems could be architected from best of breed components, and the market could – and did – sustain multiple suppliers of competing technologies in each of the categories of compute, networking, and storage.

AMD Needs To Complete The Datacenter Set With Switching was written by Timothy Prickett Morgan at The Next Platform.

Cisco announces plan to exit Russia and Belarus

Cisco has announced plans to formally exit Russia, winding down its business operations in Russia and Belarus in response to the invasion of Ukraine earlier this year. The networking company first made a statement on March 3, declaring that it would be halting all business operations in Russia and Belarus "for the foreseeable future." On Thursday the company released another statement, noting that it had continued to "closely monitor" the war in Ukraine and that, as a result, a decision had been made to "begin an orderly wind-down of our business in Russia and Belarus." To read this article in full, please click here