VMware brings on-prem cloud connectivity to vSphere, vSAN

VMware is upgrading its vSphere virtualization and vSAN hyperconverged software packages to better manage on-prem applications and meld them more efficiently with cloud-based resources. The company introduced two subscription-based offerings, vSphere+ and vSAN+, which build cloud connectivity into both packages, enabling cloud services for workloads running on vSphere while specifically targeting on-premises apps. The packages will include all necessary components, such as VMware vCenter instances, VMware ESXi hosts, Tanzu Standard Runtime, and Tanzu Mission Control Essentials, along with support.

Hertzbleed explained

You may have heard a bit about the Hertzbleed attack that was recently disclosed. Fortunately, one of the student researchers who was part of the team that discovered this vulnerability and developed the attack is spending this summer with Cloudflare Research and can help us understand it better.

The first thing to note is that Hertzbleed is a new type of side-channel attack that relies on changes in CPU frequency. Hertzbleed is a real, and practical, threat to the security of cryptographic software.

Should I be worried?

From the Hertzbleed website,

“If you are an ordinary user and not a cryptography engineer, probably not: you don’t need to apply a patch or change any configurations right now. If you are a cryptography engineer, read on. Also, if you are running a SIKE decapsulation server, make sure to deploy the mitigation described below.”

Notice: As of today, there is no known attack that uses Hertzbleed to target conventional and standardized cryptography, such as the encryption used in Cloudflare products and services. Having said that, let’s get into the details of processor frequency scaling to understand the core of this vulnerability.

In short, the Hertzbleed attack shows that, under certain […]
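To make the frequency-scaling point concrete, here is a purely illustrative Python sketch of the measurement side of such an attack: run a fixed number of operations on different inputs and observe only wall-clock time. Under dynamic voltage and frequency scaling, data that draws more power can lower the clock and stretch the runtime. The workload, inputs, and iteration count below are arbitrary choices, and interpreter noise means this toy will not reliably reproduce the effect on a real machine.

```python
import time

def timed_workload(value: int, iterations: int = 2_000_000) -> float:
    """Run a fixed number of 64-bit multiplications on `value` and return seconds."""
    start = time.perf_counter()
    acc = 1
    for _ in range(iterations):
        acc = (acc * value) & 0xFFFFFFFFFFFFFFFF  # same operation count for any input
    return time.perf_counter() - start

# Same instruction stream, different operand "weight": an all-ones operand
# toggles more bits per multiply and so draws more power, which a DVFS
# governor may answer with a lower frequency, i.e. a longer wall-clock time.
print("low-power input :", timed_workload(0x0000000000000001))
print("high-power input:", timed_workload(0xFFFFFFFFFFFFFFFF))
```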

Segment Routing IPv6 (SRv6) with FRR and Ubuntu

It is no secret that Segment Routing over MPLS offers a lot of promise and provides a simple path for network operators to migrate from existing LDP and RSVP-TE based networks. However, what if I told you that you could do even more with SR and not even run MPLS at all? What if then I told you that these nodes could be located anywhere with IPv6 access and physical adjacency is not even required?
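As a rough illustration of the "not even run MPLS at all" point, the sketch below (not from the article) uses plain iproute2 on a Linux host with SRv6 support to steer a prefix into an SRv6 encapsulation policy. The prefix, segment IDs, and interface name are placeholders, and the command needs root privileges.

```python
import subprocess

destination = "2001:db8:100::/64"                # prefix to steer into the SR policy
segments = "2001:db8:aaaa::1,2001:db8:bbbb::1"   # SIDs of the transit and egress nodes
egress_if = "eth0"                               # outgoing interface on this node

# Equivalent to: ip -6 route replace <dst> encap seg6 mode encap segs <sids> dev <if>
# The only requirement on the segment endpoints is IPv6 reachability; no MPLS
# labels or LDP/RSVP-TE state are involved.
subprocess.run(
    ["ip", "-6", "route", "replace", destination,
     "encap", "seg6", "mode", "encap", "segs", segments,
     "dev", egress_if],
    check=True,
)
```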

Tech Bytes: Maximize Network Data With Nokia’s Streaming Telemetry (Sponsored)

On today's Tech Bytes podcast we discuss the value of streaming telemetry in a modern network with sponsor Nokia. Nokia's SR-Linux network OS enables streaming telemetry, so let's dive into the value of telemetry, how the OS supports it, and options for consuming the telemetry to do useful things with it.

The post Tech Bytes: Maximize Network Data With Nokia’s Streaming Telemetry (Sponsored) appeared first on Packet Pushers.
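For listeners who want to try consuming streaming telemetry themselves, here is a minimal sketch that subscribes to interface counters over gNMI, which SR Linux exposes, using the pygnmi library. The hostname, port, credentials, and subscription path are assumptions for illustration, not values from the podcast.

```python
from pygnmi.client import gNMIclient, telemetryParser

# Sample interface statistics every 10 seconds over a streaming subscription.
subscribe = {
    "subscription": [
        {
            "path": "/interface[name=ethernet-1/1]/statistics",  # assumed SR Linux path
            "mode": "sample",
            "sample_interval": 10_000_000_000,  # nanoseconds
        }
    ],
    "mode": "stream",
    "encoding": "json_ietf",
}

# Target, port, and credentials are placeholders for a lab device.
with gNMIclient(target=("srl1", 57400), username="admin",
                password="admin", insecure=True) as gc:
    for message in gc.subscribe(subscribe=subscribe):
        print(telemetryParser(message))
```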

The Faster The Switch, The Cheaper Bit Flits

It may have taken a while for the transition to 200 Gb/sec and 400 Gb/sec networking to take off in the datacenter, but this higher gear of switching is finally kicking in and delivering unprecedented bang for the buck in networks, and in fairly short order, at least compared to the sluggish pace at which 100 Gb/sec Ethernet made its way into the datacenter.

The Faster The Switch, The Cheaper Bit Flits was written by Timothy Prickett Morgan at The Next Platform.

Ansible For Network Automation Lesson 3: Ansible Modules Overview – Video

In lesson 3 of this course about Ansible for network automation, Josh VanDeraa covers the lab environment used in this course, reviews the Ansible Network Modules documentation page, and looks at the parameters of an Ansible module to see what’s required and what the response will be. Josh has created a GitHub repo to store […]

The post Ansible For Network Automation Lesson 3: Ansible Modules Overview – Video appeared first on Packet Pushers.
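As a small complement to reviewing the documentation page by hand (and not part of the lesson itself), the sketch below asks ansible-doc for a module's documentation as JSON and prints which parameters are required. The module name is just an example, and the JSON layout can differ between Ansible versions.

```python
import json
import subprocess

module = "cisco.ios.ios_command"  # example module; any fully qualified name works

raw = subprocess.run(
    ["ansible-doc", "--json", module],
    capture_output=True, text=True, check=True,
).stdout

# ansible-doc keys its JSON output by module name; "options" describes each
# parameter, including whether it is required.
options = json.loads(raw).get(module, {}).get("doc", {}).get("options", {})
for name, meta in sorted(options.items()):
    print(f"{name}: required={meta.get('required', False)}")
```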

Ansible For Network Automation Lesson 4: Gathering Device Information – Video

In this installment of the series on Ansible and network automation, Josh VanDeraa looks at how to update an Ansible config file, gather data from various devices using command modules including IOS, and use ios_facts to get IOS-specific information from IOS devices. Josh has created a GitHub repo to store additional material, including links and […]

The post Ansible For Network Automation Lesson 4: Gathering Device Information – Video appeared first on Packet Pushers.
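For readers who prefer code to video, here is one way to run the same kind of ios_facts gathering from Python with ansible-runner rather than a playbook. This is not taken from the lesson: the inventory layout, group name, and connection variables are assumptions.

```python
import ansible_runner

# Ad hoc run of cisco.ios.ios_facts against an assumed "ios" inventory group;
# private_data_dir is expected to contain an inventory/ directory.
result = ansible_runner.run(
    private_data_dir=".",
    host_pattern="ios",
    module="cisco.ios.ios_facts",
    module_args="gather_subset=min",
    extravars={
        "ansible_connection": "ansible.netcommon.network_cli",
        "ansible_network_os": "cisco.ios.ios",
    },
)

print("status:", result.status, "rc:", result.rc)
for event in result.events:
    res = event.get("event_data", {}).get("res") or {}
    facts = res.get("ansible_facts", {})
    if facts:
        print(facts.get("ansible_net_hostname"), facts.get("ansible_net_version"))
```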

New PCI Express spec doubles the bit rate

The latest PCI Express (PCIe) specification again doubles the data rate over the previous spec. PCI Express 7.0 calls for a data rate of 128 gigatransfers per second (GT/s) and up to 512 GB/s bi-directionally over an x16 slot (not every PCI Express slot in a PC or server uses 16 transfer lanes), according to PCI-SIG, the industry group that maintains and develops the specification. The slower, previous spec, PCI Express 6.0, has yet to come to market, and doubling the rate with each version has become the norm.
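Those headline figures hang together with simple arithmetic on the raw signaling rate, ignoring flit encoding and FEC overhead, as this short sketch shows.

```python
# PCIe 7.0 back-of-the-envelope: 128 GT/s per lane, counted as 1 bit per transfer.
gt_per_s = 128                                # per lane, each direction
lanes = 16                                    # x16 slot
gbit_per_direction = gt_per_s * lanes         # 2048 Gb/s each way
gbyte_per_direction = gbit_per_direction / 8  # 256 GB/s each way
gbyte_bidirectional = gbyte_per_direction * 2 # both directions counted together

print(gbyte_per_direction, gbyte_bidirectional)  # 256.0 512.0
```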

What’s new: network automation with ansible.netcommon 3.0.0

With the recent release of the ansible.netcommon Collection version 3.0.0, we have promoted two features as standard: libssh transport and import_modules. These features provide increased performance and resiliency for your network automation. These two features, which have been available since July 2020 (libssh) and March 2021 (import_modules), are now turned on by default for playbooks running the latest version of the netcommon Collection, so let's take a look at what makes these changes so exciting!
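One quick way to confirm the new libssh default can actually take effect on a control node is to check that the ansible-pylibssh bindings are importable, since network_cli falls back to paramiko without them. A minimal sketch, with the package and module names taken from the ansible-pylibssh project:

```python
import importlib.util

# ansible-pylibssh provides the "pylibsshext" package that network_cli uses
# when ssh_type is libssh; without it the connection plugin falls back to paramiko.
if importlib.util.find_spec("pylibsshext"):
    print("ansible-pylibssh found: network_cli can use the libssh transport")
else:
    print("ansible-pylibssh missing: pip install ansible-pylibssh")
```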

The road to libssh

Libssh support was formally announced in November 2020 for FIPS mode compatibility and speed. This blog goes into great detail about why we started this change, and it's worth a read if you want to know more about how we use paramiko or libssh in the network_cli connection plugin. I'm going to try not to rehash everything from that post, but I do want to take a little time to revisit security and speed to show what libssh brings to the experience of using Ansible with network devices.

FIPS Mode

One of the earliest issues we identified with paramiko, our earlier SSH transport plugin, is that it was not FIPS 140 compliant. This meant that environments […]

New partner program for SMB agencies & hosting partners now in closed beta

A fundamental principle here at Cloudflare has always been that we want to serve everyone - from individual developers to small businesses to large corporations. In the earliest days, we provided services to hosting partners and resellers around the globe, who helped bring Cloudflare to thousands of domains with free caching and DDoS protection for shared infrastructures.

Today, we want to reinforce our commitment to our hosting ecosystem and small business partners that leverage Cloudflare to help bring a better Internet experience to their customers. We've been building a robust multi-tenant partner platform that we will begin to open up to everyone searching for a faster, safer, and better Internet experience. This platform will come in the form of a Self Serve Partner program that will allow SMB agencies & hosting partners to create accounts for all their customers under one dashboard, consolidate billing, and provide discounted plans to our partners.

Deprecation of our legacy APIs

To make way for the new, we first must discuss the end-of-life of some of Cloudflare’s earliest APIs. Built and launched in 2011, our Hosting and Optimized Partner Programs allowed our initial CDN and DDoS solutions to expand to brand-new audiences around the […]

Network performance update: Cloudflare One Week June 2022

In September 2021, we shared extensive benchmarking results of 1,000 networks all around the world. The results showed that on a range of tests (TCP connection time, time to first byte, time to last byte), and on different measures (p95, mean), Cloudflare was the fastest provider in 49% of the top 1,000 networks around the world.

Since then, we’ve expanded our testing to cover not just 1,000 but 3,000 networks, and we’ve worked to continuously improve performance, with the ultimate goal of being the fastest everywhere and an intermediate goal of growing the number of networks where we’re the fastest by at least 10% every Innovation Week. We met that goal during Platform Week (May 2022), and we’re carrying the work over to Cloudflare One Week (June 2022).

We’re excited to share that Cloudflare was the fastest provider in 1,290 of the top 3,000 most reported networks, up from 1,280 just one month ago during Platform Week.

Measuring what matters

To quantify global network performance, we have to get enough data from around the world, across all manner of different networks, comparing ourselves with other providers. We use Real User Measurements (RUM) to fetch a 100kB file from different providers.
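As a rough sketch of that kind of measurement (real RUM data comes from visitors' browsers, not a script), the snippet below fetches a small object over HTTPS and records connect time, time to first byte, and time to last byte, all measured from the start of the request. The URL is a placeholder.

```python
import http.client
import time
from urllib.parse import urlsplit

def measure(url: str) -> dict:
    """Fetch `url` once and return connect/TTFB/TTLB timings in milliseconds."""
    parts = urlsplit(url)
    t0 = time.perf_counter()
    conn = http.client.HTTPSConnection(parts.netloc, timeout=10)
    conn.connect()
    t_connect = time.perf_counter()
    conn.request("GET", parts.path or "/")
    resp = conn.getresponse()
    first = resp.read(1)              # first byte of the body arrives
    t_first = time.perf_counter()
    rest = resp.read()                # drain the rest of the body
    t_last = time.perf_counter()
    conn.close()
    return {
        "bytes": len(first) + len(rest),
        "connect_ms": (t_connect - t0) * 1000,
        "ttfb_ms": (t_first - t0) * 1000,
        "ttlb_ms": (t_last - t0) * 1000,
    }

print(measure("https://example.com/100kB.bin"))  # placeholder test object
```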

Identifying content gaps in our documentation

If you’ve tuned into this blog for long enough, you’ll notice that we’re pretty big on using and stress-testing our own products (“dogfooding”) at Cloudflare.

That applies to our security team, product teams, and – as my colleague Kristian just blogged about – even our documentation team. We’re incredibly excited to be on the Pages platform, both because of the performance and workflow improvements and the opportunity to help the platform develop.

What you probably haven’t heard about is how our docs team uses dogfooding – and data – to improve our documentation.

Dogfooding for docs

As a technical writer, it’s pretty common to do the thing you’re documenting. After all, it’s really hard to write step-by-step instructions if you haven’t been through those steps. It’s also a great opportunity to provide feedback to our product teams.

What’s not as common for a writer, however, is actually using the thing you’re documenting. And it’s totally understandable why. You’re already accountable to your deadlines and product managers, so you might not have the time. You might not have the technical background. And then there’s the whole problem of a real-world use case. If you’re really dedicated, you can set […]