Digital Experience Monitoring (DEM) is the topic on today's Heavy Networking. IT folks tend to view user experience from their own particular area of responsibility--networking, security, app development--but the reality is there's a common set of data that IT should consume and understand. Sponsor Catchpoint joins us to discuss its DEM platform and how it measures user experience using metrics that are relevant across the IT stack. Our guest is JP Blaho, Director, Product Marketing at Catchpoint.
The post Heavy Networking 557: User Experience Is A Full-Stack Responsibility (Sponsored) appeared first on Packet Pushers.
In case you missed it, Intel CEO Bob Swan is stepping down from his role effective February 15 and will be replaced by current VMware CEO Pat Gelsinger. Gelsinger was the former CTO at Intel for a number of years before leaving to run EMC and VMware. His return is a bright spot in an otherwise dismal past few months for the chip giant.
Why is Gelsinger’s return such a cause for celebration? The analysts that have been interviewed say that Intel has been in need of a technical leader for a while now. Swan came from the office of the CFO to run Intel on an interim basis after the resignation of Brian Krzanich. The past year has been a rough one for Intel, with delays in its new smaller chip manufacturing process and competition heating up not only from long-time rival AMD but also from new threats like the potential sale of ARM to NVIDIA. It’s a challenging course for any company captain to sail. However, I think one key thing makes it nigh impossible for Swan.
Swan is a manager. That’s not meant as a slight so much as an accurate label. Managers are people who have things and Continue reading
The Transport Layer Security protocol (TLS), which secures most Internet connections, has mainly been a protocol consisting of a key exchange, authenticated by digital signatures, used to encrypt data in transit[1]. Even though it has undergone major changes since 1994, when SSL 1.0 was introduced by Netscape, its main mechanism has remained the same. The key exchange was first based on RSA, and later on traditional Diffie-Hellman (DH) and Elliptic-curve Diffie-Hellman (ECDH). The signatures used for authentication have almost always been RSA-based, though in recent years other kinds of signatures have been adopted, mainly ECDSA and Ed25519. This recent change to elliptic curve cryptography at both the key exchange and the signature level has resulted in considerable speed and bandwidth benefits in comparison to traditional Diffie-Hellman and RSA.
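The core of the Diffie-Hellman idea described above can be sketched in a few lines. This is a toy finite-field DH, not the TLS wire protocol: the tiny prime below is an illustration only (real deployments use vetted groups such as the RFC 3526 MODP primes or elliptic curves, with constant-time implementations).

```python
import secrets

# Toy DH parameters: a small public prime and generator, chosen for
# readability, NOT security. TLS negotiates standardized groups instead.
p = 0xFFFFFFFB  # largest prime below 2**32
g = 5

a = secrets.randbelow(p - 2) + 2   # one party's private exponent
b = secrets.randbelow(p - 2) + 2   # the other party's private exponent

A = pow(g, a, p)  # public value sent by the first party
B = pow(g, b, p)  # public value sent by the second party

# Each side combines its private exponent with the peer's public value;
# both arrive at the same shared secret, which then keys the encryption.
shared_1 = pow(B, a, p)
shared_2 = pow(A, b, p)
assert shared_1 == shared_2
```

In TLS the server's public value is additionally authenticated with a digital signature (RSA, ECDSA, or Ed25519) tied to its certificate, which is what stops an active attacker from substituting their own exchange values.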
TLS is the main protocol that protects the connections we use everyday. It’s everywhere: we use it when we buy products online, when we register for a newsletter — when we access any kind of website, IoT device, API for mobile apps and more, really. But with the imminent threat of the arrival of quantum computers (a threat that seems to be getting closer and closer), we need Continue reading
After discussing the technology options one has when trying to get a packet across the network, we dived deep into two interesting topics:
You’ll find more details (including other hybrids like Loose Source Routing) in Multi-Layer Switching and Tunneling video.
On the 22nd, I’m giving a three-hour course called How the Internet Really Works. I tried making this into a four-hour course, but found I still have too much material, so I’ve split the webinar into two parts; the second part will be given in February. This part is about how systems work, who pays for what, and other higher-level stuff. The second part will be all about navigating the DFZ. From the Safari Books site:
This training is designed for beginning engineers who do not understand the operation of the Internet, experienced engineers who want to “fill in the gaps,” project managers, coders, and anyone else who interacts with the Internet and wants to better understand the various parts of this complex, global ecosystem.
Today's IPv6 Buzz episode examines the state of IPv6 in the public cloud, including capabilities and limitations with current v6 support in AWS and Azure, ongoing customer demand for v4, and more. Our guest is Ivan Pepelnjak.
The post IPv6 Buzz 067: IPv6 In The Cloud With Ivan Pepelnjak appeared first on Packet Pushers.
Tyler McDaniel joins Eyvonne, Tom, and Russ to discuss a study on BGP peerlocking, which is designed to prevent route leaks in the global Internet. From the study abstract:
Early last year, as people across the world quarantined to slow the spread of the COVID-19 virus, the Internet became critical to maintaining a semblance of routine and getting the latest lifesaving information. But there was a stark reality. Those without Internet access would have to get by without this vital resource amidst a global pandemic.
Internet Society volunteers around the world understood the gravity of the situation. They jumped in to enable secure access.
In North America, NYC Mesh, a community network supported by the Internet Society, rushed to connect as many households as they could. While it was still safe to do so, they crossed rooftops to bring connectivity to some of the city’s most underserved.
In Europe, the Internet Society Italy Chapter launched SOSDigitale to mobilize resources and volunteers to respond to urgent technology gaps. The Portugal Chapter followed with their own SOS Digital campaign to donate computers and digital support to at-risk youth.
And in Latin America, residents of El Cuy, in remote Patagonia, Argentina, were able to reduce their potential exposure to COVID-19 via their newly-established community network, accessing medical prescriptions, education, banking, and government resources online.
No one could have predicted the events of 2020. Continue reading
I love building products that solve real problems for our customers. These days I don’t get to do so as much directly with our Engineering teams. Instead, about half my time is spent with customers listening to and learning from their security challenges, while the other half of my time is spent with other Cloudflare Product Managers (PMs) helping them solve these customer challenges as simply and elegantly as possible. While I miss the deeply technical engineering discussions, I am proud to have the opportunity to look back every year on all that we’ve shipped across our application security teams.
Taking the time to reflect on what we’ve delivered also helps to reinforce my belief in the Cloudflare approach to shipping product: release early, stay close to customers for feedback, and iterate quickly to deliver incremental value. To borrow a term from the investment world, this approach brings the benefits of compounded returns to our customers: we put new products that solve real-world problems into their hands as quickly as possible, and then reinvest the proceeds of our shared learnings immediately back into the product.
It is these sustained investments that allow us to release a flurry of small improvements Continue reading
Serving approximately 25 million Internet properties is not an easy thing, and neither is serving 20 million requests per second on average. At Cloudflare, we achieve this by running a homogeneous edge environment: almost every Cloudflare server runs all Cloudflare products.
As we offer more and more products and enjoy the benefit of horizontal scalability, our edge stack continues to grow in complexity. Originally, we only operated at the application layer with our CDN service and DoS protection. Then we launched transport layer products, such as Spectrum and Argo. Now we have further expanded our footprint into the IP layer and physical link with Magic Transit. They all run on every machine we have. The work of our engineers enables our products to evolve at a fast pace, and to serve our customers better.
However, such software complexity presents a serious operational challenge: the more changes you make, the more likely it is that something is going to break. Continue reading
The post Tier 1 Carriers Performance Report: December, 2020 appeared first on Noction.
When you want to transport a complex data structure between components of a distributed system, you usually use a platform-independent data encoding format like XML, YAML, or JSON.
XML was the hip encoding format in the days when Junos and Cisco Nexus OS were designed, but it has since lost most of its popularity due to the complexity (attributes, namespaces…) that makes XML documents hard to deal with in most programming languages.
JSON is the new cool kid on the block. It’s less complex than XML, maps better into data structures supported by modern programming languages, and has decently fast parser implementations.