Archive

Category Archives for "Networking"

Writing an API at the Edge with Workers and Cloud Firestore

We’re super stoked about bringing you Workers.dev, and we’re even more stoked at every opportunity we have to dogfood Workers. Using what we create keeps us tuned in to the developer experience, which takes a good deal of guesswork out of drawing our roadmaps.

Our goal with Workers.dev is to provide a way to deploy JavaScript code to our network of 165 data centers without requiring developers to register a domain with Cloudflare first. While we gear up for general availability, we wanted to provide users an opportunity to reserve their favorite subdomain in a fair and consistent way, so we built a system to allow visitors to reserve a subdomain where their Workers will live once Workers.dev is released. This is the story of how we wrote the system backing that submission process.

Requirements

Of course, we always want to use the best tool for the job, so designing the Workers that would back Workers.dev started with an inventory of constraints and user experience expectations:

Constraints

  1. We want to limit reservations to one per email address. It’s no fun if someone writes a bot to claim every good Workers subdomain in ten seconds; they Continue reading
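
To make the shape of such a system concrete, here is a minimal TypeScript sketch of what a reservation Worker could look like. The handler structure, helper names, and status codes are all my illustration – not Cloudflare's actual code – and the helpers stand in for calls to the Firestore REST API.

```typescript
// Minimal sketch of a reservation endpoint running as a Cloudflare Worker.
// Helper names (reservationExists, subdomainTaken, saveReservation) are
// hypothetical stand-ins for queries against the Firestore REST API.
declare function reservationExists(email: string): Promise<boolean>;
declare function subdomainTaken(subdomain: string): Promise<boolean>;
declare function saveReservation(email: string, subdomain: string): Promise<void>;

addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(handleReservation(event.request));
});

async function handleReservation(request: Request): Promise<Response> {
  if (request.method !== 'POST') {
    return new Response('Method not allowed', { status: 405 });
  }
  const { email, subdomain } = await request.json();

  // Constraint 1: one reservation per email address.
  if (await reservationExists(email)) {
    return new Response('That email already holds a reservation', { status: 409 });
  }
  // First come, first served on the subdomain itself.
  if (await subdomainTaken(subdomain)) {
    return new Response('That subdomain is taken', { status: 409 });
  }

  await saveReservation(email, subdomain);
  return new Response('Reserved!', { status: 201 });
}
```

Note that the check-then-write above has an obvious race; a real implementation would wrap the lookup and the write in a single Firestore transaction.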

Buzzword bingo: NetDevOps edition

Looking at the marketing landscape for IT, you could be forgiven for thinking that the current strategy was to dynamite a word factory and use the resulting debris as marketing content. DevSecOps. NetDevOps. Ops, ops, spam, eggs, spam, and DevSpamOps.

The naming trend lends itself easily to parody, but it began as shorthand for an attempt to solve real IT problems. And its iterations have more in common than a resemblance to alphabet salad. What lies beneath the buzzwords? And do you need to care?

Countless companies have jumped on the NetDevOps bandwagon, each with its own way of doing things, and most are utterly incompatible with everyone else's. Some may have already abandoned the NetDevOps craze, believing it to be nothing but marketing hype wrapped around a YAML parser and some scripts. Others might have found a system that works for them and swear by it, using nothing else for provisioning.

Regardless of views, a system that allows for rapid provisioning and re-provisioning of applications, containers, virtual machines, and network infrastructure is paramount.

Ministry of Silly Names: A History

The modern era of namesmashing started with DevOps. This made a sort of sense because, before this, IT had Continue reading

The Blogging Mirror

Writing isn’t always the easiest thing in the world to do. Coming up with topics is hard, but so too is turning those topics into a blog post. I find myself getting briefings on a variety of subjects all the time, especially when it comes to networking. But translating those briefings into blog posts isn’t always straightforward. When I find myself stuck and ready to throw in the towel, I find it easy to think about things backwards.

A World Of Pure Imagination

When people plan blog posts, they often think about things in a top-down manner. They come up with a catchy title, then an amusing anecdote to open the post. Then they hit the main idea, find a couple of supporting arguments, and then finally they write a conclusion that ties it all together. Sound like a winning formula?

Except when it isn’t. How about when the title doesn’t reflect the content of the post? Or the anecdote or lead in doesn’t quite fit with the overall tone? How about when the blog starts meandering away from the main idea halfway through with a totally separate argument? Or when the conclusion is actually the place where the Continue reading

Nvidia launches new hardware and software for on-prem and cloud providers

Nvidia used its GPU Technology Conference in San Jose to introduce new blade servers for on-premises use and announce new cloud AI acceleration. The RTX Blade Server packs up to 40 Turing-generation GPUs into an 8U enclosure, and multiple enclosures can be combined into a "pod" with up to 1,280 GPUs (32 enclosures) working as a single system, using Mellanox technology as the storage and networking interconnect – which likely explains why Nvidia is paying close to $7 billion for Mellanox. Rather than AI, where Nvidia has become a leader, the RTX Blade Server is positioned for 3D rendering, ray tracing, and cloud gaming. The company said this setup will enable the rendering of realistic-looking 3D images in real time for VR and AR.

Creating Automation Source-of-Truth from Device Configurations

Remember the previous blog post in this sequence, in which I explained the need for a single source-of-truth in your network automation solution? No? Please read it first ;)

Ready for the next step? Assuming your sole source-of-truth is the actual device configuration, is there a magic mechanism we can use to transform it into something we could use in network automation?

TL&DR: No.
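
To make that "No" concrete, here is a toy TypeScript sketch (my illustration, not from the post). Extracting facts from a configuration is mechanical; the problem is that the result records what is configured, never why – and the why is precisely what an automation source-of-truth has to carry.

```typescript
// Toy illustration: pulling "facts" out of a device configuration is the
// easy part. The parse below recovers an interface's name, description,
// and IP address, but not WHY it has them (point-to-point link? loopback?
// customer VLAN?) -- the intent a source-of-truth must capture.

const config = `
interface GigabitEthernet0/1
 description uplink-to-core
 ip address 192.0.2.1 255.255.255.252
`;

interface InterfaceFacts {
  name: string;
  description?: string;
  ipAddress?: string;
}

function parseInterface(text: string): InterfaceFacts {
  const name = /interface (\S+)/.exec(text)?.[1] ?? 'unknown';
  const description = /description (.+)/.exec(text)?.[1];
  const ipAddress = /ip address (\S+ \S+)/.exec(text)?.[1];
  return { name, description, ipAddress };
}

console.log(parseInterface(config));
// => { name: 'GigabitEthernet0/1', description: 'uplink-to-core',
//      ipAddress: '192.0.2.1 255.255.255.252' }
```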

Read more ...

Quantum computing will break your encryption in a few years

Modern public-key encryption is currently good enough to meet enterprise requirements, according to experts; most cyberattacks target other parts of the security stack these days – unwary users in particular. Yet this stalwart building block of present-day computing is about to be eroded by the advent of quantum computing within the next decade. “About 99% of online encryption is vulnerable to quantum computers,” said Mark Jackson, scientific lead for Cambridge Quantum Computing, at the Inside Quantum Technology conference in Boston on Wednesday. Quantum computers – those that use the principles of quantum entanglement and superposition to represent information, instead of electrical bits – can perform certain types of calculation orders of magnitude more quickly than classical electronic computers. They are more or less fringe technology in 2019, but their development has accelerated in recent years, and experts at the IQT conference say a spike in deployment could occur as soon as 2024.
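
The reason for that 99% figure is that the dominant public-key schemes lean on problems – factoring for RSA, discrete logarithms for Diffie-Hellman and elliptic curves – that Shor's algorithm solves efficiently on a quantum computer. Here is a toy TypeScript illustration (my own, with absurdly small numbers) of how completely RSA collapses once an attacker can factor:

```typescript
// Toy RSA with absurdly small numbers, to show why factoring = breaking.
// Real keys use 2048+ bit moduli; Shor's algorithm would factor those in
// polynomial time on a large enough quantum computer.

const p = 61, q = 53;          // secret primes
const n = p * q;               // public modulus (3233)
const e = 17;                  // public exponent

// Classical attack: trial division is O(sqrt(n)) here, but utterly
// infeasible for a 2048-bit modulus. Shor's algorithm changes that.
function factor(n: number): [number, number] {
  for (let i = 2; i * i <= n; i++) {
    if (n % i === 0) return [i, n / i];
  }
  throw new Error('prime');
}

const [pp, qq] = factor(n);
// Knowing the factors, the attacker derives the private exponent d from
// e * d ≡ 1 (mod (p-1)(q-1)) -- the whole private key falls out.
const phi = (pp - 1) * (qq - 1);
let d = 1;
while ((e * d) % phi !== 1) d++;
console.log({ n, factors: [pp, qq], privateExponent: d }); // d = 2753
```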

Data center fiber to jump to 800 gigabits in 2019

The upper limits on fiber capacity haven't been reached just yet. Two announcements made around an optical-fiber conference and trade show in San Diego indicate continued progress in squeezing more data into fiber. In the first, researchers say they’ve pushed 26.2 terabits per second over the roughly 4,000-mile trans-Atlantic MAREA cable in an experiment; in the second, networking company Ciena says it will start deliveries of an 800 gigabit-per-second, single-wavelength throughput system in Q3 2019. MAREA, Spanish for “tide,” is the Telefónica-operated cable running between Virginia Beach, Va., and Bilbao in Spain. The cable, which went into service a year ago, is designed to handle 160 terabits of data per second through its eight 20-terabit pairs. Each of those pairs is big enough to carry 4 million high-definition video streams at the same time, network provider Infinera explains in a press release published for the Optical Fiber Conference and Exhibition.
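
As a quick sanity check on that last figure (my arithmetic; the 5 Mbit/s HD bitrate is an assumption, not from the article):

```typescript
// Back-of-the-envelope check of the "4 million HD streams per fiber pair"
// claim. Only the 20 Tbps pair capacity comes from the article.

const pairCapacityBps = 20e12;   // one MAREA fiber pair: 20 terabits/s
const hdStreamBps = 5e6;         // ~5 Mbit/s is a typical HD video bitrate

const streamsPerPair = pairCapacityBps / hdStreamBps;
console.log(streamsPerPair.toLocaleString());       // 4,000,000

// Across all eight pairs (160 Tbps design capacity): 32 million streams.
console.log((8 * streamsPerPair).toLocaleString());
```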

Startups introduce new liquid cooling designs

With the increase in compute density making air cooling less and less feasible, liquid cooling is going mainstream in data centers; overclockers have been doing it for years. For the most part, liquid cooling involves piping cooled water to a heat sink attached to the CPU. The water cools the heat sink and is then pumped away to be cooled down and recirculated. In some cases, though, immersion is used, where the entire motherboard is submerged in a nonvolatile liquid. Immersion is reserved for the most extreme cases, with the highest compute density and performance, and for a variety of reasons it isn’t widely used. One startup that hopes to change that showed its wares at the Open Compute Project Summit 2019, held last week in San Jose. The OCP has a special project called Advanced Cooling Solutions to promote liquid cooling and other advanced cooling approaches.
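
To get a feel for why water is so effective, here is the standard heat-balance relation Q = ṁ·c·ΔT turned into a quick calculation; every number in it is illustrative, not from the article:

```typescript
// How much water flow does one hot chip need? Standard heat balance:
//   Q = mdot * c * deltaT   =>   mdot = Q / (c * deltaT)
// All inputs below are illustrative assumptions.

const heatLoadW = 300;     // hypothetical CPU/accelerator heat load, W
const cWater = 4186;       // specific heat of water, J/(kg*K)
const deltaT = 10;         // allowed coolant temperature rise, K

const massFlowKgPerS = heatLoadW / (cWater * deltaT);  // ~0.0072 kg/s
const litersPerMinute = massFlowKgPerS * 60;           // ~0.43 L/min of water
console.log({ massFlowKgPerS, litersPerMinute });
```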

Coming Together for an All-Inclusive and Accessible Internet in South Asia

Last year, at the Internet Society Asia-Pacific and Middle-East Chapters Meeting, I was introduced to the series of easily digestible and thought-provoking issue papers published by the Internet Society. The one on digital accessibility, in particular, had me shaking my head in disbelief: it stated that one in six people in the Asia-Pacific region lives with disability – a total of about 650 million people.

The Internet Society Pakistan Islamabad Chapter had always been active in promoting digital accessibility, but I realized that we need to do more, especially at the transnational level. Thus, the idea of organizing a regional forum on digital accessibility was born, and with support from the Internet Society Asia-Pacific Bureau, it became a reality.

The Regional Forum on Digital Accessibility was successfully held on 7 February in Islamabad. It brought together 120 participants, including Internet Society Chapter leaders from Afghanistan and Nepal, fellows from Sri Lanka, and speakers from India.

A major achievement emerging from the forum was the vow from Pakistan’s high-level government officials to include representation of persons with disabilities in the recently-established Prime Minister’s Task Force on Information Technology (IT) and Telecom that is developing a roadmap for Pakistan’s digital transformation. There was Continue reading