It wasn’t so long ago that only supercomputing centers had to resort to fancy cooling technology to keep their systems running smoothly and at peak performance. … “Dealing With Density In The Datacenter And Beyond”
Writing isn’t always the easiest thing in the world to do. Coming up with topics is hard, but so too is turning those topics into a blog post. I find myself getting briefings on a variety of subjects all the time, especially when it comes to networking. But translating those briefings into blog posts isn’t always straightforward. When I find myself stuck and ready to throw in the towel, it helps to think about things backwards.
A World Of Pure Imagination
When people plan blog posts, they often think about things in a top-down manner. They come up with a catchy title, then an amusing anecdote to open the post. Then they hit the main idea, find a couple of supporting arguments, and finally write a conclusion that ties it all together. Sound like a winning formula?
Except when it isn’t. How about when the title doesn’t reflect the content of the post? Or the anecdote or lead-in doesn’t quite fit the overall tone? How about when the post starts meandering away from the main idea halfway through with a totally separate argument? Or when the conclusion is actually the place where the Continue reading
Nvidia used its GPU Technology Conference in San Jose to introduce new blade servers for on-premises use and announce new cloud AI acceleration. The RTX Blade Server packs up to 40 Turing-generation GPUs into an 8U enclosure, and multiple enclosures can be combined into a "pod" with up to 1,280 GPUs working as a single system, using Mellanox technology as the storage and networking interconnect, which likely explains why Nvidia is paying close to $7 billion for Mellanox. Instead of AI, where Nvidia has become a leader, the RTX Blade Server is positioned for 3D rendering, ray tracing and cloud gaming. The company said this setup will enable the rendering of realistic-looking 3D images in real time for VR and AR. To read this article in full, please click here
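Those pod numbers invite a quick back-of-envelope check; the enclosure count and rack-unit total below are my own arithmetic from the figures quoted above, not specifications Nvidia has published.

```python
# Rough arithmetic based on the figures quoted above.
gpus_per_enclosure = 40       # up to 40 Turing GPUs per 8U enclosure
gpus_per_pod = 1280           # up to 1,280 GPUs acting as one system

enclosures_per_pod = gpus_per_pod // gpus_per_enclosure   # 32 enclosures
rack_units_per_pod = enclosures_per_pod * 8               # 256U of enclosure space

# 256U is several racks' worth of gear before networking and storage are added,
# which is where the Mellanox interconnect comes in.
print(enclosures_per_pod, rack_units_per_pod)              # 32 256
```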
Ready for the next step? Assuming your sole source of truth is the actual device configuration, is there a magic mechanism we can use to transform it into something we could use in network automation?
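There is no single magic mechanism, but the usual first step is parsing the running configuration into structured data that automation tooling can consume. Below is a minimal sketch of that idea using only the Python standard library; the config snippet, field names, and parsing rules are illustrative assumptions rather than any vendor's actual format.

```python
import re

# Hypothetical running-config fragment, for illustration only.
running_config = """
interface GigabitEthernet0/1
 description uplink-to-core
 ip address 192.0.2.1 255.255.255.252
interface GigabitEthernet0/2
 description access-port
 shutdown
"""

def parse_interfaces(config: str) -> dict:
    """Turn flat configuration text into a dict keyed by interface name."""
    interfaces, current = {}, None
    for line in config.splitlines():
        if m := re.match(r"^interface (\S+)", line):
            current = m.group(1)
            interfaces[current] = {"enabled": True}
        elif current and (m := re.match(r"^\s+description (.+)", line)):
            interfaces[current]["description"] = m.group(1)
        elif current and (m := re.match(r"^\s+ip address (\S+) (\S+)", line)):
            interfaces[current]["ipv4"] = f"{m.group(1)} {m.group(2)}"
        elif current and re.match(r"^\s+shutdown", line):
            interfaces[current]["enabled"] = False
    return interfaces

# Once the text is a data structure, it can feed templates, validation
# checks, or an IPAM/CMDB -- which is what lets it act as a source of truth.
print(parse_interfaces(running_config))
```

Real deployments lean on purpose-built parsers and vendor APIs rather than hand-rolled regexes, but the transformation from flat text to a data model is the same idea.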
Modern public-key encryption is currently good enough to meet enterprise requirements, according to experts. Most cyberattacks target different parts of the security stack these days – unwary users in particular. Yet this stalwart building block of present-day computing is about to be eroded by the advent of quantum computing within the next decade. “About 99% of online encryption is vulnerable to quantum computers,” said Mark Jackson, scientific lead for Cambridge Quantum Computing, at the Inside Quantum Technology conference in Boston on Wednesday.
Quantum computers – those that use the principles of quantum entanglement and superposition to represent information, instead of electrical bits – are capable of performing certain types of calculation orders of magnitude more quickly than classical, electronic computers. They’re more or less fringe technology in 2019, but their development has accelerated in recent years, and experts at the IQT conference say that a spike in deployment could occur as soon as 2024. To read this article in full, please click here
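To see why the public-key layer in particular is exposed, it helps to remember what it rests on. A toy sketch follows (textbook RSA with tiny primes, purely illustrative and nothing like a production implementation):

```python
# Toy RSA with tiny primes, only to show that the private key falls out of
# factoring the public modulus -- the exact step Shor's algorithm makes cheap.
p, q = 61, 53
n, e = p * q, 17                  # public key: modulus n = 3233, exponent e
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)               # private exponent, trivial once p and q are known

msg = 42
cipher = pow(msg, e, n)           # encrypt with the public key
assert pow(cipher, d, n) == msg   # decrypt with the private key

# An attacker who can factor n back into 61 * 53 recomputes d the same way.
# Classically that is infeasible for 2048-bit moduli; Shor's algorithm does it
# in polynomial time, which is why so much of today's online encryption is
# considered exposed. Symmetric ciphers fare better: Grover's search only
# halves their effective key length.
```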
As one of the dominant hyperscalers in the world, Microsoft is out there on the cutting edge, driving efficiencies on every front it can in server, storage, switching, software, and datacenter design. … “On The Hot Seat In The Hyperscale Datacenter”
The upper limits on fiber capacity haven't been reached just yet. Two announcements made around an optical-fiber conference and trade show in San Diego recently indicate continued progress in squeezing more data into fiber. In the first, researchers say they’ve achieved 26.2 terabits per second in an experiment over the roughly 4,000-mile-long trans-Atlantic MAREA cable; in the second, networking company Ciena says it will start deliveries of an 800 gigabit-per-second, single-wavelength transmission system in Q3 2019.
High-speed laser
MAREA, Spanish for “tide,” is the Telefónica-operated cable running between Virginia Beach, Va., and Bilbao in Spain. The cable, which entered operation a year ago, is designed to handle 160 terabits of data per second through its eight 20-terabit pairs. Each one of those pairs is thus big enough to carry 4 million high-definition videos at the same time, network provider Infinera explains in a press release published for the Optical Fiber Conference and Exhibition. To read this article in full, please click here
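Those headline figures check out with simple arithmetic; the per-stream bitrate below is my own back-of-envelope assumption rather than a number from the article.

```python
# Back-of-envelope check of the capacity figures quoted above.
pairs = 8
design_tbps_per_pair = 20
print(pairs * design_tbps_per_pair)              # 160 Tbps total design capacity

# If one pair carries 4 million HD streams, each stream gets:
hd_streams = 4_000_000
mbps_per_stream = design_tbps_per_pair * 1_000_000 / hd_streams
print(mbps_per_stream)                           # 5.0 Mbps -- a plausible HD bitrate

# The 26.2 Tbps experiment exceeds the 20 Tbps per-pair design figure by ~31%.
print(round(26.2 / design_tbps_per_pair - 1, 2)) # 0.31
```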
With the increase in compute density making air cooling less and less feasible, liquid cooling is going mainstream in data centers; overclockers have been doing it for years. For the most part, liquid cooling involves piping chilled water to a heat sink attached to the CPU. The water cools the heat sink and is then pumped away to be circulated and cooled down again. But there are some cases where immersion is used, where the entire motherboard is submerged in a dielectric (electrically non-conductive) liquid. Immersion is used in only the most extreme cases, with the highest compute density and performance. For a variety of reasons, it isn’t widely used. One startup that hopes to change that showed its wares at the Open Compute Project Summit 2019, which ran last week in San Jose. The OCP has a special project called Advanced Cooling Solutions to promote liquid cooling and other advanced cooling approaches. To read this article in full, please click here
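The case for liquid comes down to basic heat-transfer arithmetic: a coolant stream removes roughly Q = mass flow × specific heat × temperature rise, and water carries far more heat per unit of flow than air. A rough sketch with illustrative numbers of my own, not figures from the article:

```python
# Q = mass_flow * c_p * delta_T : heat carried away by a coolant stream.
# Illustrative values only.
C_P_WATER = 4186    # J/(kg*K)
C_P_AIR   = 1005    # J/(kg*K)
RHO_WATER = 997     # kg/m^3
RHO_AIR   = 1.2     # kg/m^3

def heat_removed_watts(flow_litres_per_s, rho, c_p, delta_t_kelvin):
    """Heat (W) removed by a coolant flow given its temperature rise."""
    mass_flow = flow_litres_per_s / 1000 * rho   # kg/s
    return mass_flow * c_p * delta_t_kelvin

# A modest 0.5 L/s water loop with a 10 K temperature rise:
print(round(heat_removed_watts(0.5, RHO_WATER, C_P_WATER, 10)))  # ~20,900 W
# The same volumetric flow of air removes a few watts at best:
print(round(heat_removed_watts(0.5, RHO_AIR, C_P_AIR, 10), 1))   # ~6.0 W
```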
In the modern distributed computing world, which gets ever more disaggregated – and some might say discombobulated – with every passing day, the architecture of the network in the datacenter is arguably the most important factor in determining whether applications will perform well or not. … “How To Benefit From Facebook’s New Network Fabric”
Last year, at the Internet Society Asia-Pacific and Middle-East Chapters Meeting, I was introduced to the series of easily digestible and thought-provoking issue papers published by the Internet Society. In particular, the one on digital accessibility had me shaking my head in disbelief: it stated that one in six people in the Asia-Pacific region lives with a disability – a total of about 650 million people.
The Internet Society Pakistan Islamabad Chapter has always been active in promoting digital accessibility, but I realized that we needed to do more, especially at the transnational level. Thus the idea of organizing a regional forum on digital accessibility was born, and with support from the Internet Society Asia-Pacific Bureau, it became a reality.
The Regional Forum on Digital Accessibility was successfully held on 7 February in Islamabad. It brought together 120 participants, including Internet Society Chapter leaders from Afghanistan and Nepal, fellows from Sri Lanka, and speakers from India.
A major achievement emerging from the forum was the vow by high-level Pakistani government officials to include representation of persons with disabilities in the recently established Prime Minister’s Task Force on Information Technology (IT) and Telecom, which is developing a roadmap for Pakistan’s digital transformation. There was Continue reading