We often take dynamic routing protocol failover for granted, but a lot of complexity goes into ensuring resilient, loop-free alternate paths. In this episode of History of Networking, Alia Atlas joins Network Collective to talk about her contributions to IP fast reroute.
Outro Music:
Danger Storm Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/
The post History Of Networking – Alia Atlas – Fast Reroute appeared first on Network Collective.
It includes features for containers, vGPUs, and software-defined storage.
Containers are also part of the equation, supporting microservices use cases.
Learn how to control which interfaces you see when using the network analyzer for troubleshooting.
There is no question right now that if you have a big computing job in high performance computing (the colloquial name for traditional massively parallel simulation and modeling applications) or in machine learning (the set of statistical analysis routines with feedback loops that can perform identification and transformation tasks that used to be solely the realm of humans), then an Nvidia GPU accelerator is the engine of choice to run that work at the best efficiency.
It is usually difficult to make such clean proclamations in the IT industry, with so many different kinds of …
The Engine Of HPC And Machine Learning was written by Timothy Prickett Morgan at The Next Platform.
In the Network Automation 101 webinar and the Building Network Automation Solutions online course I described one of the biggest challenges networking engineers face today: moving from thinking about boxes and configuring individual devices to thinking about infrastructure and services, where changing a data model results in changed device configurations.
The $1B question, obviously, is: how do we get from here to there?
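To make the "services, not boxes" idea above concrete, here is a minimal sketch of turning a single service-level data model into per-device configurations. Everything in it (the service model, device names, and the CLI syntax it emits) is an invented example, not anything from the webinar or course.

```python
# Hypothetical service-level data model: one "guest-wifi" service that
# spans two switches. Changing this one model should change the
# configuration of every device it touches.
service = {
    "name": "guest-wifi",
    "vlan": 200,
    "devices": {
        "sw1": ["GigabitEthernet0/1", "GigabitEthernet0/2"],
        "sw2": ["GigabitEthernet0/1"],
    },
}

def render_device_config(device: str, model: dict) -> str:
    """Render the device-level configuration implied by the service model."""
    lines = [f"vlan {model['vlan']}", f" name {model['name']}"]
    for intf in model["devices"][device]:
        lines.append(f"interface {intf}")
        lines.append(f" switchport access vlan {model['vlan']}")
    return "\n".join(lines)

# The operator edits only the service model; per-box configs are derived.
print(render_device_config("sw1", service))
```

In a real toolchain the rendering step would typically be a template engine plus a configuration-push mechanism, but the shape of the problem is the same: the data model is the source of truth, and device configurations are a derived artifact.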
The vendor reported $542 million in revenue, a 28 percent year-over-year increase.
Artificial intelligence and machine and deep learning have to be among the most popular tech terms of the past few years, and I was hoping I wouldn't get swept away by the hype. But when I heard a panel on the state of AI networks at our customer event this month, I found it incredibly fascinating and it piqued my curiosity! Let me start with a disclaimer for readers expecting a deep tutorial: there is a vast amount of research behind the models and algorithms in this field that I will not even attempt to cover. Instead, I will share some thoughts on the practical relevance of this promising field.
On today’s episode of “The Interview” with The Next Platform we talk about the use of petascale supercomputers for training deep learning algorithms. More specifically, how this is happening in astronomy to enable real-time analysis of LIGO detector data.
We are joined by Daniel George, a researcher in the Gravity Group at the National Center for Supercomputing Applications, or NCSA. His team garnered a great deal of attention at the annual supercomputing conference in November with work blending traditional HPC simulation data and deep learning.
George and his team have shown that deep learning with convolutional neural networks can provide many …
Deep Learning on HPC Systems for Astronomy was written by Nicole Hemsoth at The Next Platform.