Category Archives for "Networking"

Lessons learned from Black Friday and Cyber Monday

If you’re a consumer-facing business, Black Friday and Cyber Monday are D-Day for IT operations. Even low-end estimates indicate that upwards of 20% of a company’s annual revenue can come in during these two days. The stakes are even higher if you’re a payment processor, since you aggregate purchases across all of those consumer businesses. That makes remaining available throughout the crucial 96-hour window from Black Friday through Cyber Monday paramount.

My colleague David and I have spent the past 10 months preparing for this day. In January 2018, we started a new deployment with a large payment processor to help them build out capacity for their projected 2018 holiday payment growth. Our goal was to build a brand-new, 11-rack data center as a third region to supplement the two existing regions used for payment processing. In addition, we helped deploy additional Cumulus racks and capacity at those two existing regions, which had historically been built with traditional vendors.

Now that both days have come and gone, read on to find out what we learned from this experience.

Server Interop Testing

Payment processing rests most of its weight on the payment applications running in the data center. As with Continue reading

How my team wrote 12 Cloudflare apps with fewer than 20 lines of code

This is a guest post by Ben Ross. Ben is a Berkeley PhD, serial entrepreneur, and Founder and CTO of POWr.io, where he spends his days helping small businesses grow online.

I like my code the same way I like my team of POWr Rangers: DRY.

And no, I don’t mean dull and unexciting! (If you haven’t heard this acronym before, DRY stands for Don’t Repeat Yourself, the single most important principle in software engineering. Because, as a mentor once told me, “when someone needs to re-write your code, at least they only need to do it once.”)
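To make the acronym concrete, here is a minimal, hypothetical Python sketch of DRY in action (none of it is POWr’s actual code; the widget names, markup, and URL are invented for illustration): a stack of near-identical embed snippets collapses into one parameterized function, so a fix is made once instead of twelve times.

```python
# A minimal, hypothetical sketch of DRY: the repetition lives in data,
# not in copy-pasted code. Widget names, markup, and URL are invented.

WIDGET_TYPES = ["form-builder", "countdown-timer", "social-feed"]  # illustrative only

def embed_snippet(widget_type: str, app_id: str) -> str:
    """Render the embed snippet for one widget from a single shared template."""
    return (
        f'<div class="widget-{widget_type}" id="{app_id}"></div>\n'
        f'<script src="https://example.com/widgets/{widget_type}.js"></script>'
    )

if __name__ == "__main__":
    # One template serves every widget; fixing a bug here fixes it everywhere.
    for widget in WIDGET_TYPES:
        print(embed_snippet(widget, app_id=f"demo-{widget}"))
```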

At POWr, being DRY is not just a way to write code, it’s a way of life. This is true whether you’re an Engineer, a Customer Support agent, or an Office Manager; if you find you’re repeating yourself, we want to find a way to automate that repetition away. Our employees’ time is our company’s most valuable resource. Not to mention, who wants to spend all day repeating themselves?

We call this process becoming a Scaled Employee. A Scaled Employee leverages their time and resources to make a multifold impact compared to an average employee in their Continue reading

BrandPost: Top Ten Reasons to Think Outside the Router #5: Manual CLI-based Configuration and Management

We’re now more than half-way through our homage to the iconic David Letterman Top Ten List segment from his former Late Show, as Silver Peak counts down the Top Ten Reasons to Think Outside the Router. Click for the #6, #7, #8, #9 and #10 reasons to retire traditional branch routers. To read this article in full, please click here

Podcast: The State of Packet Forwarding

Enterprise network architectures are being reshaped using tenets popularized by the major cloud properties. This podcast explores this evolution and looks at the ways that real-time streaming telemetry, machine learning, and artificial intelligence affect how networks are designed and operated.

Computers could soon run cold, no heat generated

It’s pretty much just simple energy loss that causes heat build-up in electronics. That ostensibly innocuous warming up, though, causes a two-fold problem:

Firstly, the loss of energy, manifested as heat, reduces the machine’s computational power — much of the purposefully created and needed, high-power energy disappears into thin air instead of crunching numbers. And secondly, as data center managers know, to add insult to injury, it costs money to cool all that waste heat.

For both of those reasons (and some others, such as ecologically related ones, and equipment longevity—the tech breaks down with temperature), there’s an increasing effort underway to build computers in such a way that heat is eliminated — completely. Transistors, superconductors, and chip design are three areas where major conceptual breakthroughs were announced in 2018. They’re significant developments, and consequently it might not be too long before we see the ultimate in efficiency: the cold-running computer. To read this article in full, please click here
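To put a rough number on that second point, here is a back-of-the-envelope sketch in Python. The IT load, PUE, and electricity price are assumptions chosen purely for illustration, not figures from the article.

```python
# Rough, illustrative estimate of what waste heat costs to remove.
# All inputs are assumptions for the sake of the example.

it_load_kw = 100.0        # assumed IT load of a small server room (kW)
pue = 1.5                 # assumed Power Usage Effectiveness (1.0 would mean no overhead)
price_per_kwh = 0.10      # assumed electricity price (USD per kWh)
hours_per_year = 24 * 365

overhead_kw = it_load_kw * (pue - 1.0)            # power spent on cooling and other overhead
annual_overhead_kwh = overhead_kw * hours_per_year
annual_overhead_cost = annual_overhead_kwh * price_per_kwh

print(f"Cooling/overhead draw: {overhead_kw:.0f} kW")
print(f"Annual overhead energy: {annual_overhead_kwh:,.0f} kWh")
print(f"Annual overhead cost: ${annual_overhead_cost:,.0f}")
```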

IBM and Nvidia announce turnkey AI system

IBM and Nvidia further enhanced their hardware relationship with the announcement of a new turnkey AI solution that combines IBM Spectrum Scale scale-out file storage with Nvidia’s GPU-based AI server. The name is a mouthful: IBM SpectrumAI with Nvidia DGX. It combines Spectrum Scale, a high-performance flash-based storage system, with Nvidia’s DGX-1 server, which is designed specifically for AI. In addition to its regular GPU cores, the V100 processor comes with special AI cores called Tensor Cores, optimized to run machine learning workloads. The box comes with a rack of nine Nvidia DGX-1 servers, for a total of 72 Nvidia V100 Tensor Core GPUs. To read this article in full, please click here
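The GPU count follows directly from the configuration: a DGX-1 holds eight V100s, so nine servers yield 72 GPUs. The short sketch below spells out that arithmetic; the per-GPU tensor throughput figure is the commonly cited V100 peak and is included only as an illustrative assumption, not a number from the announcement.

```python
# Simple arithmetic behind the quoted configuration.
servers = 9
gpus_per_dgx1 = 8                  # a DGX-1 holds eight V100 GPUs
tensor_tflops_per_v100 = 125.0     # commonly cited peak tensor throughput per V100 (assumption)

total_gpus = servers * gpus_per_dgx1
peak_tensor_pflops = total_gpus * tensor_tflops_per_v100 / 1000.0

print(f"Total GPUs: {total_gpus}")                                        # 72
print(f"Peak tensor throughput: ~{peak_tensor_pflops:.0f} PFLOPS (theoretical)")
```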

Episode 41 – The Value of Networking Labs for Production

If you’ve been in networking for any length of time, you’ve likely had to lab something up to learn a new technology or study for a certification, but labs can be so much more than learning tools. In today’s episode, Jody Lemoine and Iain Leiter join Network Collective to talk about practical uses for labs outside of the classroom.

We would like to thank VIAVI Solutions for sponsoring this episode of Network Collective. VIAVI Solutions is an application and network management industry leader focusing on end-user experience by providing products that optimize performance and speed problem resolution. Helping to ensure delivery of critical applications for businesses worldwide, Viavi offers an integrated line of precision-engineered software and hardware systems for effective network monitoring and analysis. Learn more at www.viavisolutions.com/networkcollective.

Jody Lemoine (Guest)
Iain Leiter (Guest)
Jordan Martin (Host)
Eyvonne Sharp (Host)
Russ White (Host)

Outro Music:
Danger Storm Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/

The post Episode 41 – The Value of Networking Labs for Production appeared first on Network Collective.

Qualcomm makes it official; no more data center chip

A layoff of 269 people in a company of 33,000 usually isn’t noteworthy, but given where the layoffs hit, it’s notable. Qualcomm has signaled the end of the road for Centriq, its ARM-based server processor, which never got out of the starting gate. U.S. companies have to notify their state employment agencies of layoffs 60 days before they happen, making these events less of a surprise as reporters get wind of them. A letter from Qualcomm to its home city of San Diego said 125 people would be let go on February 6, while a note to officials in Raleigh, North Carolina, said 144 people would also be cut loose. The news is a repeat of what happened last June, right down to the number of people let go and the cities impacted. The cuts target several divisions, one of which is the company’s data center division, which was barely staffed to begin with. The Information, which first reported on the layoffs, says the data center group will be down to just 50 people after a peak of more than 1,000. That includes the head of the group, Anand Chandrasekher, a former Intel executive. To read this article in full, please click here

More consistent LuaJIT performance

This is a guest post by Laurence Tratt, who is a programmer and Reader in Software Development in the Department of Informatics at King's College London where he leads the Software Development Team. He is also an EPSRC Fellow.

A year ago I wrote about a project that Cloudflare were funding at King's College London to help improve LuaJIT. Our twelve months is now up. How did we do?

The first thing that happened is that I was lucky to employ a LuaJIT expert, Thomas Fransham, to work on the project. His deep knowledge about LuaJIT was crucial to getting things up and running – 12 months might sound like a long time, but it soon whizzes by!

The second thing that happened was that we realised that the current state of Lua benchmarking was not good enough for anyone to reliably tell if they'd improved LuaJIT performance or not. Different Lua implementations had different benchmark suites, mostly on the small side, and not easily compared. Although it wasn't part of our original plan, we thus put a lot of effort into creating a larger benchmark suite. This sounds like a trivial job, but it isn't. Many programs make Continue reading
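One reason reliable benchmarking is hard: a single timing of a single run says very little, because warmup, JIT compilation, and machine noise can swamp the effect being measured. Below is a purely illustrative Python sketch (not the project’s actual harness, which targets Lua VMs) of the bare minimum a credible benchmark runner does: repeat the workload, discard warmup iterations, and report spread as well as a mean.

```python
import statistics
import time

def benchmark(workload, in_process_iters=30, warmup_iters=10):
    """Time a workload repeatedly, discarding warmup iterations,
    and report mean and spread rather than a single number."""
    timings = []
    for i in range(warmup_iters + in_process_iters):
        start = time.perf_counter()
        workload()
        elapsed = time.perf_counter() - start
        if i >= warmup_iters:          # keep only post-warmup iterations
            timings.append(elapsed)
    return statistics.mean(timings), statistics.stdev(timings)

if __name__ == "__main__":
    # A toy workload standing in for a real Lua benchmark.
    def workload():
        sum(i * i for i in range(200_000))

    mean, stdev = benchmark(workload)
    print(f"mean {mean*1e3:.2f} ms, stdev {stdev*1e3:.2f} ms over 30 post-warmup iterations")
```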