Traffic Acceleration with Cloudflare Mobile SDK

We’re excited to announce early access for Traffic Acceleration with Cloudflare Mobile SDK. Acceleration uses novel transport algorithms built into the SDK to accelerate apps beyond the performance they would see with TCP. Enabling Acceleration through the SDK reduces latency, increases throughput, and improves app user experiences.

A year ago, we launched Cloudflare Mobile SDK with a set of free features focused on measuring mobile app networking performance. Apps depend on network connectivity to deliver their user experiences, but developers have limited visibility into how that connectivity affects app performance. Integrating the Mobile SDK allows developers to measure and improve the speed of their app’s network interactions.

How it works

Mobile applications interact with the Internet to do everything — to fetch the weather, your email, to step through a check out flow. Everything that makes a smartphone magical is powered by a service on the Internet. How quickly those network interactions happen is dictated by two things: how large the payloads are for the given request/response, and what the available link bandwidth is.
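To make that relationship concrete, here is a rough, back-of-the-envelope sketch (illustrative only, not part of the SDK) of how payload size and link bandwidth combine with latency to determine how long a request takes:

```python
def estimated_transfer_seconds(payload_bytes: int, bandwidth_mbps: float, rtt_ms: float) -> float:
    """Very rough estimate: one round trip of latency plus time to push the payload.

    Ignores TCP slow start, packet loss, and protocol overhead; it only shows
    how payload size and available link bandwidth drive response time.
    """
    bandwidth_bytes_per_sec = bandwidth_mbps * 1_000_000 / 8
    return rtt_ms / 1000 + payload_bytes / bandwidth_bytes_per_sec

# Example: a 500 KB product image over a 5 Mbps cellular link with an 80 ms RTT
print(f"{estimated_transfer_seconds(500_000, 5, 80):.2f} s")  # ~0.88 s
```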

Payload size is mostly application specific: a shopping app is going to request product images and similar medium-sized assets, while a stock Continue reading

Lessons learned from Black Friday and Cyber Monday

If you’re a consumer-facing business, Black Friday and Cyber Monday are D-Day for IT operations. Low-end estimates indicate that upwards of 20% of a company’s annual revenue can occur within these two days. The stakes are even higher if you’re a payment processor, because you aggregate the purchases across all consumer businesses. This means that the need to remain available across the entire Black Friday-to-Cyber Monday window (a crucial 96 hours) is paramount.

My colleague David and I have spent the past 10 months preparing for this day. In January 2018 we started a new deployment with a large payment processor to help them build out capacity for their projected 2018 holiday payment growth. Our goal was to build a brand-new, 11-rack data center as a third region to supplement the two existing regions used for payment processing. In addition, we helped deploy additional Cumulus racks and capacity at the two existing regions, which were historically built with traditional vendors.

Now that both days have come and gone, read on to find out what we learned from this experience.

Server Interop Testing

Payment processing has most of its weight on the payment applications running in the data center. As with Continue reading

How my team wrote 12 Cloudflare apps with fewer than 20 lines of code

This is a guest post by Ben Ross. Ben is a Berkeley PhD, serial entrepreneur, and Founder and CTO of POWr.io, where he spends his days helping small businesses grow online.

I like my code the same way I like my team of POWr Rangers: DRY.

And no, I don’t mean dull and unexciting! (If you haven’t heard this acronym before, DRY stands for Don’t Repeat Yourself, the single most important principle in software engineering. Because, as a mentor once told me, “when someone needs to re-write your code, at least they only need to do it once.”)
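For readers who haven’t seen the principle in practice, here is a tiny, hypothetical example of applying DRY (nothing here comes from POWr’s codebase):

```python
# Repetitive: the same length check copy-pasted for every field.
def validate_email(value):
    if not value or len(value) > 255:
        raise ValueError("invalid email")

def validate_username(value):
    if not value or len(value) > 255:
        raise ValueError("invalid username")

# DRY: write the rule once and reuse it, so a change only has to happen once.
def validate_field(name, value, max_length=255):
    if not value or len(value) > max_length:
        raise ValueError(f"invalid {name}")

validate_field("email", "ben@example.com")
validate_field("username", "powr_ranger")
```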

At POWr, being DRY is not just a way to write code, it’s a way of life. This is true whether you’re an Engineer, a Customer Support agent, or an Office Manager; if you find you’re repeating yourself, we want to find a way to automate that repetition away. Our employees’ time is our company’s most valuable resource. Not to mention, who wants to spend all day repeating themselves?

We call this process becoming a Scaled Employee. A Scaled Employee leverages their time and resources to make a multifold impact compared to an average employee in their Continue reading

BrandPost: Top Ten Reasons to Think Outside the Router #5: Manual CLI-based Configuration and Management

We’re now more than halfway through our homage to the iconic David Letterman Top Ten List segment from his former Late Show, as Silver Peak counts down the Top Ten Reasons to Think Outside the Router. Click for the #6, #7, #8, #9 and #10 reasons to retire traditional branch routers. To read this article in full, please click here

Podcast: The State of Packet Forwarding

Enterprise network architectures are being reshaped using tenets popularized by the major cloud properties. This podcast explores this evolution and looks at the ways that real-time streaming telemetry, machine learning, and artificial intelligence affect how networks are designed and operated.

Computers could soon run cold, no heat generated

It’s pretty much just simple energy loss that causes heat build-up in electronics. That ostensibly innocuous warming up, though, causes a two-fold problem. Firstly, the loss of energy, manifested as heat, reduces the machine’s computational power — much of the purposefully created and needed, high-power energy disappears into thin air instead of crunching numbers. And secondly, as data center managers know, to add insult to injury, it costs money to cool all that waste heat. For both of those reasons (and some others, such as ecologically related ones, and equipment longevity — the tech breaks down with temperature), there’s an increasing effort underway to build computers in such a way that heat is eliminated — completely. Transistors, superconductors, and chip design are three areas where major conceptual breakthroughs were announced in 2018. They’re significant developments, and consequently it might not be too long before we see the ultimate in efficiency: the cold-running computer. To read this article in full, please click here

IBM and Nvidia announce turnkey AI system

IBM and Nvidia further enhanced their hardware relationship with the announcement of a new turnkey AI solution that combines IBM Spectrum Scale scale-out file storage with Nvidia’s GPU-based AI server. The name is a mouthful: IBM SpectrumAI with Nvidia DGX. It combines Spectrum Scale, a high-performance, flash-based storage system, with Nvidia’s DGX-1 server, which is designed specifically for AI. In addition to the regular GPU cores, the V100 processor comes with special AI chips called Tensor Cores optimized to run machine learning workloads. The box comes with a rack of nine Nvidia DGX-1 servers, each with eight V100 GPUs, for a total of 72 Nvidia V100 Tensor Core GPUs. To read this article in full, please click here

Episode 41 – The Value of Networking Labs for Production

If you’ve been in networking for any length of time, you’ve likely had to lab something up to learn a new technology or study for a certification, but labs can be so much more than learning tools. In today’s episode, Jody Lemoine and Iain Leiter join Network Collective to talk about practical uses for labs outside of the classroom.

We would like to thank VIAVI Solutions for sponsoring this episode of Network Collective. VIAVI Solutions is an application and network management industry leader focusing on end-user experience by providing products that optimize performance and speed problem resolution. Helping to ensure delivery of critical applications for businesses worldwide, Viavi offers an integrated line of precision-engineered software and hardware systems for effective network monitoring and analysis. Learn more at www.viavisolutions.com/networkcollective.

Jody Lemoine (Guest)
Iain Leiter (Guest)
Jordan Martin (Host)
Eyvonne Sharp (Host)
Russ White (Host)

Outro Music:
Danger Storm Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/

The post Episode 41 – The Value of Networking Labs for Production appeared first on Network Collective.