Archive

Category Archives for "Networking"

Day Two Cloud 021: Nice Design; We Need To Change It – The Reality Of Building A Cloud Service

How do technical implementation and user feedback shape a cloud-based solution? When is it time to make a significant change in your design? And how do you know you’re headed in the right direction? This Day Two Cloud podcast episode tackles these questions with guest Michael Fraser, co-founder and CEO of Refactr.

The post Day Two Cloud 021: Nice Design; We Need To Change It – The Reality Of Building A Cloud Service appeared first on Packet Pushers.

The TLS Post-Quantum Experiment

In June, we announced a wide-scale post-quantum experiment with Google. We implemented two post-quantum (i.e., not yet known to be broken by quantum computers) key exchanges, integrated them into our TLS stack, and deployed the implementation on our edge servers and in Chrome Canary clients. The goal of the experiment was to evaluate the performance and feasibility of deploying two post-quantum key-agreement ciphers in TLS.

In our previous blog post on post-quantum cryptography, we described differences between those two ciphers in detail. In case you didn’t have a chance to read it, we include a quick recap here. One characteristic of post-quantum key exchange algorithms is that the public keys are much larger than those used by "classical" algorithms. This will have an impact on the duration of the TLS handshake. For our experiment, we chose two algorithms: isogeny-based SIKE and lattice-based HRSS. The former has short key sizes (~330 bytes) but has a high computational cost; the latter has larger key sizes (~1100 bytes), but is a few orders of magnitude faster.
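
To put those sizes in perspective, here is a rough back-of-envelope sketch (not taken from the post). It assumes each post-quantum scheme is paired with a classical X25519 share, as hybrid deployments typically are, and compares the resulting key-share sizes against a typical 1,500-byte Ethernet MTU; the X25519 size and the MTU figure are illustrative assumptions.

    // Back-of-envelope comparison of key-share sizes for the two schemes.
    // The SIKE and HRSS figures are the approximate sizes quoted above;
    // the X25519 size and the MTU are assumptions for illustration only.
    package main

    import "fmt"

    func main() {
        const (
            x25519 = 32   // classical key share, bytes (assumed)
            sike   = 330  // approximate SIKE public key, bytes
            hrss   = 1100 // approximate HRSS public key, bytes
            mtu    = 1500 // typical Ethernet MTU, bytes (assumed)
        )
        options := []struct {
            name string
            size int
        }{
            {"X25519 alone", x25519},
            {"X25519 + SIKE", x25519 + sike},
            {"X25519 + HRSS", x25519 + hrss},
        }
        for _, o := range options {
            fmt.Printf("%-14s ~%4d bytes (%.0f%% of a %d-byte MTU)\n",
                o.name, o.size, 100*float64(o.size)/mtu, mtu)
        }
    }

The arithmetic makes the trade-off concrete: the SIKE share stays far below a single packet, while the HRSS share consumes most of one before counting the rest of the ClientHello, which is why its lower CPU cost has to be weighed against the extra bytes on the wire.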

During NIST’s Second PQC Standardization Conference, Nick Sullivan presented our approach to this experiment and some initial results. Quite accurately, Continue reading

The ease and importance of scaling in the enterprise

Networks are growing, and growing fast. As enterprises adopt IoT and mobile clients, VPN technologies, virtual machines (VMs), and massively distributed compute and storage, the number of devices—as well as the amount of data being transported over their networks—is rising at an explosive rate. It’s becoming apparent that traditional, manual ways of provisioning don’t scale. A new approach is needed, and for that we look toward the hyperscalers: companies like Google, Amazon, and Microsoft, which have been dealing with huge networks almost since the very beginning.

The traditional approach to IT operations has been focused on one server or container at a time. Attempts to manage at scale frequently mean being locked into a single vendor’s infrastructure and technologies. Unfortunately, today’s enterprises are finding that even the expensive, proprietary management solutions provided by the vendors who have long supported traditional IT practices simply cannot scale, especially given the rapid growth of containerization and VMs that enterprises are now dealing with.

In this blog post, I’ll take a look at how an organization can use open, scalable network technologies—those first created or adopted by the aforementioned hyperscalers—to reduce growing pains. These issues are increasingly relevant as new Continue reading

Watson IoT chief: AI can broaden IoT services

IBM thrives on the complicated, asset-intensive part of the enterprise IoT market, according to Kareem Yusuf, GM of the company’s Watson IoT business unit. From helping seaports manage shipping traffic to keeping technical knowledge flowing within an organization, Yusuf said that the idea is to teach artificial intelligence to provide insights from the reams of data generated by such complex systems.

Predictive maintenance is probably the headliner in terms of use cases around asset-intensive IoT, and Yusuf said that it’s a much more complicated task than many people might think. It isn’t simply a matter of monitoring, say, pressure levels in a pipe somewhere and throwing an alert when they move outside of norms. It’s about aggregating information on failure rates and asset planning so that a company can have replacements and contingency plans ready for potential failures.

To read this article in full, please click here

Getting the Unified Cloud Experience

Learn how Lenovo Open Cloud (LOC) provides cloud deployment and cloud management services, and...

Read More »

Adaptiv Delivers SD-WAN to SkySwitch uCaaS Customers

By leveraging Adaptiv Networks' SD-WAN, SkySwitch aims to capitalize on small-to-medium size...

Read More »

T-Mobile US Expects Sprint Merger to Close in Early 2020

“We now expect the merger will be permitted to close in early 2020,” CEO John Legere said on an...

Read More »

CloudEvents Hits 1.0 Release, Gains CNCF Promotion

The eventing project is backed by cloud heavyweights Amazon, Microsoft, and Google.

Google Cloud ‘Strong’ Q3 Revenue Disappoints Wall Street

Company management did not provide any revenue details specific to its cloud platform, but...

Read More »

Viewing network bandwidth usage with bmon

Bmon is a monitoring and debugging tool that runs in a terminal window and captures network statistics, offering options on how, and how much, data is displayed, all in a form that is easy to understand. To check whether bmon is installed on your system, use the which command:

$ which bmon
/usr/bin/bmon

On Debian systems, install the tool with sudo apt-get install bmon. To read this article in full, please click here
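
For a sense of what a tool like bmon is reporting, here is a minimal sketch (not bmon’s implementation, just an illustration) that reads the per-interface byte counters the Linux kernel exposes in /proc/net/dev twice, one second apart, and prints receive and transmit rates. It assumes a Linux system with the standard /proc/net/dev field layout.

    // Minimal illustration of per-interface traffic rates, computed from the
    // counters in /proc/net/dev (Linux only). Not how bmon is implemented;
    // it simply shows the kind of kernel data such monitors read.
    package main

    import (
        "fmt"
        "log"
        "os"
        "strconv"
        "strings"
        "time"
    )

    // readCounters returns {rxBytes, txBytes} per interface from /proc/net/dev.
    func readCounters() (map[string][2]uint64, error) {
        data, err := os.ReadFile("/proc/net/dev")
        if err != nil {
            return nil, err
        }
        counters := make(map[string][2]uint64)
        for _, line := range strings.Split(string(data), "\n")[2:] { // skip the two header lines
            parts := strings.SplitN(line, ":", 2)
            if len(parts) != 2 {
                continue
            }
            fields := strings.Fields(parts[1])
            if len(fields) < 16 {
                continue
            }
            rx, _ := strconv.ParseUint(fields[0], 10, 64) // received bytes
            tx, _ := strconv.ParseUint(fields[8], 10, 64) // transmitted bytes
            counters[strings.TrimSpace(parts[0])] = [2]uint64{rx, tx}
        }
        return counters, nil
    }

    func main() {
        before, err := readCounters()
        if err != nil {
            log.Fatal(err)
        }
        time.Sleep(time.Second)
        after, err := readCounters()
        if err != nil {
            log.Fatal(err)
        }
        for name, b := range before {
            if a, ok := after[name]; ok {
                fmt.Printf("%-10s rx %8d B/s   tx %8d B/s\n", name, a[0]-b[0], a[1]-b[1])
            }
        }
    }

bmon layers live graphs and per-second history on top of counters like these, which is what makes it handy for quick terminal-side debugging.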

IDG Contributor Network: JEDI award signals a critical ‘perception of parity’ in the cloud wars

AWS, Amazon’s cloud business, has enjoyed a long run as undisputed heavyweight champion of the cloud wars. With a revenue run rate nearing $36 billion and continuous double-digit market growth, it was difficult to see anyone catching up. Until, just like that, in the blink of an eye, a $10 billion federal cloud contract for the DoD known as JEDI (Joint Enterprise Defense Infrastructure) was awarded to Amazon’s crosstown rival, Microsoft. With this award, I believe the game has changed, and the market perception of such a substantial win will give Microsoft the opportunity to apply significant pressure to AWS’s number one market position.

This week’s earnings may have been the first domino. Microsoft has been on an unparalleled run, and this past week the company delivered well above expectations, with earnings of $1.38 per share vs. $1.25 expected, while also substantially beating revenue targets by nearly $800 million at $33.06 billion. The cloud business, Azure, grew 59%*, which was actually viewed as a muted result compared to previous quarters in the 60-70% range, but with margins up, revenues up, and earnings up, Microsoft is flying high.

To read this article in full, please click here

DNS Encryption Explained

The Domain Name System (DNS) is the address book of the Internet. When you visit cloudflare.com or any other site, your browser will ask a DNS resolver for the IP address where the website can be found. Unfortunately, these DNS queries and answers are typically unprotected. Encrypting DNS would improve user privacy and security. In this post, we will look at two mechanisms for encrypting DNS, known as DNS over TLS (DoT) and DNS over HTTPS (DoH), and explain how they work.

Applications that want to resolve a domain name to an IP address typically use DNS. This is usually not done explicitly by the programmer who wrote the application. Instead, the programmer writes something such as fetch("https://example.com/news") and expects a software library to handle the translation of “example.com” to an IP address.

Behind the scenes, the software library is responsible for discovering and connecting to the external recursive DNS resolver and speaking the DNS protocol (see the figure below) in order to resolve the name requested by the application. The choice of the external DNS resolver and whether any privacy and security is provided at all is outside the control of the application. It depends on Continue reading
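
The post goes on to describe DoT and DoH in detail; as a concrete illustration of the DoH idea, here is a minimal sketch that bypasses the system stub resolver and asks Cloudflare’s resolver directly over HTTPS, using the JSON variant of the DoH API at cloudflare-dns.com. The response fields are modeled on that API’s documented shape and should be read as an illustrative assumption rather than a production client.

    // Minimal sketch of resolving a name over DNS-over-HTTPS (DoH) instead of
    // the system stub resolver, using Cloudflare's JSON API. The response
    // struct below is an assumption modeled on that API's documented shape.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    type dohAnswer struct {
        Name string `json:"name"`
        Type int    `json:"type"`
        TTL  int    `json:"TTL"`
        Data string `json:"data"`
    }

    type dohResponse struct {
        Status int         `json:"Status"`
        Answer []dohAnswer `json:"Answer"`
    }

    func main() {
        req, err := http.NewRequest("GET",
            "https://cloudflare-dns.com/dns-query?name=example.com&type=A", nil)
        if err != nil {
            log.Fatal(err)
        }
        // The accept header selects the JSON wire format rather than the
        // binary application/dns-message format defined by RFC 8484.
        req.Header.Set("accept", "application/dns-json")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        var out dohResponse
        if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
            log.Fatal(err)
        }
        for _, a := range out.Answer {
            fmt.Printf("%s  TTL=%d  %s\n", a.Name, a.TTL, a.Data)
        }
    }

Because the query and answer travel inside the HTTPS connection to the resolver, an on-path observer sees only TLS traffic to the resolver, which is exactly the privacy property the post is describing.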

The Invisible Internet

Editor’s Note: Fifty years ago today, on October 29th, 1969, a team at UCLA started to transmit five letters to the Stanford Research Institute: LOGIN. It’s an event that we take for granted now – communicating over a network – but it was historic. It was the first message sent over the ARPANET, one of the precursors to the Internet. UCLA computer science professor Leonard Kleinrock and his team sent that first message. In this anniversary guest post, Professor Kleinrock shares his vision for what the Internet might become.

On July 3, 1969, four months before the first message of the Internet was sent, I was quoted in a UCLA press release in which I articulated my vision of what the Internet would become. Much of that vision has been realized (including one item I totally missed, namely, that social networking would become so dominant). But there was a critical component of that vision which has not yet been realized. I call that the invisible Internet. What I mean is that the Internet will be invisible in the sense that electricity is invisible – electricity has the extremely simple interface of a socket in the wall from which something called Continue reading

Saved: TCP Is the Most Expensive Part of Your Data Center

Years ago Dan Hughes wrote a great blog post explaining how expensive TCP is. His web site is long gone, but I managed to grab the blog post before it disappeared and he kindly allowed me to republish it.


If you ask a CIO which part of their infrastructure costs them the most, I’m sure they’ll mention power, cooling, server hardware, support costs, getting the right people and all the usual answers. I’d argue one of the biggest costs is TCP, or more accurately, badly implemented TCP.

Read more ...