How do technical implementation and user feedback shape a cloud-based solution? When is it time to make a significant change in your design? And how do you know you’re headed in the right direction? This Day Two Cloud podcast episode tackles these questions with guest Michael Fraser, co-founder and CEO of Refactr.
The post Day Two Cloud 021: Nice Design; We Need To Change It – The Reality Of Building A Cloud Service appeared first on Packet Pushers.
In June, we announced a wide-scale post-quantum experiment with Google. We implemented two post-quantum (i.e., not yet known to be broken by quantum computers) key exchanges, integrated them into our TLS stack and deployed the implementation on our edge servers and in Chrome Canary clients. The goal of the experiment was to evaluate the performance and feasibility of deployment in TLS of two post-quantum key agreement ciphers.
In our previous blog post on post-quantum cryptography, we described differences between those two ciphers in detail. In case you didn’t have a chance to read it, we include a quick recap here. One characteristic of post-quantum key exchange algorithms is that the public keys are much larger than those used by "classical" algorithms. This will have an impact on the duration of the TLS handshake. For our experiment, we chose two algorithms: isogeny-based SIKE and lattice-based HRSS. The former has short key sizes (~330 bytes) but has a high computational cost; the latter has larger key sizes (~1100 bytes), but is a few orders of magnitude faster.
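To get a feel for what those key sizes mean for the handshake, here is a minimal back-of-the-envelope sketch. The ~330-byte and ~1100-byte figures come from the paragraph above; the 32-byte classical X25519 key share and the ten-segment initial congestion window are illustrative assumptions added for comparison, not numbers from the experiment.

```go
package main

import "fmt"

// Rough, illustrative estimate of the extra bytes a post-quantum key
// exchange adds to a TLS handshake. The ~330 B (SIKE) and ~1100 B (HRSS)
// public-key sizes come from the post; the 32 B classical X25519 share and
// the ~14,600 B initial congestion window (10 segments x ~1460 B) are
// assumptions used only for comparison.
func main() {
	const x25519 = 32
	const initCwnd = 10 * 1460
	sizes := map[string]int{"SIKE": 330, "HRSS": 1100}

	for name, pub := range sizes {
		// Both client and server send a key share, so count the growth twice.
		extra := 2 * (pub - x25519)
		fmt.Printf("%-4s adds ~%d extra bytes per handshake (initial cwnd ~%d bytes)\n",
			name, extra, initCwnd)
	}
}
```

As the output suggests, even HRSS’s larger key shares fit comfortably within a typical first flight of packets; for SIKE the more interesting cost is CPU time rather than bytes on the wire, which is exactly the kind of trade-off the experiment set out to measure in practice.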
During NIST’s Second PQC Standardization Conference, Nick Sullivan presented our approach to this experiment and some initial results. Quite accurately, Continue reading
I don’t remember who pointed me to the excellent How Complex Systems Fail document. It’s almost like RFC1925 – I could quote it all day long, and anyone dealing with large mission-critical distributed systems (hint: networks) should read it once a day ;))
Enjoy!
Networks are growing, and growing fast. As enterprises adopt IoT and mobile clients, VPN technologies, virtual machines (VMs), and massively distributed compute and storage, the number of devices—as well as the amount of data being transported over their networks—is rising at an explosive rate. It’s becoming apparent that traditional, manual ways of provisioning don’t scale. Something new is needed, and for that we look to the hyperscalers: companies like Google, Amazon, and Microsoft, which have been dealing with huge networks almost from the very beginning.
The traditional approach to IT operations focuses on one server or container at a time, and attempting to manage at scale frequently means being locked into a single vendor’s infrastructure and technologies. Unfortunately, today’s enterprises are finding that even the expensive, proprietary management solutions from the vendors who have long supported traditional IT practices simply cannot scale, especially given the rapid growth of containerization and VMs they now have to handle.
In this blog post, I’ll take a look at how an organization can use open, scalable network technologies—those first created or adopted by the aforementioned hyperscalers—to reduce growing pains. These issues are increasingly relevant as new Continue reading
Learn how Lenovo Open Cloud (LOC) provides cloud deployment and cloud management services, and...
By leveraging Adaptiv Networks' SD-WAN, SkySwitch aims to capitalize on small-to-medium size...
“We now expect the merger will be permitted to close in early 2020,” CEO John Legere said on an...
The eventing project is backed by cloud heavyweights Amazon, Microsoft, and Google.
Company management did not provide any revenue details specific to its cloud platform, but...
Today's episode of Full Stack Journey focuses on the benefits of testing and validation for Infrastructure-as-Code (IaC). We discuss types of testing, available tools, and why IaC has value even for smaller shops. My guest is Gareth Rushgrove.
The post Full Stack Journey 035: Testing And Validation For Infrastructure As Code appeared first on Packet Pushers.
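To give a concrete flavor of the kind of testing discussed in the episode, here is a minimal sketch of an infrastructure test written with Terratest, a Go library for exercising Terraform code. The module path and output name below are hypothetical placeholders for illustration, not anything referenced in the episode.

```go
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

// TestNetworkModule applies a (hypothetical) Terraform module in a sandbox,
// checks one of its outputs, and tears everything down again.
func TestNetworkModule(t *testing.T) {
	opts := &terraform.Options{
		TerraformDir: "../modules/network", // hypothetical module path
	}
	defer terraform.Destroy(t, opts) // always clean up the test resources

	terraform.InitAndApply(t, opts)

	cidr := terraform.Output(t, opts, "vpc_cidr") // hypothetical output name
	assert.Equal(t, "10.0.0.0/16", cidr)
}
```

Even for a smaller shop, a test like this turns “the plan looked fine” into something repeatable in CI.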
The Domain Name System (DNS) is the address book of the Internet. When you visit cloudflare.com or any other site, your browser will ask a DNS resolver for the IP address where the website can be found. Unfortunately, these DNS queries and answers are typically unprotected. Encrypting DNS would improve user privacy and security. In this post, we will look at two mechanisms for encrypting DNS, known as DNS over TLS (DoT) and DNS over HTTPS (DoH), and explain how they work.
Applications that want to resolve a domain name to an IP address typically use DNS. This is usually not done explicitly by the programmer who wrote the application. Instead, the programmer writes something such as fetch("https://example.com/news") and expects a software library to handle the translation of “example.com” to an IP address.
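For example, in Go the same implicit resolution happens inside the standard library; the sketch below is an illustration, not code from the post, and simply asks the stub resolver for an address the way an HTTP client would before connecting.

```go
package main

import (
	"context"
	"fmt"
	"net"
)

// The application only names "example.com"; the library and the operating
// system's stub resolver decide which recursive resolver to ask, and the
// query typically leaves the machine as plain, unencrypted DNS on port 53.
func main() {
	ips, err := net.DefaultResolver.LookupIPAddr(context.Background(), "example.com")
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		fmt.Println(ip.IP)
	}
}
```

Nothing in this code says which resolver will be asked or whether the query is protected in transit.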
Behind the scenes, the software library is responsible for discovering and connecting to the external recursive DNS resolver and speaking the DNS protocol (see the figure below) in order to resolve the name requested by the application. The choice of the external DNS resolver and whether any privacy and security is provided at all is outside the control of the application. It depends on Continue reading
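To make the DNS over TLS half of this concrete, here is a minimal sketch of a DoT query in Go. It follows RFC 7858 (ordinary DNS wire format, a two-byte length prefix, TLS on port 853) and points at Cloudflare’s public resolver; the error handling and the single read are deliberately simplified.

```go
package main

import (
	"crypto/tls"
	"fmt"

	"golang.org/x/net/dns/dnsmessage"
)

func main() {
	// Build an ordinary DNS query for the A record of example.com.
	query := dnsmessage.Message{
		Header: dnsmessage.Header{ID: 1, RecursionDesired: true},
		Questions: []dnsmessage.Question{{
			Name:  dnsmessage.MustNewName("example.com."),
			Type:  dnsmessage.TypeA,
			Class: dnsmessage.ClassINET,
		}},
	}
	wire, err := query.Pack()
	if err != nil {
		panic(err)
	}

	// DoT (RFC 7858): the same wire-format message, sent over TLS to port 853
	// with a two-byte length prefix, just like DNS over TCP.
	conn, err := tls.Dial("tcp", "1.1.1.1:853", &tls.Config{ServerName: "cloudflare-dns.com"})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	framed := append([]byte{byte(len(wire) >> 8), byte(len(wire))}, wire...)
	if _, err := conn.Write(framed); err != nil {
		panic(err)
	}

	// Read one response (simplified: assumes it arrives in a single read).
	buf := make([]byte, 4096)
	n, err := conn.Read(buf)
	if err != nil {
		panic(err)
	}

	var resp dnsmessage.Message
	if err := resp.Unpack(buf[2:n]); err != nil { // skip the 2-byte length prefix
		panic(err)
	}
	for _, ans := range resp.Answers {
		fmt.Println(ans.Header.Name, ans.Body)
	}
}
```

DoH reuses exactly the same wire-format message; instead of a length-prefixed TLS stream, the client sends it in an HTTPS request (Content-Type: application/dns-message) to an endpoint such as https://cloudflare-dns.com/dns-query, as defined in RFC 8484.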
Editor’s Note: Fifty years ago today, on October 29th, 1969, a team at UCLA started to transmit five letters to the Stanford Research Institute: LOGIN. It’s an event that we take for granted now – communicating over a network – but it was historic. It was the first message sent over the ARPANET, one of the precursors to the Internet. UCLA computer science professor Leonard Kleinrock and his team sent that first message. In this anniversary guest post, Professor Kleinrock shares his vision for what the Internet might become.
On July 3, 1969, four months before the first message of the Internet was sent, I was quoted in a UCLA press release in which I articulated my vision of what the Internet would become. Much of that vision has been realized (including one item I totally missed, namely, that social networking would become so dominant). But there was a critical component of that vision which has not yet been realized. I call that the invisible Internet. What I mean is that the Internet will be invisible in the sense that electricity is invisible – electricity has the extremely simple interface of a socket in the wall from which something called Continue reading
Years ago Dan Hughes wrote a great blog post explaining how expensive TCP is. His web site is long gone, but I managed to grab the blog post before it disappeared and he kindly allowed me to republish it.
If you ask a CIO which part of their infrastructure costs them the most, I’m sure they’ll mention power, cooling, server hardware, support costs, getting the right people and all the usual answers. I’d argue one of the biggest costs is TCP, or more accurately badly implemented TCP.
Read more ...
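The full argument is behind the link above, but one well-known way TCP gets expensive is the window/RTT throughput cap: a sender can never move data faster than its effective window divided by the round-trip time. The figures below (a 64 KB un-scaled window and a few representative RTTs) are illustrative assumptions, not numbers from Dan’s article.

```go
package main

import "fmt"

// Illustrative only: maximum TCP throughput is bounded by window / RTT.
// A stack that never negotiates window scaling is stuck at a 64 KB window,
// which becomes painfully slow as round-trip time grows.
func main() {
	const window = 64 * 1024 // bytes, the classic un-scaled maximum
	for _, rttMs := range []float64{1, 20, 80, 200} {
		mbps := (float64(window) * 8) / (rttMs / 1000) / 1e6
		fmt.Printf("RTT %5.0f ms -> at most %7.1f Mbit/s per connection\n", rttMs, mbps)
	}
}
```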