T-Mobile US Expects Sprint Merger to Close in Early 2020
“We now expect the merger will be permitted to close in early 2020,” CEO John Legere said on an...
The eventing project is backed by cloud heavyweights Amazon, Microsoft, and Google.
Company management did not provide any revenue details specific to its cloud platform, but...
AWS is the #1 cloud provider for open-source database hosting, and the go-to cloud for MySQL deployments. As organizations continue to migrate to the cloud, it’s important to get in front of performance issues such as high latency, low throughput, and replication lag caused by greater distances between your users and your cloud infrastructure. While many AWS users default to the managed database solution, Amazon RDS, there are alternatives that can improve your MySQL performance on AWS through advanced customization options and unlimited EC2 instance type support. ScaleGrid offers a compelling alternative for hosting MySQL on AWS, with better performance, more control, and no cloud vendor lock-in, at the same price as Amazon RDS. In this post, we compare the performance of MySQL on Amazon RDS vs. MySQL hosting at ScaleGrid on AWS High Performance instances.
Today's episode of Full Stack Journey focuses on the benefits of testing and validation for Infrastructure-as-Code (IaC). We discuss types of testing, available tools, and why IaC has value even for smaller shops. My guest is Gareth Rushgrove.
The post Full Stack Journey 035: Testing And Validation For Infrastructure As Code appeared first on Packet Pushers.
A while ago I came across a utility named jk, which purported to be able to create structured text files—in JSON, YAML, or HCL—using JavaScript (or TypeScript that has been transpiled into JavaScript). One of the use cases was creating Kubernetes manifests. The GitHub repository for jk describes it as “a data templating tool”, and that’s accurate for simple use cases. In more complex use cases, the use of a general-purpose programming language like JavaScript in jk reveals that the tool has the potential to be much more than just a data templating tool—if you have the JavaScript expertise to unlock that potential.
The basic idea behind jk is that you could write some relatively simple JavaScript, and jk will take that JavaScript and use it to create some type of structured data output. I’ll focus on Kubernetes manifests here, but as you read keep in mind you could use this for other purposes as well. (I explore a couple other use cases at the end of this post.)
Here’s a very simple example:
// 'api' comes from jk's Kubernetes library (an assumed import for a complete script)
import * as api from '@jkcfg/kubernetes/api';

const service = new api.core.v1.Service('appService', {
  metadata: {
    namespace: 'appName',
    labels: {
      app: 'appName',
      team: 'blue',
    },
  },
  spec: {
    selector: {
      app: 'appName', // assumed completion of the truncated selector
    },
  },
});
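For reference, running a jk script along these lines through jk's generator is expected to emit a Kubernetes manifest roughly like the following. This is an assumed sketch: the exact output depends on how the object is exported and written, and the selector value is an assumed completion of the truncated example.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: appService
  namespace: appName
  labels:
    app: appName
    team: blue
spec:
  selector:
    app: appName
```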
The fundamental advantage of using cloud services to deliver the IT resources needed to support daily business operations is the flexibility they allow: on-demand applications and instant infrastructure that can be ordered, provisioned, and delivered in minutes, without the delays often involved in submitting equivalent requests to internal IT departments or waiting for suitable on-premises architecture to be implemented and configured. …
Reining In And Optimizing Cloud Spending was written by Timothy Prickett Morgan at The Next Platform.
The Domain Name System (DNS) is the address book of the Internet. When you visit cloudflare.com or any other site, your browser will ask a DNS resolver for the IP address where the website can be found. Unfortunately, these DNS queries and answers are typically unprotected. Encrypting DNS would improve user privacy and security. In this post, we will look at two mechanisms for encrypting DNS, known as DNS over TLS (DoT) and DNS over HTTPS (DoH), and explain how they work.
Applications that want to resolve a domain name to an IP address typically use DNS. This is usually not done explicitly by the programmer who wrote the application. Instead, the programmer writes something such as fetch("https://example.com/news") and expects a software library to handle the translation of “example.com” to an IP address.
Behind the scenes, the software library is responsible for discovering and connecting to the external recursive DNS resolver and speaking the DNS protocol (see the figure below) in order to resolve the name requested by the application. The choice of the external DNS resolver and whether any privacy and security is provided at all is outside the control of the application. It depends on Continue reading
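To make the DoH mechanism concrete, here is a minimal sketch of how an RFC 8484 GET-style DoH query is formed: the DNS query is built in ordinary wire format, base64url-encoded with padding stripped, and attached to the resolver URL as the dns parameter. The resolver URL and record type below are assumptions for illustration.

```python
import base64
import struct

def build_doh_url(resolver, name, qtype=1):
    """Return an RFC 8484 DoH GET URL carrying a DNS query for `name` (qtype 1 = A)."""
    # DNS header: ID=0 (RFC 8484 suggests 0 to aid HTTP caching),
    # flags=0x0100 (recursion desired), QDCOUNT=1, other counts 0
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, terminating root byte,
    # then QTYPE and QCLASS (1 = IN)
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", qtype, 1)
    # Per RFC 8484: base64url-encode the wire-format query, padding stripped
    encoded = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"{resolver}?dns={encoded}"

print(build_doh_url("https://cloudflare-dns.com/dns-query", "example.com"))
```

The resulting URL can be fetched over HTTPS like any other resource, which is exactly what makes DoH traffic blend in with ordinary web traffic.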
Editor’s Note: Fifty years ago today, on October 29th, 1969, a team at UCLA started to transmit five letters to the Stanford Research Institute: LOGIN. It’s an event that we take for granted now – communicating over a network – but it was historic. It was the first message sent over the ARPANET, one of the precursors to the Internet. UCLA computer science professor Leonard Kleinrock and his team sent that first message. In this anniversary guest post, Professor Kleinrock shares his vision for what the Internet might become.
On July 3, 1969, four months before the first message of the Internet was sent, I was quoted in a UCLA press release in which I articulated my vision of what the Internet would become. Much of that vision has been realized (including one item I totally missed, namely, that social networking would become so dominant). But there was a critical component of that vision which has not yet been realized. I call that the invisible Internet. What I mean is that the Internet will be invisible in the sense that electricity is invisible – electricity has the extremely simple interface of a socket in the wall from which something called Continue reading
Years ago Dan Hughes wrote a great blog post explaining how expensive TCP is. His web site is long gone, but I managed to grab the blog post before it disappeared and he kindly allowed me to republish it.
If you ask a CIO which part of their infrastructure costs them the most, I’m sure they’ll mention power, cooling, server hardware, support costs, getting the right people, and all the usual answers. I’d argue one of the biggest costs is TCP, or more accurately, badly implemented TCP.
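As rough arithmetic (assumed numbers, not a measurement): each new TCP connection pays at least one full round-trip for the SYN/SYN-ACK/ACK handshake before any application data can flow, so connection-happy clients multiply that cost.

```python
# Each new TCP connection costs at least one round-trip of pure setup
# (three-way handshake) before the first byte of application data.
def handshake_overhead_ms(rtt_ms, new_connections):
    """Handshake-only overhead for a client opening several TCP connections."""
    return rtt_ms * new_connections

# Assumed example numbers: 80 ms RTT, 6 parallel connections
print(handshake_overhead_ms(80, 6))  # -> 480
```

Nearly half a second of wall-clock time spent before any useful data moves, which is part of why connection reuse and keep-alives matter so much.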
It was fifty years ago when the very first network packet took flight from the Los Angeles campus at UCLA to the Stanford Research Institute (SRI) building in Palo Alto. Those two California sites kicked off the world of packet networking, of the Arpanet, and of the modern Internet as we use and know it today. Yet by the time the third packet had been transmitted that evening, the receiving computer at SRI had crashed. The “L” and “O” from the word “LOGIN” had been transmitted successfully in their packets; but that “G”, wrapped in its own packet, caused the death of that nascent packet network setup. Even today, software crashes; that’s a solid fact. But this historic crash is exactly that — historic.
So much has happened since that day (October 29th, to be exact) in 1969; in fact, it’s an understatement to say “so much has happened”! It’s unclear that any one blog article could ever capture the full history of packets from then to now. Here at Cloudflare we say we are helping build a “better Internet”, so it would make perfect sense for us to Continue reading
This is a guest post by Steve Crocker of Shinkuro, Inc. and Bill Duvall of Consulair. Fifty years ago they were both present when the first packets flowed on the Arpanet.
On 29 October 2019, Professor Leonard (“Len”) Kleinrock is chairing a celebration at the University of California, Los Angeles (UCLA). The date is the fiftieth anniversary of the first full system test and remote host-to-host login over the Arpanet. Following a brief crash caused by a configuration problem, a user at UCLA was able to log in to the SRI SDS 940 time-sharing system. But let us paint the rest of the picture.
The Arpanet was a bold project to connect sites within the ARPA-funded computer science research community and to use packet-switching as the technology for doing so. Although there were parallel packet-switching research efforts around the globe, none were at the scale of the Arpanet project. Cooperation among researchers in different laboratories, applying multiple machines to a single problem and sharing of resources were all part of the vision. And over the fifty years since then, the vision has been fulfilled, albeit with some undesired outcomes mixed in with the enormous benefits. However, in this blog, we Continue reading
The next phase of software-defined infrastructure, according to HPE, is artificial...
The key differentiators for 5G operators — beyond a faster, more reliable, and more flexible...