Renting The Cleanest HPC On Earth

One of the most interesting and strategically located datacenters in the world has taken a shine to HPC, and not just because it is a great business opportunity. Rather, Verne Global is firing up an HPC system rental service in its Icelandic datacenter because its commercial customers are looking for supercomputer-style systems that they can rent rather than buy to augment their existing HPC capacity.

Verne Global, which took over a former NATO airbase and an Allied strategic forces command center outside of Keflavik, Iceland back in 2012 and converted it into a super-secure datacenter, is this week taking the

Renting The Cleanest HPC On Earth was written by Timothy Prickett Morgan at The Next Platform.

Reaction: Science and Engineering

Are you a scientist, or an engineer? This question does not seem to occur to most engineers, but it does seem that science has “taken the lead role” in recent history, with engineers sometimes (or perhaps often) seen as “the folks who figure out how to make use of what scientists are discovering.” There are few fields where this seems closer to the truth than computing. Peter Denning has written an insightful article over at the ACM on this topic; a few reactions are in order.

Denning separates engineers from scientists by saying:

The first concerns the nature of their work. Engineers design and build technologies that serve useful purposes, whereas scientists search for laws explaining phenomena.

While this does seem like a useful starting point, I’m not at all certain the two fields can be cleanly separated in this way. The reality is that there is probably a continuum, starting from what might be called “meta-engineers,” those whose primary goal is to implement a technology designed by someone else by mentally reverse engineering what that “someone else” has done, to the deeply focused “pure scientist,” who really does not care about the practical application, but is rather simply searching Continue reading

AMD scores its first big server win with Azure

AMD built it, and now the OEM has come. In this case, its Epyc server processors have scored their first big public win, with Microsoft announcing Azure instances based on AMD’s Epyc server microprocessors. AMD was first to a 64-bit x86 design with Athlon on the desktop and Opteron on servers. Once Microsoft ported Windows Server to 64 bits, the benefit became immediately apparent: gone was the 4GB memory limit of 32-bit processors, replaced with a 16-exabyte address space, something we won’t live to see filled in our lifetimes (famous last words, I know). [Also on Network World: Micro-modular data centers set to multiply] When Microsoft published a white paper in 2005 detailing how it consolidated 250 32-bit MSN Network servers into 25 64-bit servers thanks to the extra memory, which meant more connections per machine, the ball started rolling for AMD. Within a few years, Opteron had 20 percent server market share. To read this article in full, please click here

History Of Networking – Alistair Woodman – VoIP

In this episode of History of Networking, Alistair Woodman joins us to discuss the beginnings of commercial VoIP, including a look at early protocols, CTI, and the early days of ATM versus Frame Relay versus IP.

Alistair Woodman – Guest
Russ White – Host
Donald Sharp – Host
Phil Gervasi – Host
Jordan Martin – Host

Outro Music:
Danger Storm Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/

The post History Of Networking – Alistair Woodman – VoIP appeared first on Network Collective.

Using Vagrant with Libvirt on Fedora 27

In this post, I’m going to show you how to use Vagrant with Libvirt via the vagrant-libvirt provider when running on Fedora 27. Both Vagrant and Libvirt are topics I’ve covered more than a few times here on this site, but this is the first time I’ve discussed combining the two projects.

If you’re unfamiliar with Vagrant, I recommend you start first with my quick introduction to Vagrant, after which you can browse all the “Vagrant”-tagged articles on my site for a bit more information. If you’re unfamiliar with Libvirt, you can browse all my “Libvirt”-tagged articles; I don’t have an introductory post for Libvirt.

Background

I first experimented with the Libvirt provider for Vagrant quite some time ago, but at that time I was using the Libvirt provider to communicate with a remote Libvirt daemon (the use case was using Vagrant to create and destroy KVM guest domains via Libvirt on a remote Linux host). I found this setup to be problematic and error-prone, and discarded it after only a short while.

Recently, I revisited using the Libvirt provider for Vagrant on my Fedora laptop (which I rebuilt with Fedora 27). As I mentioned in this post Continue reading
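
The excerpt cuts off before the how-to itself, but as a rough sketch of the Libvirt side of this pairing (not code from the post): with Fedora’s Python bindings for Libvirt installed (the python3-libvirt package, assuming current naming), you can list the KVM guest domains that a provider such as vagrant-libvirt creates against the local system daemon. The qemu:///system URI is an assumption about the typical local connection.

    import libvirt  # Fedora package: python3-libvirt (assumed name)

    # Connect to the local system libvirt daemon -- the same daemon the
    # vagrant-libvirt provider drives when it creates KVM guest domains.
    conn = libvirt.open("qemu:///system")
    try:
        for dom in conn.listAllDomains():
            state, _reason = dom.state()
            running = (state == libvirt.VIR_DOMAIN_RUNNING)
            print(f"{dom.name()}: {'running' if running else 'not running'}")
    finally:
        conn.close()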

IDG Contributor Network: Linux then, and why you should learn it now

The booming popularity of Linux happened around the same time as the rise of the web. The server world, once proprietary, eventually fell in love with Linux just the same way networking did. But for years after it began growing in popularity, it remained in the background. It powered some of the largest servers, but couldn’t find success on personal devices. That all changed with Google’s release of Android in 2008, and just like that, Linux found its way not only onto phones but onto other consumer devices. The same shift from proprietary to open is happening in networking. Specialized hardware that came from one of the “big 3” networking vendors isn’t so necessary anymore. What used to require this specialized hardware can now be done (with horsepower to spare) using off-the-shelf hardware, with Intel CPUs, and with the Linux operating system. Linux unifies the stack, and knowing it is useful for both the network and the rest of the rack. With Linux, networking is far more affordable, more scalable, easier to learn, and more adaptable to the needs of the business. To read this article in full, please click here

Large enterprises abandon data centers for the cloud

Sure, it was a cloud computing conference, and maybe the goal remains a bit unrealistic, but at AWS re:Invent in Las Vegas last week, the number of enterprises expressing a wish to stop running their own data centers was too big to ignore. Even old-line enterprise companies said they weren’t content to create a foothold in the cloud and stay with a hybrid cloud environment, though that’s the situation many currently find themselves in. No, many are looking to exit the data center business entirely, just as soon as they can manage it. [Also on Network World: How a giant like GE found a home in the cloud] And from the size and quality of the companies signing on to this stretch goal — think PG&E, Expedia, and to some extent even Goldman Sachs — it seemed clear that they represent only the tip of the iceberg. To read this article in full, please click here

Deep Dive Into Qualcomm’s Centriq Arm Server Ecosystem

Qualcomm launched its Centriq server system-on-chip (SoC) a few weeks ago. The event filled in Centriq’s tech specs and pricing, and disclosed a wide range of ecosystem partners and customers. I wrote about Samsung’s process and customer testimonials for Centriq elsewhere.

Although Qualcomm was launching its Centriq 2400 processor, instead of focusing on a bunch of reference-design-driven hardware partners it chose to focus the launch event on ecosystem development, with a strong emphasis on software workloads and partnerships. Because so much of today’s cloud workload mix is based on runtime environments – using containers, interpreted languages,

Deep Dive Into Qualcomm’s Centriq Arm Server Ecosystem was written by Timothy Prickett Morgan at The Next Platform.

Simplifying the Management of Kubernetes with Docker Enterprise Edition

Back in October at DockerCon Europe, we announced that Docker would be delivering a seamless and simplified integration of Kubernetes into the Docker platform. By integrating Kubernetes with Docker EE, we provide the choice to use Kubernetes and/or Docker Swarm for orchestration while maintaining the consistent developer-to-operator workflow users have come to expect from Docker. For users, this means they get an unmodified, conformant version of Kubernetes with the added value of the Docker platform, including security, management, a familiar developer workflow and tooling, broad ecosystem compatibility, and adherence to industry standards including containerd and the OCI.

Kubernetes and Docker

One of the biggest questions we’ve been asked since we announced support for Kubernetes at DockerCon EU is: what does this mean for an operations team that is already using Kubernetes to orchestrate containers within their enterprise? The answer is fairly straightforward – Kubernetes teams using Docker EE will have the following:

  • Full access to the Kube API and all Kubernetes constructs
  • Native use of kubectl
  • Seamless deployment if you are developing in Kube YAML
  • Ability to develop in Docker with Compose and leverage your best practices around Kubernetes services
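
Because the Kubernetes shipped in Docker EE is unmodified and conformant, existing Kubernetes tooling should keep working unchanged. As a minimal sketch (assuming a kubeconfig that already points at the Docker EE cluster and the official kubernetes Python client, neither of which is named in the post), listing pods looks exactly as it would against any other cluster:

    # Minimal sketch; assumes `pip install kubernetes` and a kubeconfig
    # pointing at the Docker EE cluster.
    from kubernetes import client, config

    config.load_kube_config()   # the same credentials kubectl would use
    v1 = client.CoreV1Api()

    # Standard Kubernetes API calls work unchanged against a conformant cluster.
    for pod in v1.list_pod_for_all_namespaces().items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)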

Docker Enterprise Edition with support for Kubernetes Continue reading

Make SSL boring again

It may (or may not!) come as a surprise, but a few months ago we migrated Cloudflare’s edge SSL connection termination stack to use BoringSSL: Google's crypto and SSL implementation that started as a fork of OpenSSL.

We dedicated several months of work to making this happen without a negative impact on customer traffic. We had a few bumps along the way and had to overcome some challenges, but we ended up in a better place than we were a few months ago.

TLS 1.3

We have already blogged extensively about TLS 1.3. Our original TLS 1.3 stack required our main SSL termination software (which was based on OpenSSL) to hand off TCP connections to a separate system based on our fork of Go's crypto/tls standard library, which was specifically developed to only handle TLS 1.3 connections. This proved handy as an experiment that we could roll out to our client base in relative safety.
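
The details of Cloudflare’s Go-based TLS 1.3 system are not in the excerpt, but the idea of a termination path dedicated to TLS 1.3 is easy to illustrate. Below is a rough, hypothetical sketch using Python’s standard ssl module (Python 3.8+ with OpenSSL 1.1.1 or later), not Cloudflare’s implementation: a server context pinned to TLS 1.3, so clients negotiating anything older fail the handshake. The certificate paths and port are placeholders.

    import socket
    import ssl

    # Build a server context that accepts only TLS 1.3 handshakes.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    ctx.maximum_version = ssl.TLSVersion.TLSv1_3
    ctx.load_cert_chain("server.crt", "server.key")   # placeholder paths

    with socket.create_server(("0.0.0.0", 8443)) as listener:   # placeholder port
        with ctx.wrap_socket(listener, server_side=True) as tls_listener:
            conn, addr = tls_listener.accept()   # pre-TLS-1.3 clients fail here
            print("negotiated", conn.version())  # expect "TLSv1.3"
            conn.close()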

However, over time, this separate system started to make our lives more complicated: most of our SSL-related business logic needed to be duplicated in the new system, which caused a few subtle bugs to pop up, and made it Continue reading

Dell EMC makes big hyperconverged systems push with new servers

Dell EMC is expanding its hyperconverged infrastructure portfolio with new systems built around 14th-generation PowerEdge servers. Converged infrastructure (CI) and hyperconverged infrastructure (HCI) are fancy terms for turnkey systems with compute, storage, networking, and software all combined into a single bundle. Rather than building a system from a variety of vendors, the customer gets everything they need from one vendor, and it comes pre-configured to run out of the box. It’s basically a page out of the mainframe book, when everything came from one vendor (usually IBM). As server technology moved away from big iron and the x86 market took over, the pieces were fragmented: you got your servers from Dell, HP, or IBM, storage from EMC or NetApp, networking from Cisco or 3Com, and so on. To read this article in full, please click here