Libertarians are against net neutrality

This post claims to be by a libertarian in support of net neutrality. As a libertarian, I need to debunk this. "Net neutrality" is a case of one hand clapping: you rarely hear the competing side, and so the side you do hear may sound attractive. This post is about the other side, from a libertarian point of view.



That post just repeats the common, and wrong, left-wing talking points. I mean, there might be a libertarian case for some broadband regulation, but this isn't it.

This thing they call "net neutrality" is just left-wing politics masquerading as some sort of principle. It's no different than how people claim to be "pro-choice", yet demand forced vaccinations. Or, it's no different than how people claim to believe in "traditional marriage" even while they are on their third "traditional marriage".

Properly defined, "net neutrality" means no discrimination of network traffic. But nobody wants that. A classic example is how most internet connections have faster download speeds than uploads. This discriminates against upload traffic, harming innovation in upload-centric applications like Dropbox's cloud backup or BitTorrent's peer-to-peer file transfer. Yet activists never mention this, or other types of network traffic discrimination, because they no more care about "net Continue reading

Getting Started: Setting Up A Job Template


Welcome to another post in our Getting Started series. In our previous post, we discussed the basic structure of how you can write your first playbook.

In this post, we will discuss how to set up job templates and run them against your inventory. We will also discuss job output and how you can view previous job runs to compare and contrast successful/failed runs.

Before we get started, a gentle reminder that in order to run job templates successfully in Red Hat® Ansible® Tower, you will need an inventory present, an updated project from which to select a playbook to run, and up-to-date credentials.

Job Templates: What Are They?

Job templates are a definition and set of parameters for running an Ansible Playbook. In Ansible Tower, job templates are a visual realization of the ansible-playbook command and all flags you can utilize when executing from the command line. A job template defines the combination of a playbook from a project, an inventory, a credential and any other Ansible parameters required to run.

When you run playbooks from the command line you use arguments to control and direct it. Whether you're invoking an inventory file Continue reading
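As a rough illustration, the kind of command-line invocation a job template captures might look like the following. The playbook, inventory file, and variable names here are made up for the example, not taken from the post:

```shell
# Each flag below corresponds to a field in a Tower job template:
# the playbook (from a project), the inventory, and extra Ansible
# parameters. File names and values are illustrative only.
ansible-playbook site.yml \
    --inventory hosts.ini \
    --limit webservers \
    --extra-vars "app_version=1.2.3" \
    --tags deploy \
    --verbose
```

A job template saves this combination of playbook, inventory, credential, and flags so it can be launched repeatedly from the UI or API without retyping anything.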

Renting The Cleanest HPC On Earth

One of the most interesting and strategically located datacenters in the world has taken a shine to HPC, and not just because it is a great business opportunity. Rather, Verne Global is firing up an HPC system rental service in its Icelandic datacenter because its commercial customers are looking for supercomputer-style systems that they can rent rather than buy to augment their existing HPC jobs.

Verne Global, which took over a former NATO airbase and an Allied strategic forces command center outside of Keflavik, Iceland back in 2012 and converted it into a super-secure datacenter, is this week taking the

Renting The Cleanest HPC On Earth was written by Timothy Prickett Morgan at The Next Platform.

Reaction: Science and Engineering

Are you a scientist, or an engineer? This question does not seem to occur to most engineers, but it does seem science has “taken the lead role” in recent history, with engineers being sometimes (or perhaps often) seen as “the folks who figure out how to make use of what scientists are discovering.” There are few fields where this seems closer to the truth than computing. Peter Denning has written an insightful article over at the ACM on this topic; a few reactions are in order.

Denning separates engineers from scientists by saying:

The first concerns the nature of their work. Engineers design and build technologies that serve useful purposes, whereas scientists search for laws explaining phenomena.

While this does seem like a useful starting point, I’m not at all certain the two fields can be cleanly separated in this way. The reality is there is probably a continuum starting from what might be called “meta-engineers,” those whose primary goal is to implement a technology designed by someone else by mentally reverse engineering what this “someone else” has done, to the deeply focused “pure scientist,” who really does not care about the practical application, but is rather simply searching Continue reading

AMD scores its first big server win with Azure

AMD built it, and now the OEM has come. In this case, its Epyc server processors have scored their first big public win, with Microsoft announcing Azure instances based on AMD’s Epyc server microprocessors. AMD was first to a 64-bit x86 design, with Athlon on the desktop and Opteron on servers. Once Microsoft ported Windows Server to 64 bits, the benefit became immediately apparent: gone was the 4GB memory limit of 32-bit processors, replaced with 16 exabytes of memory, something we won’t live to see in our lifetimes (famous last words, I know). When Microsoft published a white paper in 2005 detailing how it was able to consolidate 250 32-bit MSN Network servers into 25 64-bit servers thanks to the increase in memory, which meant more connections per machine, that started the ball rolling for AMD. Within a few years, Opteron had 20 percent server market share. To read this article in full, please click here
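The 4GB and 16-exabyte figures follow directly from address-space arithmetic: 32 address bits reach 2^32 bytes, 64 bits reach 2^64 bytes (strictly, 16 exbibytes). A quick sanity check:

```python
# Address space reachable with n address bits is 2**n bytes.
gib = 2**30  # one gibibyte
eib = 2**60  # one exbibyte

addr_32 = 2**32  # bytes addressable with 32 bits
addr_64 = 2**64  # bytes addressable with 64 bits

print(addr_32 // gib)  # 4  -> the familiar 4GB limit
print(addr_64 // eib)  # 16 -> the "16 exabytes" in the article
```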

History Of Networking – Alistair Woodman – VoIP

In this episode of History of Networking, Alistair Woodman joins us to discuss the beginnings of commercial VoIP, including a look at early protocols, CTI, and the early days of ATM versus Frame Relay versus IP.


Alistair Woodman
Guest
Russ White
Host
Donald Sharp
Host
Phil Gervasi
Host
Jordan Martin
Host

Outro Music:
Danger Storm Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 3.0 License
http://creativecommons.org/licenses/by/3.0/

The post History Of Networking – Alistair Woodman – VoIP appeared first on Network Collective.

Using Vagrant with Libvirt on Fedora 27

In this post, I’m going to show you how to use Vagrant with Libvirt via the vagrant-libvirt provider when running on Fedora 27. Both Vagrant and Libvirt are topics I’ve covered more than a few times here on this site, but this is the first time I’ve discussed combining the two projects.

If you’re unfamiliar with Vagrant, I recommend you start first with my quick introduction to Vagrant, after which you can browse all the “Vagrant”-tagged articles on my site for a bit more information. If you’re unfamiliar with Libvirt, you can browse all my “Libvirt”-tagged articles; I don’t have an introductory post for Libvirt.

Background

I first experimented with the Libvirt provider for Vagrant quite some time ago, but at that time I was using the Libvirt provider to communicate with a remote Libvirt daemon (the use case was using Vagrant to create and destroy KVM guest domains via Libvirt on a remote Linux host). I found this setup to be problematic and error-prone, and discarded it after only a short while.

Recently, I revisited using the Libvirt provider for Vagrant on my Fedora laptop (which I rebuilt with Fedora 27). As I mentioned in this post Continue reading
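To make the setup concrete, here is a minimal Vagrantfile sketch for the vagrant-libvirt provider. The box name and resource values are assumptions for illustration, not details taken from the post:

```ruby
# Minimal Vagrantfile for the vagrant-libvirt provider.
# Requires the plugin: vagrant plugin install vagrant-libvirt
Vagrant.configure("2") do |config|
  # Box name is an assumption; any libvirt-compatible box works.
  config.vm.box = "fedora/27-cloud-base"

  config.vm.provider :libvirt do |lv|
    lv.memory = 2048  # guest RAM in MB
    lv.cpus   = 2     # virtual CPUs
  end
end
```

With this in place, `vagrant up --provider=libvirt` creates the KVM guest domain via the local Libvirt daemon.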

IDG Contributor Network: Linux then, and why you should learn it now

The booming popularity of Linux happened around the same time as the rise of the web. The server world, once proprietary, eventually fell in love with Linux just the same way networking did. But for years after it began growing in popularity, it remained in the background. It powered some of the largest servers, but couldn’t find success on personal devices. That all changed with Google’s release of Android in 2008, and just like that, Linux found its way not only onto phones but onto other consumer devices.The same shift from proprietary to open is happening in networking. Specialized hardware that came from one of the “big 3” networking vendors isn’t so necessary anymore. What used to require this specialized hardware can now be done (with horsepower to spare) using off-the-shelf hardware, with Intel CPUs, and with the Linux operating system. Linux unifies the stack, and knowing it is useful for both the network and the rest of the rack. With Linux, networking is far more affordable, more scalable, easier to learn, and more adaptable to the needs of the business.To read this article in full, please click here
