All the shiny and zippy hardware in the world is meaningless without software, and that software can only go mainstream if it is easy to use. It has taken Linux two decades to get enterprise features and polish, and Windows Server took as long, too. So did a raft of open source middleware applications for storing data and interfacing back-end databases and datastores with Web front ends.
Now it is time for HPC and AI applications, and hopefully it won’t take as long.
As readers of The Next Platform know full well, HPC applications are not new. In fact, they …
Mainstreaming HPC Codes Will Drive The Next GPU Wave was written by Timothy Prickett Morgan at The Next Platform.
More than a year after the previous release, VyOS 1.1.8 is finally out. VyOS is an open source network operating system that can be installed on physical hardware or as a virtual machine. It is based on GNU/Linux and combines multiple applications such as Quagga, ISC DHCPD, OpenVPN, strongSwan and others under a single management interface. VyOS is a cheap and effective solution for anyone who wants to learn a Junos-like CLI.
Linux users can use my installation scripts for zero-touch VyOS deployment. The scripts download the newest stable VyOS x86-64 Live ISO image from the web, create a VMware VMDK disk, and install VyOS from the ISO onto that disk. The scripts are available here (part 1.1).
Picture 1 - VyOS Version 1.1.8
Note: The scripts are tested on Linux with Qemu, KVM and Expect installed. First, run the Bash script deploy vyos.sh; it downloads the latest VyOS ISO image. Then run the Expect script install vyos.exp, which installs VyOS from the Live CD onto the disk.
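To give a taste of the Junos-like CLI mentioned above, here is a minimal hypothetical first-boot configuration session (the interface name and addresses are examples, not taken from the post):

```
configure
set system host-name vyos-lab
set interfaces ethernet eth0 address 192.0.2.1/24
set protocols static route 0.0.0.0/0 next-hop 192.0.2.254
commit
save
exit
```

As on Junos, changes are staged in configuration mode and only take effect on commit; save persists them across reboots.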
Whitespace handling is one of the most confusing aspects of Jinja2, thoroughly frustrating many attendees of my Ansible and Network Automation online courses.
I decided to fix that, ran a few well-controlled experiments, and documented the findings and common caveats in the Whitespace Handling in Jinja2 video.
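As a taste of what the video covers, here is a minimal sketch (assuming the jinja2 package is installed) of the two environment settings that resolve most whitespace surprises: trim_blocks removes the newline after a block tag, and lstrip_blocks strips indentation before one.

```python
from jinja2 import Environment

# A template whose {% for %} tags sit on their own lines, as they
# usually do in readable network templates.
template = """{% for vlan in vlans %}
vlan {{ vlan }}
{% endfor %}
"""

# Default behaviour: the newlines around the block tags leak into the output.
default = Environment().from_string(template).render(vlans=[10, 20])

# With trim_blocks and lstrip_blocks, block tags leave no whitespace behind.
tidy = Environment(
    trim_blocks=True, lstrip_blocks=True
).from_string(template).render(vlans=[10, 20])

print(repr(default))  # stray blank lines from the block tags
print(repr(tidy))     # 'vlan 10\nvlan 20\n'
```

The same settings can be enabled per-tag with the `{%- ... -%}` syntax, but setting them once on the environment keeps templates readable.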
A recent project I was working on involved joining a new office to our existing Data Centres and OSPF core using a Gig circuit over the Internet. To flesh out this idea and test its viability, I thought I would try to solve an ESX capacity problem I have at home by moving vCentre into the cloud.
Long-time readers are probably aware that I’m a big fan of Markdown. Specifically, I prefer the MultiMarkdown variant that adds some additional extensions beyond “standard” Markdown. As such, I’ve long used Fletcher Penny’s MultiMarkdown processor (the latest version, version 6, is available on GitHub). While Fletcher offers binary builds for Windows and macOS, the Linux binary has to be compiled from source. In this post, I’ll provide the steps I followed to compile a MultiMarkdown binary for Fedora 27.
The “How to Compile” page on the MMD-6 Wiki is quite sparse, so a fair amount of trial-and-error was needed. To keep my main Fedora installation as clean as possible, I used Vagrant with the Libvirt provider to create a “build VM” based on the “fedora/27-cloud-base” box.
Once the VM was running, I installed the necessary packages to compile the source code. It turns out only the following packages were necessary:
sudo dnf install gcc make cmake gcc-c++
Then I downloaded the source code for MMD-6:
curl -LO https://github.com/fletcher/MultiMarkdown-6/archive/6.2.3.tar.gz
Unpacking the archive with tar created a MultiMarkdown-6-6.2.3 directory. After changing into that directory, the instructions from the Wiki page worked as expected:
make
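For convenience, the whole sequence can be collected into one script. This is a sketch assembled from the steps above; the dry-run guard is my addition (set RUN=1 to actually download and build), and everything past the tarball download follows the Wiki's plain make instruction.

```shell
#!/bin/sh
# Consolidated MMD-6 build steps for Fedora 27 (sketch; set RUN=1 to execute).
VERSION="6.2.3"
SRCDIR="MultiMarkdown-6-${VERSION}"   # directory the tarball unpacks to

run() {
    # Execute the command when RUN=1; otherwise just print it (dry run).
    if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

run sudo dnf install -y gcc make cmake gcc-c++
run curl -LO "https://github.com/fletcher/MultiMarkdown-6/archive/${VERSION}.tar.gz"
run tar xzf "${VERSION}.tar.gz"
run sh -c "cd '${SRCDIR}' && make"
```

Running it without RUN=1 just echoes the commands, which is handy for checking the version string before committing to a build.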
Continue reading
After explaining the basics of PowerShell, Mitja Robas described how to implement the “Hello, World!” of network automation (collecting printouts from network devices) in PowerShell.
To watch all videos from this free webinar, register here.
Docker Machine is, in my opinion, a useful and underrated tool. I’ve written before about using Docker Machine with various services/providers; for example, see this article on using Docker Machine with AWS, or this article on using Docker Machine with OpenStack. Docker Machine also supports local hypervisors, such as VMware Fusion or VirtualBox. In this post, I’ll show you how to use Docker Machine with KVM and Libvirt on a Linux host (I’m using Fedora 27 as an example).
Docker Machine ships with a bunch of different providers, but the KVM/Libvirt provider must be obtained separately (you can find it here on GitHub). Download a binary release (make sure it is named docker-machine-driver-kvm), mark it as executable, and place it somewhere in your PATH. Fedora 27 comes with KVM and the Libvirt daemon installed by default (in order to support the Boxes GUI virtualization app), but I found it helpful to also install the client-side tools:
sudo dnf install libvirt-client
This will make the virsh tool available, which is useful for viewing Libvirt-related resources. Once you have both the KVM/Libvirt driver and the Libvirt client tools installed, you can launch a VM:
docker-machine create -d kvm --kvm-network Continue reading
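The snippet above is cut off, so here is a hypothetical complete workflow. The "default" libvirt network and the VM name are my assumptions, and the dry-run guard just prints the commands unless RUN=1:

```shell
#!/bin/sh
# Hypothetical docker-machine + KVM workflow (sketch; set RUN=1 to execute).
run() {
    if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

# Create the VM on the "default" libvirt network (an assumed flag value).
run docker-machine create -d kvm --kvm-network default kvm-test

# Print the env vars that point the local docker CLI at the new VM's daemon.
run docker-machine env kvm-test

# The new machine also shows up as an ordinary libvirt domain.
run virsh -c qemu:///system list
```

After evaluating the env output, any docker command on the host runs against the daemon inside the KVM guest.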
Two years ago I wrote about how to use InfluxDB & Grafana for better visualization of network statistics. I still loathe MRTG graphs, but configuring InfluxSNMP was a bit of a pain. Luckily it’s now much easier to collect SNMP data using Telegraf. InfluxDB and Grafana have also improved a lot. Read on for details on how to monitor network interface statistics using Telegraf, InfluxDB and Grafana.
There are three parts to this:
Grafana: Grafana is “The open platform for beautiful analytics and monitoring.” It makes it easy to create dashboards for displaying data from many sources, particularly time-series data. It works with several different data sources such as Graphite, Elasticsearch, InfluxDB, and OpenTSDB. We’re going to use this as our main front end for visualising our network statistics.
InfluxDB: InfluxDB is “…a data store for any use case involving large amounts of timestamped data.” This is where we’re going to store our network statistics. It is designed for exactly this use case, where metrics are collected over time.
Telegraf: Telegraf is “…a plugin-driven server agent for collecting and reporting metrics.” This can collect data from a wide variety of sources, Continue reading
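To make the pipeline concrete, here is a minimal hypothetical Telegraf configuration for this setup. The agent address, community string, and database name are placeholders; check the snmp input plugin documentation for your Telegraf version:

```toml
[agent]
  interval = "60s"              # poll every minute

[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"         # InfluxDB database to write metrics into

[[inputs.snmp]]
  agents = ["192.0.2.10:161"]   # switch/router to poll (placeholder)
  version = 2
  community = "public"

  # Tag every metric with the device's sysName.
  [[inputs.snmp.field]]
    name = "hostname"
    oid = "RFC1213-MIB::sysName.0"
    is_tag = true

  # Walk the interface table for per-interface counters.
  [[inputs.snmp.table]]
    name = "interface"
    inherit_tags = ["hostname"]
    oid = "IF-MIB::ifTable"
```

Grafana then reads from the same InfluxDB database to draw the dashboards.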
I was reading a great post this week from Gian Paolo Boarina (@GP_Ifconfig) about complexity in networking. He raises some great points about the overall complexity of systems and how we can never really reduce it, just move or hide it. And it made me think about complexity in general. Why are we against complex systems?
Complexity is difficult. The more complicated we make something the more likely we are to have issues with it. Reducing complexity makes everything easier, or at least appears to do so. My favorite non-tech example of this is the carburetor of an internal combustion engine.
Carburetors are wonderful devices that are necessary for the operation of the engine. And they are very complicated indeed. A minor mistake in configuring the spray pattern of the jets or their alignment can cause your engine to fail to work at all. However, when you spend the time to learn how to work with one properly, you can make the engine perform even above the normal specifications.
Carburetors have been largely replaced in modern engines by computerized fuel injectors. These systems accomplish the same goal of injecting the fuel-air mixture into Continue reading
Founded in 1792, Alm. Brand is a Danish insurance and banking company headquartered in Copenhagen, Denmark and one of the oldest companies to have ever presented at any DockerCon. Sune Keller, an IT architect, and Loke Johannessen, a systems specialist, rode their bikes to DockerCon Europe 2017 to demonstrate how they helped lift and shift their legacy WebLogic applications to Docker Enterprise Edition (Docker EE). You can watch their entire talk here:
Alm. Brand started working with Docker EE after hearing about it at DockerCon 2015 (known as Docker Datacenter back then). After successfully deploying the first set of new greenfield apps in their Docker EE environment, Alm. Brand wanted to tackle their existing WebLogic applications which were causing the operations team the biggest headaches. The team operated the WebLogic applications in a large cluster, all running on the same JVM. When an app crashed, it would often crash the entire JVM or hang the entire cluster, making it hard to identify which application was the root cause. The setup was also very brittle and slow as they could only deploy one app at a time to the cluster.
With the skills Continue reading
Each year at the ISC and SC supercomputing shows, a central focus tends to be the release of the Top500 list of the world’s most powerful supercomputers. As we’ve noted in The Next Platform, the 25-year-old list may have its issues, but it still captures the imagination, with lineups of ever-more powerful systems that reflect the trend toward heterogeneity and accelerators and illustrate the growing competition between the United States and China for dominance in the HPC field, the continued strength of Japan’s supercomputing industry and the desire of European Union countries to …
Green500 Drives Power Efficiency For Exascale was written by Jeffrey Burt at The Next Platform.
When your in-laws give your child a loud toy for the holidays, you know you are going to have to hear it for the next few months. But when that toy connects to the Internet, how can you be sure that you’re the only ones listening?
This holiday season, “smart toys” (Internet or Bluetooth-enabled toys) are some of the most popular toys on the market, and a lot of these toys look awesome.
Smart toys come with fantastic features, but if left unsecured, smart toys can present a serious privacy risk to those who use them. For instance:
Unsecured smart toys present Continue reading
There has been a lot of talk about taking HPC technologies mainstream, moving them out of the realm of research, education and government institutions and making them available to enterprises challenged by the need to manage and process the huge amounts of data generated by compute- and storage-intensive workloads such as analytics, artificial intelligence and machine learning.
At The Next Platform, we have written about the efforts by systems OEMs like IBM, Dell EMC, and Hewlett Packard Enterprise and software makers like Microsoft and SAP to develop offerings that are cost-efficient and …
The Symmetry Of Putting Fluid Dynamics In The Cloud was written by Jeffrey Burt at The Next Platform.