In the next session of the Network Automation Use Cases webinar (on Thursday, February 16th), I’ll describe how you could implement automated deployment of network services, and what you could do to minimize the impact of unintended consequences.
If you attended one of the previous sessions of this webinar, you’re already registered for this one; if not, visit this page and register.
Community networks are emerging and evolving in Africa as a sustainable solution to the continent’s connectivity gaps. So far, 37 community network initiatives have been identified in 12 African countries; 25 of them are considered active, trying to set up or improve their own telecommunications infrastructure to connect the unconnected.
One of the goals of the Internet Society is to help expand connectivity and promote increased collaboration between community network operators in the region, as well as to provide an opportunity for them to engage with other stakeholders.
As part of my migration to Linux as my primary laptop OS, I needed to revisit my choice of virtualization provider. Long-time readers probably know that I was an early adopter of VMware Fusion, starting way back in 2006 with the very first “friends and family” release (before it was even publicly available). Obviously I can’t use Fusion on Linux, but do I use VMware Workstation for Linux? VirtualBox? Or something else? That’s what I set out to determine, and in this post I’ll share what I selected and the reasoning behind that choice.
So what were the options to consider? While there may be some other solutions, these are the three I primarily assessed:
Since I’ve been using Vagrant quite a bit over the last few years, whatever solution I selected needed to work reasonably well with it.
I’m pretty familiar with KVM and Libvirt, so I started there. Given that KVM and Libvirt are “native” to Linux, it felt like it would be a clean solution. Continue reading
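As a quick aside (my illustration, not part of the original post): “native” here includes the fact that Libvirt exposes a first-class C API on Linux. A minimal sketch that opens a read-only connection to the local QEMU/KVM hypervisor and reports what it finds:

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* Open a read-only connection to the local QEMU/KVM hypervisor */
    virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
    if (conn == NULL) {
        fprintf(stderr, "failed to connect to qemu:///system\n");
        return 1;
    }

    /* Report the hypervisor type plus running and defined domains */
    const char *type = virConnectGetType(conn);
    int active = virConnectNumOfDomains(conn);
    int inactive = virConnectNumOfDefinedDomains(conn);
    printf("hypervisor: %s, running: %d, defined: %d\n",
           type ? type : "unknown", active, inactive);

    virConnectClose(conn);
    return 0;
}

Compile it with something like gcc list-domains.c $(pkg-config --cflags --libs libvirt) (the file name is hypothetical).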
I was a happy user of rxvt-unicode until I got a laptop with a HiDPI display. Switching from a LoDPI to a HiDPI screen and back was a pain: I had to manually adjust the font size in all terminals or restart them.
VTE is a library to build a terminal emulator using the GTK+ toolkit, which handles DPI changes. It is used by many terminal emulators, like GNOME Terminal, evilvte, sakura, termit and ROXTerm. The library is quite straightforward and writing a terminal doesn’t take much time if you don’t need many features.
Let’s see how to write a simple one.
Let’s start small, with a terminal using the default settings. We’ll write it in C; Vala is another supported option.
#include <vte/vte.h>

int main(int argc, char *argv[])
{
    GtkWidget *window, *terminal;

    /* Initialise GTK, the window and the terminal */
    gtk_init(&argc, &argv);
    terminal = vte_terminal_new();
    window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(window), "myterm");

    /* Start a new shell */
    gchar **envp = g_get_environ();
    gchar **command = (gchar *

Continue reading
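The excerpt cuts off mid-declaration, so for completeness here is a self-contained sketch of what such a minimal program looks like end to end. The spawn call and signal wiring below are my reconstruction using the standard VTE API (vte_terminal_spawn_sync), not necessarily the post’s exact code:

#include <vte/vte.h>

int main(int argc, char *argv[])
{
    GtkWidget *window, *terminal;

    /* Initialise GTK, the window and the terminal */
    gtk_init(&argc, &argv);
    terminal = vte_terminal_new();
    window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(window), "myterm");

    /* Start a new shell (assumes $SHELL is set) */
    gchar **envp = g_get_environ();
    gchar **command = (gchar *[]){
        g_strdup(g_environ_getenv(envp, "SHELL")), NULL };
    g_strfreev(envp);
    vte_terminal_spawn_sync(VTE_TERMINAL(terminal),
                            VTE_PTY_DEFAULT,
                            NULL,       /* working directory: inherit */
                            command,    /* argv for the child */
                            NULL,       /* environment: inherit */
                            0,          /* spawn flags */
                            NULL, NULL, /* child setup function + data */
                            NULL,       /* child PID (out) */
                            NULL,       /* cancellable */
                            NULL);      /* error */

    /* Quit when the window is closed or the shell exits */
    g_signal_connect(window, "delete-event", G_CALLBACK(gtk_main_quit), NULL);
    g_signal_connect(terminal, "child-exited", G_CALLBACK(gtk_main_quit), NULL);

    /* Pack the terminal into the window and run the main loop */
    gtk_container_add(GTK_CONTAINER(window), terminal);
    gtk_widget_show_all(window);
    gtk_main();
    return 0;
}

Build it with gcc myterm.c $(pkg-config --cflags --libs vte-2.91) (the pkg-config module name depends on your installed VTE version) and you get a working, if bare-bones, terminal window.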
About 275 employees will be impacted by the cuts.
Biggest announcement ever! (In terms of volume.)
We spend a lot of time contemplating what technologies will be deployed at the heart of servers, storage, and networks, and thereby form the foundation of the next generations of platforms in the datacenter for running applications old and new. While technology is inherently interesting, we are cognizant of the fact that the companies producing it need global reach and a certain critical mass.
It is with this in mind, and as more of a thought experiment than a desire, that we consider the fate of International Business Machines in the datacenter. In many ways, other companies have long …
The Case For IBM Buying Nvidia, Xilinx, And Mellanox was written by Timothy Prickett Morgan at The Next Platform.
Google has paused its Fiber initiative.
We have written much about large-scale deep learning implementations over the last couple of years, but one question that is being posed with increasing frequency is how these workloads (training in particular) will scale to many nodes. While different companies, including Baidu and others, have managed to get their deep learning training clusters to scale across many GPU-laden nodes, for the non-hyperscale companies with their own development teams, this scalability is a sticking point.
The answer to deep learning framework scalability can be found in the world of supercomputing. For the many nodes required for large-scale jobs, the de facto …
Pushing MPI into the Deep Learning Training Stack was written by Nicole Hemsoth at The Next Platform.
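To give a flavor of why MPI maps so naturally onto data-parallel training: averaging gradients across nodes is a single collective operation. A minimal illustrative C sketch (mine, not from the article):

#include <mpi.h>
#include <stdio.h>

#define NPARAMS 4   /* toy "model" with four parameters */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank would compute gradients on its own shard of the
     * training data; here we fake rank-dependent values. */
    double grad[NPARAMS], avg[NPARAMS];
    for (int i = 0; i < NPARAMS; i++)
        grad[i] = (double)(rank + 1) * (i + 1);

    /* One collective call sums the gradients across all ranks;
     * dividing by the rank count yields the average used for a
     * synchronous weight update. */
    MPI_Allreduce(grad, avg, NPARAMS, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    for (int i = 0; i < NPARAMS; i++)
        avg[i] /= size;

    if (rank == 0)
        printf("averaged gradient[0] = %g\n", avg[0]);

    MPI_Finalize();
    return 0;
}

Built with mpicc and launched with, say, mpirun -np 4 ./avg_grad (names hypothetical), every rank ends up holding the same averaged gradient to apply to its local copy of the model.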
Worth Reading: Addressing in 2016 ('net work).