Hi to those of you subscribing and following. For those of you who have been watching of late, this is a brief note to let you know what’s going on, and what my plan is for ramping back up with writing posts here in this blogspace. Short version: I expect to be back to normal sometime in August, and I may post a few items here and there in the meantime.
The first 802.11ac Wave 2 deployment in a professional sports arena raises questions about older stadium WLANs.
Three years ago I was speaking with one of the attendees of my overlay virtual networking workshop at Interop Las Vegas, and he asked me how soon I thought overlay virtual networking technologies would be accepted in enterprise networks.
My response: “you might be surprised at the speed of the uptake.” Turns out, I was wrong (again). Today I’m surprised at the lack of that speed.
Follow these seven steps to ensure a successful software upgrade.
Does your cable have the fiber you need to get the job done?
For the first time since the Top 500 rankings of the most powerful supercomputers in the world were started 23 years ago, the United States is not home to the largest number of machines on the list – and China, after decades of intense investment and engineering, is.
Supercomputing is not just an academic or government endeavor; it is an intensely nationalistic one, given the enormous sums required to create the components of these massive machines, write software for them, and keep them running until some new approach comes along. And given that the machines support the …
China Topples United States As Top Supercomputer User was written by Timothy Prickett Morgan at The Next Platform.
Much to the surprise of the supercomputing community, which is gathered in Germany for the International Supercomputing Conference this morning, news arrived that a new system has dramatically topped the Top 500 list of the world’s fastest and largest machines. And like the last one that took this group by surprise a few years ago, the new system is also in China.
Recall that the reigning supercomputer in China, the Tianhe-2 machine, has stood firmly at the top of that list for three years, outpacing the U.S. “Titan” system at Oak Ridge National Laboratory. We have a more detailed analysis …
A Look Inside China’s Chart-Topping New Supercomputer was written by Nicole Hemsoth at The Next Platform.
Nvidia wants its latest “Pascal” GP100 generation of GPUs to be broadly adopted across the market, not just used in capability-class supercomputers that push the limits of performance for traditional HPC workloads as well as for emerging machine learning systems. To accomplish this, Nvidia needs to put Pascal GPUs into a number of distinct devices that fit different system form factors and offer various capabilities at multiple price points.
At the International Supercomputing Conference in Frankfurt, Germany, Nvidia is therefore taking the wraps off two new Tesla accelerators based on the Pascal GPUs that plug into systems …
Nvidia Rounds Out Pascal Tesla Accelerator Lineup was written by Timothy Prickett Morgan at The Next Platform.
This week at the International Supercomputing Conference (ISC ’16) we are expecting a wave of vendors and high performance computing pros to blur the borders between traditional supercomputing and what is around the corner on the application front—artificial intelligence and machine learning.
For some, merging those two areas is a stretch, but for others, particularly GPU maker Nvidia, which just extended its supercomputing/deep learning roadmap this morning, the story is far more direct, since much of the recent deep learning work has hinged on GPUs for training neural networks and machine learning algorithms.
We have written extensively over …
What Will GPU Accelerated AI Lend to Traditional Supercomputing? was written by Nicole Hemsoth at The Next Platform.
This is a liveblog for the day 2 keynote of DockerCon 2016, which wraps up today in Seattle, WA. While today’s pre-keynote warm-up doesn’t include laser-equipped kittens, the music is much more upbeat and energetic (as opposed to yesterday’s more somber, dramatic music). If the number of laptops on the podium is any indicator (yesterday it was a clue to the number of demos planned), then today’s keynote will include a few demos as well.
Ben Golub kicks off the day 2 keynote—with the requisite coffee shot that is a sacrifice to the “demo gods”—and offers up some thanks to the supporters of last night’s party at the Space Needle. Golub quickly reviews the key announcements and demos from the day 1 keynote (see my liveblog here). Today, though, will be focused on democratizing Docker in the enterprise. In referring to Docker’s adoption in the enterprise, Golub shares some numbers that vary widely, and admits that it’s really difficult to know what the real adoption rate is. He points to multiple “critical transformations” occurring within the enterprise: application modernization, cloud adoption, and DevOps (process/procedure/culture changes).
This leads Golub into a discussion of anti-patterns, or fallacies. The first fallacy he …
This is a liveblog for the day 1 keynote of DockerCon 2016, taking place over the next couple of days in Seattle, WA. Before the keynote starts in earnest, Gordon the Turtle entertains attendees with some “special” Docker containers that affect the display on the main stage: showing butterflies, playing sounds, launching a Docker-customized version of Pac-Man, or initiating a full-out battle of laser-shooting kittens.
The keynote starts with Ben Golub taking the stage to kick things off. Golub begins his portion with a quick “look back” at milestones from previous Docker events and the history of Docker (the open source project). Golub calls out a few particular sessions—protein folding, data analysis in sports, and extending a video game—and then reveals that these sessions are being presented by kids under the age of 13.
This leads Golub into a review of the efforts of Docker (the company) to democratize containers:
Golub gives a “shout out” to the technologies underpinning modern Linux containers (namespaces, cgroups, etc., and their predecessors) and calls out the 2,900+ contributors to the open source Docker project. He then spends the next several minutes talking about various metrics—pull requests, containers …
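Since the keynote recap above mentions namespaces and cgroups only in passing, here is a minimal illustrative sketch (not from the original post) of how those kernel primitives can be inspected from user space. It assumes a Linux host with /proc mounted and uses only the standard /proc/&lt;pid&gt;/ns and /proc/&lt;pid&gt;/cgroup interfaces; the helper names are mine.

```python
# Illustrative sketch (not from the original post): inspect the Linux
# namespaces and cgroup membership of a process. Namespaces and cgroups
# are the kernel primitives the keynote credits as the foundation of
# modern Linux containers. Assumes a Linux host with /proc mounted.
import os


def list_namespaces(pid="self"):
    """Map namespace type (mnt, pid, net, ...) to its kernel identifier."""
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}


def list_cgroups(pid="self"):
    """Return the cgroup hierarchies this process belongs to."""
    with open(f"/proc/{pid}/cgroup") as f:
        return [line.strip() for line in f if line.strip()]


if __name__ == "__main__":
    for ns, ident in list_namespaces().items():
        print(f"namespace {ns:10s} -> {ident}")
    for entry in list_cgroups():
        print(f"cgroup    {entry}")
```

Run on the host and again inside a container, the namespace identifiers and cgroup paths differ, which is essentially the isolation that container runtimes such as Docker arrange on a process’s behalf.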