Microsoft Turns to Former Foe Linux to Secure IoT Devices
The Azure Sphere technology includes a thumbnail-sized micro-controller unit, a Linux-based operating system, and a cloud-based security service.
As humankind continues to stare into the dark abyss of deep space in an eternal quest to understand our origins, new computational tools and technologies are needed at unprecedented scales. Gigantic datasets from advanced high resolution telescopes and huge scientific instrumentation installations are overwhelming classical computational and storage techniques.
This is the key issue with exploring the Universe – it is very, very large. Combining advances in machine learning and high-speed data storage is starting to provide levels of insight that were previously in the realm of pure science fiction. Using computer systems to infer knowledge …
GPUs Mine Astronomical Datasets For Golden Insight Nuggets was written by James Cuff at The Next Platform.
Many of us are impatient for Arm processors to take off in the datacenter in general and in HPC in particular. And ever so slowly, it looks like it is starting to happen.
Every system buyer wants choice because choice increases competition, which lowers cost and mitigates risk. But no organization, no matter how large, can afford to build its own software ecosystem. Even the hyperscalers like Google and Facebook, who literally make money on the apps running on their vast infrastructure, rely heavily on the open source community, taking as much as they give back. So it is …
Another Step In Building The HPC Ecosystem For Arm was written by Timothy Prickett Morgan at The Next Platform.
Hyperconverged storage is a hot commodity right now. Enterprises want to dump their disk arrays and get an easier and less costly way to scale the capacity and performance of their storage to keep up with application demands. Nutanix has become a significant player in a space where established vendors like Hewlett Packard Enterprise, Dell EMC, and Cisco Systems are broadening their portfolios and capabilities.
But as hyperconverged infrastructure (HCI) becomes increasingly popular and begins moving up from midrange environments into larger enterprises, challenges are becoming evident, from the need to bring in new – and at times …
Pushing Up The Scale For Hyperconverged Storage was written by Jeffrey Burt at The Next Platform.
SDN and NFV are a reality today, but is it the reality that the industry wanted? It's up to the SDN community to set realistic expectations and be candid about the challenges.
The companies completed live tests of 5G in a Toronto stadium and plan to continue testing in other Canadian cities over the next year.
Just like Windows and Linux servers, networking devices can be exploited through vulnerabilities found in their operating systems. Many IT organizations do not have a comprehensive strategy for mitigating security vulnerabilities that span multiple teams (networking, servers, storage, etc.). Since the majority of network operations are still manual, the need to mitigate quickly and reliably across multiple platforms consisting of hundreds of network devices becomes extremely important.
In Cisco’s March 2018 Semiannual Cisco IOS and IOS XE Software Security Advisory Bundled Publication, 22 vulnerabilities were detailed. While Red Hat does not report or track individual networking vendors’ CVEs, Red Hat Ansible Engine can be used to quickly automate mitigation of CVEs based on instructions from networking vendors.
In this blog post we are going to walk through CVE-2018-0171 which is titled “Cisco IOS and IOS XE Software Smart Install Remote Code Execution Vulnerability.” This CVE is labeled as critical by Cisco, with the following headline summary:
“...a vulnerability in the Smart Install feature of Cisco IOS Software and Cisco IOS XE Software could allow an unauthenticated, remote attacker to trigger a reload of an affected device, resulting in a …
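Cisco's advisory for CVE-2018-0171 recommends disabling the Smart Install client with the global configuration command `no vstack` on devices that do not need the feature. A minimal sketch of the check-and-remediate logic that a playbook or script would apply per device might look like this (the function name and the config-as-text approach are illustrative assumptions, not part of any vendor tooling):

```python
def smart_install_remediation(running_config: str) -> list[str]:
    """Return the IOS commands needed to disable the Smart Install
    client, or an empty list if it is already disabled.

    Smart Install is enabled by default on affected IOS/IOS XE
    switches; it is off only if 'no vstack' appears explicitly in
    the running configuration.
    """
    lines = {line.strip() for line in running_config.splitlines()}
    if "no vstack" in lines:
        return []          # already mitigated, nothing to push
    return ["no vstack"]   # disable the Smart Install client
```

In practice the returned command list would be pushed to each device by an Ansible network module or an SSH library, letting one run remediate hundreds of switches consistently instead of logging in to each one by hand.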
The company is looking to support unmodified big data software in containers so data scientists can spend their time analyzing data rather than fighting hardware and drivers.
A U.K. government agency is also recommending that telcos not purchase equipment from ZTE.
Recently, Bert Hubert wrote of a growing problem in the networking world: the complexity of DNS. We have two systems we all use in the Internet, DNS and BGP. Both of these systems appear to be able to handle anything we can throw at them and “keep on ticking.”
But how far can we drive the complexity of these systems before they ultimately fail? Bert posted a chart to the APNIC blog illustrating the problem.
I am old enough to remember when the entire Cisco IOS Software (classic) code base was under 150,000 lines; today, I suspect most BGP and DNS implementations are well over this size. Consider this for a moment—a single protocol implementation that is larger than an entire Network Operating System ten to fifteen years back.
What really grabbed my attention, though, was one of the reasons Bert believes we have these complexity problems—
DNS developers frequently see immense complexity not as a problem but as a welcome challenge to be overcome. We say ‘yes’ to things we should say ‘no’ to. Less gifted developer communities would have to say no automatically since they simply would not be able to implement all that new stuff.
An IBM security report found a 424-percent jump in breaches related to misconfigured cloud infrastructure in 2017, largely due to human error.

Sanitize and validate configuration inputs, and respond to implausible inputs by both continuing to operate in the previous state and alerting to the receipt of bad input. Bad input often falls into one of these categories:
- Validate both syntax and, if possible, semantics. Watch for empty data and partial or truncated data (e.g., alert if the configuration is N% smaller than the previous version).
- This may invalidate current data due to timeouts. Alert well before the data is expected to expire.
- Fail in a way that preserves function, possibly at the expense of being overly permissive or overly simplistic. We’ve found that it’s generally safer for systems to continue functioning with their previous configuration and await a human’s approval before using the new, perhaps invalid, data.
In 2005, Google’s global DNS load- and latency-balancing system received an empty DNS entry file as a result of file permissions. It accepted this empty file and served NXDOMAIN for …