Enterprises should find it easier to tap the benefits of FPGAs now that Dell EMC and Fujitsu are putting Intel Arria 10 GX Programmable Acceleration Cards into off-the-shelf servers for the data center.
“Friends don't let friends build data centers.” That slogan wasn’t even printed on a real T-shirt you could buy. It was just one of the choices in an online poll to choose what Amazon Web Services CTO Werner Vogels should wear. But it pretty much captured the mood at the AWS Summit San Francisco last week, where Vogels gave the opening keynote to some 9,000 cloud-loving attendees.
On stage, Vogels crowed about multiple enterprises abandoning large numbers of data centers in order to move their workloads to the cloud. He cited Cox Automotive—the company behind Autotrader, Dealer.com, Kelley Blue Book, and many more car-shopping brands—“going all in on AWS” and closing more than 40 data centers. He noted that U.K. news provider News International is shutting down 60 data centers, and GE is closing approximately 30 data centers. And Vogels mentioned that the U.K.’s Ministry of Justice was moving to AWS, as well, though he didn’t say whether it was closing any data centers in the process.
Cisco has added new cloud and virtual deployment options for customers looking to buy into its Tetration Analytics security system. Cisco’s Tetration system gathers information from hardware and software sensors and analyzes it using big-data analytics and machine learning to offer IT managers a deeper understanding of their data center resources.
Tetration can improve enterprise security monitoring, simplify operational reliability, give customers a single tool to collect consistent security telemetry across the entire data center, and analyze large volumes of data in real time.
IBM is widening its mainframe range with some narrower models – ZR1 and Rockhopper II – that are skinny enough to fit in a standard 19-inch rack, answering criticism from potential customers that the hulking z14 introduced in July 2017 was too big to fit in their data centers. In addition to new, smaller packaging for its z14 hardware, IBM is also introducing Secure Service Container technology. This makes use of the z14's encryption accelerator and other security capabilities to protect containerized applications from unwanted interference.
When IBM introduced the z14 last July, with an accelerator to make encrypting information standard practice in the data center, there was one problem: The mainframe's two-door cabinet was far too deep and too wide to fit in standard data center aisles.
Putting unused CPUs to work is nothing new. In the modern era, it started in 1999 with the launch of SETI@Home, a screensaver that also examined slices of radio signals gathered by a giant telescope for signs of intergalactic life. Nineteen years later, ET still hasn’t phoned us. But the concept grew to dozens of science- and math-related projects. I took part in the World Community Grid run by IBM for years, letting my idle PC look for potential cures for AIDS and Ebola.
Five servers that exist thanks to the Open Compute Project
The Open Compute Project began life when Facebook asked the question, “What if we could design our own servers, rather than having to take what vendors offer?” The answer was a series of designs for servers that would be cheaper to build and operate. Facebook decided that it stood a better chance of finding a manufacturer for its designs if others wanted to buy them too, so with the support of Intel and Rackspace, it opened up its designs and invited others to build and build on them too.
The European Union’s General Data Protection Regulation (GDPR) will force very strict new privacy compliance rules on firms doing business in the EU, but a startup that has an atrocious company and product name has what it says is the solution to maintaining compliance. Cockroach Labs has introduced version 2.0 of its CockroachDB distributed database, which can be run in a data center or cloud. The company bills the product as “the SQL database for global cloud services.” It automatically scales, rebalances, and repairs databases spread over multiple locations.
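CockroachDB speaks the PostgreSQL wire protocol, so an ordinary Postgres driver such as psycopg2 can talk to it. The sketch below is a minimal illustration of that, assuming a local test cluster; the host, database, and table names are invented for the example and are not from Cockroach Labs' documentation or this article.

```python
# Minimal sketch: talking to a CockroachDB cluster with a standard
# PostgreSQL driver (psycopg2). Host, database, and table names are
# hypothetical; 26257 is CockroachDB's default SQL port.
import psycopg2

conn = psycopg2.connect(
    host="localhost",      # assumed: a local or load-balanced cluster node
    port=26257,            # CockroachDB's default SQL port
    user="root",
    dbname="defaultdb",
    sslmode="disable",     # assumed: an insecure test cluster
)
conn.autocommit = True

with conn.cursor() as cur:
    # An ordinary SQL table; the database handles replication,
    # rebalancing, and repair across nodes underneath.
    cur.execute(
        "CREATE TABLE IF NOT EXISTS consent_log ("
        "  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),"
        "  subject_email STRING NOT NULL,"
        "  granted_at TIMESTAMPTZ DEFAULT now()"
        ")"
    )
    cur.execute(
        "INSERT INTO consent_log (subject_email) VALUES (%s)",
        ("user@example.com",),
    )
    cur.execute("SELECT count(*) FROM consent_log")
    print("rows:", cur.fetchone()[0])

conn.close()
```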
Reno-based analyst firm Synergy Research Group released a review of the 2017 cloud market on January 4. The report, which estimated the total scope of the industry at $180 billion, gauged the year-over-year growth rate of infrastructure as a service (cloud hosting) and platform as a service (combined cloud hardware and software) at 47%. Such astronomical growth in cloud infrastructure is fueling the growth of data centers. The extent to which cloud is becoming the new form of infrastructure cannot be overstated, with Cisco predicting that 95% of data center traffic will flow through cloud infrastructure by 2021.
The infrastructure required to run artificial intelligence algorithms and train deep neural networks is so dauntingly complex that it’s hampering enterprise AI deployments, experts say. “55% of firms have not yet achieved any tangible business outcomes from AI, and 43% say it’s too soon to tell,” says Forrester Research about the challenges of transitioning from AI excitement to tangible, scalable AI success.
“The wrinkle? AI is not a plug-and-play proposition,” the analyst group says. “Unless firms plan, deploy, and govern it correctly, new AI tech will provide meager benefits at best or, at worst, result in unexpected and undesired outcomes.”
It’s fair to say that there has never been a bigger driver of network evolution than the cloud. The reason is that the cloud is a fundamentally different kind of compute paradigm: it enables application, data, and architecture changes to be made almost instantly. Cloud-native infrastructure is what enables mobile app developers to roll out new versions daily if they so choose.
The cloud is network-centric
Another fact about the cloud is that it is a network-centric compute model, so a poorly performing network leads to equally poorly performing applications. A lack of network agility means DevOps teams need to sit around twiddling their thumbs while network operations make changes to the network.
At its GPU Technology Conference this week, Nvidia took the wraps off a new DGX-2 system it claims is the first to offer multi-petaflop performance in a single server, greatly reducing the footprint needed to get to true high-performance computing (HPC). The DGX-2 comes just seven months after the DGX-1 was introduced, although it won’t ship until the third quarter. However, Nvidia claims it has 10 times the compute power of the previous generation, thanks to twice the number of GPUs, much more memory per GPU, faster memory, and a faster GPU interconnect.
The DGX-2 uses the Tesla V100 GPU, the top of the line for Nvidia’s HPC and artificial intelligence cards, and with the DGX-2 the on-board memory per GPU has been doubled to 32GB. Nvidia claims the DGX-2 is the world’s first single physical server with enough computing power to deliver two petaflops, a level of performance usually delivered by hundreds of servers networked into clusters.
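The two-petaflop headline is easy to reconstruct if you assume Nvidia's published figure of roughly 125 teraflops of mixed-precision Tensor Core throughput per Tesla V100 and the 16 GPUs in a DGX-2; both numbers come from Nvidia's public specs rather than this article.

```python
# Back-of-the-envelope check of the DGX-2's two-petaflop claim.
# Assumes ~125 TFLOPS of Tensor Core (mixed-precision) throughput per
# Tesla V100 and 16 GPUs per DGX-2, per Nvidia's published specs.
gpus_per_dgx2 = 16
tensor_tflops_per_v100 = 125  # TFLOPS, mixed precision

total_tflops = gpus_per_dgx2 * tensor_tflops_per_v100
print(f"{total_tflops} TFLOPS = {total_tflops / 1000:.1f} petaflops")  # 2000 TFLOPS = 2.0 petaflops
```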
The growth in cloud computing has shone a spotlight on data centers, which by some estimates already consume at least 7 percent of the global electricity supply, a share that keeps growing. This has led the IT industry to search for ways of making infrastructure more efficient, including some efforts that attempt to rethink the way computers and data centers are built in the first place.
Composable infrastructure treats compute, storage, and network devices as pools of resources that can be provisioned as needed, depending on what different workloads require for optimum performance. It’s an emerging category of infrastructure that’s aimed at optimizing IT resources and improving business agility. The approach is like a public cloud in that resource capacity is requested and provisioned from shared capacity – except composable infrastructure sits on-premises in an enterprise data center.
IT resources are treated as services, and the composable aspect refers to the ability to make those resources available on the fly, depending on the needs of different physical, virtual, and containerized applications. A management layer is designed to discover and access the pools of compute and storage, ensuring that the right resources are in the right place at the right time.
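A minimal sketch of that idea follows, with an entirely hypothetical management-layer API (none of these class or method names come from a real product): resources live in shared pools, and a "composed" system is just a slice of each pool claimed on request and returned when the workload is torn down.

```python
# Illustrative sketch of composable infrastructure, not a real vendor API.
# Pools of compute, storage, and network capacity are carved up on demand
# by a management layer and released when a workload is decomposed.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    name: str
    capacity: int          # abstract units (cores, GB, Gbps)
    allocated: int = 0

    def claim(self, units: int) -> None:
        if self.allocated + units > self.capacity:
            raise RuntimeError(f"{self.name}: not enough free capacity")
        self.allocated += units

    def release(self, units: int) -> None:
        self.allocated = max(0, self.allocated - units)

@dataclass
class ComposedSystem:
    workload: str
    cores: int
    storage_gb: int
    bandwidth_gbps: int

class Composer:
    """Hypothetical management layer that composes systems from shared pools."""
    def __init__(self, compute: ResourcePool, storage: ResourcePool, network: ResourcePool):
        self.compute, self.storage, self.network = compute, storage, network

    def compose(self, workload: str, cores: int, storage_gb: int, bandwidth_gbps: int) -> ComposedSystem:
        self.compute.claim(cores)
        self.storage.claim(storage_gb)
        self.network.claim(bandwidth_gbps)
        return ComposedSystem(workload, cores, storage_gb, bandwidth_gbps)

    def decompose(self, system: ComposedSystem) -> None:
        self.compute.release(system.cores)
        self.storage.release(system.storage_gb)
        self.network.release(system.bandwidth_gbps)

composer = Composer(ResourcePool("compute", 512), ResourcePool("storage", 100_000), ResourcePool("network", 400))
db = composer.compose("analytics-db", cores=64, storage_gb=20_000, bandwidth_gbps=40)
print(f"composed {db.workload}; free cores: {composer.compute.capacity - composer.compute.allocated}")
composer.decompose(db)
```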
The notion of disaggregation – separating the operating system and applications from the underlying hardware – has always been a conundrum for Cisco. In a nutshell, why would the company risk losing all of the millions of dollars in development and the key networking features tied up in current Cisco hardware and software packages? But in the new world of all-things software in which Cisco plans to be king, the disaggregation strategy is gaining momentum.
This week the company went a step further, announcing a variety of disaggregation moves that enterprise and service-provider customers may find attractive.
Humans are far better at identifying data pattern changes audibly than they are graphically in two dimensions, researchers exploring a radical concept say. They think that servers full of big data would be far more understandable if the numbers were all moved off the computer screens or hardcopies and sonified, or converted into sound. That's because when listening to music, nuances can jump out at you — a bad note, for example. And researchers at Virginia Tech say the same thing may apply with number crunching: data-set anomaly spotting, or comprehension overall, could be enhanced.
The team behind a project to prove this is testing the theory with a recently built 129-loudspeaker array installed in a giant immersive cube in Virginia Tech’s performance space/science lab, the school's Moss Arts Center.
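As a rough illustration of the concept (and not the Virginia Tech team's actual pipeline), a numeric series can be "audified" by mapping each value to a pitch and rendering the result as audio; an outlier then stands out as a sudden sour note. The mapping below is a toy sketch with an invented data series.

```python
# Toy sonification sketch: map a numeric series to pitches and write a WAV file.
# Purely illustrative; the mapping and synthesis are not the researchers' method.
import math
import struct
import wave

SAMPLE_RATE = 44_100
NOTE_SECONDS = 0.2

def sonify(values, path="sonified.wav", low_hz=220.0, high_hz=880.0):
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    frames = bytearray()
    for v in values:
        # Linearly map the value into a frequency range (roughly A3..A5).
        freq = low_hz + (v - lo) / span * (high_hz - low_hz)
        for n in range(int(SAMPLE_RATE * NOTE_SECONDS)):
            sample = math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767 * 0.5))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)      # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

# A flat series with one anomaly: the spike is heard as a sudden high note.
sonify([10, 11, 10, 12, 11, 95, 10, 11])
```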
Internal tests from a leading industry vendor have shown that fixes applied to servers running Linux or Windows Server aren’t as detrimental as initially thought, with many use cases seeing no impact at all. The Meltdown and Spectre vulnerabilities, first documented in January, seemed like a nightmare for virtualized systems, but those fears appear to be overblown. There are a lot of qualifiers, starting with what you are doing and what generation of processor you are using. The tests were done on servers running Xeons of the Haswell-EP (released in 2014), Broadwell-EP (released in 2016), and Skylake-EP (released in 2017) generations. Haswell and Broadwell share the same microarchitecture with minor tweaks; the big change was that Broadwell was a die shrink. Skylake, though, was a whole new architecture, and as it turns out, that made the difference.
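For readers who want to see which mitigations their own servers are actually running, recent Linux kernels expose the status under /sys/devices/system/cpu/vulnerabilities/. The small sketch below assumes a Linux host with a kernel new enough to publish those files and simply prints whatever it finds.

```python
# Print Meltdown/Spectre mitigation status on a recent Linux kernel.
# The sysfs directory below exists only on kernels that ship the reporting
# interface; elsewhere the script just reports that it is absent.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

if not vuln_dir.is_dir():
    print("No vulnerability reporting found (older kernel or non-Linux system).")
else:
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name:20s} {entry.read_text().strip()}")
```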
[Note: The author of this article is not a lawyer and this article should not be considered legal advice. Please consult a privacy specialist.]
The basic news
The GDPR covers all personal data your company stores on data subjects in the EU – whether or not your company has a nexus in the EU. Personal data is defined as data that can be used to identify a person. It’s similar to the concept of personally identifiable information (PII) that we have in the US, but it is broader. PII typically includes actual identifying elements like your name, social security number, and birthday, focusing mainly on the data required to fake your identity with a lender. Personal data includes what the US calls PII, plus any data that can be used to identify you in any way – things as basic as an email address, an online persona (e.g., a Twitter handle), or even the IP address from which you sent a message.
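As a loose illustration of how much wider that net is, here is a toy field classifier. The field names and groupings are invented for the example and are no substitute for a proper data-mapping exercise (or legal advice).

```python
# Toy illustration of "classic PII" vs. the GDPR's broader "personal data".
# Field names and groupings are invented; real classification requires a
# proper data inventory and legal review.
CLASSIC_PII = {"name", "social_security_number", "date_of_birth"}
ALSO_PERSONAL_DATA = {"email_address", "twitter_handle", "ip_address", "device_id"}

def classify(field: str) -> str:
    if field in CLASSIC_PII:
        return "PII (and GDPR personal data)"
    if field in ALSO_PERSONAL_DATA:
        return "GDPR personal data (not classic PII)"
    return "review needed"

for field in ["name", "ip_address", "twitter_handle", "favorite_color"]:
    print(f"{field:20s} -> {classify(field)}")
```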
It was time to get a handle on BACnet traffic at Penn State. BACnet is a communications protocol for building automation and control (BAC) systems such as heating, ventilating and air conditioning (HVAC), lighting, access control and fire detection. Penn State standardized on BACnet because of its openness.
“Any device, any manufacturer – as long as they talk BACnet, we can integrate them,” says Tom Walker, system design specialist in the facility automation services group at Penn State. “It’s a really neat protocol, but you have to know the quirks that come with deploying it, especially at scale.”
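At that kind of scale, even a crude tally of which devices are chattering the most can surface those quirks. The sketch below is purely illustrative: the device instances, observed services, and threshold are made up, no real BACnet stack is used, and it is not Penn State's tooling — it only shows the shape of the analysis (Who-Is is BACnet's device-discovery broadcast).

```python
# Illustrative sketch of spotting chatty BACnet devices from captured traffic.
# The records below are invented examples; no actual BACnet stack is used.
from collections import Counter

# Each tuple: (device instance, BACnet service observed)
observed = [
    (1101, "Who-Is"), (1101, "Who-Is"), (1101, "Who-Is"), (1101, "Who-Is"),
    (2204, "ReadProperty"), (2204, "COV-Notification"),
    (3310, "Who-Is"), (3310, "ReadProperty"),
]

BROADCAST_THRESHOLD = 3  # arbitrary cutoff for this example

broadcasts = Counter(dev for dev, svc in observed if svc == "Who-Is")
for device, count in broadcasts.most_common():
    if count >= BROADCAST_THRESHOLD:
        print(f"device {device}: {count} Who-Is broadcasts - check its configuration")
```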
IBM and Hewlett Packard Enterprise this week introduced new servers optimized for artificial intelligence, and the two had one thing in common: Nvidia technology. HPE announced Gen10 of its HPE Apollo 6500 platform, running Intel Skylake processors and up to eight Pascal or Volta Nvidia GPUs connected by NVLink, Nvidia’s high-speed interconnect. A server fully loaded with V100s will get you 66 peak double-precision teraflops of performance, which HPE says is three times the performance of the previous generation. The Apollo 6500 Gen10 platform is aimed at deep-learning workloads and traditional HPC use cases. The NVLink technology is up to 10 times faster than PCI Express Gen 3 interconnects.
Fundamental to harnessing the full potential of the Internet of Things (IoT) is the need for decisions to be made in real time, and it’s in addressing this need that discussions have turned to edge computing in recent years. Before the data generated by the myriad of connected IoT devices is sent to the centralized cloud, edge computing sees it stored and processed locally, in distributed micro-clouds at the edge of the network, closer to where the devices sit and the data is produced. Doing so cuts down on the need for data traffic to be back-hauled to and from a remote data center, making it ideal for supporting the real-time data delivery the IoT requires.
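A toy sketch of that pattern follows: raw readings are summarized locally at the edge, and only the compact summary would be shipped upstream instead of every sample. The device name, window size, and readings are all invented for the example, and the upstream transfer is stubbed out as a print statement.

```python
# Toy edge-aggregation sketch: summarize raw IoT readings locally so only a
# compact summary needs to travel to the central cloud. Device names, the
# window size, and the readings are invented for this example.
from statistics import mean

WINDOW = 5  # summarize every 5 raw samples

def summarize(device: str, samples: list[float]) -> dict:
    return {
        "device": device,
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "avg": round(mean(samples), 2),
    }

raw_readings = [21.3, 21.4, 28.9, 21.2, 21.5, 21.4, 21.3, 21.6, 21.5, 21.4]

for i in range(0, len(raw_readings), WINDOW):
    window = raw_readings[i:i + WINDOW]
    summary = summarize("hvac-sensor-17", window)
    # In a real deployment this summary would be sent to a cloud endpoint;
    # here we just print it in place of the upstream transfer.
    print(summary)
```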