HPE's pledge to pump billions of dollars into developing edge systems shines a light on the company's ambition to be the leading end-to-end computing infrastructure provider. CEO Antonio Neri made the investment announcement at the company's Discover conference Tuesday in Las Vegas, in his first appearance at the company's annual event as chief executive. He took over the CEO role from Meg Whitman in February.
With its new GreenLake Hybrid Cloud offering, HPE's message to the enterprise is simple: your cloud, your way. HPE is adding Microsoft Azure and Amazon Web Services capabilities to its GreenLake pay-per-use offerings, providing a turnkey, managed service to deploy public and on-premises clouds.
The company debuted the new HPE GreenLake Hybrid Cloud service Tuesday at its Discover conference in Las Vegas, saying that it can manage enterprise workloads in public and private clouds using automation and remote services, eliminating the need for new skilled staff to oversee and manage cloud implementations.
Agility and speed are of paramount importance for most organizations as they try to innovate and differentiate themselves from the competition. The need for flexibility and rapid scalability is driving more and more companies into the cloud, as traditional data centers are no longer proving to be competitive, agile or robust enough. It should come as no surprise that Cisco predicts 94 percent of workloads and compute instances will be processed by cloud data centers by 2021. But deciding when to take the leap, weighing the costs and risks, and developing a successful strategy is easier said than done. Let's take a closer look at why companies are ditching those data centers and how they can make the transition as smooth as possible.
David Goeckeler doesn't wear all of the hats at Cisco, but he certainly wears one of the biggest. Responsible for 20,000 engineers and $32 billion worth of the networking giant's business, Goeckeler, executive vice president and general manager, masterminds Cisco's network and security strategy, which now places ever more emphasis on software. In fact, at the recent Cisco Live, Goeckeler emphasized that notion, saying, "all the routers and switches and wireless access points (and in big networks there are going to be tens of thousands of those in a single enterprise network) we're thinking about that as one large software system."
Normally venture funding stories don't get much play here, but when a company scores $250 million, for a grand total of $410 million raised, one has to wonder what all the hoopla is about. Especially given some of the spectacular flameouts we've seen over the years. But Cohesity isn't vapor; it's shipping a product that it claims helps solve a problem that has plagued enterprises forever: data siloing. Founded in 2013 and led by Nutanix co-founder Mohit Aron, Cohesity just racked up a $250 million investment led by SoftBank Group's Vision Fund, which includes funding from Cisco Investments, Hewlett Packard Enterprise, Morgan Stanley Expansion Capital, and Sequoia Capital. Those are some big names, to say the least.
Software and programmable intelligent networks were hot topics at Cisco Live last week, and one of the key components of that discussion was the direction of the company's SD-WAN strategy. Central to that dialog is how Cisco plans to use and integrate the SD-WAN technology it acquired last year when it bought Viptela for $610 million. For the moment, Cisco says Viptela has drawn interest to the tune of about 800 new customers in recent months.
Google is known to fiercely guard its data center secrets, but not Facebook. The social media giant has released two significant tools it uses internally to operate its massive social network. The company has released Katran, the load balancer that keeps its data centers from overloading, as open source under the GNU General Public License v2.0, available from GitHub. In addition to Katran, the company is offering details on its Zero Touch Provisioning tool, which it uses to help engineers automate much of the work required to build its backbone networks.
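Katran itself runs in the Linux kernel via XDP and eBPF, but the job every L4 load balancer of this kind has to do is deterministic flow-to-backend mapping: the same connection must keep landing on the same server even as the backend pool changes. Here is a minimal, illustrative consistent-hash ring in Python; the class, backend addresses, and flow key are invented for the example and bear no relation to Katran's actual code.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Stable 64-bit hash derived from SHA-256.
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class HashRing:
    """Minimal consistent-hash ring: each backend gets many points on the
    ring, and a flow maps to the first point clockwise of its own hash, so
    removing one backend only remaps that backend's flows."""

    def __init__(self, backends, replicas=100):
        self.ring = sorted((_hash(f"{b}#{i}"), b)
                           for b in backends for i in range(replicas))
        self.keys = [h for h, _ in self.ring]

    def pick(self, flow_key: str) -> str:
        idx = bisect.bisect(self.keys, _hash(flow_key)) % len(self.keys)
        return self.ring[idx][1]

ring = HashRing(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
# The same 5-tuple always resolves to the same backend.
print(ring.pick("192.0.2.7:51514->198.51.100.10:443/tcp"))
```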
ORLANDO – Cisco made a bold move this week to broaden the use of its DNA Center by opening up the network controller, assurance, automation and analytics system to the community of developers looking to take the next step in network programming. Introduced last summer as the heart of its Intent-Based Networking initiative, Cisco DNA Center features automation capabilities, assurance settings, fabric provisioning and policy-based segmentation for enterprise networks.
David Goeckeler, executive vice president and general manager of networking and security at Cisco, told the Cisco Live customer audience here that DNA Center's new open-platform capabilities mean all of its powerful, networkwide automation and assurance tools are available to partners and customers. New applications can use the programmable network for better performance, security and business insights, he said.
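In practice, "programming the network" through an open controller means hitting its northbound REST interface instead of polling each box. The sketch below authenticates and pulls the device inventory; the host, credentials, and endpoint paths are assumptions based on DNA Center's published Intent API at the time, so check the current DevNet documentation before relying on them.

```python
import requests

DNAC = "https://sandboxdnac.cisco.com"  # assumed lab/sandbox host

# Exchange basic credentials for a short-lived token (assumed endpoint).
token = requests.post(f"{DNAC}/dna/system/api/v1/auth/token",
                      auth=("devnetuser", "Cisco123!"),  # placeholder creds
                      verify=False).json()["Token"]

# Query the controller's inventory rather than each device directly.
resp = requests.get(f"{DNAC}/dna/intent/api/v1/network-device",
                    headers={"X-Auth-Token": token}, verify=False)

for device in resp.json()["response"]:
    print(device.get("hostname"), device.get("managementIpAddress"))
```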
The effort around data center reduction has been to draw down everything, from hardware to facilities. Rackspace has an interesting new twist, though: put your hardware in our data centers. The company announced a new data center colocation business this week, offering space, power, and network connectivity to customers who provide their own hardware. The facilities are in 10 locations around the world. It's not a bad idea. Servers are cheap compared with facility costs such as the physical building, power, and cooling.
'Lift and shift' to the cloud
The new business, dubbed Rackspace Colocation, is positioned as a way for enterprises to kick off their cloud journey by getting out of their self-managed data center to lower their expenses as they move to the cloud.
ORLANDO – Cisco's developer program, DevNet, is on a hot streak. Speaking at Cisco Live 2018, DevNet CTO Susie Wee said the group, which was founded in 2014, now has 500,000 registered members. "That's a pretty cool milestone, but what does it mean? It means that we've hit critical mass with a developer community who can program the network," Wee said. "Our 500,000-strong community is writing code that can be leveraged and shared by others. DevNet is creating a network innovation ecosystem that will be the hub of the next generation of applications and the next generation of business." At Cisco Live, the company also announced an expansion of the DevNet world.
CIOs and data center managers who run large hybrid clouds worldwide have a good chance of hearing IBM knock on their doors in the next few months. That's because IBM is opening 18 new "availability zones" for its public cloud across the U.S., Europe, and Asia-Pacific. An availability zone is an isolated physical location within a cloud data center that has its own separate power, cooling and networking to maximize fault tolerance, according to IBM. Along with uptime service level agreements and high-speed network connectivity, users have gotten used to accessing corporate databases wherever they reside, but proximity to cloud data centers is important. Distance to data centers can have an impact on network performance, resulting in slow uploads or downloads.
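The physics behind that proximity point is easy to quantify: light in fiber propagates at roughly 200 km per millisecond, so distance alone sets a floor on round-trip time before routing, queuing, or server processing add anything. A back-of-the-envelope sketch:

```python
FIBER_KM_PER_MS = 200  # light in fiber covers roughly 200 km per millisecond

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time from propagation delay alone,
    ignoring routing, queuing, and server processing."""
    return 2 * distance_km / FIBER_KM_PER_MS

for km in (100, 1_000, 10_000):
    print(f"{km:>6} km away -> at least {min_rtt_ms(km):5.1f} ms RTT")
# 100 km adds ~1 ms; a transoceanic 10,000 km hop adds ~100 ms, every request.
```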
The team designing Oak Ridge National Laboratory's new Summit supercomputer correctly predicted the rise of data-centric computing – but its builders couldn't forecast how bad weather would disrupt the delivery of key components. Nevertheless, almost four years after IBM won the contract to build it, Summit is up and running on schedule. Jack Wells, Director of Science for Oak Ridge Leadership Computing Facility (OLCF), expects the 200-petaflop machine to be fully operational by early next year.
"It's the world's most powerful and largest supercomputer for science," he said.To read this article in full, please click here
2018 is shaping up to be a banner year for all things Ethernet. The ubiquitous networking technology is already thriving in the data center, where in the first quarter alone the switching market recorded its strongest year-over-year revenue growth in over five years, and 100G Ethernet port shipments more than doubled year-over-year, according to a report by Dell'Oro Group researchers.
The 16-percent switching growth was "driven by the large-tier cloud hyperscalers such as Amazon, Google, Microsoft and Facebook, but also by enterprise customers," said Sameh Boujelbene, senior director at Dell'Oro.
Before autonomous data correction software met the mainframe, a day in my life as a DBA looked like this:
2 a.m. – Diagnose a critical maintenance utility failure for a panicked night operator, re-submit the REORG job, and head back to bed.
8 a.m. – Leverage a database tool to pull pertinent data for an emergency report on an internal customer's sales region.
9 a.m. – Use various database tools and review performance-related data to improve data access for developers alarmed their application performance is slowly degrading.
12 p.m. – As lunch approaches, identify where I can save data for a scheduled backup, having noticed unforeseen space problems, and successfully capture my backup.
Intel formally introduced the Optane DC persistent memory modules late last week, an entirely new class of memory and storage technology designed to sit between storage and memory, providing expanded memory capacity and faster access to data. Unlike SSDs, which plug into a PCI Express slot, Optane DC is built like a thick memory DIMM and plugs into the DIMM slots. Many server motherboards offer as many as eight DIMM slots per CPU, so some can be allocated to Optane and some to traditional memory. That's important because Optane serves as a cache of sorts, storing frequently accessed data rather than forcing the server to fetch it from a hard disk. So the server only has to access Optane memory, which sits right next to the CPU, and not a storage array over Fibre Channel.
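As a rough mental model only (real persistent-memory placement is handled by the memory controller and operating system, not by application code), the "cache of sorts" behaves like a small fast tier fronting a large slow one. The class below is purely conceptual:

```python
from collections import OrderedDict

class TwoTierStore:
    """Toy model of a fast DIMM-attached tier (Optane-like) in front of a
    slow tier (a remote storage array). Purely illustrative."""

    def __init__(self, fast_capacity: int):
        self.fast = OrderedDict()          # LRU cache standing in for Optane
        self.fast_capacity = fast_capacity
        self.slow = {}                     # stands in for the storage array

    def read(self, key):
        if key in self.fast:               # hit: served at memory-bus speed
            self.fast.move_to_end(key)
            return self.fast[key]
        value = self.slow[key]             # miss: fetched over the fabric
        self.fast[key] = value
        if len(self.fast) > self.fast_capacity:
            self.fast.popitem(last=False)  # evict the least recently used
        return value
```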
IBM continues to mold the Big Iron into a cloud and DevOps beast. This week, IBM and its long-time ally CA teamed up to link the mainframe and its Cloud Managed Services on z Systems, or zCloud, software with cloud workload-development tools from CA, with the goal of better-performing applications for private, hybrid or multicloud operations.
IBM says zCloud offers customers a way to move critical workloads into a cloud environment with the flexibility and security of the mainframe. In addition, the company offers the IBM Services Platform with Watson, which provides another level of automation within zCloud to assist clients with their moves to cloud environments.
In a previous blog post, 5 reasons to buy refurbished Cisco equipment, I talked about five facts to keep in mind as you consider how to proceed with your Cisco hardware solutions. Well, my engineering group reminded me of something else to consider for any hardware solution, not just a Cisco solution. Cabling!
It seems that cabling can be an afterthought. Sure, you've just deployed a blended solution of new and pre-owned hardware, with each used where it makes the most sense in your infrastructure, creating a unique and potentially game-changing opportunity to maximize the value of your investments.
Options. Everyone needs options. Whenever I travel somewhere with my wife, Christine, even if it's for a weekend, she needs to check a bag. When I ask her why, she says, "A girl needs options," hence the oversize luggage. While it's been easy for someone like my wife to have options, network engineers have never really had the same luxury. Network switches were typically built with fixed functionality, so an organization would need to purchase a wide range of equipment to meet all its needs.
Network professionals need greater flexibility from the network
Recently, chip manufacturers have been building more programmable, flexible products. One example is the Cavium XPliant processor, the silicon that powers Arista's 7160 switch. Another is the Barefoot Networks Tofino processor. In addition to being one of the most scenic places on the planet, Tofino is a powerful system on a chip with a fully programmable parser and pipeline. The chip supports 256 Serializer/Deserializer (SerDes) lanes at 25 Gb/s each, yielding Ethernet port speeds of 1, 10, 25, 40, 50, and 100 Gig-E.
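That lane count translates directly into aggregate capacity: 256 lanes at 25 Gb/s each is 6.4 Tb/s of raw switching bandwidth, with lanes ganged together to form the faster port speeds. The quick arithmetic:

```python
lanes = 256
lane_rate_gbps = 25

total = lanes * lane_rate_gbps
print(f"{total} Gb/s aggregate ({total / 1000:.1f} Tb/s)")  # 6400 Gb/s, 6.4 Tb/s

# Faster ports gang lanes together, e.g. 4 x 25 Gb/s lanes = one 100GbE port:
print(f"Max 100GbE ports: {total // 100}")  # 64 ports
```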
Many if not most large enterprises run hybrid computing environments and are looking for management software flexible enough to run in and manage assets across private and public clouds. Against this backdrop, BMC has rebuilt its venerable IT service-management product suite to run on a range of cloud platforms while incorporating machine learning to enhance predictive-analysis capabilities.
The BMC Helix Cognitive Service Management is a software-as-a-service (SaaS) offering that runs on Amazon Web Services as well as BMC's own cloud. It will be available for Azure in the fourth quarter and for Google Public Cloud at the end of the year or beginning of next year, BMC said.
When Windows Server 2019 is released this fall, the updates will include features that enterprises can use to leverage software-defined networking (SDN). SDN for Windows Server 2019 has a number of components that have attracted the attention of early adopters, including security and compliance, disaster recovery and business continuity, and multi-cloud and hybrid-cloud support.
Virtual-network peering
The new virtual-network peering functionality in Windows Server 2019 allows enterprises to peer their own virtual networks in the same cloud region through the backbone network, letting the peered virtual networks appear as a single network.
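One constraint that virtual-network peering schemes generally share is that the two address spaces must not overlap, or the merged network cannot route unambiguously. The check below is a conceptual sketch of that rule; the actual Windows Server 2019 configuration goes through the Network Controller and is not shown here, so treat the non-overlap requirement as a general assumption rather than product documentation.

```python
import ipaddress

def can_peer(net_a: str, net_b: str) -> bool:
    """Two virtual networks can merge into one routable space
    only if their address ranges don't overlap."""
    a = ipaddress.ip_network(net_a)
    b = ipaddress.ip_network(net_b)
    return not a.overlaps(b)

print(can_peer("10.1.0.0/16", "10.2.0.0/16"))    # True: safe to peer
print(can_peer("10.1.0.0/16", "10.1.128.0/17"))  # False: ranges collide
```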