Lenovo has introduced a new high-density server “tray” for high-performance computing (HPC) environments with the newest generation of water-cooling technology it co-developed with a German HPC firm.

Unlike your typical water-cooled system, where the water is chilled almost to a drinkable state, the ThinkSystem SD650 high-density server tray — so called because of its design and shape — is designed to operate using warm water, up to 50°C, or 122°F.
There is a mindset that CPUs have to be chilled as though they were cold cuts, but Intel says they can handle much higher temperatures: Xeons can run at up to 75°C without becoming unstable or crashing.
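On Linux, you can see how much thermal headroom a machine actually has by reading the kernel's thermal zones. A minimal sketch, assuming a Linux host that exposes sysfs thermal zones (layout varies by platform) and using the article's 75°C figure as the ceiling:

```python
# Minimal sketch: compare Linux thermal-zone readings against the ~75 C
# ceiling the article cites for Xeons. Linux-only and illustrative; the
# sysfs layout and zone types vary by platform.
from pathlib import Path

CEILING_C = 75.0  # upper bound cited for Xeon stability

def read_zone_temps():
    """Yield (zone_type, temp_celsius) for each readable thermal zone."""
    for zone in Path("/sys/class/thermal").glob("thermal_zone*"):
        try:
            zone_type = (zone / "type").read_text().strip()
            millideg = int((zone / "temp").read_text().strip())
        except (OSError, ValueError):
            continue  # zone not readable on this platform
        yield zone_type, millideg / 1000.0

if __name__ == "__main__":
    for zone_type, temp_c in read_zone_temps():
        headroom = CEILING_C - temp_c
        status = "OK" if headroom > 0 else "OVER"
        print(f"{zone_type}: {temp_c:.1f} C ({status}, {headroom:+.1f} C headroom)")
```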
Billions of devices, lots of opportunity

The predictions are getting a bit lurid – the Internet of Things will expand to around 20 billion connected devices by 2020, according to Gartner. (Other estimates range as high as ten times that figure.) MarketsandMarkets says that the market will expand from $170 billion last year to over half a trillion dollars by 2022. So who will be the biggest players in this huge and growing market? (Note: Companies are listed in alphabetical order.)
Performance is critical when evaluating data center intrusion-prevention systems (DCIPS), which face significantly higher traffic volumes than traditional IPSes.

A typical IPS is deployed at the corporate network perimeter to protect end-user activity, while a DCIPS sits inline, inside the data center perimeter, to protect data-center servers and the applications that run on them. That requires a DCIPS to keep pace with traffic from potentially hundreds of thousands of users accessing large applications in a server farm, says NSS Labs, which recently tested five DCIPS products in the areas of security, performance and total cost of ownership.
Earlier this month, Cisco updated its Global Cloud Index (GCI), giving rise to a number of news stories that were filled with doom and gloom for corporate IT departments. (Note: Cisco is a client of ZK Research.)

For example, one of the articles stated that, based on the GCI, cloud computing would virtually replace traditional data centers within three years. While it's true public clouds are growing, private clouds are also increasing. It's a multi-cloud era, as Cisco's Kip Compton writes.
When it comes to cloud migration, what kind of adopter are you? Did you jump on the cloud bandwagon early? Are you lagging behind, without having tried to virtualize anything yet? Or are you in the mainstream, with a mix of clouds and some systems on premises?

In our cloud migration practice, we have found that each of these groups faces its own challenges. Early adopters are often unable to support their ambitious deployments, having discovered the limits of first-generation cloud systems. Laggards may realize the need to transform, but find themselves blocked by costs, resources and time. Most enterprises are in the mainstream. They have cobbled together a hybrid IT environment, but struggle with managing it all and moving forward.
Intel has launched its brand-new lineup of Xeon processors designed specifically for edge computing needs, where space, heat, and power are all of greater concern than in a traditional data center design.

The Xeon D-2100 processors are the successor to the Xeon D-1500 series. They are high-powered SoCs with anywhere from four to 18 Skylake-generation cores, and they sport the full range of Skylake features, including VT-x/VT-d for virtualization, RAS features, and the TXT, AVX-512, and TSX instruction sets.
The platform supports up to 512GB of memory, up to 32 PCI Express 3.0 lanes, and up to 20 flexible high-speed I/O lanes. TDP ranges from 60 to 100 watts, slightly lower than the traditional Xeon design. All told, there are six processors in the Xeon D-2100 family, ranging from four cores to 18 and from 2.3GHz to 2.8GHz in clock speed.
Standing up a private cloud using technology from multiple vendors is a time-consuming, complex process that involves months of post-deployment tweaking and tuning.

In 2009, VMware, Cisco and EMC formed a joint venture called VCE that aimed to solve that problem. (Note: Cisco and VMware are clients of ZK Research.) They created a converged infrastructure (CI) product called “Vblock” that brought together VMware software and Cisco servers and networking with EMC storage in a preconfigured, turnkey, validated solution, so customers could essentially turn the product on and start using it.
Vblock had 90 percent of the heavy lifting done, with the other 10 percent being unique to the organization. Customers loved it, with many saying Vblock was the only way to get a private cloud up and running inside a week.
AMD scored a significant win in its efforts to retake ground in the data center, with Dell announcing three new PowerEdge servers aimed at the usual high-performance workloads, like virtualized storage-area networks (VSAN), hybrid-cloud applications, dense virtualization, and big data analytics. The servers will run AMD's Epyc 7000 series processors.

What's interesting is that two of the three new Dell servers, the PowerEdge R6415 and R7415, are single-socket systems. Usually a single-socket server is a small tower stuck in a closet or under a desk, running as a file-and-print server or departmental server, not something running enterprise workloads. The R7425 is the only dual-socket server being introduced.
In its latest Cisco Global Cloud Index (2016–2021), the networking giant predicts that by 2021, 94 percent of all workloads will run in some form of cloud environment and that dedicated servers will be a distinct minority.

That 94 percent covers both public and private cloud scenarios, which means that even in an on-premises scenario, almost all workloads will run in a virtualized environment. The days when a server is dedicated to one workload are rapidly drawing to a close.

“We use the definition of one workload or instance with one physical server,” said Thomas Barnett, director, Cisco Service Provider forecast and trends. “In virtual scenarios, we're seeing one workload with multiple virtual machines and containers. Based on growth in public cloud, we've overcome some of the barriers of adoption, such as cost and security and simplicity of deploying of these services.”
The world is becoming more dynamic and distributed, and that's having a profound impact on the vendor landscape.

Some traditional vendors, such as Microsoft, were able to make the shift to the cloud and have thrived, although it required dumping Steve Ballmer. Others are stuck in the legacy world and could have a hard time adjusting their business to meet the demands of their customers. For example, Dell-EMC went private to re-tool and is in the midst of transforming itself. Time will tell if it's successful.

One company that I considered part of the legacy world is storage management vendor Veritas. It's essentially still a backup and recovery company. Recently, though, the company has made some moves and said some things that make me wonder if there's something big coming from them.
When Cisco unveiled its intent-based network system (IBNS) solution at its “Network. Intuitive.” event in San Francisco last year, that version focused on bringing the concept of a “self-driving” network to the enterprise campus and depended on customers having the new Catalyst 9000 switches. Cisco's solution works as a closed-loop system in which data from the network is collected and then analyzed to turn intent into commands that can be orchestrated.

To accomplish that, Cisco's IBNS requires two components: translation, to capture intent, translate it into policy, and check integrity; and activation, to orchestrate the policies and configure the systems.
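To make the two stages concrete, here is a minimal, hypothetical sketch of translation and activation as plain functions. None of the names below come from Cisco's actual APIs; they only model intent being integrity-checked, turned into policy, and rendered into per-device commands.

```python
# Hypothetical sketch of the two IBNS components the article names —
# not Cisco's implementation or API.
from dataclasses import dataclass

@dataclass
class Intent:
    description: str   # what the admin wants, in business terms
    src_group: str
    dst_group: str
    action: str        # "allow" or "deny"

@dataclass
class Policy:
    src_group: str
    dst_group: str
    action: str

def translate(intent: Intent) -> Policy:
    """Translation: capture intent, check integrity, emit a policy."""
    if intent.action not in ("allow", "deny"):   # integrity check
        raise ValueError(f"unsupported action: {intent.action}")
    return Policy(intent.src_group, intent.dst_group, intent.action)

def activate(policy: Policy, devices: list[str]) -> dict[str, str]:
    """Activation: orchestrate the policy onto each device as config."""
    line = f"{policy.action} {policy.src_group} -> {policy.dst_group}"
    return {device: line for device in devices}

intent = Intent("keep IoT cameras away from finance",
                "iot-cameras", "finance-servers", "deny")
print(activate(translate(intent), ["switch-1", "switch-2"]))
```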
For the third straight year, IT organizations are keeping tight control over their IT budgets, but not because of economic uncertainty. Instead, the hesitancy to spend is because of the transition to the cloud.

That's the finding of IT market research firm Computer Economics, which published the report Worldwide IT Spending and Staffing Outlook for 2018 (paywall), and it echoes a common observation: on-premises computing continues to fall out of favor as IT shops look to migrate as much work as possible to the public cloud.

“Typically, before the cloud transition, companies would grow IT budgets roughly to match expected revenue growth,” said David Wagner, vice president of research for Computer Economics, in a statement. “This is no longer true in regions of higher cloud adoption, such as the U.S. and Canada, where IT budgets are not keeping pace with revenue growth.”
Microsegmentation is a method of creating secure zones in data centers and cloud deployments that allows companies to isolate workloads from one another and secure them individually. It's aimed at making network security more granular.

Microsegmentation vs. VLANs, firewalls and ACLs
Network segmentation isn't new. Companies have relied on firewalls, virtual local area networks (VLANs) and access control lists (ACLs) for network segmentation for years. With microsegmentation, policies are applied to individual workloads for greater attack resistance.
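The difference in granularity is easy to show in code. A minimal sketch, assuming a hypothetical policy model (not any vendor's API): a coarse VLAN-style check grants access to everything in the same segment, while a per-workload allow-list grants access only to named peers.

```python
# Hypothetical model contrasting coarse segmentation with
# microsegmentation. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    segment: str                                           # coarse zone (VLAN-style)
    allowed_peers: set[str] = field(default_factory=set)   # per-workload policy

def vlan_allows(src: Workload, dst: Workload) -> bool:
    """Coarse segmentation: same segment means full mutual access."""
    return src.segment == dst.segment

def micro_allows(src: Workload, dst: Workload) -> bool:
    """Microsegmentation: access only if the workload policy names the peer."""
    return dst.name in src.allowed_peers

web = Workload("web-1", "prod", allowed_peers={"app-1"})
app = Workload("app-1", "prod", allowed_peers={"db-1"})
db  = Workload("db-1",  "prod")

# Under VLAN rules web-1 can reach db-1 directly; under the micro policy it cannot.
print(vlan_allows(web, db))   # True  — the whole segment is open
print(micro_allows(web, db))  # False — db-1 is not in web-1's allow-list
```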
Cisco has advanced its intent-based networking gear so that it can now verify that networks are actually running according to the intent set by admins, and so that it can help find and resolve network problems faster on both wired and wireless networks.

The company says this is a new phase in the evolution of its IBN, one that addresses assurance: the ability to assess whether the intentions that were translated into policies and orchestrated throughout the network, by configuring individual devices, are actually being carried out.
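In closed-loop terms, assurance boils down to comparing observed state against intended state and flagging drift. A tiny, purely illustrative sketch (not Cisco's implementation):

```python
# Hypothetical assurance step: compare what the network is actually
# doing against what the translated intent said it should do.
intended = {
    "switch-1": "deny iot-cameras -> finance-servers",
    "switch-2": "deny iot-cameras -> finance-servers",
}
observed = {
    "switch-1": "deny iot-cameras -> finance-servers",
    "switch-2": "permit any -> any",   # drifted from intent
}

drifted = [dev for dev, want in intended.items() if observed.get(dev) != want]
for dev in drifted:
    print(f"{dev}: running config does not match intent; flag for remediation")
```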
When February rolls around each year, every football fan knows what's right around the corner: it's time for the Super Bowl! The game brings together the two best teams in the National Football League to compete for the title, with all 32 teams battling throughout the year to earn that top spot. Believe it or not, this competition is very similar to the data center industry.

In fact, the NFL and data centers have many similarities: the fundamental skills needed to be successful, the strong team-centric leadership required, and the constant competition at the top of the league or industry. Below are a few examples of how the NFL and data centers have more in common than you may think.
With the software fixes for the Spectre and Meltdown chip vulnerabilities slowing servers down by unacceptable amounts, a hardware fix is clearly what is needed, and Intel's boss says one is coming this year.

Intel CEO Brian Krzanich told analysts during the company's Q4 2017 earnings call earlier this week that "silicon-based" fixes for Spectre and Meltdown would arrive by the end of 2018. Intel has several launches set for this year, and he did not specify which ones would carry the fixes.

"We're working to incorporate silicon-based changes to future products that will directly address the Spectre and Meltdown threats in hardware. And those products will begin appearing later this year," were his exact words.
The panel talks about serverless systems, which spin up a snippet of code that runs on demand to perform a business operation. It's a step toward a more developer-friendly approach to code development.
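For readers who haven't seen the model, here is a minimal sketch of what such an on-demand snippet looks like. The handler(event, context) signature follows the common AWS Lambda convention; other platforms differ in detail, and the business logic here is a placeholder.

```python
# Minimal sketch of the serverless model the panel describes: a snippet
# of code the platform invokes on demand, with no always-on server.
import json

def handler(event, context):
    """Runs only when an event arrives, performs one business operation."""
    order_id = event.get("order_id", "unknown")
    # ... placeholder: record the order, charge a card, send a receipt ...
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": order_id}),
    }

# Local smoke test; in production the platform supplies event/context.
if __name__ == "__main__":
    print(handler({"order_id": "A-123"}, None))
```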
Billions of terabytes of data could be stored in one small flask of liquid, a group of scientists believes. The team, from Brown University, says it will soon be able to work out a chemistry-derived way of storing and manipulating mass data by loading it onto molecules and then dissolving the molecules into liquids.

If the method is successful, large-scale synthetic-molecule storage in liquids could one day replace hard drives: the traditional engineering we've always pursued for storage would give way to chemistry in our machines and data centers.
The U.S. Department of Defense's Defense Advanced Research Projects Agency (DARPA) has awarded the Brown team $4.1 million to work out how to move the concept forward.
It's been a busy hyperconverged infrastructure (HCI) week for Cisco. Yesterday it announced its intent to acquire secure HCI vendor Skyport Systems. Today it announced HyperFlex 3.0, the biggest update to the product since Cisco introduced it years ago. Cisco's driving vision is a business that can run any workload on any cloud and easily scale up as required. This latest release is entirely dedicated to fulfilling that vision.

The cloud is the future, and the majority of businesses will adopt hybrid clouds. In announcing HyperFlex 3.0, Cisco cited an IDC data point stating that 87 percent of businesses are using or plan to use a hybrid environment, and 94 percent plan to use multiple clouds — meaning that hybrid, multi-clouds will be the norm.
The father of the Linux operating system has once again blasted Intel for its handling of the Spectre and Meltdown chip vulnerabilities, due to the sloppiness of some of the patches. While he has a point, he's also being a bit unfair, as well as unreasonable.

Linus Torvalds is known for his blistering comments on the Linux mailing lists and frequently expresses his dissatisfaction with high levels of acidity. In this case, he was responding to an Amazon engineer on the Linux kernel mailing list regarding recent patches that have resulted in some systems randomly rebooting.