Microsoft is bringing artificial intelligence to the task of sorting through millions of servers to determine what can be recycled and where. The new initiative calls for the building of so-called Circular Centers at Microsoft data centers around the world, where AI algorithms will be used to sort through parts from decommissioned servers or other hardware and figure out which parts can be reused on the campus.
READ MORE: How to decommission a data center
Microsoft says it has more than three million servers and related hardware in its data centers, and that a server's average lifespan is about five years. Plus, Microsoft is expanding globally, so its server numbers should increase.
Privacy is one of the big holdups to a world of ubiquitous, seamless data-sharing for artificial intelligence-driven learning. In an ideal world, massive quantities of data, such as medical imaging scans, could be shared openly across the globe so that machine learning algorithms can gain experience from a broad range of data sets. The more data shared, the better the outcomes. That generally doesn't happen now, including in the medical world, where privacy is paramount. For the most part, medical image scans, such as brain MRIs, stay at the institution level for analysis. The result is then shared, but not the original patient scan data.
READ MORE: Cisco challenge winners use AI, IoT to tackle global problems
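The institution-level pattern described above is essentially federated learning: each site trains on its own scans and shares only model updates, never patient data. Here is a minimal sketch in Python; the two "hospitals," their data shapes, and the logistic-regression model are all hypothetical, purely for illustration.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One institution trains a simple logistic-regression model on its
    own private data; only the updated weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)       # gradient of log loss
        w -= lr * grad
    return w

def federated_average(weights_list, sizes):
    """The central server combines per-site weights, weighted by each
    site's dataset size (classic FedAvg-style aggregation)."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights_list, sizes))

# Two hypothetical hospitals with private data that never leaves them
rng = np.random.default_rng(0)
X1, y1 = rng.normal(size=(100, 3)), rng.integers(0, 2, 100)
X2, y2 = rng.normal(size=(50, 3)), rng.integers(0, 2, 50)

global_w = np.zeros(3)
for _ in range(10):  # federation rounds
    w1 = local_update(global_w, X1, y1)
    w2 = local_update(global_w, X2, y2)
    global_w = federated_average([w1, w2], [100, 50])

print(global_w.shape)  # (3,) -- a shared model, built without sharing scans
```

The key property is in the data flow: raw `X1`/`X2` never cross site boundaries; only the three weight values do.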
Another study finds that the data center is far from dying. That's not surprising to learn from the Uptime Institute's annual data center survey. However, one trend that did stand out in the research is that power efficiency has "flatlined" in recent years. Uptime says big improvements in energy efficiency were achieved between 2007 and 2013 using mostly inexpensive or easy methods, such as simple air containment. But moving beyond those gains involves more difficult or expensive changes. Since 2013, improvements in power usage effectiveness (PUE) have been marginal, according to the group.
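PUE, the metric Uptime tracks, is simply total facility power divided by the power that actually reaches IT equipment; 1.0 is the theoretical ideal. A quick sketch (the 1 MW facility figures below are invented for illustration):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power divided by power
    delivered to IT gear. The remainder is cooling, power conversion
    losses, lighting, and other overhead."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A hypothetical 1 MW facility where 650 kW reaches the IT equipment
print(round(pue(1000, 650), 2))  # 1.54
```

A facility whose every watt went to servers would score exactly 1.0; shaving the score from 1.54 toward 1.2 is where the expensive engineering the survey mentions comes in.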
Visible light communications (VLC) systems are an alternative to radio-based wireless networks and serve a dual purpose: They provide in-building lighting, and they use light waves for data transmission. VLC uses modulated light as a data carrier, while the visible spectrum provides light. Using VLC for data transmission has some advantages. It offers decent bandwidth; it offers security because walls, floors and roofs obstruct the data-carrying wavelengths, which reduces the risk of eavesdropping; and it's inexpensive since it's simply incorporated into light fixtures or, in emerging developments, worked into displays and other surfaces.
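To see how modulated light can carry data without visibly changing the illumination, here is a toy Manchester-coding sketch. Manchester coding is one common line code for optical links because it holds the LED's average duty cycle at 50%, so transmission doesn't dim or flicker the room; the bit pattern below is arbitrary.

```python
def manchester_encode(bits):
    """Map each data bit to a pair of LED states: 1 -> on/off, 0 -> off/on.
    Every bit period is half-on, so average brightness stays constant."""
    chips = []
    for b in bits:
        chips.extend([1, 0] if b else [0, 1])
    return chips

def manchester_decode(chips):
    """Recover bits by looking at which half of each period was lit."""
    return [1 if pair == (1, 0) else 0
            for pair in zip(chips[::2], chips[1::2])]

data = [1, 0, 1, 1, 0]
chips = manchester_encode(data)
assert manchester_decode(chips) == data
print(chips)  # [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
```

Real VLC systems use far faster and more elaborate modulation (e.g., OFDM variants), but the principle is the same: the data rides on light transitions too fast for the eye to perceive.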
Cisco has issued a number of critical security advisories for its data center manager and SD-WAN offerings that customers should deal with now. On the data center side, the most critical – with a threat score of 9.8 out of 10 – involves a vulnerability in the REST API of Cisco Data Center Network Manager (DCNM) that could let an unauthenticated, remote attacker bypass authentication and execute arbitrary actions with administrative privileges on an affected device. Cisco DCNM lets customers see and control network connectivity through a single web-based management console for the company’s Nexus, Multilayer Director Switch, and Unified Computing System products.
Data center provider Switch has selected Tesla as the battery supplier for a massive solar project at its northern Nevada data-center facilities. It's a geographically easy alliance, as Switch's campus is right near Tesla's Gigafactory Nevada manufacturing facility. While best known for its cars, Tesla has also made quite an entry in the battery space with its Powerwall, Powerpack, and Megapack energy-storage products.
The growth in data use and consumption means the needs of IT managers are changing, and a survey from Omdia (formerly IHS Markit) found data-center operators are looking for intelligence of all sorts, not just the artificial kind. Omdia analysts recently surveyed IT leaders from 140 North American enterprises with at least 101 employees working in North American offices and data centers and asked them what features they wanted most in their networking technology. Respondents expect to more than double their average number of data-center sites between 2019 and 2021, and the average number of servers deployed in data centers is expected to double over the same timeline.
A British chip startup has launched what it claims is the world's most complex AI chip, the Colossus MK2 or GC200 IPU (intelligence processing unit). Graphcore is positioning its MK2 against Nvidia's Ampere A100 GPU for AI applications. The MK2 and its predecessor MK1 are designed specifically to handle very large machine-learning models. The MK2 processor has 1,472 independent processor cores and 8,832 separate parallel threads, all supported by 900MB of in-processor RAM.
SEE ALSO: Nvidia unleashes new generation of GPU hardware
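The per-core resources follow directly from the figures quoted above, with simple arithmetic:

```python
cores = 1472     # independent processor cores on the MK2
threads = 8832   # separate parallel threads
ram_mb = 900     # in-processor RAM, in megabytes

# Threads divide evenly across cores: 6 hardware threads per core
print(threads // cores)                 # 6

# And the on-chip RAM works out to roughly 626 KB per core
print(round(ram_mb * 1024 / cores, 1))  # 626.1
```

That even 6-threads-per-core split reflects the IPU's design of many small, independently scheduled cores each with local memory, rather than a few large cores sharing a cache hierarchy.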
Mainframe users looking to bring legacy applications into the public or private cloud world have a new option: LzLabs, a mainframe software migration vendor. Founded in 2011 and based in Switzerland, LzLabs this week said it's setting up shop in North America to help mainframe users move legacy applications – think COBOL – into the more modern and flexible cloud application environment.
Read also: How to plan a software-defined data-center network
At the heart of LzLabs' service is its Software Defined Mainframe (SDM), an open-source, Eclipse-based system that's designed to let legacy applications, particularly those without typically available source code, such as COBOL, run in the cloud without recompilation.
While the previously hot SD-WAN market has slowed and IT budgets overall are under pressure, the COVID-19 pandemic has created demand for other network capabilities such as improved network-management and collaboration tools, according to IDC. The virus has caused a recessionary economy that has forced enterprises across the globe to rapidly and dramatically shift their operations, according to Rohit Mehra, vice president, Network Infrastructure at IDC. “The reality of that is we have seen two years of IT digital transformation in two months,” Mehra told the online audience of an IDC webinar about the impact of the pandemic on enterprise networking.
About a year and a half ago, some Texas employees of the Federal National Mortgage Association (Fannie Mae) were leaving work early to work at home over the enterprise VPN because it gave them better application performance and less congestion than the office network. That’s also when the agency started moving toward a cloud-first environment and away from its legacy hub-and-spoke WAN.
More about SD-WAN: How to buy SD-WAN technology: Key questions to consider when selecting a supplier • How to pick an off-site data-backup method • SD-Branch: What it is and why you’ll need it • What are the options for securing SD-WAN?
Between the pandemic and the subsequent economic upheaval, these are challenging times for everyone. But the networking industry has some elements in its favor. Technologies such as Wi-Fi, VPNs, SD-WAN, videoconferencing and collaboration are playing an essential role in maintaining business operations and will play an even greater role in the reopening and recovery phase.
The economic devastation of the global COVID-19 pandemic has many businesses fighting for survival, but dealing with chaos and uncertainty comes with the territory for a certain category of business: startups. They thrive on disruption (or at least that’s the message they pitch to investors), but is the lean, move-fast-and-break-things model one that can survive global disruptions? Unlike retail, travel, and tourism, which have been hammered by the downturn, data-center and networking businesses have fared better, with some segments, such as teleconferencing, seeing spikes in demand.
Ampere – the semiconductor startup founded by former Intel executive Renee James and not to be confused with the new line of Nvidia cards – just introduced a 128-core, Arm-based server processor to complement its 80-core part. The new processor, Altra Max, comes just three months after the company launched its first product, the 80-core Altra. Ampere says the new processor will be fully socket compatible with the existing part so customers can just do a chip swap if they want.
"This is not replacing Altra," says Jeff Wittich, senior vice president of products at Ampere and one of many ex-Intel executives at Ampere. "I expect there will be workloads and customers who will use Altra and Altra Max together for a long time. Anything suited for Max will be suited for Altra for a long time."
The newly crowned fastest supercomputer in the world, according to the current TOP500 list, buried the reigning champion by turning in speeds that were 2.8 times faster.
Intel formally unveiled the third generation of its Xeon Scalable processor family, developed under the codename "Cooper Lake." This generation is aimed at the high end of the performance line for functions such as high-performance computing (HPC) and artificial intelligence (AI). The Cooper Lake line is targeted at four- and eight-socket servers. Xeons based on the Ice Lake architecture are due later this year and will target one- and two-socket servers. The latest announcement includes 11 new SKUs with between 16 and 28 cores, running at up to 3.1GHz base clock (and up to 4.3GHz with Turbo Boost), plus support for up to six memory channels.
READ MORE: Data center sales dip amid COVID-19 fallout, but public cloud grows
Most things with a measurable temperature – human beings going about their daily routines, inert objects – generate terahertz waves, radiation that is sandwiched between infrared and microwave on the electromagnetic spectrum. So far, these waves haven’t proved very useful, but now scientists at MIT are trying to harness them with devices that use them to generate electricity that could charge the batteries of cellphones, laptops, even medical implants.
If successful, the charging devices would passively gather the waves and generate DC current at room temperature, something that hasn’t been accomplished before. Previous devices that can turn terahertz waves – T-rays – into electricity only work in ultracold environments, according to an MIT News article about the project.
Cisco has added features to its flagship network control platform, DNA Center, that introduce new analytics and problem-solving capabilities for enterprise network customers. DNA Center is the heart of Cisco’s Intent Based Networking initiative and is the core networking control platform, featuring myriad services from analytics, network management and automation capabilities to assurance setting, fabric provisioning and policy-based segmentation for enterprise networks. The company extended DNA Center’s AI Endpoint Analytics application by adding the ability to analyze data gathered from Cisco packages such as its Identity Services Engine, Software Defined Application Visibility and Control, wireless LAN controllers or third-party components.
This should come as no surprise, but spending on data-center hardware and software dipped in Q1 while cloud sales grew, though neither by as much as you might think. Q1 figures from Synergy Research Group show that spending on enterprise software and hardware shrank globally by a modest 2% year on year to $35.8 billion, with the biggest non-cloud players, such as Microsoft, Dell, HPE, Cisco and VMware, down 4%.
When it comes to public cloud infrastructure, sales rose 3% year on year to $9.66 billion. The top vendors included Dell, Microsoft, Inspur and Cisco, along with the ODMs favored by hyperscalers.
Something as simple as how you tell your backup product which files and databases to back up can have a massive impact on your recoverability. Proper backup selection is essentially a balance between ensuring that everything that should be backed up is indeed backed up, while also trying not to back up worthless data.
Physical server inclusion
Virtually all backup products require some initial installation and configuration at the level of a physical server. This means that for any of the tactics mentioned in this article to work, one must first install the appropriate software and authorization on each physical server in the data center. This means every VMware or Hyper-V server (not to be confused with each VM on those servers), every physical UNIX or Windows server, and any cloud services that are being backed up. Someone must make that initial connection and authentication before the backup system can perform its magic.
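Once the per-server agents are in place, the include/exclude balance described above is typically expressed as pattern lists. A minimal sketch, assuming a policy of explicit includes filtered by excludes; the paths and patterns are invented for illustration:

```python
import fnmatch

# Hypothetical selection policy: what to protect, then what to skip
INCLUDE = ["/var/lib/mysql/*", "/home/*", "/etc/*"]
EXCLUDE = ["*/tmp/*", "*.iso", "/home/*/Downloads/*"]

def should_back_up(path):
    """Back up a path only if it matches an include pattern and no
    exclude pattern -- everything important, nothing worthless."""
    included = any(fnmatch.fnmatch(path, p) for p in INCLUDE)
    excluded = any(fnmatch.fnmatch(path, p) for p in EXCLUDE)
    return included and not excluded

print(should_back_up("/var/lib/mysql/ibdata1"))          # True
print(should_back_up("/home/alice/Downloads/disk.iso"))  # False
```

Note that `fnmatch`'s `*` matches across `/` separators, which makes these patterns broader than shell globs; real backup products each have their own pattern dialect, so check the product's selection syntax before copying rules like these.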