Archive

Category Archives for "Network World Data Center"

Server sales projected to slow, while memory prices drop

The global server market grew about 5 percent in 2018, but it will slow in the first half of 2019, according to market researcher TrendForce. However, the company also projects a buyer’s market for DRAM, as a glut of memory hits and memory manufacturers slow production. Enterprise servers continue to account for the majority of global shipments, but the share of servers used for internet data centers, such as the hyperscale data centers run by Amazon and Facebook, grew to nearly 35 percent of total sales. While total server sales were up 5 percent, with Q2 2018 especially strong at more than 10 percent quarter-over-quarter growth in global server shipments, shipment growth is expected to slow to 2 percent in the first half of the year. To read this article in full, please click here

Poor data-center configuration leads to severe waste problem

All of the monstrous data centers popping up globally are having multiple negative impacts on the planet, the EPA notes. First, there is the obvious effect: power consumption. Data centers account for 3 percent of the global electricity supply and consume more power than the entire United Kingdom. But beyond that is the waste caused by disposal. With Amazon and the like deploying more than a million physical servers per year globally, the old server equipment they replace has to go somewhere. The same goes for your old servers. The EPA estimates that e-waste, or disposed electronics, now accounts for 2 percent of all solid waste and 70 percent of toxic waste, thanks to the use of chemicals such as lead, mercury, cadmium and beryllium, as well as hazardous chemicals such as brominated flame retardants. A lot of that is old servers and components. To read this article in full, please click here

Network management must evolve in order to scale container deployments

Applications used to be vertically integrated, monolithic software. Today, that’s changed, as modern applications are composed of separate microservices that can be quickly brought together and delivered as a single experience. Containers allow these app components to be spun up significantly faster and run for shorter periods of time, providing the ultimate in application agility. The use of containers continues to grow. A recent survey from ZK Research found that 64 percent of companies already use containers, with another 24 percent planning to adopt them by the end of 2020. (Note: I am an employee of ZK Research.) This trend will cause problems for network professionals if the approach to management does not change. To read this article in full, please click here

Want to use AI and machine learning? You need the right infrastructure

Artificial intelligence (AI) and machine learning (ML) are emerging fields that will transform businesses faster than ever before. In the digital era, success will be based on using analytics to discover key insights locked in the massive volume of data being generated today. In the past, these insights were discovered using manually intensive analytic methods. Today, that doesn’t work, as data volumes continue to grow, as does the complexity of data. AI and ML are the latest tools for data scientists, enabling them to refine the data into value faster. Historically, businesses operated with a small set of data generated from large systems of record. Today’s environment is completely different: there are orders of magnitude more devices and systems generating their own data that can be used in the analysis. The challenge for businesses is that there is far too much data to be analyzed manually. The only way to compete in an increasingly digital world is to use AI and ML. Continue reading

Cisco bets $660M on silicon-photonics firm Luxtera

Cisco says it is buying optical-semiconductor firm Luxtera for $660 million and will build its silicon photonics into future enterprise data-center, webscale, and service-provider networking gear. This photonic technology is essential to keep up with projected massive increases in IP traffic volume over the next four years, according to Cisco's networking chief. "Optics is a fundamental technology to enable this future. Coupled with our silicon and optics innovation, Luxtera will allow our customers to build the biggest, fastest and most efficient networks in the world," said David Goeckeler, executive vice president and general manager of the Networking and Security Business at Cisco. To read this article in full, please click here

New chip techniques are needed for the new computing workloads

Over the next two to three years, we will see an explosion of new complex processors that not only do the general-purpose computing we commonly see today (scalar and vector/graphics processing), but also do a significant amount of matrix and spatial data analysis (e.g., augmented reality/virtual reality, visual response systems, artificial intelligence/machine learning, specialized signal processing, communications, autonomous sensors, etc.). In the past, we expected all newer-generation chips to add features and functions as they were being designed. But that approach is becoming problematic. As we scale Moore’s Law closer to the edge of physical possibility (from 10nm to 7nm, then 5nm), it becomes increasingly lengthy and costly to perfect the new processes. What was generally about 12 months between process improvements is now closer to two years, and newer process factories can cost upwards of $10 billion. To read this article in full, please click here

Investigator finds no evidence of spy chips on Super Micro motherboards

An investigation by an outside firm that specializes in corporate investigations has found no evidence that motherboards sold by Super Micro Computer but made in China had secret chips implanted in them for spying or backdoor access. Like every other OEM, Super Micro, based in San Jose, California, sources many of its components from China. Issues have been raised in the past about Chinese-owned hardware companies: IBM faced some initial resistance when it sold its x86 server business to Lenovo, especially since many government agencies, including the Defense Department, used IBM hardware. But Super Micro was rocked last October when Bloomberg Businessweek ran a lengthy feature article alleging that tiny chips were being secretly stashed on Super Micro motherboards for the purpose of providing backdoors for hackers to illegally access the servers. To read this article in full, please click here

Computers could soon run cold, no heat generated

It’s pretty much just simple energy loss that causes heat build-up in electronics. That ostensibly innocuous warming up, though, causes a two-fold problem. First, the loss of energy, manifested as heat, reduces the machine’s computational power: much of the purposefully created, high-power energy disappears into thin air instead of crunching numbers. Second, as data center managers know, to add insult to injury, it costs money to remove all that waste heat. For both of those reasons (and some others, such as ecological concerns and equipment longevity, since the tech breaks down with temperature), there’s an increasing effort underway to build computers in such a way that heat is eliminated completely. Transistors, superconductors, and chip design are three areas where major conceptual breakthroughs were announced in 2018. They’re significant developments, and consequently it might not be too long before we see the ultimate in efficiency: the cold-running computer. To read this article in full, please click here

IBM and Nvidia announce turnkey AI system

IBM and Nvidia further enhanced their hardware relationship with the announcement of a new turnkey AI solution that combines IBM Spectrum Scale scale-out file storage with Nvidia’s GPU-based AI server. The name is a mouthful: IBM SpectrumAI with Nvidia DGX. It combines Spectrum Scale, a high-performance flash-based storage system, with Nvidia’s DGX-1 server, which is designed specifically for AI. In addition to the regular GPU cores, the V100 processor comes with special AI cores called Tensor Cores, optimized to run machine learning workloads. The box comes with a rack of nine Nvidia DGX-1 servers, with a total of 72 Nvidia V100 Tensor Core GPUs. To read this article in full, please click here

Qualcomm makes it official: no more data center chip

A layoff of 269 people in a company of 33,000 usually isn’t noteworthy, but given where the layoffs hit, it’s notable. Qualcomm has signaled the end of the road for Centriq, its ARM-based server processor, which never got out of the starting gate. U.S. companies have to notify their state employment agencies of layoffs 60 days before they happen, making these events less of a surprise as reporters get wind of them. A letter from Qualcomm to its home city of San Diego said 125 people would be let go on February 6, while a note to officials in Raleigh, North Carolina, said 144 people would also be cut loose. The news is a repeat of what happened last June, right down to the number of people let go and the cities impacted. The cuts target several divisions, one of which is the company's data center division, which was barely staffed to begin with. The Information, which first reported on the layoffs, says the data center group will be down to just 50 people after a peak of more than 1,000. That includes the head of the group, Anand Chandrasekher, a former Intel executive. To read this article in full, please click here

How we selected 10 hot data-center virtualization startups to watch

The selection process for our roundup of 10 data-center virtualization startups to watch began with 33 recommendations and nominations sent via HARO, LinkedIn, Twitter, and subscribers to the Startup50 email newsletter. Several of those startups had to be eliminated right off the bat, not because they wouldn’t be a good fit for this roundup – they would be – but because they had already been covered in previous roundups, including those focused on storage, hybrid cloud and business continuity. To read this article in full, please click here

2019: Look for improvements to software-defined data-center networks

To help IT pros attain top performance for their software-defined data-center networks (SDDCN), we have identified 10 crucial technology areas to watch and evaluate during 2019. SDDCN performance requires advanced network software to provision, manage and secure high-speed traffic flows, and network administrators need automated solutions to monitor and deliver reliable quality of service to critical applications. To read this article in full, please click here

What is an SSD? How solid state drives work

That whirring you hear when you boot your computer or when it wakes from sleep mode is the sound of your hard drive’s magnetic disks beginning to spin. Conceptually not dissimilar to a record player, a hard disk drive (HDD) is an electromechanical device with an actuator arm that positions itself over spinning disks, called platters, in order to read or write information. While record players top out at 78 rpm, today’s enterprise-grade HDDs can spin at 15,000 rpm. Even at that speed, however, there are unavoidable delays as the heads find the spot on the drive that contains the requested data. And sometimes a drive may need to read from multiple locations in order to complete a command, multiplying wait times. To read this article in full, please click here
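Those rotational delays can be put in rough numbers: once the actuator arm is over the right track, the head waits on average half a revolution for the requested sector to come around. A minimal back-of-the-envelope sketch (the function name and the rpm figures beyond 15,000 are illustrative):

```python
# Back-of-the-envelope rotational latency for a spinning hard disk.
# Once the head is on the correct track, it waits on average half a
# revolution for the target sector to rotate underneath it.

def avg_rotational_latency_ms(rpm: float) -> float:
    ms_per_revolution = 60_000 / rpm  # 60,000 ms per minute / revolutions per minute
    return ms_per_revolution / 2      # average wait is half a revolution

for rpm in (5400, 7200, 15000):
    print(f"{rpm:>6} rpm: {avg_rotational_latency_ms(rpm):.2f} ms average rotational latency")
```

Even a 15,000-rpm enterprise drive averages about 2 ms of rotational latency per seek, which is the mechanical floor that SSDs, with no moving parts, eliminate entirely.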

Edge-chips could render some networks unnecessary

Hardware processing should replace a device’s dependency on networks, some scientists say. Greater machine efficiency, power savings and resilience are behind the reasoning. “Devices like drones depend on a constant Wi-Fi signal. If the Wi-Fi stops, the drone crashes,” says an article about researchers at Binghamton University in Binghamton, New York. But if you make a device independent of any network link, it could become more resilient, the researchers say. Plus, the more processing work you can do on the machine, the more energy you’ll save, because you won’t have to supply power to communicate. To read this article in full, please click here

What’s hot in enterprise networking for 2019

With 2019 just around the corner, there are a lot of developments and advances coming that will affect how enterprises use technologies such as SD-WAN, IoT, Wi-Fi, data centers, the cloud and more. Here's a selection of the trends that industry pros see coming next year. What will be hot for Cisco in 2019? Software, software and more software. That seems to be the mantra for Cisco in 2019, as the company pushes software-defined WANs, cloud partnerships, improved application programs and its overarching drive to sell more subscription-based software licenses. Check it out. To read this article in full, please click here

10 predictions for the data center and the cloud in 2019

It’s that time of year again, when vacations are planned, going to the mall looks like something out of “Braveheart,” package theft from doorsteps is rampant, and people try their best not to offend. In other words, it’s Christmastime. This leads to an inevitable tradition of looking back at the year and ahead at what will come. For some time, I’ve done general looks back, but this year we are narrowing the focus to the data center and the cloud, since the real battle these days is to find a balance between the cloud and on-premises implementations. To read this article in full, please click here

IDC: Expect 175 zettabytes of data worldwide by 2025

IDC has released a report on the ever-growing datasphere, its term for the collective sum of the world’s data, and just like the recent Cisco study, the numbers are staggering. IDC predicts that the world’s data will grow from 33 zettabytes this year to 175ZB by 2025, for a compounded annual growth rate of 61 percent. The 175ZB figure represents a 9 percent increase over last year’s prediction of data growth by 2025. IDC's “Data Age 2025” whitepaper, sponsored by Seagate, says the datasphere spans three locations: first is the core, which includes traditional and cloud data centers; second is the edge, which includes things like cell towers and branch offices; and third is endpoints, which include PCs, smartphones, and Internet of Things (IoT) devices. To read this article in full, please click here
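Growth figures like these can be sanity-checked with the standard compound-annual-growth-rate formula. A minimal sketch, assuming a 2018 baseline of 33 ZB over a seven-year horizon (note that the endpoints alone imply a rate well below the quoted 61 percent, which presumably rests on a different baseline or methodology):

```python
# Compound annual growth rate (CAGR) between two data points:
#   cagr = (end / start) ** (1 / years) - 1

def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# IDC's endpoints: 33 ZB in 2018 growing to 175 ZB by 2025 (7 years).
rate = cagr(33, 175, 2025 - 2018)
print(f"Implied CAGR from the two endpoints: {rate:.1%}")
```

The same two-point check works for any of the growth projections in these reports, as long as you are explicit about which baseline year the rate is compounded from.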

AWS, Red Hat move to shore up hybrid cloud environments

Two more signs it’s a hybrid cloud world: This week, Red Hat, in the process of being bought by IBM, acquired a startup that specializes in managing storage across multi-cloud environments. And Amazon launched a raft of hybrid storage services, as well as a service that allows customers to run the Amazon Web Services (AWS) cloud in their own data centers. IBM is not expected to close the planned $34 billion purchase of Red Hat until the second half of 2019, so in the meantime, Red Hat continues on its way, grabbing cloud startups it sees as strategic. This week, it announced its acquisition of NooBaa, which specializes in managing data storage services across hybrid and multi-cloud deployments. To read this article in full, please click here
