Network World Data Center

Author Archives: Network World Data Center

How to wait for things to happen on Linux

There are always things to wait for on a Linux system – for upgrades to complete, for a process to finish, for coworkers to log in and help resolve a problem, or for a status report to be ready. Fortunately, you don’t have to sit twiddling your thumbs. Instead, you can get Linux to do the waiting and let you know when the work is done. You can do this by crafting the waiting and the condition for which you are waiting in a script, or you can use the wait command – a bash builtin that watches for a process running in the background to complete.

Crafting waiting within scripts

There are many ways to craft waiting within a script. Here's a simple example of waiting for a period of time before moving on to the next task:
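As a hypothetical sketch of both approaches (the sleep commands stand in for real long-running tasks):

```shell
#!/bin/bash
# Fixed delay: simply pause for a period of time before the next task.
sleep 5
echo "Done waiting."

# The wait builtin: block until a background process completes.
sleep 10 &     # start a long-running task in the background
wait $!        # $! holds the PID of the most recent background job
echo "Background task finished."
```

Called with no arguments, wait blocks until all of the shell's background jobs have finished rather than one specific PID.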

Data center developers turn to emerging US markets

Data center developers are under pressure to expand their horizons when it comes to choosing sites for new construction. Land prices, availability of power and bandwidth, and pushback from neighbors are among the factors that are driving developers to seek new regions.

Northern Virginia, for example, is home to more data centers than any other part of the world, with 275 and more on the way. But the region is running out of space and available power, and residents are running out of patience with these resource-intensive facilities that consume growing amounts of power and water, according to the Washington Post.

Intel looking likely to manufacture Nvidia chips

More than a year ago, Nvidia CEO Jensen Huang said he was open to the possibility of having Intel manufacture Nvidia’s GPUs through Intel's foundry services program. At the time, Huang was noncommittal beyond saying that Nvidia was looking at the possibility. Now things are getting more concrete.

During a question-and-answer session at the Computex trade show in Taipei, Taiwan, Huang said he had recently received good results for an Intel test chip based on the company's next-generation process node.

“You know that we also manufacture with Samsung, and we're open to manufacturing with Intel. [Intel CEO Pat Gelsinger] has said in the past that we're evaluating the process, and we recently received the test chip results of their next-generation process, and the results look good,” Huang said.

Inside Nvidia’s new AI supercomputer

With Nvidia’s Arm-based Grace processor at its core, the company has introduced a supercomputer designed to perform AI processing powered by a CPU/GPU combination.

The new system, formally introduced at the Computex tech conference in Taipei, is the DGX GH200 supercomputer, powered by 256 Grace Hopper Superchips – a combination of Nvidia’s Grace CPU, a 72-core Arm processor designed for high-performance computing, and the Hopper GPU. The two are connected by Nvidia’s proprietary NVLink-C2C high-speed interconnect.

Nvidia’s new Grace Hopper superchip to fuel its DGX GH200 AI supercomputer

Nvidia has unveiled a new DGX GH200 AI supercomputer, underpinned by its new Grace Hopper superchip and targeted toward developing and supporting large language models.

“DGX GH200 AI supercomputers integrate Nvidia’s most advanced accelerated computing and networking technologies to expand the frontier of AI,” Nvidia CEO Jensen Huang said in a blog post.

The supercomputer, according to Huang, combines the company’s GH200 Grace Hopper superchip and Nvidia’s NVLink Switch System to allow the development of large language models for generative AI language applications, recommender systems, and data analytics workloads.

Resizing images on the Linux command line

The convert command from the ImageMagick suite of tools provides ways to make all sorts of changes to image files. Among these is an option to change the resolution of images. The syntax is simple, and the command runs extremely quickly. It can also convert an image from one format to another (e.g., jpg to png) as well as blur, crop, despeckle, dither, flip and join images, and more.

Although the commands and scripts in this post mostly focus on jpg files, the convert command also works with a large variety of other image formats, including png, bmp, svg, tiff and gif.

Basic resizing

To resize an image using convert, you would use a command like this:
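For example (the file names here are hypothetical), basic resizing invocations look like this:

```shell
# Shrink an image to 50% of its original dimensions:
convert original.jpg -resize 50% smaller.jpg

# Resize to a specific width (800 pixels), preserving the aspect ratio:
convert original.jpg -resize 800x narrower.jpg

# Resize and convert from jpg to png in one step:
convert original.jpg -resize 800x600 resized.png
```

Note that a geometry like 800x600 is treated as a bounding box – convert preserves the aspect ratio and fits the image within it; appending ! (e.g., 800x600!) forces the exact dimensions.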

Intel revises its XPU strategy

Intel has announced a shift in strategy that impacts its XPU and data-center product roadmap.

XPU is an effort by Intel to combine multiple pieces of silicon into one package. The plan was to combine CPU, GPU, networking, FPGA, and AI accelerator and use software to choose the best processor for the task at hand. That’s an ambitious project, and it looks like Intel is admitting that it can’t do it, at least for now.

Jeff McVeigh, corporate vice president and general manager of the Super Compute Group at Intel, provided an update to the data-center processor roadmap that involves taking a few steps back. Its proposed combination CPU and GPU, code-named Falcon Shores, will now be a GPU chip only.

Nvidia joins with Dell to target on-prem generative AI

Dell Technologies and Nvidia are jointly launching an initiative called Project Helix that will help enterprises build and manage generative AI models on-premises, they said Tuesday.

The companies will combine their hardware and software infrastructure in the project to support the complete generative AI lifecycle, from infrastructure provisioning through modeling, training, fine-tuning, application development, and deployment, to running inference and streamlining results, they said in a joint statement.

Dell will contribute its PowerEdge servers, such as the PowerEdge XE9680 and PowerEdge R760xa, which are optimized to deliver performance for generative AI training and inferencing, while Nvidia's contribution to Project Helix will be its H100 Tensor Core GPUs and Nvidia networking to form the infrastructure backbone for generative AI workloads.

Now on sale at Bed Bath & Beyond: One slightly used data center

With Bed Bath & Beyond filing for bankruptcy last month, it’s liquidation-sale time. That doesn’t mean just blankets and cookware; it also includes the company's data center in North Carolina – not just its servers, but the whole facility.

The data center in Claremont, N.C., was built in 2013 with a total of 47,500 square feet, 9,500 square feet of which is raised floor space, with the ability to double the amount of raised floor space and boost the total power from 1MW to 3.5MW.

It is rated a Tier III on the data-center ranking scale of I through IV. Tier III data centers have redundant components and infrastructure for power and cooling, with a guaranteed 99.982% availability.

Frontier still reigns as world’s fastest supercomputer

For the third time in a row, Frontier is ranked number one among the world’s fastest supercomputers, and it remains the only one whose top speed exceeds one exaFLOPS.

At 1.194 quintillion floating point operations per second (FLOPS), Frontier kept its ranking with more than double the top speed of its nearest competitor, according to the list compiled by TOP500, which issues the rankings twice a year. A quintillion is 10^18, or one exaFLOPS (EFLOPS).

The number two machine, Fugaku, maxed out at 442.01 petaFLOPS. A petaFLOPS is 10^15 FLOPS.

Two competitors in the top 10 improved their speeds since the last ranking period, which ended in November 2022, but not nearly enough to draw close. Those two – LUMI and Leonardo – rank third and fourth, respectively.

Meta is working on its own chip, data center design for AI workloads

Facebook parent company Meta has revealed plans for the development of its own custom chip for running artificial intelligence models, along with a new data center architecture for AI workloads.

“We are executing on an ambitious plan to build the next generation of Meta’s AI infrastructure and today, we’re sharing some details on our progress. This includes our first custom silicon chip for running AI models, a new AI-optimized data center design and the second phase of our 16,000 GPU supercomputer for AI research,” Santosh Janardhan, head of infrastructure at Meta, wrote in a blog post Thursday.

How to quickly make minor changes to complex Linux commands

When working in the Linux terminal window, you have a lot of options for moving around on the command line; backing up over a command you’ve just typed is only one of them.

Using the Backspace key

We likely all use the Backspace key fairly often to fix typos. It can also make running a series of related commands easier. For example, you can type a command, press the up arrow key to redisplay it and then use the Backspace key to back over and replace some of the characters to run a similar command. In the examples below, a single character is backed over and replaced.
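As a hypothetical illustration, the pair of commands below differs by a single character; after running the first, pressing the up arrow and backing over the trailing "1" lets you type "2" and rerun it:

```shell
# Typed in full the first time:
echo "checking server1"
# Recalled with the up arrow, then Backspace over the final character:
echo "checking server2"
```

The same recall-and-edit trick works for any pair of nearly identical commands, such as inspecting a series of rotated log files.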

Ampere launches 192-core AmpereOne server processor

Ampere has announced it has begun shipping its next-generation AmpereOne processor, a server chip with up to 192 cores and special instructions aimed at AI processing. It is also the first generation of chips from the company to use homegrown cores rather than cores licensed from Arm. Among the features of these new cores is support for bfloat16, the popular numeric format used in AI training and inferencing.

“AI is a big piece [of the processor] because you need more compute power,” said Jeff Wittich, chief products officer for Ampere. “AI inferencing is one of the big workloads that is driving the need for more and more compute, whether it’s in your big hyperscale data centers or the need for more compute performance out at the edge.”

DOE funds $40 million for advanced data-center cooling

The Department of Energy has awarded $40 million to 15 vendors and university labs as part of a government program that aims to reduce the portion of data centers' power usage that's used for cooling to just 5% of their total energy consumption.

The DOE's Advanced Research Projects Agency–Energy (ARPA-E) is providing the funding to jumpstart a program called COOLERCHIPS, an acronym for Cooling Operations Optimized for Leaps in Energy, Reliability, and Carbon Hyperefficiency for Information Processing Systems.

For cooling to account for just 5% of total energy consumption would translate to a PUE of 1.05. (Power usage effectiveness, or PUE, is a metric used to measure data center efficiency. It's the ratio of the total amount of energy used by a data center facility to the energy delivered to computing equipment.)

AWS to invest $12.7B to expand its cloud infrastructure in India by 2030

Amazon Web Services (AWS) on Thursday said it is committing $12.7 billion to expand its cloud infrastructure in India by 2030 in order to meet growing customer demand for its cloud services.

“Today we’re announcing an additional planned investment of $12.7 billion for cloud infrastructure in India. That will bring our total investment to $16.4 billion by 2030 — boosting the country’s GDP, supporting tens of thousands of jobs, and continuing to help customers innovate,” AWS CEO Adam Selipsky said in a tweet.

The new investment, according to the company, is expected to add $23.3 billion to the country’s GDP by 2030, generating 131,700 jobs annually for the next seven years.

Startup NEO Semiconductor promises 8x increase in memory density

System memory is a complex problem. More memory means more performance, especially in a virtualized environment. But more memory also requires more power, and that can add up as you start to get into thousands of memory sticks. Plus, you can only put so many memory sticks in a server, depending on the number of slots available.

So how do you increase memory capacity? By increasing memory density on the chips, which is easier said than done. However, a startup called NEO Semiconductor is claiming it will be able to increase memory density by up to eight times over standard memory with a breakthrough 3D design.

It’s not a new concept; 3D stacking has been used in NAND flash to increase capacity for a decade now. Memory transistors can only be so large to fit within the confines of a chip. Rather than increase the number of transistors laid out side by side, memory makers began stacking them on top of each other, thus increasing capacity in the same physical space. In the 10 years since 3D stacking began, NAND flash has reached the 170-layer mark, and SSDs have seen a significant increase in capacity without …
