Archive

Category Archives for "Network World Data Center"

HPE tests gear at the ultimate edge: Space

You want edge computing? How about 250 miles straight up? Hewlett Packard Enterprise has announced that the Spaceborne Computer-2 (SBC-2) on the International Space Station (ISS) has successfully completed 24 research experiments in less than a year. SBC-2, installed in May 2021, is the first in-space commercial edge-computing and AI-enabled system to run on the ISS, according to HPE. The experiments involved real-time data processing and the testing of new applications to prove their reliability in space and to increase autonomy for astronauts. HPE said the experiments reduced time-to-insight from days or months to minutes. SBC-2 pairs HPE’s Edgeline Converged EL4000 Edge system, which is designed to perform in harsh edge environments including space, with the HPE ProLiant DL360, a high-performance server built for workloads such as HPC and AI.

9 hot jobs in the evolving data center

As data centers evolve, the skills needed to run them change as well, creating both a challenge and an opportunity for current data-center workers. By necessity, modern data centers are at the forefront of efficiency in energy consumption, space utilization, and automation. That efficiency extends to the personnel who staff them, who must swiftly implement hardware, software, and architecture changes as best practices improve. That calls for new roles in administration, management, and planning. Existing career fields and legacy skill sets won’t cut it in many cases, but IT pros will be able to augment their existing skills to fill new roles as they open up—jobs with a more forward-looking focus.

Intel claims sustainability for its custom chip that mines bitcoins faster than GPUs

Intel this week announced details of its new Blockscale ASIC, a chip designed specifically for more efficient blockchain computing than CPUs or GPUs can deliver. Intel first said it was making such a chip just two months ago. Blockscale is built to process the Secure Hash Algorithm-256 (SHA-256) used in blockchain mining, and its performance is phenomenal, at least on paper: a hash rate of up to 580 GH/s (gigahashes per second).
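For context on what a "hash" means here: SHA-256 maps arbitrary input to a fixed 256-bit digest, and a miner's hash rate counts how many such digests it can compute per second. A quick illustration with the standard sha256sum utility (the input string is just an example; the 580 GH/s figure means roughly 580 billion of these computed per second, in dedicated silicon rather than software):

```shell
# One SHA-256 digest of the string "abc". Bitcoin mining hardware
# computes SHA-256 digests like this (applied twice per attempt)
# billions of times per second while searching for a digest below
# the network's target value.
printf 'abc' | sha256sum
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad  -
```

A general-purpose CPU core manages on the order of millions of hashes per second, which is why purpose-built ASICs like Blockscale dominate this workload.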

2 ways to remove duplicate lines from Linux files

There are many ways to remove duplicate lines from a text file on Linux, but here are two that involve the awk and uniq commands and offer slightly different results. Remove duplicate lines with awk: The first command we'll examine in this post is a very unusual awk command that systematically removes every line in the file that is encountered more than once. It leaves the first instance of the line intact, but "remembers" it and removes any duplicates encountered afterward.
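The awk behavior the article describes matches the classic one-liner below; the sample file name and contents are just examples:

```shell
# Sample file with duplicate lines
printf 'apple\nbanana\napple\ncherry\nbanana\n' > /tmp/fruits.txt

# awk: seen[$0]++ evaluates to 0 (false) the first time a line
# appears, so !seen[$0]++ is true only for first occurrences,
# which awk prints. Original order is preserved; only later
# duplicates are dropped.
awk '!seen[$0]++' /tmp/fruits.txt
# apple
# banana
# cherry

# uniq: collapses only *adjacent* duplicates, so sort first;
# note that this changes the original line order.
sort /tmp/fruits.txt | uniq
```

The two approaches differ exactly as described: awk keeps the lines in their original order, while sort | uniq returns them sorted.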

7 emerging network jobs that could boost your career

The relatively stable world of enterprise networking has undergone quite a bit of upheaval over the past few years. As a result, networking professionals with traditional job titles have assumed new responsibilities, and entirely new job titles have emerged. Key trends reshaping the jobs of network professionals include increased adoption of cloud services; the push for more automation of business processes; and the rise of technologies such as software-defined networking (SDN), SD-WAN, the Internet of Things (IoT), secure access service edge (SASE), zero trust network access (ZTNA), and edge computing.

Microsoft launches Azure VMs powered by new Ampere Altra Arm-based chips

Microsoft has announced the public preview of its new Azure virtual machines powered by Arm-based Ampere Altra server processors. The VM series includes the general-purpose Dpsv5 and memory-optimized Epsv5 virtual machines, which Microsoft claims can deliver up to 50% better price-performance than comparable x86-based VMs. The new VMs have been engineered to efficiently run scale-out workloads: web servers, application servers, open-source databases, cloud-native and rich .NET applications, Java applications, gaming servers, and media servers.

AMD grabs DPU maker Pensando for a cool $1.9B

Advanced Micro Devices took a big step toward competing in data-center networking with its announced agreement to acquire Pensando for approximately $1.9 billion. AMD wants the DPU-based architecture and technology Pensando is developing, which includes intelligent, programmable software to support software-defined cloud, compute, networking, storage, and security services that can be rolled out quickly in edge, colocation, or service-provider networks. “There are a wide range of use cases—such as 5G and IoT—that need to support lots of low-latency traffic,” Soni Jiandani, Pensando co-founder and chief business officer, told Network World last November. “We’ve taken a ground-up approach to giving enterprise customers a fully programmable system with the ability to support multiple infrastructure services without dedicated CPUs.”

IBM’s game-changing mainframe moments

With the advent of the latest generation of IBM mainframes – the z16 – Big Blue is furthering one of the most successful technology systems the IT world has known. Here are a few key moments in the history of Big Iron. IBM 360: In 1964, IBM began what many consider the first true series of mainframes, the IBM 360. At the time, IBM said its central processors came in 19 combinations of graduated speed and memory capacity. The system also included more than 40 types of peripheral equipment for entering, storing, and retrieving information. Built-in communications capability made the System/360 available to remote terminals, regardless of distance.

What is a network operations center (NOC)?

NOC (pronounced "knock") stands for network operations center, and if the term conjures up images of a NASA-like control room, you would not be too far off from reality – at least at some organizations. While the role of a NOC can vary, the general idea is a room or centralized facility where information technology (IT) professionals can constantly monitor, maintain, and troubleshoot all aspects of a network. The NOC must also be equipped with all the technology required to support those operations, including monitors, computers, telecommunications equipment, and a fast connection to network resources. NOCs were created for two main reasons. The first was to give IT staffers a central location to work from, instead of having them run around trying to fix problems or perform preventive maintenance, such as patching systems, from different locations.

New Oak Ridge supercomputer outperforms the old in a fraction of the space

The conventional wisdom is that you should update your IT gear, namely servers, every three to five years, which is usually when service warranties run out. However, some companies hold onto their gear longer than that for a variety of reasons: lack of funds, business uncertainty, on-premises-versus-cloud decisions, and so forth. And for a while, the CPU makers were helping. New generations of processors were only incrementally faster than the old ones, making it hard to justify an upgrade. The result was longer lifecycles for server hardware: a 2020 survey by IDC found 20.3% of respondents holding onto servers for six years and 12.4% keeping servers for seven years or more.

Nvidia CEO says he is open to using Intel for chip fabrication

The old saying that adversity makes for strange bedfellows has been proven true, with Nvidia saying it is now willing to work with Intel’s foundry business to manufacture its chips. Nvidia CEO Jen-Hsun Huang dropped the news on a press call when he was asked about diversifying the company’s supply chain, which relies on TSMC for its chip manufacturing; TSMC is both overloaded with orders and in a politically unstable region of the world (Taiwan). Huang said his company realized it needed more resilience going forward, so over the last couple of years it has added to the number of process nodes it uses and is in more fabs than ever. “So we've expanded our supply chain, supply base, probably four-fold in the last two years,” Huang said.

Using the btrfsck file-checking command on Linux

The btrfsck command is a filesystem-check command like fsck, but it works with the btrfs file system. First, a little bit about btrfs. As the name implies, btrfs uses a B-tree data structure that is self-balancing and maintains sorted data, facilitating searches, sequential access, insertions, and deletions. It is also often referred to as the “better file system.” Oracle developed it and first used it about 15 years ago. By November 2013, it was declared adequately stable and began to be used by other distributions as well, and now its use is quite common. Benefits of btrfs: The benefits of btrfs are impressive, although it’s still a work in progress and some concerns have kept it from playing a more dominant role on Linux systems. It keeps two copies of metadata on a volume, allowing for data recovery if and when the hard drive is damaged or suffers from bad sectors. It uses checksums and verifies them with each read. In addition, compared to ext4 volumes, btrfs does not require double the storage space to accommodate file versioning and history data.
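As a rough sketch of typical usage (the device name /dev/sdb1 is a placeholder for your own btrfs volume; the filesystem should be unmounted before checking, and on modern systems btrfsck is an alias for btrfs check):

```shell
# Check an unmounted btrfs filesystem; by default the check is
# read-only and reports problems without modifying anything.
sudo btrfsck /dev/sdb1

# Equivalent modern invocation via the btrfs front end
sudo btrfs check /dev/sdb1

# Attempt repairs only as a last resort -- the btrfs documentation
# warns that --repair can make damage worse; back up first.
sudo btrfs check --repair /dev/sdb1
```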

Data center infrastructure spending still growing as cloud providers keep buying

Public cloud providers are quickly becoming the biggest buyers of data-center infrastructure equipment, as purchasing of hardware and software both rebounded sharply in 2021, according to a recent report by Synergy Research Group. Overall spending grew by roughly 10% year on year, reaching a total of $185 billion in 2021. The lion’s share of that spending was on hardware, according to Synergy, with 77% of the total going toward servers, storage, and networking gear. Software, including operating systems, cloud management, virtualization, and network security, made up the rest.

Nvidia announces server ‘superchips,’ with and without GPUs

At its GPU Technology Conference (GTC) last year, Nvidia announced it would come out with its own server chip, called Grace, based on the Arm Neoverse v9 server architecture. At the time, details were scant, but this week Nvidia revealed them, and they are remarkable. With Grace, customers have two options, both dubbed superchips by Nvidia. The first is the Grace Hopper Superchip, which was formally introduced last year but only broadly described. It consists of a 72-core CPU and a Hopper H100 GPU tightly connected by Nvidia’s new high-speed NVLink-C2C chip-to-chip interconnect, which offers 900GB/s of transfer bandwidth.

Nvidia introduces Spectrum-4 platform for AI, HPC over Ethernet

Nvidia is best known for its GPUs, but it has introduced Spectrum-4, a combination of networking technologies that reinforces its commitment not only to graphics processors but also to systems designed to handle the demanding network workloads of AI and high-performance computing. The latest Nvidia Spectrum products rely on the new Spectrum-4 Ethernet-switch ASIC, which boasts 51.2Tb/s of switching and routing capacity. The chip underpins the latest members of the company’s Spectrum switches, which will be available later this year. The switches are part of a larger Spectrum-4 platform that integrates Nvidia’s ConnectX-7 SmartNIC, its new BlueField-3 DPU, and its DOCA software-development platform.