
Category Archives for "Network World Data Center"

IBM set to deliver mainframe AI services, support

As it previewed in March, IBM is set to deliver an AI-infused, hybrid-cloud-oriented version of its z/OS mainframe operating system.

Set for delivery on Sept. 29, z/OS 3.1 grows IBM’s AI portfolio to let customers securely deploy AI applications co-located with z/OS applications and data. The release also adds a variety of new features, such as container extensions for Red Hat and Linux applications, that better support hybrid cloud applications on the Big Iron.

In this release of the mainframe’s OS, AI support is implemented in a feature package called AI System Services for IBM z/OS version 1.1, which lets customers build an AI framework that IBM says is designed to support initial and future intelligent z/OS management capabilities.

Pipes and more pipes on Linux

Most people who spend time on the Linux command line move quickly into using pipes. In fact, pipes were one of the things that really got me excited when I first used the command line on a Unix system. My appreciation of their power and convenience continues even after decades of using Linux. Using pipes, I discovered how much I could get done by sending the output of one command to another command, and sometimes to a command after that, to further tailor the output I was looking for. Commands incorporating pipes – like the one shown below – allowed me to extract just the information I needed without having to compile a program or prepare a script.
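For instance, a pipeline along these lines (an illustrative stand-in, since the teaser cuts off before the article’s own example) answers a question no single command answers on its own:

# count login sessions per user, busiest users first
$ who | awk '{print $1}' | sort | uniq -c | sort -rn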

Nvidia teams with Accenture and ServiceNow for AI program

An interesting alliance has been struck: Nvidia is partnering with IT consultancy Accenture and helpdesk vendor ServiceNow to offer what the vendors are calling AI Lighthouse, a program designed to help ServiceNow customers quickly adopt generative AI tools.

The IT service management and customer service markets seem a natural fit for generative AI. When customers or employees need help with something, that’s where generative AI can shine.

Dell announces generative AI solutions

Dell Technologies is the latest IT vendor to jump on the generative AI bandwagon with a range of new AI offerings that span its hardware, software and services lineup.

In May, Dell announced plans to develop integrated AI services in partnership with Nvidia. That effort has come to fruition with this portfolio, dubbed Dell Generative AI Solutions. As part of the program, the company announced validated designs with Nvidia that are aimed at helping enterprises deploy AI workloads on premises. The new offerings also include professional services to help enterprises determine where and how to best use generative AI services.

Typically, Nvidia GPUs go into servers for AI functions, but Dell’s news isn’t limited to servers. Dell is also announcing Precision workstations with expanded Nvidia GPU configurations to help users accelerate generative AI workloads locally on their devices.

Gen-AI HPC infrastructure provider CoreWeave scores $2.3 billion financing deal

CoreWeave, a specialist cloud provider offering high-performance computing services to meet growing corporate demand for generative AI workloads, announced Thursday that it has received a $2.3 billion debt financing package from several asset management firms.

The key to CoreWeave’s focus on the AI market is its hardware. The company sells primarily GPU-based virtual machines, which are particularly well suited for AI workloads. According to Gartner vice president and analyst Arun Chandrasekaran, CoreWeave’s advertised low cost is a function of its ties to Nvidia, with which, CoreWeave has said, it has a preferred-supplier arrangement that enables it to pass on savings.

Schneider and Compass partner to streamline modular data center deployments

Schneider Electric and Compass Datacenters have announced a partnership aimed at expanding the two companies’ production capabilities for modular data centers. They’re building a 110,000-square-foot facility where they’ll integrate Schneider’s power management equipment with Compass’s prefabricated data center modules in an effort to speed deployments across the US.

It’s an ideal match. Schneider makes the infrastructure that runs data centers, such as power generators and HVAC systems, and Compass designs and builds data centers for hyperscalers and cloud service providers worldwide. Compass builds standard-design data centers as well as the newer modular type, which is gaining in popularity.

Moving tasks from foreground to background and back again

When working on the Linux command line, you can start a task, move it to the background, and, when you’re ready to reverse the process, bring it back to the foreground. When you run a command or script in the foreground, it occupies your time on the command line – until it’s finished. When you need to do something else while still allowing that first task to complete, you can move it to the background, where it will continue processing while you work on something else.

The easiest way to do this is by typing ^z (hold the Ctrl key and press “z”) after starting the process. This stops the process. Then type “bg” to move it to the background. The jobs command will show you that it is still running.
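Here’s roughly what that sequence looks like in practice (the file names are hypothetical; the status lines are what bash typically prints):

$ sort huge.log > huge.sorted    # start a long-running task
^Z                               # Ctrl-z stops (suspends) it
[1]+  Stopped        sort huge.log > huge.sorted
$ bg                             # resume it in the background
[1]+ sort huge.log > huge.sorted &
$ jobs                           # confirm that it's still running
[1]+  Running        sort huge.log > huge.sorted &
$ fg                             # bring it back to the foreground
sort huge.log > huge.sorted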

Power availability stymies data center growth

The chief obstruction to data center growth is not the availability of land, infrastructure, or talent. It’s local power, according to commercial real estate services company CBRE. In its 2023 global data center trends report, CBRE says the market is growing steadily and demand is constantly rising, but data center growth has been largely confined to a few select areas, and those areas are running out of power.

No region embodies this more than Northern Virginia, the world’s largest data center market with 2,132 megawatts (MW) of total inventory. Its growth happened for a couple of reasons: first, proximity to the US federal government; second, a major undersea cable to Europe lands in Northern Virginia, and data centers want to be as close to it as possible to minimize latency.

GigaIO introduces single-node AI supercomputer

Installation and configuration of high-performance computing (HPC) systems can be a considerable challenge, requiring skilled IT pros to set up the software stack and optimize it for maximum performance – it isn’t like building a PC with parts bought off Newegg.

GigaIO, which specializes in infrastructure for AI and technical computing, is looking to simplify the task. The vendor recently announced a self-contained, single-node system with 32 configured GPUs in the box to offer simplified deployment of AI and supercomputing resources.

Up to now, the only way to harness 32 GPUs was to use four servers with eight GPUs apiece. That meant latency to contend with as the servers communicate over networking protocols, and all that hardware consumed floor space.

Is your data center ready for generative AI?

Enterprise adoption of generative artificial intelligence (AI), which is capable of generating text, images, or other media in response to prompts, is in its early stages, but is expected to increase rapidly as organizations find new uses for the technology.

“The generative AI frenzy shows no signs of abating,” says Gartner analyst Frances Karamouzis. “Organizations are scrambling to determine how much cash to pour into generative AI solutions, which products are worth the investment, when to get started and how to mitigate the risks that come with this emerging technology.”

Data centers grapple with staffing shortages, pressure to reduce energy use

Reducing energy use and keeping qualified staff are top of mind for data center operators, according to Uptime Institute’s latest annual global data center survey.

“Digital infrastructure managers are now most concerned with improving energy performance and dealing with staffing shortfalls, while government regulations aimed at improving data center sustainability and visibility are beginning to require attention, investment, and action,” said Andy Lawrence, executive director, Uptime Intelligence.

Lenovo all-flash arrays aimed at optimizing AI workloads

AI is nothing without lots of data, so high-speed, high-capacity storage is a must. Lenovo is the latest vendor to come out with new storage systems optimized for read-intensive enterprise AI workloads and large-dataset workloads.

Lenovo’s ThinkSystem DG enterprise storage arrays use all-flash storage with a quad-level cell (QLC) architecture, the densest flash storage available. They’re capable of up to six times the performance at up to 50% lower cost compared to HDD arrays, Lenovo asserts. Its ThinkSystem DM3010H array is aimed at SMB customers and designed to offer better scalability and flexibility for a wide range of workloads, including file services, virtualization, backup and archive, and other I/O applications, according to Lenovo.

Assigning sudo privilege to users on Linux

The sudo command is a very important command on Linux systems. You might say that it allows users to run privileged commands without logging in as root, and that is true. However, the more important point is that it allows individuals to manage Linux systems – adding accounts, running updates, installing applications and backing up the system – without requiring that these things be done using the root account. This is consistent with the policy that root privilege should only be used as needed and that no one should simply log in as root and run all of their commands. Doing routine work using the root account is considered dangerous because any typos or commands run in the wrong location can have very serious consequences.
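On most distributions, granting sudo privilege amounts to adding a user to the right group. A minimal sketch (the user name is hypothetical; the group is “sudo” on Debian/Ubuntu systems and “wheel” on RHEL/Fedora systems):

# add an existing user to the sudo group (Debian/Ubuntu)
$ sudo usermod -aG sudo jdoe
# on RHEL/Fedora-style systems, use the wheel group instead
$ sudo usermod -aG wheel jdoe
# verify the new group membership
$ groups jdoe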

UK competition agency provisionally OKs Broadcom’s $61B VMware acquisition

The UK’s Competition and Markets Authority (CMA) has provisionally cleared Broadcom’s proposed acquisition of VMware, paving the way for the $61 billion deal to go ahead.

In November 2022, the CMA announced it was launching an in-depth investigation into the proposed deal, looking into whether the merger “may be expected to result in a substantial lessening of competition within any market or markets in the United Kingdom for goods or services.”

In particular, the CMA was concerned that the deal could harm the ability of Broadcom’s rivals to compete with VMware’s server virtualisation software, and that there could be a financial benefit to Broadcom and VMware if they were to make rival products work less well with VMware’s software.

How to determine your Linux system’s filesystem types

Linux systems use a number of file system types – such as ext, ext2, ext3, ext4, JFS, XFS, ZFS, ReiserFS and btrfs. Fortunately, there are a number of commands that can look at your file systems and report on the type of each of them. This post covers seven ways to display this information. To begin, the file system types that are used on Linux systems are described below.

File system types

Ext4 is the fourth generation of the ext file system, released in 2008 and pretty much the default since 2010. It supports file systems as big as 16 terabytes. It also supports unlimited subdirectories, where ext3 only supports 32,000. Yet it’s backward compatible with both ext3 and ext2, allowing them to be mounted with the same driver. Ext4 is also very stable, widely supported and compatible with solid-state drives.
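A few of the commands in question, to give a sense of what the article covers (all standard Linux tools; output details vary by distribution):

# report each mounted file system along with its type
$ df -T
# show file system types for all block devices, mounted or not
$ lsblk -f
# query the type of a single mount point, such as /
$ findmnt -n -o FSTYPE /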

Memory prices may have bottomed out

If you've been considering a memory upgrade for your systems, now might be the time to do it. The lengthy decline of memory prices has nearly stopped, and while that doesn’t mean prices are going to go up just yet, it's likely to happen down the road.

DRAM and NAND flash memory makers have had to endure a severe downturn in average selling prices over the past six months, as part of the typical cyclical nature of memory sales. But a new report by technology industry analyst firm TrendForce says price declines for some forms of memory have slowed to almost zero.

Startup UniFabriX uses CXL memory technology to boost rack density

Israeli startup UniFabriX is aiming to give multi-core CPUs the memory and memory bandwidth needed to run compute- and memory-intensive AI and machine-learning workloads.

UniFabriX is pitching its Smart Memory Node technology as an alternative to socket-connected DRAM, which restricts memory capacity and bandwidth in CPUs. UniFabriX’s technology is based on CXL (Compute Express Link), an industry-supported interconnect for processors, memory expansion, and accelerators. CXL technology maintains memory coherency between the CPU memory space and memory on attached devices, which allows resource sharing for higher performance, reduced software stack complexity, and lower overall system cost.

New ML benchmarks show best algorithms for training chatbots

MLCommons, a group that develops benchmarks for AI technology, revealed the results of a new test that measures how quickly systems can train the algorithms specifically used to create chatbots like ChatGPT.

MLPerf 3.0 is meant to provide an industry-standard set of benchmarks for evaluating ML model training. Model training can be a rather lengthy process, taking weeks or even months depending on the size of a data set. That requires an awful lot of power, so training can get expensive.

The MLPerf Training benchmark suite is a full series of tests that stress machine-learning models, software, and hardware for a broad range of applications. This round found performance gains of up to 1.54x compared to just six months ago, and of 33x to 49x compared to the first round in 2018.
