Network World Data Center


Gigabyte spins off its enterprise business to better serve enterprises

Gigabyte has split in two, breaking off its enterprise business as a subsidiary called Giga Computing Technology that's focused on sales and support for its data-center products. The Taiwanese company is well known for its motherboards and GPU cards for gaming, but also for several form factors of servers. Breaking out Giga Computing into a separate unit enables it to better cater to the needs of enterprise customers, according to Daniel Hou, CEO of the new business. “This is just another extension of our long-term plan that will allow our enterprise solutions to better react to market forces and to better tailor products to various markets,” Hou said in a statement.

Microsoft to acquire Fungible for augmenting Azure networking, storage

Microsoft on Monday said it is acquiring composable infrastructure services provider Fungible for an undisclosed amount in an effort to augment its Azure networking and storage services. Microsoft’s Fungible acquisition is aimed at accelerating networking and storage performance in data centers with high-efficiency, low-power data processing units (DPUs), Girish Bablani, corporate vice president, Azure Core, wrote in a blog post. Data processing units, or DPUs, are an evolved form of SmartNIC used to offload server CPU duties onto a separate device to free up server cycles, akin to hardware accelerators such as graphics processing units (GPUs) and field-programmable gate arrays (FPGAs).

Supermicro launches Arm-powered servers

Supermicro is the latest OEM to offer Arm-based servers with the launch of its Mt. Hamilton platform. The new servers will be sold under the MegaDC brand name and run the Altra line of Arm-based CPUs from Ampere Computing. While the servers can be used on-premises, at the edge, or in the cloud, Supermicro is emphasizing a cloud-performance angle. The Mt. Hamilton platform is designed to target cloud-native applications, such as video-on-demand, IaaS, databases, dense VDI, and telco edge, and it addresses specific cloud-native workload objectives, such as performance per watt and very low latency responses. The Mt. Hamilton platform is modular and supports a variety of storage and PCI-Express configurations. It includes support for up to four double-width GPUs or two dozen 2.5-inch U.2 NVM-Express SSDs. For networking, the motherboards use Nvidia’s ConnectX-4 SmartNICs. The systems are available in 1U and 2U single-socket configurations, supporting up to 4TB of memory.

IT supply issues have organizations shifting from just-in-time to just-in-case buying

The past three years have been an unprecedented period of disruption in the semiconductor industry. The Covid pandemic and ensuing lockdowns shut down manufacturing and interrupted shipping, and then the war in Ukraine adversely impacted supplies of critical raw materials. The first half of 2022 saw 46% more supply chain disruptions than the first half of 2021, according to a research report released this fall by Resilinc, a supply chain resiliency company.

Arista floats its answer to the strain AI puts on networks

If networks are to deliver the full power of AI, they will need a combination of high-performance connectivity and no packet loss. The concern is that today’s traditional network interconnects cannot provide the required scale and bandwidth to keep up with AI requests, said Martin Hull, vice president of Cloud Titans and Platform Product Management with Arista Networks. Historically, the only options for connecting processor cores and memory have been proprietary interconnects such as InfiniBand, PCI Express, and other protocols that connect compute clusters with offloads, but for the most part those won’t work with AI and its workload requirements. To address these concerns, Arista is developing a technology it calls AI Spine, which calls for data-center switches with deep packet buffers and networking software that provides real-time monitoring to help manage the buffers and efficiently control traffic.

AI is coming to the network

AI-enabled management platforms and infrastructure are beginning to make their way into enterprise networks. I say “beginning” because despite lots of AI-washing marketing efforts over the last few years, a lot of what has been characterized as “AI-driven” or “powered by AI” hasn’t really materialized. It's not that these systems don’t do what the marketers say, so much as they don't do it in the way they imply. Even some tools that do truly employ AI in meaningful ways, and with visibly different results than are possible without it, don’t feel qualitatively different from what has come before. They may be better, for example by dramatically reducing the number of false positives in alert traffic, but not different.

Using the Linux locale command

The locale settings in Linux systems help ensure that information like dates and times are displayed in a format that makes sense in the context of where you live and what language you speak. Here's how to use them. NOTE: None of the commands described in this post will change your locale settings. Some merely use a different locale setting to display the response you might be seeing from a different location. To list your settings, run the locale command. If you’re in the US, you should see something like this:
$ locale
LANG=en_US.UTF-8
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=
The en_US.UTF-8 settings in the above output all represent US English. If you’re in France, you’re more likely to see fr_FR.UTF-8 settings instead.
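As an illustration of that note rather than an excerpt from the article, you can apply a different locale to a single command by setting one of these variables just for that invocation; fr_FR.UTF-8 is an assumed example here and only works if that locale is installed on your system:
$ LC_TIME=fr_FR.UTF-8 date     # show the current date and time using French names and conventions
$ LC_TIME=fr_FR.UTF-8 ls -l    # timestamps in the long listing follow the French locale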

Creating and removing directory structures on Linux

Managing directories on Linux is easy, but the process gets more complex when you need to create, empty or remove large, complex directory structures. This post will take you from the most basic commands to some fairly complex ones that can help make the process easier. The mkdir command can create a single directory like this:
$ mkdir newdir
It can also create a complex directory and subdirectory structure with a command like the one shown below. The -p argument tells the command to create the base directory if it doesn't already exist. Each group of directory names that appears in the command – like {1,2,3} and {docs,script} – will result in a series of subdirectories being created at that level.
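A sketch of the kind of command the article goes on to describe (the base directory name newdir is reused from the earlier example; the article's own command isn't included in this excerpt):
$ mkdir -p newdir/{1,2,3}/{docs,script}
The shell expands the braces before mkdir runs, so this one command creates newdir/1/docs, newdir/1/script, newdir/2/docs and so on through newdir/3/script, with -p creating newdir itself if it doesn't already exist.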

Server supply chain undergoes shift due to geopolitical risks

Geopolitical tensions among the US, China, and Taiwan are forcing a notable change to server manufacturing, according to Asian market research firm TrendForce, which predicts that core parts of the server supply chain will eventually shift to southeast Asia and the Americas. According to TrendForce’s research, Taiwan-based original design manufacturers (ODMs) currently account for about 90% of global server motherboard production. A notable exception is Supermicro, which has a 1.5 million square-foot factory in Fremont, California, as well as an 800,000 square-foot facility in Taiwan. Since the start of the trade dispute between the US and China in 2018, server ODMs have been looking at moving their production lines from mainland China to Taiwan. Then, due to the explosion in construction of data centers across the Asia-Pacific region, motherboard makers began looking at southeast Asian countries such as Malaysia and Thailand for capacity expansion.

Nvidia still crushing the data center market

Nvidia is playing some serious games. When Jensen Huang and his two partners established Nvidia in 1993, the graphics chip market had many more competitors than the CPU market, which had just two. Nvidia’s competitors in the gaming market included ATI Technologies, Matrox, S3, Chips and Technologies, and 3DFX. A decade later, Nvidia had laid waste to every one of them except for ATI, which was purchased by AMD in 2006. Over the course of this century, Nvidia has shifted its focus, bringing the same technology it uses to render videogames in 4K resolution to supercomputers, high-performance computing (HPC) in the enterprise, and artificial intelligence.

Intel splits GPU group into two separate units

Intel announced plans to split its AXG graphics group and move the resources into two existing business units to better serve their respective markets. The consumer/gaming end of the GPU business will move to Intel’s Client Compute Group (CCG), which develops consumer computing platforms based on the company’s CPU products. The teams responsible for data center and supercomputing products such as the Ponte Vecchio and Rialto Bridge will move to the Data Center and AI (DCAI) business unit. The GPU SoC and IP design teams will also fall under the DCAI umbrella, but they will continue to support the client graphics team. Jeff McVeigh, currently the vice president and general manager of the Super Compute Group, will serve as the interim leader of this team until a permanent leader is found.

Using the ss command on Linux to view details on sockets

The ss command is used to dump socket statistics on Linux systems. It serves as a replacement for the netstat command and is often used for troubleshooting network problems. To make the best use of the ss command, it’s important to understand what a socket is. A socket is a type of pseudo file (i.e., not an actual file) that represents a network connection. A socket identifies both the remote host and the port that it connects to so that data can be sent between the systems. Sockets are similar to pipes, except that pipes only facilitate connections between processes on the same system, whereas sockets work between processes on the same or different systems. Unlike pipes, sockets also provide bidirectional communication.
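A couple of common invocations, shown here as an illustration rather than an excerpt from the article (both rely on standard ss options):
$ ss -t       # list established TCP connections
$ ss -tuln    # list listening TCP and UDP sockets with numeric addresses and ports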

EU Commission opens antitrust inquiry into Broadcom’s $61B VMware acquisition

A month after the UK’s Competition and Markets Authority (CMA) announced it was investigating Broadcom’s proposed acquisition of VMware, European antitrust regulators have launched their own probe into the $61 billion deal. In the US, the Federal Trade Commission (FTC) is five months into its own investigation of the deal. Meanwhile, the EU Commission said in a statement published on December 20 that it "is particularly concerned that the transaction would allow Broadcom to restrict competition in the market for certain hardware components which interoperate with VMware's software."

HPE expands GreenLake private cloud offerings

HPE has announced new features for its GreenLake for Private Cloud Enterprise, including Kubernetes support and workload-optimized instances. HPE launched GreenLake for Private Cloud Enterprise in June. It's an automated private cloud offering for enterprises looking to deploy both traditional workloads and cloud-native applications inside their data centers. The service includes virtual machines, bare metal workloads, and containers, all running on GreenLake’s on-premises consumption model. Among the new services HPE announced is the option to deploy Kubernetes container services through Amazon Elastic Kubernetes Service (EKS) Anywhere. Customers can now run the same container runtimes on-premises that they use in the public cloud, with a consistent experience across both public and private clouds.

More work for admins: When labor-saving management tools don’t ease workloads

If not deployed properly, today’s whiz-bang network management tools wind up making more work for network admins rather than saving them time and reducing their overload. Wait, labor-saving devices don’t save labor? Not really, at least when it comes to freeing up time for more important or rewarding activities. It’s not unlike the "labor-saving appliance" revolution in the American home, especially in the post-WW2 era. I’m referring, of course, to Ruth Schwartz Cowan’s classic history of technology book, More Work for Mother, which explored in depth how various supposedly labor-saving advances in household technology did not reduce the amount of time those women who kept house spent on housekeeping. On the contrary, because they mainly mechanized or automated work previously done by servants, children, or (occasionally) men, these tech advances shifted women’s efforts from organizing such work to doing it. At the same time, with some kinds of work around food preparation and clothes washing, they also brought back “in-house” work that had been effectively outsourced to commercial laundries, bakeries, etc.

Data center networking trends to watch for 2023

Hybrid and multicloud initiatives will continue to shape enterprise IT in 2023, and the impact on data-center networking will be felt across key areas including security, management, and operations. Network teams are investing in technologies such as SD-WAN and SASE, expanding automation initiatives, and focusing on skills development as more workloads and applications span cloud environments. “The most important core trend in data centers is the recognition that the hybrid cloud model – which combines current transaction processing and database activities with a cloud-hosted front-end element for the user interface – is the model that will dominate over time,” said Tom Nolle, president of CIMI Corp. and a Network World columnist. The industry is seeing a slow modernization of data center applications to support the hybrid-cloud model, Nolle says, “and included in that is greater componentization of those applications, a larger amount of horizontal traffic, and a greater need to manage security within the hosted parts of the application.”

Equinix’s fix for high power bills? Hotter data centers

Data-center giant Equinix has found a low-tech solution to high data-center electric bills: turn up the thermostat. Guidance from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends a temperature range for data-center servers from 59°F (15°C) to as high as 89°F (31.6°C). Equinix is looking at setting the temperature at 80°F (26.6°C), up from the current setting of 73°F (22.7°C).

Meta considers liquid to cool its hard drives

A joint effort by immersion cooling firm Iceotope and Meta, the parent company of Facebook, found that cooling hard drives with a dielectric liquid is safe and more effective than using fans. Hyperscalers like Meta deploy thousands of HDDs in their data centers, and while the heat given off by an individual drive is tiny, it adds up, especially since the drives are in constant use and are close together. The drives are stored in server racks that hold nothing but dozens of hard drives and are referred to as a JBOD (Just a Bunch Of Disks). A JBOD can overheat without cooling, which up to now has been done with fans, but some drives were farther away from the fans than others, causing uneven cooling.
