The netstat command provides a tremendous amount of information on network activity. With the -s option (netstat -s), it displays summaries for various protocols, such as packets received, active connections, failed connections and a lot more. While the data is extensive enough to make you dizzy, the more you get used to the command's output, the more familiar you'll become with what to expect, and the better you'll get at spotting anything unusual. In this post, we're going to look at various portions of the netstat -s command's output, using crafted aliases to make the job easier.
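Each protocol section in netstat -s output begins with an unindented header such as "Tcp:", followed by indented detail lines, so an alias can use awk to print just one section. The sketch below is a minimal illustration; the alias names are invented here, not taken from the article:

# Print a single protocol section from netstat -s (alias names are illustrative)
$ alias net-ip='netstat -s | awk "/^Ip:/{p=1;print;next} /^[A-Za-z]/{p=0} p"'
$ alias net-tcp='netstat -s | awk "/^Tcp:/{p=1;print;next} /^[A-Za-z]/{p=0} p"'
$ alias net-udp='netstat -s | awk "/^Udp:/{p=1;print;next} /^[A-Za-z]/{p=0} p"'
$ net-tcp    # display only the TCP statistics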
Intel had a busy week. A trio of news announcements revealed its chiplets progress, a manufacturing agreement with Arm, and the shedding of another non-core line of business.

Prototype multi-die chips heading to DoD
The biggest news is that Intel has begun to ship prototype multi-die chips to the U.S. Department of Defense more than a year ahead of schedule. The DoD project, known as State-of-the-Art Heterogeneous Integrated Packaging (SHIP), is an ambitious plan to connect Intel's CPUs, FPGAs, ASICs and government-developed chiplets all within the same processor packaging, as opposed to multiple separate dies.

AMD was the first to pursue the chiplet design, but it took a different approach, breaking up large, monolithic CPUs into smaller chips. So, instead of one physical piece of silicon with 32 cores, it created four chiplets with eight cores each, connected by high-speed interconnects. The idea is that it's much easier to manufacture an eight-core chip than a 32-core chip.
AI and machine learning systems are working with data sets in the billions of entries, which means speeds and feeds are more important than ever. Two new announcements reinforce that point, with a goal to speed data movement for AI.

For starters, Nvidia just published new performance numbers for its H100 Hopper compute GPU in MLPerf 3.0, a prominent benchmark for deep learning workloads. Naturally, Hopper surpassed its predecessor, the A100 Ampere product, in time-to-train measurements, and it's also seeing improved performance thanks to software optimizations.

MLPerf runs thousands of models and workloads designed to simulate real-world use. These workloads include image classification (ResNet 50 v1.5), natural language processing (BERT Large), speech recognition (RNN-T), medical imaging (3D U-Net), object detection (RetinaNet), and recommendation (DLRM).
The European Commission has informed Broadcom of its objections to the company's proposed $61 billion acquisition of VMware, the latest hurdle the company needs to clear after regulatory agencies in the UK and US also raised concerns.

“Broadcom is the leading supplier of Fibre Channel host bus adapters (FC HBAs) and storage adapters. The markets are very concentrated. If the competitors of Broadcom are hampered in their ability to compete in these markets, this could in turn lead to higher prices, lower quality and less innovation for business customers, and ultimately consumers,” the Commission said in a statement.
Edge server maker Stratus Technologies today announced that the 12th generation of its ftServer line is now on sale, bringing new hardware upgrades, improved resiliency for mission-critical workloads and, in time, support for a broader range of operating systems.

The latest ftServers come in four main configurations. The 6920 platform, designed for rigorous data- and transaction-intensive work in large data centers or similar facilities, is the largest, while the 6910 is designed to fit into smaller facilities. The 4920 and 2920, respectively, scale back size and capability to fit medium-size facilities and remote offices, or to run individual applications on shop floors or in industrial plants.
Failed hard disk drives ran for an average of 25,233 hours before their demise, which translates to a lifespan of two years and 10 months.

That's according to Secure Data Recovery, which has a specific perspective on the matter. It specializes in salvaging data from failed hard drives, so pretty much every hard drive that it sees isn't working properly, which gives it the opportunity to spot some patterns in hard drive longevity. (Secure Data Recovery's analysis is different from the quarterly hard-drive report from cloud storage vendor Backblaze, which focuses on the few hard drives that fail out of the hundreds of thousands that it uses.)
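As a quick sanity check of that conversion, assuming 24-hour days and 365.25-day years:

# 25,233 hours works out to roughly 2.9 years, i.e., about two years and 10 months
$ echo "scale=2; 25233 / (24 * 365.25)" | bc
2.87
$ echo "0.87 * 12" | bc
10.44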
The ncdu command provides a fast and very easy-to-use way to see how you are using disk space on your Linux system. It allows you to navigate through your directories and files and review what file content is using up the most disk space. If you’ve never used this command, you’ll likely have to install it before you can take advantage of the insights it can provide, with a command like one of these:

$ sudo dnf install ncdu
$ sudo apt install ncdu
The name “ncdu” stands for “NCurses disk usage.” It uses an ncurses interface to provide the disk usage information. “Curses”, as you probably know, has no connection to foul language. Instead, when related to Linux, “curses” is a term related to “cursor” – that little marker on your screen that indicates where you are currently working. Ncurses is a terminal control library that lends itself to constructing text user interfaces.
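Once installed, running it is as simple as pointing it at a directory. A couple of typical invocations (the paths here are just examples):

# Scan and interactively browse a directory tree
$ ncdu /home
# Scan the root filesystem without crossing into other mounted filesystems
$ ncdu -x /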
Use of video streaming encoder cards in the data center is on the rise, and AMD is the latest to tackle the demands of high-volume streaming.

Even before the pandemic forced everyone to work from home, videoconferencing usage was climbing. Once schools and businesses became dependent on Zoom calls, video streams started clogging data centers and network pipes across the country. Reliance on video among consumers also took off as TikTok, Twitch, and Facebook became broadcast platforms.

With users demanding broadcast-quality video – no one wants blurry, blocky, poor resolution – enterprises are left to deal with increased strain on server CPUs.
Japan will increase the financial support it's giving to semiconductor maker Rapidus — established with the aim of making cutting-edge, 2-nanometer chips — in order to further support domestic production, according to Japanese trade and industry minister Yasutoshi Nishimura.

“The government is ready to continue and beef up financial support to the company,” Nishimura said in an interview with Bloomberg. He added that the plan will require the government to invest trillions of yen in the project.

The Tokyo-based manufacturer was established in 2022 with the aim of making 2nm chips in Japan by 2025. To date, it has received ¥70 billion (US$532 million) from the Japanese government, in addition to investments from Toyota, Sony, and telecom giant NTT.
Many bash scripts use arguments to control the commands that they will run and the information that will be provided to the people running them. This post examines a number of ways that you can verify arguments when you prepare a script and want to make sure that it will do just what you intend it to do – even when someone running it makes a mistake.

Displaying the script name, etc.
To display the name of a script when it’s run, use a command like echo $0. While anyone running a script will undoubtedly know what script they just invoked, using the script name in a usage command can help remind them what command and arguments they should be providing.
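For instance, a script can verify its argument count up front and build a usage message from $0 before doing any real work. This is a minimal sketch; the expected arguments (a source and a target directory) are invented for illustration:

#!/bin/bash
# Insist on exactly two arguments before doing any work
if [ $# -ne 2 ]; then
    echo "Usage: $0 source-dir target-dir" >&2
    exit 1
fi
# Verify that the first argument names an existing directory
if [ ! -d "$1" ]; then
    echo "$0: $1 is not a directory" >&2
    exit 1
fi
echo "Copying files from $1 to $2 ..."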
A new white paper from Google details the company’s use of optical circuit switches in its machine learning training supercomputer, saying that the TPU v4 model with those switches in place offers improved performance and more energy efficiency than general-use processors.

Google’s Tensor Processing Units — the basic building blocks of the company’s AI supercomputing systems — are essentially ASICs, meaning that their functionality is built in at the hardware level, as opposed to the general-use CPUs and GPUs used in many AI training systems. The white paper details how, by interconnecting more than 4,000 TPUs through optical circuit switching, Google has been able to achieve speeds 10 times faster than previous models while consuming less than half as much energy.
Cisco has amped up its support for 800G-capacity networks with an eye toward helping large enterprises, cloud and service providers handle the expected demand from AI, video, and 5G services.

At the core of its recent developments is a new 28.8Tbps / 36 x 800G line card and improved control software for its top-of-the-line Cisco 8000 Series routers.

The 28.8T line card is built on Cisco’s Silicon One P100 ASIC and brings 800G capability to the modular Cisco 8000 Series Router, which can scale to 230Tbps in a 16 RU form factor with the eight-slot Cisco 8808, and up to 518Tbps in the 18-slot chassis, according to Cisco.
A software firm in Singapore claims it would cost more than $400 million over three years if it were to migrate from its existing colocation setup and move its workloads to the Amazon Web Services (AWS) cloud. Notably, the firm runs a very compute-intensive environment, and high-density computing can be very expensive to duplicate in cloud environments.

Ahrefs, which develops search engine optimization tools, made the $400 million claim in a March 9 blog post by one of the company’s data center operations executives, Efim Mirochnik. Mirochnik compared the cost of acquiring and running its 850 Dell servers in a colocation provider’s data center with the cost of running a similar configuration in AWS.
Oracle on Tuesday said it is planning to add a second cloud region in Singapore to meet the growing demand for cloud services across Southeast Asia.

“Our upcoming second cloud region in Singapore will help meet the tremendous upsurge in demand for cloud services in South East Asia,” Garrett Ilg, president, Japan & Asia Pacific at Oracle, said in a statement.

The public cloud services market across Asia Pacific, excluding Japan, is expected to grow from $53.4 billion in 2021 to $153.6 billion in 2026, a compound annual growth rate of 23.5%, according to a report from IDC.
Within months of adding a second region in Melbourne, Amazon Web Services (AWS) on Tuesday said it would invest $8.93 billion (AU$13.2 billion) to expand infrastructure across its cloud regions in Australia through 2027.

The majority of the investment, about $7.45 billion, will go to the company’s cloud region in Sydney over that period. The remaining $1.49 billion will be used to expand data center infrastructure in Melbourne, the company said.

The $8.93 billion includes a $495 million investment in network infrastructure to extend AWS cloud and edge infrastructure across Australia, including partnerships with telecom providers to facilitate high-speed fiber connectivity between Availability Zones, AWS said.
IBM has significantly reduced the size of some of its Big Iron z16 mainframes and given them a new operating system that emphasizes AI and edge computing.

The new configurations—which include Telum processor-based, 68-core IBM z16 Single Frame and Rack Mounted models, plus new IBM LinuxONE Rockhopper 4 and LinuxONE Rockhopper Rack Mount boxes—are expected to offer customers better data-center configuration options while reducing energy consumption. Both new Rack Mount boxes are 18U, compared to the current smallest Single Frame models, which are 42U.
The severity of data-center outages appears to be falling, while the cost of outages continues to climb. Power failures are “the biggest cause of significant site outages.” Network failures and IT system glitches also bring down data centers, and human error often contributes.

Those are some of the problems pinpointed in the most recent Uptime Institute data-center outage report, which analyzes types of outages, their frequency, and what they cost both in money and consequences.

Unreliable data is an ongoing problem
Uptime cautions that data relating to outages should be treated skeptically given the lack of transparency of some outage victims and the quality of reporting mechanisms. “Outage information is opaque and unreliable,” said Andy Lawrence, executive director of research at Uptime, during a briefing about Uptime’s Annual Outages Analysis 2023.
Recording the commands that you run on the Linux command line can be useful for two important reasons. For one, the recorded commands provide a way to review your command line activity, which is extremely helpful if something didn't work as expected and you need to take a closer look. In addition, capturing commands can make it easy to repeat the commands or to turn them into scripts or aliases for long-term reuse. This post examines two ways that you can easily record and reuse commands.

Using history to record Linux commands
The history command makes it extremely easy to record commands that you enter on the command line because it happens automatically. The only thing you might want to check is the setting that determines how many commands are retained and, therefore, how long they're going to stay around for viewing and reusing. The command below will display your command history buffer size. If it's 1,000 like that shown, it will retain the last 1,000 commands that you entered.
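The excerpt cuts off before the command itself; in bash, the history buffer size lives in the HISTSIZE variable, so checking it presumably looks like this:

# Display how many commands the history buffer retains
$ echo $HISTSIZE
1000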