By now you've undoubtedly heard the complaints about computing-parts shortages, particularly from gamers who can't get modern GPU cards and from car makers, since new cars these days are rolling data centers. The problem is also affecting business IT, but in a different way, and there are steps you can take to address it. The first step, though, is patience. This shortage isn't due to staffing problems or fabs being out of commission; it's that demand is so high that it's producing very long lead times.
That delay can mean 36 weeks, according to Mario Morales, program vice president for the semiconductor and enabling technologies team at IDC, with components "seeing untempered demand."
This week the TOP500 list of the world's fastest supercomputers found that, once again, Fugaku is number one, benchmarking at 442 Pflop/sec, three times faster than the second-place machine. Impressive, but also an indication that it might be the first to break the exaflop barrier if it's working on the right kind of problem. TOP500 pointed out that Fugaku's score (and everyone else's) is based on double-precision benchmarks, the most accurate floating-point math calculation you can do. But much of AI and machine learning runs in single precision, which requires less than half the compute power of double precision.
Linux systems can report on a lot more configuration details than you likely ever knew were available. The trick is using the getconf command and having an idea of what you are looking for. Watch out, though: you might find yourself a bit overwhelmed by how many settings are available. To start, let's look at a few settings that answer some important questions.
For starters, how long can a filename be? You can check this by looking at the NAME_MAX setting.
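As a minimal illustration, run against the root filesystem (the values shown are typical for ext4 and will vary by filesystem):

    $ getconf NAME_MAX /        # maximum filename length, in bytes
    255
    $ getconf PATH_MAX /        # maximum path length, in bytes
    4096
    $ getconf -a | wc -l        # count every setting getconf can report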
When the pandemic hit last spring, employees suddenly began working from home, enterprises quickly shifted applications to the cloud, and secure remote access became critical. As we move (hopefully) beyond the pandemic, it's clear that enterprise networking has been changed forever. Companies are looking at new technologies like SASE to combine networking and edge security into one manageable platform. Zero-trust network access has moved from the back burner to the hot seat as companies seek a more effective way to fight cyberattacks in a world where the traditional perimeter no longer exists. The lines between security and networking are blurring, with traditional security companies moving into the networking realm and networking companies upping their security game.
Any file on a Linux system that isn't a text file is considered a binary file, from system commands and libraries to image files and compiled programs. But the fact that these files are binary doesn't mean you can't look inside them. In fact, there are quite a few commands you can use to extract data from binary files or display their content. In this post, we'll explore several of them.

file
One of the easiest ways to pull information from a binary file is with the file command, which identifies files by type. It does this in several ways: by evaluating the content, looking for a "magic number" (a file-type identifier), and checking the language. While we humans generally judge a file by its extension, the file command largely ignores that. Notice how it responds to the command shown below.
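The original example was truncated in this excerpt; a stand-in run against a familiar binary shows the idea (the exact output varies by distribution):

    $ file /bin/ls              # illustrative target; any compiled binary works
    /bin/ls: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked ...

Renaming the binary to something like ls.txt would not change this output, since file reads the magic number at the start of the file rather than trusting the name.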
Fugaku, the supercomputer built by Fujitsu, remains number one on the TOP500 list of the fastest supercomputers in the world, where it is still three times faster than the nearest competition. The contest remains tight, with only one new entry in the top 10 on the latest list: Perlmutter, at the National Energy Research Scientific Computing (NERSC) Center at Lawrence Berkeley National Laboratory, part of the US Department of Energy. It joins the list at number five and bumps down numbers six through 10 from the previous list, published in November 2020. (A system called Dammam-7 dropped off the top 10.)
In this Linux tip, learn how to use the bzcat and zcat commands. They allow you to look at the contents of files compressed with the bzip2 and gzip commands without having to uncompress the files first. Instead, these commands uncompress the files and send the output to standard out while the compressed files are left intact.
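A brief sketch of the idea, using hypothetical file names:

    $ zcat notes.txt.gz                  # display gzip-compressed content
    $ bzcat log.bz2 | less               # page through bzip2-compressed content
    $ zcat notes.txt.gz | grep error     # search the content without expanding it on disk

Afterward, the .gz and .bz2 files are still compressed on disk; nothing is written out except to standard out.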
A new CEO invariably means a reorganization around his or her vision of things and an attempt to address perceived problems in the company's organizational structure. In hindsight, that's another clue that Bob Swan wasn't long for the CEO's job at Intel, since he never did a reorg. Pat Gelsinger, on the other hand, who has been Intel's CEO for just over four months, completely flipped the table with a major reorganization that creates two new business units, promotes several senior technologists to leadership roles, and includes the departure of a major Intel veteran.
The two new units: one for software, the other for high-performance computing and graphics. Greg Lavender will serve as Intel's chief technology officer and lead the new Software and Advanced Technology Group. As CTO, he will head up research programs, including Intel Labs. Lavender comes to Intel from VMware, where he was also CTO, and has held positions at Citigroup, Cisco, and Sun Microsystems.
Data-center operators, perhaps stung by accusations that they're power hogs, are making a major push to be green. Both Microsoft and Google have promised their data centers will be carbon neutral by 2030, while AWS is targeting 2040.
Hewlett Packard Enterprise announced several expansions of its managed GreenLake services during its HPE Discover conference this week. GreenLake is HPE's consumption model for hardware and services. Rather than making an outright purchase, customers determine the configuration they will need and HPE installs it, with slight overprovisioning just in case. If the customer ends up needing more hardware capacity, it's simply turned on. Until then, it sits there, unused and at no charge.
As data-center workloads spiral upward, a growing number of enterprises are looking to artificial intelligence (AI), hoping the technology will enable them to reduce the management burden on IT teams while boosting efficiency and slashing expenses. AI promises to automate the movement of workloads to the most efficient infrastructure in real time, both inside the data center and in a hybrid-cloud setting comprising on-prem, cloud, and edge environments. As AI transforms workload management, future data centers may look far different from today's facilities. One possible scenario is a collection of small, interconnected edge data centers, all managed by a remote administrator.
Sar is a system utility that gives us many ways to examine performance on a Linux system. It provides details on all aspects of system performance, including system load, CPU usage, memory use, paging, swapping, disk usage, device load, and network activity. The name "sar" stands for "system activity report," and the tool can display current performance, provide reports based on log files stored in your system's /var/log/sa (or /var/log/sysstat) folder, or be set up to produce daily reports automatically. It's part of sysstat, a collection of system-performance monitoring tools. To check whether sar is available on your system, run a command like this:
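The command itself was cut off in this excerpt; one reasonable check, with a path that may differ by distribution:

    $ which sar                 # illustrative check; location varies
    /usr/bin/sar

If nothing comes back, sar isn't installed; the sysstat package that provides it is available from standard repositories (e.g., sudo apt install sysstat on Debian/Ubuntu, or sudo dnf install sysstat on Fedora/RHEL).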
NetApp is making a major effort to support hybrid cloud with a batch of software announcements around storage products, converged infrastructure, and cloud-management services. The main news was the release of the latest version of its flagship ONTAP software, along with updates to other products designed to help organizations build better hybrid-cloud strategies. ONTAP is the operating system for NetApp's FAS (hybrid flash-disk) and AFF (all-flash) storage arrays. The latest version, 9.9, adds automatic backup and tiering of on-premises data to NetApp's StorageGRID object storage as well as to public clouds. It enhances multilevel file security and remote-access management, supports continuous data availability for MetroCluster configurations twice as large as before, and adds more replication options for backup and disaster recovery of large data containers for NAS workloads. It can attain up to four times the performance for single-LUN applications such as VMware datastores.
The more you know about how Linux works, the better you'll be able to troubleshoot when you run into a problem. In this post, we'll dive into a problem that a contact of mine, Chris Husted, recently ran into, and look at what he did to determine what was happening on his system, stop the problem in its tracks, and make sure it would never happen again.

Disaster strikes
It all started when Chris' laptop reported that it was running out of disk space: only 1GB remained available on his 1TB drive. He hadn't seen this coming. He also found himself unable to save files, a very challenging situation, since this laptop is the only system at his disposal and he needs it to get his work done.
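The full walk-through is truncated here, but a first pass at a suddenly full disk usually looks something like this (paths are illustrative):

    $ df -h /                                            # confirm how full the filesystem is
    $ sudo du -xh / 2>/dev/null | sort -rh | head -10    # rank the largest directories
    $ sudo find / -xdev -size +1G 2>/dev/null            # list individual files over 1GB

The -x and -xdev options keep the scan on a single filesystem, so mounted shares and pseudo-filesystems don't skew the results.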
Virtualization software is dated and does not take full advantage of modern hardware, making it extremely power-inefficient and forcing data centers to overprovision hardware to avoid poor performance. That's the pitch of Sunlight, a virtualization-software vendor whose products take advantage of technologies that didn't exist when Xen, KVM, VMware, and Hyper-V were first developed.
“The cloud infrastructure or virtualization stacks have been designed and built 15 to 20 years ago,” said Kosten Metreweli, chief strategy officer of Sunlight. “So the big problem here is that back then, I/O, and particularly storage, was really slow. So fast forward, and we now have NVMe storage, which pushes millions of IOPS from a single device, which is orders of magnitude better than was possible just a few years ago.”
Cisco has brought new features to its DNA Center network-control platform that promise to improve performance, management analytics, and security for its enterprise network customers. The new software features integration of a ThousandEyes agent that bulks up the platform's network-intelligence monitoring, a two-fold increase in the number of clients the system can support, and improved security and operational capabilities.
DNA Center is the heart of Cisco's intent-based networking strategy and the vendor's core networking-control platform, supporting myriad services, from analytics, network management, and automation to assurance, fabric provisioning, and policy-based segmentation for wired and wireless enterprise networks.
The new NVM Express 2.0 specification has been released, and with it a surprise: the non-volatile memory express protocol, best known for handling SSD speeds, now offers full-blown support for traditional hard-disk drives. This is quite unexpected, because SSDs are orders of magnitude faster than traditional HDDs.
The first flash-based SSDs used SATA/SAS physical interfaces borrowed from existing hard-drive-based enterprise server and storage systems. However, none of these interfaces and protocols were designed for high-speed storage media, and the SATA/SAS bus became a bottleneck for the much faster SSDs.
Sometimes it's hard to see gradual changes in technology paradigms precisely because they're gradual. Sometimes it helps to play "Just suppose…" and see where it leads. So, just suppose the cloud did what some radical thinkers say it will and "absorbed the network." That's certainly an exciting tagline, but is it even possible, and how might it come about? Companies are already committed to a virtual form of networking for their WAN services, based on VPNs or SD-WAN, rather than building their own WANs from pipes and routers. That was a big step, so what could happen to make WANs even more virtual, to the point where the cloud could subsume them? It would have to be a data-center change.