
Author Archives: Network World

Verifying bash script arguments

Many bash scripts use arguments to control the commands that they will run and the information that they will provide to the people running them. This post examines a number of ways that you can verify arguments when you prepare a script and want to make sure that it will do just what you intend – even when someone running it makes a mistake.

Displaying the script name, etc.

To display the name of a script when it's run, use a command like echo $0. While anyone running a script will undoubtedly know what script they just invoked, using the script name in a usage message can help remind them what command and arguments they should be providing.
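For illustration, here is a minimal sketch of the kind of checks the article describes (the script's expected arguments are hypothetical, not from the article):

#!/bin/bash
# Require exactly two arguments; $0 expands to the script name,
# so the usage message reminds the user what to provide.
if [ $# -ne 2 ]; then
    echo "Usage: $0 username count" >&2
    exit 1
fi

# Verify that the second argument is numeric before using it.
if ! [[ $2 =~ ^[0-9]+$ ]]; then
    echo "$0: count must be a number" >&2
    exit 1
fi

echo "Running report for $1 with count $2"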

3 ways network teams can influence SASE decisions

Secure access service edge (SASE) has gotten a lot of attention over the past two years from enterprises interested in improving their security posture, specifically as part of an effort to adopt Zero Trust frameworks.

That puts a lot of energy behind cybersecurity initiatives, but what about the network?

The fact is, the network is central to Zero Trust and to SASE. As coined by analysts, the concept of SASE rests on several functional pillars, including SD-WAN, secure web gateway, cloud access security broker, next-generation firewall, and Zero Trust Network Access. SD-WAN is the most foundational, though – so fundamental that, whereas a SASE solution might legitimately omit other pillars and still be classed as SASE, omitting the SD-WAN turns it into something else: a secure service edge solution.

Google claims AI supercomputer speed superiority with new Tensor chips

A new white paper from Google details the company's use of optical circuit switches in its machine-learning training supercomputer, saying that the TPU v4 model with those switches in place offers better performance and greater energy efficiency than general-purpose processors.

Google's Tensor Processing Units — the basic building blocks of the company's AI supercomputing systems — are essentially ASICs, meaning that their functionality is built in at the hardware level, as opposed to the general-purpose CPUs and GPUs used in many AI training systems. The white paper details how, by interconnecting more than 4,000 TPUs through optical circuit switching, Google has been able to achieve speeds 10 times faster than previous models while consuming less than half as much energy.

UK regulator slams AWS, Microsoft for cloud interoperability hurdles

UK communications regulator Ofcom has announced a provisional plan to refer Amazon Web Services (AWS) and Microsoft to the country's Competition and Markets Authority (CMA) over “significant concerns” that they are allegedly harming competition in cloud services and abusing their market positions with practices that make interoperability difficult.

A market study carried out by Ofcom has provisionally identified features and practices that make it difficult for customers to switch suppliers or use multiple cloud suppliers, the regulator wrote on its website, adding that it was “particularly concerned” about the practices of Amazon and Microsoft because of their market position.

Cisco lays groundwork for 800G networks as AI, 5G and video traffic demands grow

Cisco has amped up its support for 800G-capacity networks with an eye toward helping large enterprises, cloud providers, and service providers handle the expected demand from AI, video, and 5G services.

At the core of its recent developments is a new 28.8Tbps / 36 x 800G line card and improved control software for its top-of-the-line Cisco 8000 Series routers.

The 28.8T line card is built on Cisco's Silicon One P100 ASIC and brings 800G capability to the modular Cisco 8000 Series router, which can scale to 230Tbps in a 16RU form factor with the eight-slot Cisco 8808, and up to 518Tbps in the 18-slot chassis, according to Cisco.

HPE unveils a new storage initiative

Hewlett Packard Enterprise (HPE) this week introduced what it calls “the future of storage”: an array of new hardware and software supported and sold through its GreenLake service, which leases hardware on a consumption basis.

HPE's new Alletra Storage MP platforms attach to an NVMe data fabric, delivering file or block storage using a controller that can be configured for either performance or capacity, HPE said. The offering breaks down into two service categories: HPE GreenLake for Block Storage, which HPE promises will provide scale-out block storage with a 100% data-availability guarantee, and HPE GreenLake for File Storage, which HPE claims will offer hundreds of gigabytes per second of throughput.

Colocation vs. cloud: SEO firm finds cloud to be cost prohibitive for its high density computing

A software firm in Singapore claims it would cost more than $400 million over three years if it were to migrate from its existing colocation setup and move its workloads to the Amazon Web Services (AWS) cloud. Notably, the firm runs a very compute-intensive environment, and high-density computing can be very expensive to duplicate in cloud environments.

Ahrefs, which develops search engine optimization tools, made the $400 million claim in a March 9 blog post by one of the company's data center operations executives, Efim Mirochnik. Mirochnik compared the cost of acquiring and running the company's 850 Dell servers in a colocation provider's data center with the cost of running a similar configuration in AWS.
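For a rough sense of scale (a back-of-the-envelope calculation from the figures above, not from Ahrefs' post), the claimed cloud cost spread across the fleet works out to:

\frac{\$400{,}000{,}000}{850 \text{ servers} \times 3 \text{ years}} \approx \$157{,}000 \text{ per server per year}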

Fortinet consolidates SD-WAN and SASE management

Tighter integration between Fortinet's SASE and SD-WAN offerings is among the new features enabled by the latest version of the company's core operating system.

FortiOS version 7.4 also includes better automation across Fortinet's Security Fabric environment and improved management features.

FortiOS is the operating system for the FortiGate family of hardware and virtual components. It implements the Fortinet Security Fabric and includes firewalling, access control, Zero Trust, and authentication features, in addition to managing SD-WAN, switching, and wireless services.

Oracle plans second cloud region in Singapore to meet growing demand

Oracle on Tuesday said it is planning to add a second cloud region in Singapore to meet the growing demand for cloud services across Southeast Asia.

“Our upcoming second cloud region in Singapore will help meet the tremendous upsurge in demand for cloud services in South East Asia,” Garrett Ilg, president of Japan & Asia Pacific at Oracle, said in a statement.

The public cloud services market across Asia Pacific, excluding Japan, is expected to grow from $53.4 billion in 2021 to $153.6 billion in 2026, a compound annual growth rate of 23.5%, according to a report from IDC.
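As a quick sanity check on IDC's figures (my arithmetic, not the article's), compounding the 2021 base at that rate for five years roughly reproduces the 2026 forecast:

\$53.4\text{B} \times (1 + 0.235)^5 \approx \$53.4\text{B} \times 2.87 \approx \$153\text{B}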

AWS to invest $8.9 billion across its regions in Australia by 2027

Within months of adding a second region in Melbourne, Amazon Web Services (AWS) on Tuesday said it would invest $8.93 billion (AU$13.2 billion) to expand infrastructure across its cloud regions in Australia through 2027.

The majority of the investment, about $7.45 billion, will go to the company's cloud region in Sydney over that period. The remaining $1.49 billion will be used to expand data center infrastructure in Melbourne, the company said.

The $8.93 billion total includes a $495 million investment in network infrastructure to extend AWS cloud and edge infrastructure across Australia, including partnerships with telecom providers to facilitate high-speed fiber connectivity between availability zones, AWS said.

IBM targets edge, AI use cases with new z16 mainframes

IBM has significantly reduced the size of some of its Big Iron z16 mainframes and given them a new operating system that emphasizes AI and edge computing.

The new configurations – which include Telum processor-based, 68-core IBM z16 Single Frame and Rack Mount models as well as new IBM LinuxONE Rockhopper 4 and LinuxONE Rockhopper Rack Mount boxes – are expected to offer customers more data-center configuration options while reducing energy consumption. Both new Rack Mount boxes are 18U, compared with the current smallest Single Frame models, which are 42U.

10 things to know about data-center outages

Data-center outage severity appears to be falling, while the cost of outages continues to climb. Power failures are “the biggest cause of significant site outages.” Network failures and IT system glitches also bring down data centers, and human error often contributes.

Those are some of the problems pinpointed in the most recent Uptime Institute data-center outage report, which analyzes types of outages, their frequency, and what they cost, both in money and in consequences.

Unreliable data is an ongoing problem

Uptime cautions that data relating to outages should be treated skeptically, given the lack of transparency of some outage victims and the quality of reporting mechanisms. “Outage information is opaque and unreliable,” said Andy Lawrence, executive director of research at Uptime, during a briefing about Uptime's Annual Outages Analysis 2023.

Recording your commands on the Linux command line

Recording the commands that you run on the Linux command line can be useful for two important reasons. For one, the recorded commands provide a way to review your command-line activity, which is extremely helpful if something didn't work as expected and you need to take a closer look. In addition, capturing commands can make it easy to repeat them or turn them into scripts or aliases for long-term reuse. This post examines two ways that you can easily record and reuse commands.

Using history to record Linux commands

The history command makes it extremely easy to record commands that you enter on the command line because it happens automatically. The only thing you might want to check is the setting that determines how many commands are retained and, therefore, how long they will stay around for viewing and reuse. The command below will display your command history buffer size. If it's 1,000, like the one shown, it will retain the last 1,000 commands that you entered.
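The “command below” is cut off in this excerpt; in bash, the buffer-size check being described is presumably the HISTSIZE variable (a minimal sketch, assuming bash):

# Display the current history buffer size (often 1000 by default)
echo $HISTSIZE
# To retain more commands, set a larger value in ~/.bashrc
HISTSIZE=5000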

Google picks Qatar for second Middle Eastern cloud region

Google is adding a second cloud availability region in the Middle East, in Doha, to cater to demand from Qatar's government and enterprises in the region, it said on Friday.

The new cloud region will help the Qatari government achieve its Qatar National Vision 2030 plan to sustain development and provide a high standard of living for its people, according to Google Cloud's country manager for Qatar, Ghassan Kosta.

“This new region is a strong step towards building regional capacity that meets the needs of the Qatari digital economy, from availability and data residency to digital sovereignty and sustainability,” Kosta wrote in a blog post.

Data center fires raise concerns about lithium-ion batteries

Fire is to blame for a small but significant number of data-center outages, including a March 28 fire that caused severe damage to a data center in France, and an analysis of global incidents highlights ongoing concerns about the safety of lithium-ion batteries and their risk of combustion.

The use of lithium-ion (Li-ion) batteries in data centers is growing. Now commonly used in uninterruptible power supplies, they are expected to account for 38.5% of the data-center battery market by 2025, up from 15% in 2020, according to consulting firm Frost & Sullivan.

Kyndryl lays off staff in search of efficiency

Kyndryl, the managed IT services provider that spun out of IBM, has announced layoffs that could affect its own internal IT services.

“We are eliminating some roles globally — a small percentage — to become more efficient and competitive,” said a Kyndryl spokesperson, who did not give the exact number of employees affected by the layoffs.

“These actions will enable us to focus our investments in areas that directly benefit our customers and position Kyndryl for profitable growth,” the spokesperson said, adding that the company was undergoing a transformation to streamline and simplify its processes and systems.

Bloomberg first reported the layoffs.

Intel announces 144 core Xeon processor

Intel has announced a new processor with 144 cores, designed to handle simple data-center tasks in a power-efficient manner.

Called Sierra Forest, the Xeon processor is part of the Intel E-Core (Efficiency Core) lineup, which forgoes advanced features, such as AVX-512, that require more powerful cores. AVX-512 is Intel Advanced Vector Extensions 512, “a set of new instructions that can accelerate performance for workloads and usages such as scientific simulations, financial analytics, artificial intelligence (AI)/deep learning, 3D modeling and analysis, image and audio/video processing, cryptography and data compression,” according to Intel.

Sierra Forest signals a shift for Intel that splits its data-center product line into two branches: the E-Core, and the P-Core (Performance Core), which continues the traditional Xeon data-center design built on high-performance cores.
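As a practical aside (a sketch of a common check, not from the article), on Linux you can see whether a given CPU exposes AVX-512 by searching the feature flags in /proc/cpuinfo:

# List any AVX-512 feature flags the CPU advertises; empty output means no AVX-512
grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u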

Supermicro has a new liquid-cooled server for AI

With data-center servers running hotter and hotter, interest in liquid cooling is ramping up, with vendors announcing servers that feature self-contained cooling systems and businesses with expertise in related technologies jumping in.

Liquid cooling is more efficient than traditional air cooling, and Supermicro is using it to cool the hottest processors in a new server designed as a platform for developing and running AI software.

The SYS-751GE-TNRT-NV1 server runs hot: it features four NVIDIA A100 GPUs that draw 300W each and are liquid-cooled by a self-contained system. Some liquid-cooling systems rely on water piped into the data center; the self-contained system doesn't require that, which makes the servers more widely deployable. The system is quiet, too, with a running noise level of 30dB.

10-year server lifespan? That’s what one cloud service provider plans

A trend toward extending the lifespan of servers beyond the typical three-to-five-year range has companies such as Microsoft looking to add a few years of use to hardware that would otherwise be retired.

The latest company to adopt this strategy is Paris-based Scaleway, a European cloud services provider that is sharing details about how it plans to get a decade of use out of its servers through a mix of reuse and repair.

Scaleway decided the carbon footprint of new servers is just too large – server manufacturing alone accounts for 15% to 30% of each machine's carbon impact – and that reusing existing machines, rather than buying new ones, could significantly reduce e-waste.

Predictive network technology promises to find and fix problems faster

With the assistance of artificial intelligence (AI) and machine learning (ML), predictive network technology alerts administrators to possible network issues as early as possible and offers potential solutions.

The AI and ML algorithms used in predictive network technology have become critical, says Bob Hersch, a principal with Deloitte Consulting and its US lead for platforms and infrastructure. “Predictive network technology leverages artificial neural networks and utilizes models to analyze data, learn patterns, and make predictions,” he says. “AI and ML significantly enhance observability, application visibility, and the ability to respond to network and other issues.”
