As demand for generative AI grows, cloud service providers such as Microsoft, Google, and AWS, along with large language model (LLM) providers such as OpenAI, have all reportedly considered developing their own custom chips for AI workloads. Speculation that some of these companies — notably OpenAI and Microsoft — have been making efforts to develop their own custom chips for handling generative AI workloads due to chip shortages has dominated headlines for the last few weeks.
Immersion cooling specialist LiquidStack has introduced a pair of modular data center units that use immersion cooling for edge deployments and advanced cloud computing applications. The units are called the MicroModular and the MegaModular. The former contains a single 48U DataTank immersion cooling system (the size of a standard server rack), and the latter comes with up to six 48U DataTanks. The products can offer between 250kW and 1.5MW of IT capacity with a PUE of 1.02. (Power usage effectiveness, or PUE, is a metric for measuring data center efficiency: the ratio of the total energy used by a data center facility to the energy delivered to computing equipment.)
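The PUE ratio described above is simple to work out; here is a minimal sketch using hypothetical figures (a facility drawing 1,020 kW in total while delivering 1,000 kW to its IT equipment):

```shell
# Hypothetical figures for illustration: total facility power divided by
# power delivered to IT equipment gives the PUE
total_facility_kw=1020
it_equipment_kw=1000
awk -v t="$total_facility_kw" -v i="$it_equipment_kw" \
    'BEGIN { printf "PUE = %.2f\n", t / i }'
```

A PUE of 1.02 means only 2% of the facility's power goes to anything other than the computing equipment itself.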
Fortinet has expanded its campus network portfolio with two new switches that feature integration with Fortinet’s security services and AIOps management tool. The FortiSwitch 600 is a multi-gigabit secure campus access switch that supports up to 5GE access and 25GE uplinks. The FortiSwitch 2000 is a campus core switch designed to support larger, more complex campus environments by aggregating high-performance access switches, including the FortiSwitch 600. The new switches are integrated with Fortinet’s FortiGuard AI-Powered Security Services and FortiAIOps management tool, which lets customers use security and operations features such as malware protection, device profiling, and role-based access control.
Dell Technologies is expanding its generative AI product and services offerings. The vendor introduced its generative AI lineup at the end of July, but that news centered on validating existing hardware designs for training and inferencing. Dell's new products are designed for model customization and tuning. The name is a mouthful: Dell Validated Design for Generative AI with NVIDIA for Model Customization. The solutions are designed to help customers more quickly and securely extract intelligence from their data. There may be a race to move anything and everything to the cloud, but that doesn’t include generative AI, according to Dell's research. Among enterprises surveyed by Dell, 82% prefer an on-premises or hybrid approach to AI processing, said Carol Wilder, Dell's vice president for cross portfolio software and solutions.
By default, processes run on the Linux command line are terminated as soon as you log out of your session. However, if you want to start a long-running process and ensure that it keeps running after you log off, there are a couple of ways to make this happen. The first is to use the nohup command.

Using nohup
The nohup (no hangup) command will override the normal hangups (SIGHUP signals) that terminate processes when you log out. For example, if you wanted to run a process with a long-running loop and leave it to complete on its own, you could use a command like this one:

$ nohup long-loop &
[1] 6828
nohup: ignoring input and appending output to 'nohup.out'
Note that SIGHUP is a signal that is sent to a process when the controlling terminal of the process is closed.
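As a quick sketch of the same idea (sleep stands in here for a real long-running job, and loop.log is an arbitrary file name), you can also redirect the output somewhere other than the default nohup.out:

```shell
# Using sleep as a stand-in for a real long-running job; send both stdout
# and stderr to a chosen log file instead of the default nohup.out
nohup sh -c 'echo "job started"; sleep 1' > loop.log 2>&1 &
# The job will now survive a logout, and its output lands in loop.log
```

Redirecting explicitly keeps each job's output in its own file, rather than letting everything append to a shared nohup.out.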
Today's network engineers have to be flexible and adaptable, understand the new infrastructure-as-code paradigm, and stay on top of the latest developments in cloud, security, and AI. Organizations aren’t necessarily looking for someone who is limited to a single vendor’s technology; they’re looking for employees who have skills across a wide variety of technologies and who are constantly looking to broaden their areas of expertise. Jeff Sangillo is vice president of technology engineering and operations for QTS Data Centers. He manages both internal network connectivity between the company's 30-plus data center locations and customer-facing networking services and products.
IBM is rolling out AI-based managed services that promise to help network and security operations teams respond more quickly and effectively to enterprise cyber threats. Managed by the IBM Consulting group, the Threat Detection and Response (TDR) Services offering promises 24x7 monitoring, investigation, and automated remediation of security alerts from existing security tools as well as from cloud, on-premises, and operational technology systems on the enterprise network. The services can integrate information from more than 15 security information and event management (SIEM) tools and multiple third-party endpoint and network detection and response packages, for example.
Kyndryl continues to fill out its stable of network security partners, most recently inking an alliance with Palo Alto Networks for cybersecurity, SD-WAN, and secure access service edge (SASE) services. As part of the deal, Kyndryl will integrate Palo Alto's security products and services into its own managed security services, which include security monitoring, incident response, and threat intelligence. Palo Alto brings a wide range of security offerings, including a family of next-generation firewalls, the Prisma Cloud security platform for cloud-based applications and workloads, endpoint security, and operational automation support.
Schneider Electric is warning that the power and cooling demands of AI are beyond what standard data center designs can handle and says new designs are necessary. That may be expected from a company like Schneider, which makes the power and cooling systems used in data centers. But that doesn’t mean Schneider isn't correct. AI is a different kind of workload than standard server-side applications, such as databases, and the old ways just don’t cut it anymore. Schneider's white paper notes that AI needs an ample supply of three things: power, cooling, and bandwidth. GPUs are the most popular AI processors and the most power-intensive. Whereas CPUs from Intel and AMD draw about 300 to 400 watts, Nvidia’s newest GPUs draw 700 watts per processor, and they are often delivered in clusters of eight at a time.
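The arithmetic behind that concern is easy to sketch: an eight-GPU node draws several times what a two-socket CPU server does. (The 350 W CPU figure below is an assumed midpoint of the 300-to-400-watt range cited above, and two sockets per server is an assumption for illustration.)

```shell
# Back-of-the-envelope node power: eight 700 W GPUs versus two CPUs at
# roughly 350 W each
gpu_watts=700; gpus_per_node=8
cpu_watts=350; cpus_per_node=2
echo "GPU draw per node: $(( gpu_watts * gpus_per_node )) W"
echo "CPU draw per node: $(( cpu_watts * cpus_per_node )) W"
```

At 5.6 kW for the accelerators alone, a handful of such nodes can exceed the power and cooling budget of an entire conventional rack.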
The Japanese government on Tuesday said that it had invested $1.3 billion in Micron’s Hiroshima factory as a subsidy for manufacturing more advanced chips that support or power AI and quantum workloads. The investment is expected to cover the cost of installing ASML Holding’s extreme ultraviolet lithography equipment at the factory, according to a Bloomberg report. Lithography machines are used to draw patterns on silicon chips using light, and Dutch company ASML Holding is one of the top producers of these machines.
Microsoft on Tuesday moved its Cloud for Sovereignty offering from private preview to public preview and said the offering is likely to be made generally available this December. Microsoft Cloud for Sovereignty, which is aimed at helping government bodies meet specific compliance, security, and policy requirements, was first introduced in July of last year. Since then the company has issued two private releases of the offering. The public preview version includes new features such as the Sovereign Landing Zone, support for two country-specific requirements, transparency logs, and automated workload templates. The Sovereign Landing Zone and policy initiative, which is now available on GitHub, instantiates guardrails for sovereign cloud environments for customer workloads, enabling customers to leverage best practices for secure and consistent environments while supporting their efforts to meet evolving local regulations, the company said.
SambaNova Systems, maker of dedicated AI hardware and software systems, has launched a new AI chip, the SN40L, that will be used in the company’s full-stack large language model (LLM) platform, the SambaNova Suite. First introduced in March, the SambaNova Suite uses custom processors and operating systems for AI training and inference. It's designed to be an alternative to power-hungry and expensive GPUs. Upgrading the hardware so soon after launch implies a big jump in performance, and there is one: the SN40L serves up to a 5 trillion parameter LLM, with a sequence length of 256K+ possible on a single system node, according to the vendor.
When it comes to archiving data, there are, generally speaking, three different approaches. Selecting the right system hinges on technical capabilities as well as external factors such as budget constraints. Enterprise storage pros need to balance data preservation, accessibility, and resource optimization requirements as they weigh the various archive systems available in the market. Let's take a deeper look at the different types of archive systems.

Traditional batch archive
With a traditional batch archive, data serves its purpose for a certain period before being tucked away in a safe repository, awaiting the possibility of being of some use in the future. The main idea behind this type of archive is to preserve data over an extended timeframe, while keeping costs at a minimum and ensuring that retrieval remains a breeze even years down the line. In this kind of archive system, each collection of data selected for archiving is given one or more identities, stored as metadata alongside the archived data. This metadata plays a pivotal role in locating and retrieving the archived information, with details such as project names, tools used to create the data, the creator’s name, and the creation timeframe all forming part of this digital fingerprint.
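As an illustrative sketch of such a digital fingerprint (every field name and value below is hypothetical, not drawn from any particular archive product), the metadata is often kept as a small sidecar record next to the archived data:

```shell
# Write a hypothetical metadata sidecar describing an archived data set
cat > project-alpha.archive.meta <<'EOF'
project:  project-alpha
creator:  j.smith
tool:     seismic-modeler 4.2
created:  2021-01 to 2021-06
archived: 2023-10-01
EOF
# Years later, searching the sidecar alone is enough to locate the archive
grep '^project:' project-alpha.archive.meta
```

Because the sidecar is tiny and plain text, it can stay on fast storage and be indexed cheaply even when the archived data itself sits on tape or cold object storage.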
Creating energy-efficient and sustainable data centers makes a lot of sense from a business standpoint. Aside from the obvious environmental impact of lower carbon emissions, the potential business benefits include lower operating costs, reduced space requirements, and a positive brand image. There’s another good reason for building more sustainable and energy-efficient data centers: regulations and standards are emerging around the world that will require or recommend such actions. IT and networking executives and their teams need to get up to speed on a host of sustainability regulations and standards that are going to require a response on their part. Energy efficiency and sustainability are not just issues for facilities teams anymore. They are a concern for IT teams that will be asked to provide metrics, so the need for reporting will become more urgent. They will also need to select more energy-efficient hardware.
France’s competition watchdog has raided the local offices of chipmaker Nvidia as part of an investigation into anticompetitive practices in the graphics card sector, with a focus on cloud computing. While the watchdog did not confirm the identity of the entity being investigated or the practice in question, a report from The Wall Street Journal cited sources saying that the raids targeted Nvidia. The watchdog did confirm, however, that the operation stemmed from its investigation of the graphics card sector as part of an expanded study of anticompetitive practices in the cloud computing sector. The study, according to the watchdog, began in January 2022.
SD-WAN deployments continue to grow at an impressive clip, and the leading adoption drivers include cloud connectivity requirements, interest in SASE packages, the promise of simpler WAN management, and cost-savings potential, according to IDC. The research firm evaluated 12 SD-WAN vendors for its newly published IDC MarketScape: Worldwide SD-WAN Infrastructure 2023 Vendor Assessment, which found the SD-WAN infrastructure market grew 25% in 2022. Looking ahead, IDC is forecasting a compound annual growth rate of 10% and expects the market to reach $7.5 billion by 2027.
The benefits of moving workloads into a cloud-based environment cannot be overstated. AWS, for instance, is designed for flexibility, allowing users to select the operating system, programming language, web application platform, database, and other services to suit their own specific needs. This adaptability not only simplifies the migration process for existing applications but also provides a strong foundation for building new solutions. However, there is a flip side. With this flexibility comes the potential for an AWS customer to unknowingly introduce risks into their cloud environment. One of the most significant risks is the formation of attack paths, which can be used by malicious actors to infiltrate and compromise cloud resources. These exposed paths emerge through a combination of factors that are often easy to overlook in a complex and fast-moving cloud environment, and they fall into three distinct categories.
Attacks on Domain Name System (DNS) infrastructure – such as DNS hijacking, DNS tunneling, and DNS amplification attacks – are on the rise, and many IT organizations are questioning the security of their DNS infrastructure. Most IT organizations maintain a variety of DNS infrastructure for public services (websites and internet-accessible services) and private services (Active Directory, file sharing, email). Securing both internal and external DNS infrastructure is critical due to a growing number of threats and vulnerabilities that malicious actors use to target them. Unfortunately, very few organizations are confident in their DNS security. Enterprise Management Associates (EMA) recently examined the issue of DNS security in its newly published research report, “DDI Directions: DNS, DHCP and IP Address Management Strategies for the Multi-Cloud Era.” Based on a survey of 333 IT professionals responsible for DNS, DHCP, and IP address management (DDI), the research found that only 31% of DDI managers are fully confident in the security of their DNS infrastructure.
Intel refreshed its FPGA lineup with cost-optimized offerings, released its FPGA software stack as open source, and added a new processor design based on the RISC-V architecture. The first of the new products is the Agilex 3 family of power- and cost-optimized FPGAs available in compact form factors. Agilex follows the same product-naming convention as the desktop Core series: 3 is the lowest end of the performance spectrum, followed by the 5, 7, and 9 series in ascending order. The Agilex 3 family will come in two branches: the B-Series and the C-Series. The B-Series FPGAs have higher I/O density in smaller form factors at lower power than other Intel FPGAs, and they are targeted at board and system management, including server platform management applications.