Power availability stymies data center growth

The chief obstruction to data center growth is not the availability of land, infrastructure, or talent. It's local power, according to commercial real estate services company CBRE. In its 2023 global data center trends report, CBRE says the market is growing steadily and demand is constantly rising, but data center growth has been largely confined to a few select areas, and those areas are running out of power.

No region embodies this more than Northern Virginia, the world's largest data center market with 2,132 megawatts (MW) of total inventory. Its growth happened for two reasons: proximity to the US federal government, and a major undersea cable to Europe that lands in Northern Virginia, which data centers want to be as close to as possible to minimize latency.

IBM debuts AI-powered carbon calculator for the cloud

IBM debuted an AI-powered dashboard for tracking the carbon emissions generated by its cloud computing services, saying the new Cloud Carbon Calculator can help enterprises with compliance and with reducing harmful carbon emissions.

The calculator can be accessed via IBM’s cloud dashboard, where it provides a range of graphs and charts to track the total carbon emissions created by a customer’s use of IBM’s cloud, breaking them down on a per-service, per-department, and per-location basis.

The ability to identify carbon emissions in a granular way should let customers pinpoint particularly CO2-heavy workloads, areas, or departments and change their cloud profile to minimize emissions, according to IBM. The main idea is to identify emissions “hot spots,” which the calculator does via machine learning and algorithmic functions developed in partnership with Intel.
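
To make the "hot spot" idea concrete, here is a minimal sketch of rolling up per-service, per-location emissions records and flagging outliers. This is not IBM's actual API or schema; the record layout, field names, and the simple threshold rule are assumptions standing in for the ML-based detection the article describes.

```python
from collections import defaultdict

# Hypothetical emissions records; field names are illustrative, not IBM's schema.
records = [
    {"service": "virtual-servers", "location": "us-south", "kg_co2e": 1250.0},
    {"service": "object-storage",  "location": "us-south", "kg_co2e": 310.0},
    {"service": "virtual-servers", "location": "eu-de",    "kg_co2e": 2890.0},
    {"service": "kubernetes",      "location": "eu-de",    "kg_co2e": 640.0},
]

# Roll emissions up per (service, location), as the calculator's charts do.
totals = defaultdict(float)
for r in records:
    totals[(r["service"], r["location"])] += r["kg_co2e"]

# Flag "hot spots": buckets well above the mean. A crude stand-in for the
# machine-learning detection IBM describes, just to show the shape of the idea.
mean = sum(totals.values()) / len(totals)
for (service, location), kg in sorted(totals.items(), key=lambda kv: -kv[1]):
    marker = "  <-- hot spot" if kg > 1.5 * mean else ""
    print(f"{service:16s} {location:9s} {kg:8.1f} kg CO2e{marker}")
```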

Cybernews Expert Interview with Tigera President and CEO, Ratan Tipirneni

The challenges companies face in protecting personal and professional data are greater today than ever. In the modern enterprise, cloud computing and cloud-native architectures enable unmatched performance, flexibility, velocity, and innovation. However, as digitalization pushes applications and services to the cloud, cyber criminals’ intrusion techniques have become increasingly sophisticated. To stay current with these advancing technologies, redoubling security measures is a must.

To understand the critical need for advanced cybersecurity measures, we turned to an expert in the industry, Ratan Tipirneni, President and CEO of Tigera – a company providing active, zero-trust-based security for cloud-native applications running on containers and Kubernetes.

Q: How did the idea of Tigera originate? What has your journey been like so far?

It was over six years ago that Tigera created Project Calico, an open-source container networking and security project.

As containers and Kubernetes adoption grew and organizations started using Kubernetes at scale, Tigera recognized the industry’s need for more advanced security and observability. Tigera has since grown from the Project Calico open-source project to a container security innovator that now supports many Fortune 100 companies across the globe.

Tigera’s continued success comes from listening to customers’ needs, understanding…

Dynatrace boosts observability platform with generative AI

Dynatrace has incorporated generative AI into its Davis AI engine to let customers more quickly create dashboards, determine the root cause of incidents, and speed mean time to repair.

Davis AI already features causal and predictive AI capabilities; with the addition of generative AI, Dynatrace says it will offer customers a third mode of AI that applies natural-language capabilities to more easily create dashboards, automate workflows, and complete tasks. The Davis CoPilot generative AI will work in collaboration with Dynatrace's causal AI technology, which analyzes real-time data, and its Davis predictive AI models, which anticipate future behavior based on past data and observed patterns in the environment.

Routing information now on Cloudflare Radar

Routing is one of the most critical operations of the Internet. Routing decides how and where Internet traffic should flow from source to destination, and it falls into two major types: intra-domain routing and inter-domain routing. Intra-domain routing decides how individual packets are routed among the servers and routers within an organization's network. When traffic reaches the edge of a network, inter-domain routing kicks in to decide the next hop and forward the traffic along to the corresponding networks. Border Gateway Protocol (BGP) is the de facto inter-domain routing protocol used on the Internet.
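
As a rough illustration of the decision inter-domain routing makes, here is a minimal sketch of BGP-style path selection reduced to a single tie-breaker, shortest AS path. Real BGP applies a much longer decision process (local preference, origin type, MED, and more); the ASNs and prefixes below are from the documentation ranges, purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Announcement:
    """A stripped-down BGP announcement: a reachable prefix and the ASes to cross."""
    prefix: str
    as_path: tuple[int, ...]  # e.g. (64501, 64496): traverse AS64501, originate at AS64496
    next_hop: str

# Two candidate routes to the same prefix, learned from different neighbors.
candidates = [
    Announcement("198.51.100.0/24", (64501, 64502, 64496), "192.0.2.1"),
    Announcement("198.51.100.0/24", (64510, 64496), "192.0.2.9"),
]

# Shortest AS path wins -- one step of the real best-path algorithm.
best = min(candidates, key=lambda a: len(a.as_path))
print(f"best route to {best.prefix}: next hop {best.next_hop}, path {best.as_path}")
```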

Today, we are introducing another section on Cloudflare Radar: the Routing page, which focuses on monitoring the BGP messages exchanged across the Internet to extract and present insights on IP prefixes, individual networks, countries, and the Internet overall. The new routing data allows users to quickly examine the routing status of the Internet, check secure routing protocol deployment for a country, identify routing anomalies, validate IP block reachability, and much more, all from globally distributed vantage points.

It’s a detailed view of how the Internet itself holds together.
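
For programmatic access, the Radar data is also exposed through Cloudflare's API. The sketch below shows the general calling pattern only: the exact endpoint path and parameters for the new routing data are assumptions here, so check the Radar API documentation for the real routes. AS13335 is Cloudflare's own AS number; the token placeholder is, of course, hypothetical.

```python
import requests

API = "https://api.cloudflare.com/client/v4/radar"
TOKEN = "YOUR_API_TOKEN"  # a Cloudflare API token with Radar read access

# Hypothetical endpoint and parameters, shown only to illustrate the pattern.
resp = requests.get(
    f"{API}/bgp/routes/stats",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"asn": 13335},  # Cloudflare's own AS number
    timeout=10,
)
resp.raise_for_status()
# Cloudflare's v4 API wraps payloads in a standard envelope with a "result" key.
print(resp.json()["result"])
```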

Collecting routing statistics

The Internet consists of tens of thousands of interconnected networks…

Q2 2023 Internet disruption summary

Cloudflare operates in more than 300 cities in over 100 countries, where we interconnect with over 12,000 network providers in order to provide a broad range of services to millions of customers. The breadth of both our network and our customer base provides us with a unique perspective on Internet resilience, enabling us to observe the impact of Internet disruptions.

The second quarter of 2023 was a particularly busy one for Internet disruptions, and especially for government-directed Internet shutdowns. During the quarter, we observed many brief disruptions, but also quite a few long-lived ones. In addition to the government-directed Internet shutdowns, we also observed partial or complete outages due to severe weather, cable damage, power outages, general or unspecified technical problems, cyberattacks, military action, and infrastructure maintenance.

As we have noted in the past, this post is intended as a summary overview of observed disruptions, and is not an exhaustive or complete list of issues that have occurred during the quarter.

Government-directed

Late spring often marks the start of a so-called “exam season” in several countries…

Unleashing the Potential of Multi-Cloud Automation with Ansible and Terraform

In today's rapidly evolving digital landscape, businesses depend on streamlined processes and efficient systems more than ever. One revolutionary pathway towards a more efficient and flexible IT infrastructure is multi-cloud automation. In this blog, we will look at how to employ Ansible, a powerful automation tool, to tap into the immense potential of multi-cloud environments. We take you on a journey behind the scenes of our interactive labs, where our customers and prospects get hands-on experience with Ansible while exploring its newest features. Our labs showcase public clouds such as Google Cloud, AWS, and Microsoft Azure, and with Ansible we can orchestrate seamless provisioning and optimal multi-cloud management across all of them. So, buckle up for a deep dive into the realm of multi-cloud automation, where complexity is simplified and potential is unleashed.
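
To give a flavor of what such a lab exercises, here is a minimal sketch that drives one playbook per cloud from Python using the ansible-runner library. The playbook names and the private data directory layout are assumptions for illustration, not the actual lab content.

```python
import ansible_runner

# Hypothetical per-cloud playbooks; names and directory layout are illustrative.
PLAYBOOKS = {
    "aws":   "provision_aws.yml",
    "azure": "provision_azure.yml",
    "gcp":   "provision_gcp.yml",
}

for cloud, playbook in PLAYBOOKS.items():
    # private_data_dir holds the project, inventory, and credentials for a run.
    result = ansible_runner.run(private_data_dir="labs/multicloud", playbook=playbook)
    print(f"{cloud}: status={result.status} rc={result.rc}")
    if result.rc != 0:
        raise SystemExit(f"provisioning failed on {cloud}")
```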

The Ansible Technical Marketing team uses a variety of tools to create training labs and technical sales workshops for our field teams and customers. One of our training platforms is Instruqt, an as-a-service learning platform that helps us create sandbox environments that run in your browser window. For the technical tooling behind the scenes, we use a combination of Ansible and Packer to build…

GigaIO introduces single-node AI supercomputer

Installation and configuration of high-performance computing (HPC) systems can be a considerable challenge, requiring skilled IT pros to set up the software stack and optimize it for maximum performance – it isn't like building a PC with parts bought off Newegg.

GigaIO, which specializes in infrastructure for AI and technical computing, is looking to simplify the task. The vendor recently announced a self-contained, single-node system with 32 configured GPUs in the box, offering simplified deployment of AI and supercomputing resources.

Up to now, harnessing 32 GPUs would require four servers with eight GPUs apiece. There would be latency to contend with as the servers communicate over networking protocols, and all that hardware would consume floor space.

Day Two Cloud 204: Deploying Cloud-Delivered Security With Cisco Secure Access (Sponsored)

On today's Day Two Cloud we get inside Cisco Secure Access, a new set of cloud-delivered security services from Cisco. We discuss the security capabilities on offer, the service's architecture and components, how Cisco addresses concerns around user experience and performance, and more. This is a sponsored episode.

Micron Revs Up Bandwidth And Capacity On HBM3 Stacks

As we have seen with the various kinds of high-bandwidth stacked DRAM memory attached to compute engines over the past decade, just adding this wide, fast, and expensive memory to a compute engine can radically improve the effective performance of the device.
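
Peak per-stack bandwidth is simple arithmetic: interface width times per-pin data rate, divided by eight bits per byte. The back-of-the-envelope sketch below assumes the 1,024-bit interface standard across HBM generations; the pin rates are representative figures for illustration, not specs from the article.

```python
# Back-of-the-envelope HBM stack bandwidth: pins * per-pin rate / 8 bits per byte.
def stack_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

HBM_BUS_WIDTH = 1024  # bits per stack, standard across HBM generations

# Representative pin rates (illustrative assumptions, not from the article):
for name, rate in [("HBM3 @ 6.4 Gb/s", 6.4), ("faster stack @ 9.2 Gb/s", 9.2)]:
    bw = stack_bandwidth_gbps(HBM_BUS_WIDTH, rate)
    print(f"{name}: {bw:,.0f} GB/s per stack")
```

At 6.4 Gb/s pins that works out to roughly 819 GB/s per stack, and pushing the pins to 9.2 Gb/s lifts a single stack to roughly 1.2 TB/s, which is why faster pins alone move the needle so much.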

Micron Revs Up Bandwidth And Capacity On HBM3 Stacks was written by Timothy Prickett Morgan at The Next Platform.