Archive

Category Archives for "Networking"

Migration Coordinator – Lift and Shift Migration Modes

The first part of this blog series took a high-level view of all the modes available with Migration Coordinator, a fully supported tool built into NSX that enables migrating from NSX for vSphere to NSX (NSX-T). The second blog in the series took a closer look at the options available for in-place migrations. This third blog looks at the options available for lift-and-shift migrations.

Distributed Firewall

Distributed Firewall mode is one of the first lift-and-shift modes, introduced with the NSX 3.1 release. This mode migrates only the firewall configuration over to NSX running on its own dedicated hardware.

Locating the mode: This mode is one of the three advanced migration modes and can be found by expanding “Advanced Migration Modes”, highlighted in red below:

NSX Prep:

  1. Installation: NSX Manager and Edges; optionally, bridges.
  2. Configuration:
    1. Configure the N/S network connectivity
    2. Create and configure T0s all the way down to the Segments
      1. Segment ID must match the VNI of the NSX for vSphere logical switches
    3. Optionally, configure bridges if the migration is expected to take a long time and network connectivity between the workloads on NSX for vSphere Continue reading
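As a rough illustration of the VNI-matching requirement in step 2.2.1, a pre-check could compare the two inventories before starting the migration. The helper and data below are invented for illustration and are not part of Migration Coordinator:

```python
# Illustrative pre-check (not an official NSX tool): verify that each NSX
# segment's VNI matches the VNI of the corresponding NSX for vSphere
# logical switch before starting a Distributed Firewall migration.

def find_vni_mismatches(nsx_v_switches, nsx_t_segments):
    """Return a list of (name, expected_vni, actual_vni) mismatches.

    nsx_v_switches: dict mapping logical switch name -> VNI (NSX for vSphere)
    nsx_t_segments: dict mapping segment name -> VNI (NSX)
    """
    mismatches = []
    for name, expected_vni in nsx_v_switches.items():
        actual_vni = nsx_t_segments.get(name)
        if actual_vni != expected_vni:
            mismatches.append((name, expected_vni, actual_vni))
    return mismatches


if __name__ == "__main__":
    v_switches = {"web-ls": 5001, "app-ls": 5002, "db-ls": 5003}
    segments = {"web-ls": 5001, "app-ls": 5999}  # db-ls missing, app-ls wrong
    for name, want, got in find_vni_mismatches(v_switches, segments):
        print(f"{name}: expected VNI {want}, found {got}")
```

In practice the two inventories would come from the respective manager APIs; the point is only that every segment's VNI must line up before the firewall configuration is moved.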

Power availability stymies data center growth

The chief obstruction to data center growth is not the availability of land, infrastructure, or talent. It's local power, according to commercial real estate services company CBRE. In its 2023 global data center trends report, CBRE says the market is growing steadily and demand is constantly rising, but data center growth has been largely confined to a few select areas, and those areas are running out of power.

No region embodies this more than Northern Virginia, which is the world's largest data center market with 2,132 megawatts (MW) of total inventory. Its growth happened for a couple of reasons. First, proximity to the US federal government. Second, because there's a major undersea cable to Europe in Northern Virginia, and data centers want to be as close to it as possible to minimize latency.

To read this article in full, please click here


IBM debuts AI-powered carbon calculator for the cloud

IBM debuted an AI-powered dashboard for tracking the carbon emissions generated by its cloud computing services, saying that the new Cloud Carbon Calculator can help enterprises with compliance and reduce harmful carbon emissions.

The calculator can be accessed via IBM’s cloud dashboard, where it provides a range of graphs and charts to track the total carbon emissions created by a customer’s use of IBM’s cloud, breaking it down on a per-service, per-department, and per-location basis.

The ability to identify carbon emissions in a granular way should let customers identify particularly CO2-heavy workloads, areas, or departments and change their cloud profile in order to minimize emissions, according to IBM. The main idea is to identify emissions “hot spots,” which the calculator does via machine learning and algorithmic functions developed in partnership with Intel.

To read this article in full, please click here
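As a rough sketch of the kind of roll-up such a dashboard performs, per-workload emissions can be aggregated by service and by location to surface the largest contributors. The records, field names, and helper below are invented for illustration and are not IBM's API:

```python
from collections import defaultdict

# Hypothetical per-workload emission records (illustrative values only).
records = [
    {"service": "vm", "location": "us-east", "kg_co2e": 120.0},
    {"service": "vm", "location": "eu-de", "kg_co2e": 40.0},
    {"service": "object-storage", "location": "us-east", "kg_co2e": 15.0},
    {"service": "kubernetes", "location": "us-east", "kg_co2e": 90.0},
]

def totals_by(records, key):
    """Sum kg CO2e per distinct value of `key` (service, location, ...)."""
    totals = defaultdict(float)
    for r in records:
        totals[r[key]] += r["kg_co2e"]
    return dict(totals)

by_service = totals_by(records, "service")
by_location = totals_by(records, "location")

# The largest bucket is the first candidate "hot spot" to optimize.
hot_spot = max(by_location, key=by_location.get)
```

The same grouping generalizes to per-department views; the dashboard's added value is layering forecasts and anomaly detection on top of such aggregates.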

Cybernews Expert Interview with Tigera President and CEO, Ratan Tipirneni

The challenges companies face regarding private and professional data protection are more important today than ever. In the modern enterprise, cloud computing and the use of cloud-native architectures enable unmatched performance, flexibility, velocity, and innovation. However, as digitalization pushes applications and services to the cloud, cyber criminals’ intrusion techniques have become increasingly sophisticated. To stay current with advancing technologies, doubling or tripling security measures is a must.

To understand the critical need for advanced cybersecurity measures, we turned to an expert in the industry, Ratan Tipirneni, President and CEO of Tigera – a company providing active, zero-trust-based security for cloud-native applications running on containers and Kubernetes.

 

Q: How did the idea of Tigera originate? What has your journey been like so far?

It was over six years ago that Tigera created Project Calico, an open-source container networking and security project.

As containers and Kubernetes adoption grew and organizations started using Kubernetes at scale, Tigera recognized the industry’s need for more advanced security and observability. Tigera has since grown from the Project Calico open-source project to a container security innovator that now supports many Fortune 100 companies across the globe.

Tigera’s continued success comes from listening to customers’ needs, understanding Continue reading

Dynatrace boosts observability platform with generative AI

Dynatrace has incorporated generative AI into its Davis AI engine to let customers more quickly create dashboards, determine the root cause of incidents, and speed mean time to repair.

Davis AI features causal and predictive AI capabilities now, and with the addition of generative AI, Dynatrace says it will offer customers a third mode of AI that applies natural language capabilities to more easily create dashboards, automate workflows, and complete tasks. Davis CoPilot generative AI will work in collaboration with its causal AI technology, which analyzes real-time data, and its Davis predictive AI models that anticipate future behavior based on past data and observed patterns in the environment.

To read this article in full, please click here

Routing information now on Cloudflare Radar


Routing is one of the most critical operations of the Internet. Routing decides how and where Internet traffic should flow from source to destination, and can be categorized into two major types: intra-domain routing and inter-domain routing. Intra-domain routing decides how individual packets should be routed among the servers and routers within an organization's network. When traffic reaches the edge of a network, inter-domain routing kicks in to decide the next hop and forward the traffic along to the corresponding networks. Border Gateway Protocol (BGP) is the de facto inter-domain routing protocol used on the Internet.
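Whichever protocol populates the table, both routing types ultimately feed the same forwarding decision: a longest-prefix match against the destination address. A minimal illustrative sketch (the routes and next-hop names are invented for the example):

```python
import ipaddress

# Toy longest-prefix-match lookup: forwarding picks the most specific
# route whose prefix contains the destination address.

routes = {
    "0.0.0.0/0": "upstream-isp",    # default route
    "10.0.0.0/8": "core-router",
    "10.1.2.0/24": "access-switch",
}

def next_hop(destination, routes):
    """Return the next hop for `destination`, or None if nothing matches."""
    addr = ipaddress.ip_address(destination)
    best = None
    for prefix, hop in routes.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1] if best else None
```

Real routers use far faster trie-based structures, but the selection rule is the same: 10.1.2.55 matches all three prefixes and is forwarded via the /24, the most specific one.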

Today, we are introducing another section on Cloudflare Radar: the Routing page, which focuses on monitoring the BGP messages exchanged to extract and present insights on IP prefixes, individual networks, countries, and the Internet overall. The new routing data allows users to quickly examine the routing status of the Internet, examine secure routing protocol deployment for a country, identify routing anomalies, validate IP block reachability, and much more, from globally distributed vantage points.

It’s a detailed view of how the Internet itself holds together.
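The secure-routing deployment metrics mentioned above build on RPKI route-origin validation. As a rough illustration of the idea, and not Cloudflare's implementation, the toy sketch below validates a BGP announcement against simplified ROA entries; the data and helper are hypothetical, and real ROA encoding and fetching are ignored:

```python
import ipaddress

# Simplified route-origin validation: a ROA authorizes an origin ASN to
# announce a prefix up to a maximum prefix length.

ROAS = [
    # (prefix, max_length, authorized origin ASN)
    ("192.0.2.0/24", 24, 64500),
    ("198.51.100.0/22", 24, 64501),
]

def validate(announced_prefix, origin_asn, roas=ROAS):
    """Return 'valid', 'invalid', or 'unknown' for a BGP announcement."""
    net = ipaddress.ip_network(announced_prefix)
    covered = False
    for roa_prefix, max_len, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.subnet_of(roa_net):
            covered = True
            if asn == origin_asn and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"
```

An announcement covered by a ROA but originated by the wrong ASN, or more specific than the ROA's max length, comes back "invalid"; prefixes no ROA covers are "unknown", which is why ROA coverage itself is a useful per-country deployment metric.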


Collecting routing statistics

The Internet consists of tens of thousands of interconnected Continue reading

Q2 2023 Internet disruption summary




Cloudflare operates in more than 300 cities in over 100 countries, where we interconnect with over 12,000 network providers in order to provide a broad range of services to millions of customers. The breadth of both our network and our customer base provides us with a unique perspective on Internet resilience, enabling us to observe the impact of Internet disruptions.

The second quarter of 2023 was a particularly busy one for Internet disruptions, and especially for government-directed Internet shutdowns. During the quarter, we observed many brief disruptions, but also quite a few long-lived ones. In addition to the government-directed Internet shutdowns, we also observed partial or complete outages due to severe weather, cable damage, power outages, general or unspecified technical problems, cyberattacks, military action, and infrastructure maintenance.

As we have noted in the past, this post is intended as a summary overview of observed disruptions, and is not an exhaustive or complete list of issues that have occurred during the quarter.

Government directed

Late spring often marks the start of a so-called “exam season” in several Continue reading

GigaIO introduces single-node AI supercomputer

Installation and configuration of high-performance computing (HPC) systems can be a considerable challenge that requires skilled IT pros to set up the software stack, for example, and optimize it for maximum performance – it isn't like building a PC with parts bought off Newegg.

GigaIO, which specializes in infrastructure for AI and technical computing, is looking to simplify the task. The vendor recently announced a self-contained, single-node system with 32 configured GPUs in the box to offer simplified deployment of AI and supercomputing resources.

Up to now, the only way to harness 32 GPUs was to use four servers with eight GPUs apiece. There would be latency to contend with, as the servers communicate over networking protocols, and all that hardware would consume floor space.

To read this article in full, please click here


Day Two Cloud 204: Deploying Cloud-Delivered Security With Cisco Secure Access (Sponsored)

On today's Day Two Cloud we get inside Cisco Secure Access, a new set of cloud-delivered security services from Cisco. We discuss the security capabilities on offer, the service's architecture and components, how Cisco addresses concerns around user experience and performance, and more. This is a sponsored episode.

The post Day Two Cloud 204: Deploying Cloud-Delivered Security With Cisco Secure Access (Sponsored) appeared first on Packet Pushers.

SD-WAN Deployment Failures 101: Lessons From The Field

SD-WAN is a cost-effective, flexible alternative to traditional MPLS networks, but the high rate of failed deployments indicates that achieving successful implementation is not straightforward. Organizations must be prepared to embrace new experience-driven approaches to network management, such as the need for visibility into unmanaged networks, to deploy SD-WAN effectively.

The post SD-WAN Deployment Failures 101: Lessons From The Field appeared first on Packet Pushers.