Hedge 118: Integrating New Ideas with William Collins

When vendors build something new—or when you decide to go a different direction in your network—you have to figure out how to integrate these new things. Integration of this type often includes cultural, as well as technical, changes. William Collins joins Tom Ammon and Russ White to discuss his experience in integrating new technologies on Hedge 118.

Connect and Secure your Apps with Antrea and VMware NSX-T 3.2

The release of VMware NSX-T 3.2 and VMware Container Networking with Antrea 1.3.1-1.2.3 delivers on VMware’s vision: heterogeneous Antrea clusters running anywhere can integrate with NSX-T for centralized container policy management and visibility.

NSX-T becomes the single pane of glass for policy management when connected to Antrea clusters. The Antrea clusters can run on the VMware Tanzu platform, Red Hat OpenShift, or any upstream Kubernetes cluster. Inventory management, tagging, dynamic grouping, and troubleshooting can all be extended to Antrea clusters, and both native Kubernetes network policies and Antrea network policies can be centrally managed by NSX-T.

Integrating Antrea with NSX-T

Antrea to NSX-T interworking architecture

The Antrea NSX Adapter is a new component, added to the standard Antrea cluster, that makes the integration possible. It communicates with the K8s API and the Antrea Controller and connects to the NSX-T APIs. When an NSX-T admin defines a new policy via the NSX APIs or UI, the policy is replicated to all applicable clusters. The adapter receives these policies and in turn creates the appropriate CRDs using the K8s APIs. The Antrea Controller, which watches these policies, runs the relevant computation and sends the results…
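To make the workflow concrete, the CRDs the adapter creates are presumably Antrea-native policy objects. Here is a minimal sketch of an Antrea NetworkPolicy CRD of that general shape; the namespace, labels, tier, and addresses are all illustrative, not taken from the article:

```yaml
# A minimal Antrea-native NetworkPolicy (crd.antrea.io API group).
# Namespace, labels, tier, and CIDR below are illustrative only.
apiVersion: crd.antrea.io/v1alpha1
kind: NetworkPolicy
metadata:
  name: allow-web-ingress
  namespace: demo
spec:
  priority: 5                # lower number = higher precedence
  tier: securityops          # one of Antrea's built-in policy tiers
  appliedTo:
    - podSelector:
        matchLabels:
          app: web
  ingress:
    - action: Allow          # Antrea rules carry explicit actions
      from:
        - ipBlock:
            cidr: 10.0.10.0/24
      ports:
        - protocol: TCP
          port: 443
```

Unlike native Kubernetes NetworkPolicy, Antrea-native policies support tiers, priorities, and explicit Allow/Drop actions, which is what makes centralized management from NSX-T tractable.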

Day Two Cloud 134: Simplifying Infrastructure Access With StrongDM (Sponsored)

Today's Day Two Cloud is a sponsored episode with StrongDM, which helps engineers and IT professionals get access to databases, servers, Kubernetes clusters, switches, Web apps, and more from a desktop or laptop. We dive into StrongDM's proxy model, integrations with directories and ID stores, audit features, and more.

The post Day Two Cloud 134: Simplifying Infrastructure Access With StrongDM (Sponsored) appeared first on Packet Pushers.

Migrating from Python virtual environments to automation execution environments in Ansible Automation Platform 2

Red Hat Ansible Tower (included in Ansible Automation Platform 1.x) used Python virtual environments to manage dependencies and implement consistent automation execution across multiple Red Hat Ansible Automation Platform instances. This method of managing dependencies came with its own set of limitations:

  • Managing Python virtual environments consistently across multiple Ansible Tower instances was cumbersome.
  • Confirming custom dependencies across Ansible Tower instances grew more complex as more end users interacted with the platform.
  • Python virtual environments were tightly coupled to the control plane, resulting in Operations teams bearing the majority of the burden to maintain them.
  • There were no tools supported and maintained by Red Hat to manage custom dependencies across Ansible Automation Platform deployments.

Ansible Automation Platform 2 introduced automation execution environments: container images in which all automation is packaged and run, including components such as Ansible Core, Ansible Content Collections, a version of Python, Red Hat Enterprise Linux UBI 8, and any additional package dependencies.
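As a sketch of what this looks like in practice, an execution environment is typically defined declaratively and built with ansible-builder. The file below follows the version 1 definition format; the base image tag and dependency file names are examples, not prescriptions:

```yaml
# execution-environment.yml -- a minimal ansible-builder (v1 schema) definition.
# The base image and the three dependency files are illustrative.
version: 1

build_arg_defaults:
  EE_BASE_IMAGE: 'registry.redhat.io/ansible-automation-platform/ee-minimal-rhel8:latest'

dependencies:
  galaxy: requirements.yml    # Ansible Content Collections to bake in
  python: requirements.txt    # extra Python packages
  system: bindep.txt          # OS-level packages, in bindep format
```

Building this with `ansible-builder build -t my-ee:1.0` yields a portable image, so the dependency set travels with the automation rather than living in per-node virtual environments.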


Why should you upgrade?

Ansible Automation Platform 2, announced at AnsibleFest 2021, comes with a re-imagined architecture that fully decouples the automation control plane and execution plane. The new capabilities make it easier to scale automation across the globe and allow…

Infrastructure Privacy Webinar

I’m teaching a three-hour webinar on privacy over at Safari Books on Friday. From the description there—

Privacy is important to every IT professional, including network engineers—but there is very little training oriented towards anyone other than privacy professionals. This training aims to provide a high-level overview of privacy and how privacy impacts network engineers. Information technology professionals are often perceived as “experts” on “all things IT,” and hence are bound to face questions about the importance of privacy, and how individual users can protect their privacy in more public settings.

Please join me for this—it’s a very important topic largely ignored in the infrastructure space.

Running BGP between Virtual Machines and Data Center Fabric

Got this question from one of my readers:

When adopting the BGP on the VM model (say, a Kubernetes worker node on top of vSphere or KVM or Openstack), how do you deal with VM migration to another host (same data center, of course) for maintenance purposes? Do you keep peering with the old ToR even after the migration, or do you use some BGP trickery to allow the VM to peer with whatever ToR it’s closest to?

Short answer: you don’t.

Kubernetes was designed in a way that made worker nodes expendable. The Kubernetes cluster (and all properly designed applications) should recover automatically after a worker node restart. From the purely academic perspective, there’s no reason to migrate VMs running Kubernetes.
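The post doesn’t prescribe an implementation, but as a hedged illustration of the BGP-on-the-node model, here is roughly what per-rack ToR peering could look like with Calico’s BGPPeer resource, assuming Calico as the CNI and a hypothetical rack label on each node:

```yaml
# Hypothetical Calico BGPPeer: every node labeled rack=rack1 peers
# with its local ToR; one such object would exist per rack.
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: rack1-tor
spec:
  nodeSelector: rack == 'rack1'   # only rack1 nodes use this peer
  peerIP: 192.0.2.1               # ToR address (illustrative)
  asNumber: 64512                 # ToR AS number (illustrative)
```

Because the peering follows the node’s rack label rather than the VM, a node rebuilt in another rack simply forms a new session with that rack’s ToR, which fits the expendable-worker-node model described above.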

JPMorgan Chase spent $2 billion on brand new data centers last year

JPMorgan Chase & Co. spent $2 billion on new data centers last year, even as the multinational investment banking and financial services company continued to move data and applications to cloud platforms run by AWS, Google, and Microsoft. The $2 billion is part of the firm’s total annual spending on technology, which amounted to more than $12 billion last year, according to details shared in JPMorgan Chase’s fourth-quarter and full-year 2021 earnings presentation. Looking at the current year, the firm expects to increase its tech spending to roughly $15 billion. IT priorities in 2022 will be consistent with prior years and will include increases in cloud capabilities, data centers, digital consumer experience, and data and analytics.

Surfing On The Ethernet Bandwidth Waves, Avoiding The Rocks

Any company making any kind of box – a server, a switch, a storage array – has three battles to fight in 2022, one of which it did not have to worry about much before the coronavirus pandemic but which is now of prime importance.

Surfing On The Ethernet Bandwidth Waves, Avoiding The Rocks was written by Timothy Prickett Morgan at The Next Platform.

Gigamon Introduces Playbooks, Plus A Full Year Of Data Retention, To Its NDR Service

Gigamon has added new features to its SaaS-based Network Detection and Response (NDR) service, including playbooks that provide context for investigations, and a full year of data retention. In addition, Gigamon hopes to compete with more established NDR vendors by bringing more of a human touch to its service.

The post Gigamon Introduces Playbooks, Plus A Full Year Of Data Retention, To Its NDR Service appeared first on Packet Pushers.

Podcast: Why is data center efficiency important? How to address emissions concerns

Data centers are a critical, but often power-hungry, part of the enterprise. But why exactly do data centers require so much energy? And how can businesses address emissions concerns as well as cut back on the costs associated with cooling data centers? Ashish Nadkarni, group vice president within IDC's Worldwide Infrastructure Practice, joins Juliet to discuss the status of data center efficiency, what it means within the context of green IT, and how technology has advanced to make servers more efficient.
