Upcoming Course: Data Center Fabrics

On the 19th and 22nd (Friday and Monday), I’m teaching a two-part series on Data Center Fabrics and Control Planes over at Safari Books Online. This is six hours of training in total, covering everything from Clos fabrics to EVPN.

Register here.

If you register for the course you can access a recording at a later date. From Safari:

This class consists of two three-hour sessions. The first session will focus on the physical topology, including a short history of spine-and-leaf fabrics, the characteristics of fabrics (versus the broader characteristics of a network), and laying out a spine-and-leaf network to support fabric lifecycle and scaling the network out. The first session will also consider the positive and negative aspects of using single- and multi-forwarding engine (FE) devices to build a fabric, and various aspects of fabric resilience. The second session will begin with transport considerations and quality of experience. The session will then consider underlay control planes, including BGP and IS-IS, and the positive and negative aspects of each. Routing to the host and the interaction between the control plane and automation will be considered in this session, as well. EVPN as an overlay control plane will be considered next, and finally Continue reading
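As a back-of-the-envelope illustration of the fabric-scaling topics the first session covers (this sketch is mine, not course material), a two-tier spine-and-leaf fabric's capacity can be estimated by assuming every leaf has a link to every spine:

```python
def clos_capacity(leaf_ports: int, spine_ports: int, uplinks_per_leaf: int) -> dict:
    """Rough capacity of a two-tier spine-and-leaf (folded Clos) fabric.

    leaf_ports: total ports per leaf switch
    spine_ports: ports per spine switch (each leaf consumes one spine port)
    uplinks_per_leaf: leaf ports reserved as uplinks toward the spines
    """
    server_ports_per_leaf = leaf_ports - uplinks_per_leaf
    max_leaves = spine_ports            # one link from every leaf to every spine
    max_spines = uplinks_per_leaf       # one uplink per spine from each leaf
    return {
        "max_leaves": max_leaves,
        "max_spines": max_spines,
        "server_ports": server_ports_per_leaf * max_leaves,
        "oversubscription": round(server_ports_per_leaf / uplinks_per_leaf, 2),
    }

# 32-port leaves with 8 uplinks each, 32-port spines:
print(clos_capacity(32, 32, 8))  # → 768 server-facing ports at 3:1 oversubscription
```

Numbers like these are why "scaling out" a fabric usually means adding leaves until the spine port count is exhausted, then moving to bigger spines or a third tier.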

Day Two Cloud 194: Unpacking Flexera’s State Of The Cloud Report With Keith Townsend

When you're deep in the trenches of operating your cloud, sometimes it's helpful to step back and get a broader view of what's happening in the industry. On today's Day Two Cloud we explore the results of an annual State of the Cloud survey to get a snapshot of trends impacting the cloud industry, including multicloud adoption, services used, cloud usage and spending, and the challenges of finding and training talent. Our guest to help us unpack the report is Keith Townsend.

The post Day Two Cloud 194: Unpacking Flexera’s State Of The Cloud Report With Keith Townsend appeared first on Packet Pushers.

Learn about Event-Driven Ansible at Red Hat Summit and AnsibleFest 2023

As you may have heard, AnsibleFest will be taking place at Red Hat Summit in Boston May 23-25. This change will allow you to harness everything that Red Hat technology has to offer in a single place and give you even more tools to address your automation needs. Join Ansible and automation-focused audiences to hear from Red Hat and Ansible leaders, customers, and partners while getting the latest on future Ansible product updates, community projects, and what’s coming in IT automation. 

Event-Driven Ansible is a key component to address the complexities of managing varying assets at scale. We announced this product feature as a developer preview last October at AnsibleFest 2022, and we are excited to talk even more about it.  So what can you expect to see about Event-Driven Ansible at AnsibleFest and Red Hat Summit this year? 

  • Red Hat Summit keynote with a customer story around their use of Event-Driven automation
  • AnsibleFest keynote about why the next wave of automation will be event-driven 
  • Breakout sessions from Ansible experts and customers
  • Hands-on labs
  • Discovery Theater mini sessions in the expo hall

Do you have questions about Event-Driven Ansible? Bring them to AnsibleFest and take advantage Continue reading

Introducing Object Lifecycle Management for Cloudflare R2

Last year, R2 made its debut, providing developers with object storage while eliminating the burden of egress fees. (For many, egress costs account for over half of their object storage bills!) Since R2’s launch, tens of thousands of developers have chosen it to store data for many different types of applications.

But for some applications, data stored in R2 doesn’t need to be retained forever. Over time, as this data grows, it can unnecessarily lead to higher storage costs. Today, we’re excited to announce that Object Lifecycle Management for R2 is generally available, allowing you to effectively manage object expiration, all from the R2 dashboard or via our API.

Object Lifecycle Management

Object lifecycles give you the ability to define rules (up to 1,000) that determine how long objects uploaded to your bucket are kept. For example, by implementing an object lifecycle rule that deletes objects after 30 days, you could automatically delete outdated logs or temporary files. You can also define rules to abort unfinished multipart uploads that are sitting around and contributing to storage costs.
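Because R2 exposes an S3-compatible API, rules like the two examples above can also be expressed as an S3-style lifecycle configuration. This is a hedged sketch: the bucket name and endpoint are placeholders, and it assumes R2 accepts `put_bucket_lifecycle_configuration` the way S3 does.

```python
# Lifecycle configuration mirroring the examples in the post: expire logs
# after 30 days, and abort multipart uploads left unfinished for 7 days.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Expiration": {"Days": 30},  # delete objects 30 days after upload
        },
        {
            "ID": "abort-stale-multipart",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},    # applies bucket-wide
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        },
    ]
}

def apply_lifecycle(bucket: str, endpoint_url: str, config: dict) -> None:
    """Apply the rules via the S3-compatible API (requires boto3 and credentials)."""
    import boto3  # imported here so the config above can be built without boto3
    s3 = boto3.client("s3", endpoint_url=endpoint_url)
    s3.put_bucket_lifecycle_configuration(Bucket=bucket, LifecycleConfiguration=config)
```

The dashboard flow below accomplishes the same thing without any code.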

Getting started with object lifecycles in R2

Cloudflare dashboard

  1. From the Cloudflare dashboard, select R2.
  2. Select your R2 bucket.
  3. Navigate to Continue reading

Is ChatGPT an Efficiency Multiplier?

I got this comment on one of my ChatGPT-related posts:

It does save time for things like converting output to YAML (I do not feed it proprietary information), having it write scripts in various languages, or converting configs from one vendor to another. Although the results are often not complete or correct, they save time, so regardless of what we think of it, it is an efficiency multiplier.

I received similar feedback several times, but found that the real answer (as is too often the case) is It Depends.


AT&T, Dell and VMware team to simplify 5G edge deployments

AT&T, Dell and VMware have partnered to create a multi-access edge computing (MEC) solution that includes private 5G wireless deployed on premises. The three vendors combined their experience in 5G communications and edge infrastructure to create an integrated 5G MEC solution – called AT&T MEC with Dell Apex – that’s designed to accelerate enterprise adoption of 5G technology. AT&T provides network connectivity for the solution, while Dell delivers the hardware that AT&T MEC rides on and provides an “as-a-service” capability via Dell Apex Private Cloud. VMware's virtualization and multi-cloud enablement software comes loaded on the Dell Private Cloud VxRail HCI servers.


Nutanix’ new multicloud management products aim for simplification

Hybrid cloud integration provider Nutanix is set to release three new features for its multicloud platform, aimed at simplifying complex environments and application management for IT teams. The first is Project Beacon, a centralization of the basic Nutanix Cloud Platform offering designed to deliver a unified experience for business applications across multiple public clouds. Where most PaaS services are tied to specific public clouds, according to the company, Project Beacon is designed to provide platform services for apps running different services wherever they might be, complete with Kubernetes integration.

Overcoming Security Gaps with Active Vulnerability Management

Organizations can reduce risk in containerized applications by actively managing vulnerabilities: scanning images, automating image deployment, tracking runtime risk, and deploying mitigating controls.

Kubernetes and containers have become de facto standards for cloud-native application development due to their ability to accelerate the pace of innovation and codify best practices for production deployments, but such acceleration can introduce risk if not operationalized properly.

In the architecture of containerized applications, it is important to understand that there are highly dynamic containers distributed across cloud environments. Some are ephemeral and short-lived, while others are more long-term. Traditional approaches to securing applications do not apply to cloud-native workloads because most deployments of containers occur automatically with an orchestrator that removes the manual step in the process. This automatic deployment requires that the images be continuously scanned to identify any vulnerabilities at the time of development in order to mitigate the risk of exploit during runtime.
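The "scan continuously, gate the deployment" pattern described above can be sketched in a few lines. This is an illustrative example, not any particular scanner's schema: the report shape and severity labels here are assumptions, standing in for whatever JSON your image scanner emits.

```python
from typing import Iterable

def deployment_allowed(findings: Iterable[dict],
                       blocked_severities: tuple = ("CRITICAL",)) -> bool:
    """Return False when any unresolved finding has a blocked severity.

    Each finding is a dict like {"id": ..., "severity": ..., "fixed": bool};
    a finding with no "fixed" key is treated as unresolved.
    """
    for f in findings:
        if f.get("severity") in blocked_severities and not f.get("fixed", False):
            return False
    return True

# Sample scan report with one unresolved critical CVE:
report = [
    {"id": "CVE-2023-0001", "severity": "LOW", "fixed": False},
    {"id": "CVE-2023-0002", "severity": "CRITICAL", "fixed": False},
]
print(deployment_allowed(report))  # False: the critical CVE blocks the rollout
```

Wiring a check like this into the orchestrator's admission path is what turns "we scan images" into the automatic gate the paragraph above calls for.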

In addition to these challenges, the software supply chain adds complexity to vulnerability scanning and remediation. Applications increasingly depend on containers and components from third-party vendors and projects. As a result, it can take weeks or longer to patch the affected components and release new software Continue reading

BrandPost: AIOps for NaaS efficiency and how Aruba Global Services uses it

By: Trent Fierro, Content and Operations at HPE Aruba Networking. At the start of a new year, it’s often time for life-changing decisions. Some are fun, like vowing to take more time off from work, and some can make the fun decision come true, like looking for ways to better manage your wired, wireless, or SD-WAN deployments via Network as a Service (NaaS) or AI for IT Operations (AIOps) options. To help, we’ve put together a short eBook that walks you through how a large retailer is using the Aruba Global Services team and Aruba Central with built-in AIOps features to keep their many remote sites running at their best. In this scenario, the customer chose a NaaS partner that takes advantage of AIOps tools to deliver the insights and efficiency that allow their IT team to focus on more pressing tasks.

IBM advances its quantum roadmap as competition heats up

IBM reached a quantum-computing milestone in March with the first U.S. deployment of an on-site, private-sector, IBM-managed quantum computer. The IBM Quantum System One, installed at the Cleveland Clinic, is the world's first quantum computer to be specifically dedicated to healthcare research, with the goal of helping the Cleveland Clinic accelerate biomedical discoveries, according to IBM. The announcement didn't surprise Scott Buchholz, global quantum computing lead at enterprise advisory firm Deloitte. "IBM is a leader in the race to build useful, scalable quantum computers," he says. "Their research teams have been working to build the software, hardware, and supplier ecosystem necessary to support the long-term development of these important technologies."
