In this blog series, we will continue discussing the deployment of Red Hat Ansible Automation Platform on Microsoft Azure.
The first blog covered the deployment process, as well as how to access a Red Hat Ansible Automation Platform on Azure deployment that was created using the “Public” access option.
In this blog, we’ll cover how to access the managed application when it’s deployed using the “Private” access option.
There are three ways you can access Red Hat Ansible Automation Platform on Azure if you selected “Private” access.
Let’s assume that you have already configured network peering between the Red Hat Ansible Automation Platform on Azure deployment’s virtual network and your existing Azure Virtual Networks. Network peering is an Azure feature that connects two or more Azure virtual networks so traffic can be routed to resources across those networks. See Microsoft Azure documentation for more information on network peering types.
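If you haven’t set that peering up yet, a minimal sketch with the Azure CLI looks roughly like the following; the resource group and virtual network names are hypothetical, and peering has to be created in both directions before traffic will flow:

```bash
# Hypothetical names: "aap-rg"/"aap-vnet" for the managed application's network,
# "corp-rg"/"corp-vnet" for your existing virtual network.

# Peer your existing VNet to the managed application's VNet
az network vnet peering create \
  --name corp-to-aap \
  --resource-group corp-rg \
  --vnet-name corp-vnet \
  --remote-vnet "$(az network vnet show --resource-group aap-rg --name aap-vnet --query id --output tsv)" \
  --allow-vnet-access

# ...and the reverse direction
az network vnet peering create \
  --name aap-to-corp \
  --resource-group aap-rg \
  --vnet-name aap-vnet \
  --remote-vnet "$(az network vnet show --resource-group corp-rg --name corp-vnet --query id --output tsv)" \
  --allow-vnet-access
```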
Regardless of whether you selected public or private Continue reading

One of the many reasons engineers should work for a vendor, consulting company, or someone other than a single network operator at some point in their career is to develop a larger view of network operations. What are common ways of doing things? What are uncommon ways? In what ways is every network broken? Over time, if you see enough networks, you start seeing common themes and ideas. Just like history, networks might not always be the same, but the problems we all encounter often rhyme. Ken Calenza joins Tom Ammon, Eyvonne Sharp, and Russ White to discuss these common traits—ten things I know about your network.

Thus far, this series of posts has been all about Layer 2 over Layer 3 models: customer Ethernet frames encapsulated in UDP, traversing L3 networks. The routing has been confined to the underlay, and the customer traffic has stayed within the same network.
No longer! In this post, things start getting a little more interesting, as we look at routing the customer traffic with an EVPN feature called Integrated Routing and Bridging, or IRB.
To define terms, when I say 'intra-subnet', I mean L2 traffic transferred between nodes in the same subnet.
'Inter-subnet' refers to a traffic flow that traverses subnet boundaries.
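As a rough, illustrative sketch of what symmetric IRB can look like on a Linux/FRR leaf (all names, VNIs, addresses, and the AS number below are invented for the example, and the VXLAN netdev plumbing is omitted):

```bash
# Tenant VRF for routed (inter-subnet) traffic; table ID is arbitrary here.
ip link add red type vrf table 1001
ip link set red up

# SVI for subnet 10.1.1.0/24, placed in the VRF. With an anycast gateway design,
# the same address (and MAC) would be configured on every leaf.
ip link add br100 type bridge
ip link set br100 master red
ip addr add 10.1.1.1/24 dev br100
ip link set br100 up

# FRR: map an L3VNI to the VRF and advertise the local VNIs in EVPN.
vtysh -c 'configure terminal' \
      -c 'vrf red' \
      -c ' vni 104001' \
      -c 'exit-vrf' \
      -c 'router bgp 65001' \
      -c ' address-family l2vpn evpn' \
      -c '  advertise-all-vni' \
      -c ' exit-address-family' \
      -c 'end'
```

With the L3VNI in place, traffic crossing subnet boundaries is routed in the VRF on the ingress leaf and carried to the egress leaf over that VNI, while intra-subnet traffic keeps using the plain L2VNI bridging path.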
A few weeks ago I wrote about tradeoffs vendors have to make when designing data center switching ASICs, followed by another blog post discussing how to select the ASICs for various roles in data center fabrics.
You REALLY SHOULD read the two blog posts before moving on; here’s the buffer-related TL&DR for those of you ignoring my advice ;)
After its acquisition of ATI in 2006, the maturation of its discrete GPUs with the Instinct line over the past few years, and the acquisitions of Xilinx and Pensando in 2022, AMD is no longer just a second source of X86 processors. …
Chip Roadmaps Unfold, Crisscrossing And Interconnecting, At AMD was written by Timothy Prickett Morgan at The Next Platform.
Like others in the datacenter infrastructure space, Cisco Systems has had a front-row seat to the rapid changes in enterprise tech, from the accelerating adoption of the multicloud model to the increasing decentralization of the IT environment, rippling out to the network edge. …
Controlling The Network When You Don’t Own All Of It was written by Jeffrey Burt at The Next Platform.
As organizations transition from monolithic services in traditional data centers to a microservices architecture in a public cloud, security becomes a bottleneck and causes delays in achieving business goals. Traditional security paradigms based on perimeter-driven firewalls do not scale for communication between workloads within the cluster and third-party APIs outside the cluster. The traditional paradigm also does not provide granular access controls for workloads or support a zero-trust architecture, leaving cloud-native applications with a larger attack surface.
Calico Cloud offers an easy 5-step process for fast-tracking your organization’s cloud-native application journey by making security a business enabler while mitigating risk.
Gaining visibility into workload-to-workload communication with all metadata context intact is one of the biggest challenges when it comes to deploying microservices. You can’t apply security controls to what you can’t see. In this new cloud-native distributed architecture, traffic is not just flowing from a client to a server but also between namespaces spread across many nodes, causing flow proliferation. With Calico Cloud, you get a dynamic visualization of all traffic flowing through your network in an easy-to-read UI.
Example 1: You can view all the inside and outside (east-west and north-south) connections directly from Calico’s Continue reading
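To give a flavour of the workload-level controls this visibility feeds into, here is a hedged sketch of a Calico (projectcalico.org/v3) NetworkPolicy; the namespace, labels, and port are hypothetical, and applying it with kubectl assumes the Calico API server is installed (otherwise calicoctl apply works the same way):

```bash
# Hypothetical policy: only pods labelled app=frontend may reach the
# backend pods in the "shop" namespace on TCP/8080.
kubectl apply -f - <<'EOF'
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: shop
spec:
  selector: app == 'backend'
  types:
  - Ingress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: app == 'frontend'
    destination:
      ports:
      - 8080
EOF
```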
In today’s sponsored Heavy Networking we talk to Juniper Apstra about how Apstra delivers on unified data center operations, why fabrics are everywhere, how Apstra differs from other automation and intent solutions, and more. Our guest is Mansour Karam, VP of Product Management.
The post Heavy Networking 635: Unified Network Fabrics With Juniper Apstra (Sponsored) appeared first on Packet Pushers.