Volta Networks Virtualizes Routing on Edgecore White Boxes
Volta Networks aims to squeeze legacy vendors’ service provider business by bringing its virtual...
The Kubernetes community recently released v1alpha2 of Cluster API (a monumental effort, congrats to everyone involved!), and with it comes a number of fairly significant changes. Aside from the new Quick Start, there isn’t (yet) a great deal of documentation on Cluster API (hereafter just called CAPI) v1alpha2, so in this post I’d like to explore the structure of the CAPI v1alpha2 YAML manifests, along with links back to the files that define the fields for the manifests. I’ll focus on the CAPI provider for AWS (affectionately known as CAPA).
As a general note, any links back to the source code on GitHub will reference the v0.2.1 release for CAPI and the v0.4.0 release for CAPA, which are the first v1alpha2 releases for these projects.
Let’s start with looking at a YAML manifest to define a Cluster in CAPA (this is taken directly from the Quick Start):
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: capi-quickstart
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: capi-quickstart
spec:
  region: us-east-1
  sshKeyName: default
Right off the bat, Continue reading
TL;DR: Dropbox is harassing me about new products. Combined with the poor performance of its bloated app and its enormous wasting of my disk space, I'm getting close to quitting. I've been using Dropbox to synchronise files between my various devices (desktop, laptop, tablet and smartphone) for quite some time. I like it enough to […]
The post Hey Dropbox, I Don’t Want The Bloat. Or the Constant Ads. appeared first on EtherealMind.
The autumn 2019 webinar season is in full swing ;) We’re almost done with the Azure Networking webinar (the last session will take place on October 10th) and the network automation course is nicely chugging along – a few weeks ago Matthias Luft talked about supply-chain security in open-source software, and today we’ll enjoy the Start With a Single Source of Truth presentation by Damien Garros.
Dinesh Dutt is coming back on October 8th with another installment of EVPN saga, this time focused on running EVPN on Linux hosts, and on October 22nd Donald Sharp will tell us all about the underlying magic box – the Free Range Routing software.
But there are even more open-source goodies waiting for you: on October 15th we’ll have Pete Lumbis describing the new features Cumulus Linux got in the last year, including AutoBGP and AutoMLAG.
Almost everything I mentioned above is accessible with the Standard ipSpace.net Subscription; you’ll need the Expert Subscription to enjoy the automation course contents.
China's Baidu donated the Baetyl seed code, while Dianomic contributed Fledge.
This is the second of the two part EVPN-PIM blog series exploring the feature and network deployment choices. If you missed part one, learn about BUM optimization using PIM-SM here.
Servers in a data-center Clos are typically dual-connected to a pair of Top-of-Rack switches for redundancy purposes. These ToR switches are set up as an MLAG (Multi-chassis Link Aggregation) pair, i.e. the server sees them as a single switch with two or more bonded links. In reality there are two distinct switches with an ISL/peerlink between them, syncing databases and pretending to be one.
The MLAG switches (L11, L12 in the sample setup) use a single VTEP IP address i.e. appear as an anycast-VTEP or virtual-VTEP.
Additional procedures involved in EVPN-PIM with anycast VTEPs are discussed in this blog.
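To make the anycast-VTEP idea concrete: in Cumulus Linux, an MLAG pair typically shares a single VTEP address configured on each switch’s loopback alongside its unique address. A hedged sketch in /etc/network/interfaces syntax (all addresses hypothetical, not taken from the sample setup above) might look like:

auto lo
iface lo inet loopback
    # Unique loopback per switch (e.g. on L11)
    address 10.0.0.11/32
    # Shared anycast VTEP IP, identical on both MLAG peers (L11 and L12),
    # so remote VTEPs see the pair as one virtual VTEP
    clagd-vxlan-anycast-ip 10.0.0.100

The peer switch (L12) would carry its own unique loopback address but the same clagd-vxlan-anycast-ip, which is what makes the pair appear as a single VTEP to the rest of the fabric.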
Friend: “So you are working on PIM-MLAG?”
Me: “No, I am implementing EVPN-PIM in a MLAG setup”
Friend: “Yup, same difference”
Me: “No, it is not!”
Friend: “OK, OK, so you are implementing PIM-EVPN with MLAG?”
Me: “Yes!”
Friend: “i.e. PIM-MLAG?”
Me: “Well, now that you put it like that….……..NO, I AM NOT!! Continue reading
Oracle reportedly cut 10% to 15% of its Data Cloud business unit this week amid its ongoing...
Alibaba said it developed a new chip for AI inference that speeds up machine learning tasks on its...
In a win for CloudGenix, Hypercore Networks today announced a partnership with the SD-WAN vendor to...
With the addition of its cloud offerings, Yellowbrick offers enterprise customers a platform to run...
Qualcomm showcased its long history of innovation in wireless technology and contends that it’s...
As told to me:
The post Network Outages and Idiots With Guns appeared first on EtherealMind.
I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series, and discussed processes as pods and controllers in part 2. In this post, I’ll explain how to configure networking services in Kubernetes to allow pods to communicate reliably with each other.
At this point, we’ve deployed our workloads as pods managed by controllers, but there’s no reliable, practical way for pods to communicate with each other, nor is there any way for us to visit any network-facing pod from outside the cluster. The Kubernetes networking model says that any pod can reach any other pod at the target pod’s IP by default, but discovering those IPs by hand, and maintaining that list while pods are potentially being rescheduled and getting entirely new IPs, would be a lot of tedious, fragile work.
Instead, we need to think about Kubernetes services when we’re ready to start building the networking part of our application. Kubernetes services provide reliable, simple networking endpoints for routing traffic to pods via the fixed metadata defined in the controller that created them, rather than via unreliable pod IPs. For simple applications, Continue reading
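As a minimal sketch of the idea (names and ports are hypothetical, not from the series), a Service selects pods by the labels their controller stamps on them and gives them a stable virtual IP and DNS name:

apiVersion: v1
kind: Service
metadata:
  name: web-svc            # stable DNS name: web-svc.<namespace>.svc.cluster.local
spec:
  selector:
    app: web               # matches the pod labels defined in the controller's pod template
  ports:
    - port: 80             # port clients use to reach the Service
      targetPort: 8080     # port the matching pods actually listen on

Because the selector matches labels rather than pod IPs, the Service keeps routing correctly even as the controller replaces pods and their IPs change.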
D-Wave Systems is getting ready to roll out its next quantum annealing computer, a system that will encompass more than 5,000 low-noise qubits, as well as a new topology to connect them. …
Quantum Annealing Advances Woven Into Next-Gen D-Wave System was written by Michael Feldman at The Next Platform.
The ongoing journey to bring more enterprise high-performance computing (HPC) workloads into the cloud has been a bumpy one with its share of roadblocks and setbacks. …
HPC Eases Its Way Into The Cloud was written by Jeffrey Burt at The Next Platform.