Similar to container-native storage, the container-native network abstracts the physical network infrastructure to expose a flat network to containers. It is tightly integrated with Kubernetes to tackle the challenges involved in pod-to-pod, node-to-node, pod-to-service and external communication.
Kubernetes supports a host of networking plugins through the Container Network Interface (CNI), a project hosted by the Cloud Native Computing Foundation.
Container-native networks go beyond basic connectivity. They provide dynamic enforcement of network security rules. Through a predefined policy, it is possible to configure fine-grained control over communications between containers, pods and nodes.
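To illustrate what such fine-grained control looks like, here is a minimal sketch of a Kubernetes NetworkPolicy; the namespace and label names are hypothetical, and enforcement requires a CNI plugin such as Calico:

```
# Hypothetical example: only pods labeled role=frontend may reach pods labeled app=api on port 8080
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: demo          # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080
EOF
```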
Choosing the right networking stack is critical to maintaining and securing the CaaS platform. Customers can select the stack from open source projects including Contiv, Project Calico, Tungsten Fabric and others.
In the previous article, I discussed the cluster of Intel NUCs running Rancher's K3s, along with Calico networking software and Portworx storage. Together, K3s, Calico and Portworx provide the core building blocks of the Kubernetes cluster.
Solution Architecture
The sensors attached to the fans of the turbine provide the current rotational speed, vibration, temperature and noise level. This telemetry stream, along with the deviceID of each fan, acts as the input to the predictive maintenance solution.
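To make the data flow concrete, here is a sketch of publishing one such telemetry reading to the Mosquitto broker; the topic name, field names and broker address are assumptions, not values from the actual solution:

```
# Publish a sample fan telemetry reading to a local Mosquitto broker (all names are illustrative)
mosquitto_pub -h localhost -p 1883 -t "turbine/fan/telemetry" \
  -m '{"deviceID": "fan-01", "rpm": 1450, "vibration": 0.12, "temperature": 68.5, "noise_db": 72.3}'
```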
Mosquitto is connected to InfluxDB to persist the telemetry stream, and a Grafana dashboard is connected to InfluxDB to build a beautiful visualization for our AIoT solution.
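As a rough sketch of the query side (the database and measurement names are assumptions, and InfluxDB 1.x with its InfluxQL CLI is assumed), the Grafana panels would issue queries along these lines:

```
# Query the last hour of average temperature per device from InfluxDB (names are illustrative)
influx -database 'telemetry' -execute \
  'SELECT MEAN("temperature") FROM "fan_metrics" WHERE time > now() - 1h GROUP BY time(5m), "deviceID"'
```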
In the next part of this tutorial, I will discuss the deployment architecture along with the storage and network considerations based on K3s, Calico, and Portworx. Stay tuned.
Janakiram MSV’s Webinar series, “Machine Intelligence and Modern Infrastructure (MI2),” offers informative and insightful sessions covering cutting-edge technologies. Sign up for the upcoming MI2 webinar at http://mi2.live.
If you deal with Kubernetes, you know that storage is one of the core building blocks of the cluster infrastructure. It is as important as the compute building block delivered by the worker nodes. Since the power of the cluster is always measured in terms of the number of worker nodes and their configuration, storage doesn’t get its share of attention.
Imagine this: you configured a powerful bare-metal cluster and want to run a highly available and mission-critical workload on it. Without a solid storage engine, your cluster is only good for running stateless and ephemeral workloads that don’t need persistence. But any enterprise application is a combination of both — stateless and stateful services. You wouldn’t be able to justify the investment made in the brand new Kubernetes cluster if you are unable to run end-to-end applications on it.
When you install the open source, upstream Kubernetes distribution, it doesn’t come with a high-performance storage engine. Unlike managed Kubernetes services in the public cloud, which come with default storage classes mapped to their respective block storage services, your cluster doesn’t have any storage class.
A persistent volume is to storage what a node is to compute.
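To ground that analogy, here is a minimal sketch; the storage class name is a placeholder, and on a bare-metal cluster it would come from whatever storage engine you install, such as Portworx:

```
# Check which storage classes exist; on a freshly installed upstream cluster this list is empty
kubectl get storageclass

# A PersistentVolumeClaim requesting 10Gi from a hypothetical storage class named "fast"
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast
  resources:
    requests:
      storage: 10Gi
EOF
```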
Project Calico is virtual networking software that integrates with Kubernetes, the open source container orchestration software. While Kubernetes has extensive support for Role-Based Access Control (RBAC), its default networking stack doesn’t offer fine-grained network policies. This tutorial explores Calico network policies on Google Kubernetes Engine (GKE). Unlike other managed Kubernetes services, GKE comes with an integrated Calico stack that can be enabled during the cluster creation. It is also possible to configure Calico on an existing, running GKE cluster.
Start by launching a standard GKE cluster with network policies enabled. This can be done by selecting the Enable network policy checkbox available under the Availability, networking, security, and additional features section.
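The same can be done from the command line. A sketch using gcloud follows; the cluster name and zone are placeholders:

```
# Create a GKE cluster with Calico-backed network policy enforcement enabled (names are placeholders)
gcloud container clusters create calico-demo \
  --zone us-central1-a \
  --enable-network-policy

# Or enable it on an existing cluster: first enable the addon, then turn on enforcement
gcloud container clusters update calico-demo --zone us-central1-a --update-addons=NetworkPolicy=ENABLED
gcloud container clusters update calico-demo --zone us-central1-a --enable-network-policy
```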
After the cluster is up and running, we can check for the Calico pods deployed as part of a DaemonSet in the kube-system namespace. Let’s download calicoctl, Calico’s CLI, to explore the environment further. We need to point calicoctl to the etcd endpoints of the GKE cluster. This can be done with the settings below:
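A sketch of these steps follows; the etcd endpoint is a placeholder you would replace with the value from your own cluster:

```
# Confirm the Calico pods created by the DaemonSet in the kube-system namespace
kubectl get pods -n kube-system -l k8s-app=calico-node

# After downloading the calicoctl binary from the Project Calico releases page,
# point it at the cluster's etcd endpoints (placeholder address; substitute your own)
export DATASTORE_TYPE=etcdv3
export ETCD_ENDPOINTS=http://10.0.0.2:6666
./calicoctl get nodes
```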
Now, let’s go ahead and deploy one of the samples provided by Project Calico. Run the below commands to deploy the application. You can download the YAML files from the Project Calico documentation.
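The exact manifests come from the Project Calico sample; as a stand-in, the following sketch shows the general shape of the exercise, using an illustrative nginx deployment and a default-deny ingress policy:

```
# Create a namespace and a simple backend to protect (all names are illustrative)
kubectl create namespace policy-demo
kubectl create deployment nginx --image=nginx --namespace=policy-demo
kubectl expose deployment nginx --port=80 --namespace=policy-demo

# Deny all ingress traffic to pods in the namespace
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: policy-demo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF

# With the policy enforced by Calico, this request should time out
kubectl run access --rm -ti --restart=Never --image=busybox --namespace=policy-demo \
  -- wget -q --timeout=5 -O - http://nginx
```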