What would you build if you could treat your network infrastructure programmatically? That’s what we’re going to consider in today’s sponsored Heavy Networking episode with Nokia. Nokia’s SR Linux is infrastructure-as-code friendly, and their NetOps Development Kit allows you to think of the network as data models and build all kinds of useful tools. Our guest is Bruce Wallis, Senior Director of Product Management in Data Center Switching at Nokia.
The post Heavy Networking 603: Network Apps For Smarter Network Ops With Nokia (Sponsored) appeared first on Packet Pushers.
The pandemic may have disrupted the global business world, but it has also given rise to many new technological innovations. With the majority of people around the world working from home, new and innovative networking products were needed to support this new normal.
We have gathered a list of some of the most amazing networking products for 2021.
Alkira Cloud Services Exchange is a new cloud computing service from Alkira that allows for the cost-effective and scalable deployment of applications in the cloud. It provides network infrastructure, including data centers, storage, servers, racks, power, and internet connections, to customers at a fraction of what it would cost to build such infrastructure from scratch. The service scales deployments to each customer’s needs, and so far it has been adopted by many companies with a high rate of success.
The Aruba 630 Series brings Wi-Fi 6E, the next generation of Wi-Fi. It delivers up to four times the performance and twice the coverage of the previous generation, and it supports a higher density of active clients and endpoints with greater throughput.
The Aruba 630 Series is well suited for dense, Continue reading
It’s a busy week for me thanks to Security Field Day but I didn’t want to leave you without some thoughts that have popped up this week from the discussions we’ve been having. Security is one of those topics that creates a lot of thought-provoking ideas and makes you seriously wonder if you’re doing it right all the time.
During Developer Week a few months ago, we opened up the Beta for Cloudflare for SaaS: a one-stop shop for SaaS providers looking to provide fast load times, unparalleled redundancy, and the strongest security to their customers.
Since then, we’ve seen numerous developers integrate with our technology, allowing them to spend their time building out their solution instead of focusing on the burdens of running a fast, secure, and scalable infrastructure — after all, that’s what we’re here for.
Today, we are very excited to announce that Cloudflare for SaaS is generally available, so that every customer, big and small, can use Cloudflare for SaaS to continue scaling and building their SaaS business.
If you’re running a SaaS company, you have customers that are fully reliant on you for your service. That means you’re responsible for keeping their domain fast, secure, and protected. But this isn’t simple. There’s a long checklist you need to get through to put a solution in your customers’ hands:
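The checklist itself isn’t reproduced in this excerpt, but one representative task for a SaaS provider on Cloudflare is onboarding a customer’s own domain. The hedged sketch below uses Cloudflare’s custom hostnames API, which underpins Cloudflare for SaaS; the zone ID, API token, and hostname are placeholders, and the exact request body may differ for your configuration.

```sh
# Hedged sketch: onboard a customer's domain (app.customer.example) as a custom
# hostname on your SaaS zone. ZONE_ID and CF_API_TOKEN are placeholders.
curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/custom_hostnames" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"hostname": "app.customer.example", "ssl": {"method": "http", "type": "dv"}}'
```

From there, the customer typically points a CNAME at your zone, and Cloudflare handles certificate issuance and renewal for that hostname.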
In May 2021, Javier Antich ran a great webinar explaining the principles of artificial intelligence and machine learning and how they apply (or not) to networking.
He started with a brief overview of AI/ML hype that should help you understand why there’s a bit of a difference between self-driving cars (not that we got there) and self-driving networks.
Table of Contents
VPC
VPC Introduction
The Structure of Availability Zone
Create VPC - AWS Console
Select Region
Create VPC
DHCP Options Set
Main Route Table
VPC Verification Using AWS CLI
Create VPC - AWS CloudFormation
Create Template
Upload Template
Verification Using AWS Console
VPC Verification Using AWS CLI
Create Subnets - AWS Console
Create Subnets
Route Tables
Create Subnets - AWS Console
Create Subnets - AWS CloudFormation
Create Network ACL
VPC Control-Plane - Mapping Service
Introduction
Mapping Register
Mapping Request - Reply
Data-Plane Operation
References
Introduction
Allow Internet Access from Subnet
Create Internet Gateway
Update Subnet Route Table
Network Access Control List
Associate SG and Elastic-IP with EC2
Create Security Group
Launch an EC2 Instance
Allocate Elastic IP Address from Amazon IPv4 Pool
Reachability Analyzer
Billing
Introduction
Create NAT Gateway and Allocate Continue reading
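As a rough companion to the "Create VPC" and "VPC Verification Using AWS CLI" entries above, here is a minimal AWS CLI sketch; the CIDR block and region are arbitrary examples rather than values from the material itself.

```sh
# Create a VPC with an example CIDR block (values are illustrative only).
aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region eu-west-1

# Verify the VPC from the CLI by filtering on the same CIDR.
aws ec2 describe-vpcs --filters Name=cidr,Values=10.0.0.0/16 --region eu-west-1
```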
Containers have changed how applications are developed and deployed, with Kubernetes ascending as the de facto means of orchestrating containers, speeding development, and increasing scalability. Modern application workloads with microservices and containers eventually need to communicate with other applications or services that reside on public or private clouds outside the Kubernetes cluster. However, securely controlling granular access between these environments continues to be a challenge. Without proper security controls, containers and Kubernetes become an ideal target for attackers. At this point in the Kubernetes journey, the security team will insist that workloads meet security and compliance requirements before they are allowed to connect to outside resources.
As shown in the table below, Calico Enterprise and Calico Cloud offer multiple solutions that address different access control scenarios to limit workload access between Kubernetes clusters and APIs, databases, and applications outside the cluster. Although each solution addresses a specific requirement, they are not necessarily mutually exclusive.
| Your requirement | Calico's solution | Advantages |
| --- | --- | --- |
| You want to use existing firewalls and firewall managers to enforce granular egress access control of Kubernetes workloads at the destination (outside the cluster) | Egress Access Gateway | Security teams can leverage existing investments, experience, and training in firewall infrastructure and Continue reading |
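Calico's Egress Access Gateway is configured through Calico-specific resources not shown here, but the underlying idea of limiting workload egress can be sketched with a plain Kubernetes NetworkPolicy. The example below is a generic illustration, not the Egress Gateway feature; the names, labels, and addresses are hypothetical.

```sh
# Hedged sketch: a standard Kubernetes NetworkPolicy (not Calico's Egress Gateway)
# that limits egress from app=api pods to one external CIDR on TCP/5432.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-egress-to-db
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32
      ports:
        - protocol: TCP
          port: 5432
EOF
```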
Over the last twenty years, memory has risen from 10% of the semiconductor market to almost 30%, a trend that is expected to continue, propelled by compute at the edge all the way up to the datacenter. …
Micron Urges Government Investment with R&D Spend was written by Nicole Hemsoth at The Next Platform.
Merit Network is a non-profit ISP that provides network and security services for universities, K-12 schools, libraries, and other educational communities in Michigan. On today's IPv6 Buzz we speak with Lola Killey, Infrastructure and Research Support Analyst, about how Merit encouraged IPv6 adoption and what other institutions can learn from that effort.
The post IPv6 Buzz 087: How Merit Network Used Carrots Over Sticks To Drive IPv6 Adoption appeared first on Packet Pushers.
Using CLI tools—instead of a “wall of YAML”—to install things onto Kubernetes is a growing trend, it seems. Istio and Cilium, for example, each have a CLI tool for installing their respective project. I get the reasons why; you can build logic into a CLI tool that you can’t build into a YAML file. Kuma, the open source service mesh maintained largely by Kong and a CNCF Sandbox project, takes a similar approach with its `kumactl` tool. In this post, however, I’d like to take a look at creating reusable YAML to install Kuma, instead of using the CLI tool every time you install.

You might be wondering, “Why?” That’s a fair question. Currently, the `kumactl` tool, unless configured otherwise, will generate a set of TLS assets to be used by Kuma (and embeds some of those assets in the YAML regardless of the configuration). Every time you run `kumactl`, it will generate a new set of TLS assets. This means that the command is not declarative, even if the output is. Unfortunately, you can’t reuse the output, as that would result in duplicate TLS assets across installations. That brings me to the point of this Continue reading
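As a minimal sketch of the starting point (the file name is arbitrary), you can render the manifests `kumactl` would apply and feed them to `kubectl` yourself, keeping in mind the caveat above that each run embeds freshly generated TLS assets, so the raw output should not simply be copied across separate installations.

```sh
# Render the control-plane manifests to a file instead of piping them to kubectl.
kumactl install control-plane > kuma-control-plane.yaml

# Inspect or version the YAML, then apply it to the cluster.
kubectl apply -f kuma-control-plane.yaml
```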
Use Docker to run the pre-built sflow/prometheus image, which packages sFlow-RT with the sflow-rt/prometheus application:

docker run -p 8008:8008 -p 6343:6343/udp --name sflow-rt -d sflow/prometheus

Configure sFlow agents to stream data to this instance.
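Once agents are sending sFlow, you can sanity-check the deployment against sFlow-RT's REST API. The paths below are the ones referenced in sFlow-RT documentation and its prometheus application, so treat them as assumptions to verify against your version.

```sh
# List the sFlow agents sFlow-RT is currently hearing from (path assumed from
# the sFlow-RT REST API; verify against your deployment).
curl http://localhost:8008/agents/json

# Fetch metrics in Prometheus exposition format from the sflow-rt/prometheus app.
curl http://localhost:8008/prometheus/metrics/ALL/ALL/txt
```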
Create an InfluxDB Cloud account. Click the Data tab. Click on the Telegraf option and the InfluxDB Output Plugin button to get the URL to post data. Click the API Tokens option and generate a token.

[agent]
interval = "15s"
round_interval = true
metric_batch_size = 5000
metric_buffer_limit = 10000
collection_jitter = "0s"
flush_interval = "10s"
flush_jitter = "0s"
precision = "1s"
hostname = ""
omit_hostname = true
[[outputs.influxdb_v2]]
urls = ["INFLUXDB_CLOUD_URL"]
token = Continue reading
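Assuming the settings above are saved as telegraf.conf, with the remaining values such as the token filled in from your InfluxDB Cloud account, one way to run the agent is with the official telegraf Docker image, which reads its configuration from /etc/telegraf/telegraf.conf.

```sh
# Run Telegraf with the configuration file mounted read-only into the container.
docker run -d --name telegraf \
  -v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
  telegraf
```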