During Developer Week a few months ago, we opened up the Beta for Cloudflare for SaaS: a one-stop shop for SaaS providers looking to provide fast load times, unparalleled redundancy, and the strongest security to their customers.
Since then, we’ve seen numerous developers integrate with our technology, allowing them to spend their time building out their solution instead of focusing on the burdens of running a fast, secure, and scalable infrastructure — after all, that’s what we’re here for.
Today, we are very excited to announce that Cloudflare for SaaS is generally available, so that every customer, big and small, can use Cloudflare for SaaS to continue scaling and building their SaaS business.
If you’re running a SaaS company, you have customers that are fully reliant on you for your service. That means you’re responsible for keeping their domain fast, secure, and protected. But this isn’t simple. There’s a long checklist you need to get through to put a solution in your customers’ hands:
In May 2021, Javier Antich ran a great webinar explaining the principles of Artificial Intelligence and Machine Learning and how they apply (or not) to networking.
He started with a brief overview of AI/ML hype that should help you understand why there's a bit of a difference between self-driving cars (not that we've gotten there) and self-driving networks.
Table of Contents
VPC
VPC Introduction
The Structure of Availability Zone
Create VPC - AWS Console
Select Region
Create VPC
DHCP Options Set
Main Route Table
VPC Verification Using AWS CLI
Create VPC - AWS CloudFormation
Create Template
Upload Template
Verification Using AWS Console
VPC Verification Using AWS CLI
Create Subnets - AWS Console
Create Subnets
Route Tables
Create Subnets - AWS Console
Create Subnets - AWS CloudFormation
Create Network ACL
VPC Control-Plane - Mapping Service
Introduction
Mapping Register
Mapping Request - Reply
Data-Plane Operation
References
Introduction
Allow Internet Access from Subnet
Create Internet Gateway
Update Subnet Route Table
Network Access Control List
Associate SG and Elastic IP with EC2
Create Security Group
Launch an EC2 Instance
Allocate Elastic IP Address from Amazon IPv4 Pool
Reachability Analyzer
Billing
Introduction
Create NAT Gateway and Allocate …
Containers have changed how applications are developed and deployed, with Kubernetes ascending as the de facto means of orchestrating containers, speeding development, and increasing scalability. Modern application workloads with microservices and containers eventually need to communicate with other applications or services that reside on public or private clouds outside the Kubernetes cluster. However, securely controlling granular access between these environments continues to be a challenge. Without proper security controls, containers and Kubernetes become an ideal target for attackers. At this point in the Kubernetes journey, the security team will insist that workloads meet security and compliance requirements before they are allowed to connect to outside resources.
As shown in the table below, Calico Enterprise and Calico Cloud offer multiple solutions that address different access control scenarios to limit workload access between Kubernetes clusters and APIs, databases, and applications outside the cluster. Although each solution addresses a specific requirement, they are not necessarily mutually exclusive.
| Your requirement | Calico's solution | Advantages |
| --- | --- | --- |
| You want to use existing firewalls and firewall managers to enforce granular egress access control of Kubernetes workloads at the destination (outside the cluster) | Egress Access Gateway | Security teams can leverage existing investments, experience, and training in firewall infrastructure … |
Merit Network is a non-profit ISP that provides network and security services for universities, K-12 schools, libraries, and other educational communities in Michigan. On today's IPv6 Buzz we speak with Lola Killey, Infrastructure and Research Support Analyst, about how Merit encouraged IPv6 adoption and what other institutions can learn from that effort.
The post IPv6 Buzz 087: How Merit Network Used Carrots Over Sticks To Drive IPv6 Adoption appeared first on Packet Pushers.
Use Docker to run the pre-built sflow/prometheus image, which packages sFlow-RT with the sflow-rt/prometheus application, and configure sFlow agents to stream data to this instance:

docker run -p 8008:8008 -p 6343:6343/udp --name sflow-rt -d sflow/prometheus
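The application serves its metrics in Prometheus exposition format over HTTP on port 8008. A minimal sketch of parsing that text format follows; the parser and the sample payload are illustrative (the metric and label names are made up, not actual sFlow-RT output), and it ignores escaping of special characters inside label values.

```python
# Minimal parser for Prometheus exposition-format text, such as the
# output the sflow-rt/prometheus application serves over HTTP.
# The sample payload below is made up for illustration.

def parse_prometheus_text(text):
    """Return {metric_name: [(labels_dict, value), ...]} from exposition text."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blank, HELP, and TYPE lines
            continue
        name_part, _, value = line.rpartition(" ")
        if "{" in name_part:
            name, _, label_blob = name_part.partition("{")
            labels = {}
            for item in label_blob.rstrip("}").split(","):
                if item:
                    k, _, v = item.partition("=")
                    labels[k] = v.strip('"')
        else:
            name, labels = name_part, {}
        metrics.setdefault(name, []).append((labels, float(value)))
    return metrics

sample = """# HELP ifinoctets Input octets per second
ifinoctets{agent="10.0.0.1",ifname="eth0"} 12345.6
ifinoctets{agent="10.0.0.2",ifname="eth0"} 789.0
"""
parsed = parse_prometheus_text(sample)
print(parsed["ifinoctets"][0])
# → ({'agent': '10.0.0.1', 'ifname': 'eth0'}, 12345.6)
```

In practice you would fetch the text with an HTTP client instead of a hard-coded sample, or simply point a Prometheus scrape job at the endpoint.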
Create an InfluxDB Cloud account. Click the Data tab. Click on the Telegraf option and the InfluxDB Output Plugin button to get the URL to post data. Click the API Tokens option and generate a token.

[agent]
  interval = "15s"
  round_interval = true
  metric_batch_size = 5000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = "1s"
  hostname = ""
  omit_hostname = true

[[outputs.influxdb_v2]]
  urls = ["INFLUXDB_CLOUD_URL"]
  token = …
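The outputs.influxdb_v2 plugin writes each metric to InfluxDB as a line-protocol record: measurement, comma-separated tags, fields, and a nanosecond timestamp. A minimal sketch of that encoding follows; the measurement, tag, and field names are made up, and the sketch omits the escaping of spaces and commas that the full protocol requires.

```python
# Build an InfluxDB line-protocol record:
#   measurement,tag1=v1,tag2=v2 field1=v1,field2=v2 timestamp_ns
# Names below are made up for illustration; escaping of special
# characters in keys/values is intentionally omitted.

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol(
    "ifinoctets",
    {"agent": "10.0.0.1", "ifname": "eth0"},
    {"value": 12345.6},
    1609459200000000000,
)
print(line)
# → ifinoctets,agent=10.0.0.1,ifname=eth0 value=12345.6 1609459200000000000
```

Telegraf batches records like this and POSTs them to the InfluxDB v2 write endpoint using the token configured above.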
Welcome back! In my first post here on Packet Pushers (Applying A Software Design Pattern To Network Automation – Packet Pushers) we explored the Model View Controller (MVC) software design pattern and how it can be applied to network automation. This post will go a little deeper into how this is achieved and the mix […]
The post Reimagining ‘Show IP Interface Brief’ appeared first on Packet Pushers.
One of the most interesting recent developments in natural language processing is the T5 family of language models. These are transformer-based sequence-to-sequence models trained on multiple different tasks. The main insight is that different tasks provide a broader context for tokens and help the model statistically approximate the natural language context of words.
Tasks can be specified by providing an input context, often with a "command" prefix, and then predicting an output sequence. This Google AI blog article contains an excellent description of how multiple tasks are specified.
Naturally, these models are attractive to anyone working on NLP. The drawback, however, is that they are large and require very significant computational resources, beyond the resources of most developers' desktop machines. Fortunately, Google makes its Colaboratory platform free to use (with some resource limitations). It is the ideal platform for experimenting with this family of models in order to see how they perform on a specific application.
When I decided to do so for a text-classification problem, I had to piece together information from a few different sources and try a few different approaches. I decided to put together a mini tutorial on how to fine-tune a T5 model …
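The task-prefix idea above amounts to plain string manipulation: a classification example becomes an (input, target) text pair, with the "command" prefix on the input and the label as the target sequence. A minimal sketch follows; the prefix and labels are hypothetical, and real fine-tuning would feed such pairs through a T5 tokenizer and model rather than stop at strings.

```python
# Cast a classification example into T5's text-to-text format:
# the task "command" prefix is prepended to the input, and the
# label string itself becomes the target sequence.
# The prefix and labels are made up for illustration.

def to_text2text(task_prefix, text, label):
    """Turn a (text, label) classification example into a seq2seq pair."""
    return {"input": f"{task_prefix}: {text}", "target": label}

examples = [
    ("the product works great", "positive"),
    ("arrived broken and late", "negative"),
]
pairs = [to_text2text("classify sentiment", t, l) for t, l in examples]
print(pairs[0]["input"])   # → classify sentiment: the product works great
print(pairs[0]["target"])  # → positive
```

At inference time the model generates the label as text, so classification reduces to comparing the generated string against the known label set.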
A while ago my friend Nicola Modena sent me another intriguing curveball:
Imagine a CTO who has invested millions in a super-secure data center and wants to consolidate all compute workloads. If you were asked to run a BGP Route Reflector as a VM in that environment, and would like to bring OSPF or ISIS to that box to enable BGP ORR, would you use a GRE tunnel to avoid a dedicated VLAN or boring other hosts with routing protocol hello messages?
While there might be good reasons for doing that, my first knee-jerk reaction was: