Setting up an AWS-Integrated Kubernetes 1.15 Cluster with Kubeadm

In this post, I’d like to walk through setting up an AWS-integrated Kubernetes 1.15 cluster using kubeadm. Over the last year or so, the power and utility of kubeadm have vastly improved (thank you to all the contributors who have spent countless hours!), and it is now, in my opinion at least, at a point where setting up a well-configured, highly available Kubernetes cluster is pretty straightforward.

This post builds on the official documentation for setting up a highly available Kubernetes 1.15 cluster, as well as on previous posts I’ve written about setting up Kubernetes clusters with the AWS cloud provider:

All of these posts focus on Kubernetes releases prior to 1.15, and given the changes in kubeadm in the 1.14 and 1.15 releases, I felt it would be helpful to revisit the process for 1.15. For now, I’m focusing on the in-tree AWS cloud provider; however, in the very near future I’ll look at using the new external AWS cloud provider.
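To give a sense of what the in-tree provider setup involves, here is a minimal, hedged sketch of a kubeadm configuration that enables the AWS cloud provider on the first control plane node (field names follow the kubeadm v1beta2 API used in 1.15; the control plane endpoint and version below are placeholders, and the posts above cover the full details):

cat > kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws    # the kubelet must also run with the AWS provider enabled
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0                     # adjust to the version you are installing
controlPlaneEndpoint: cp-lb.example.com:6443   # placeholder: DNS name of your API load balancer
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws
EOF
kubeadm init --config=kubeadm.yaml --upload-certs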

As pointed out in the “original” Continue reading

Datanauts 171: The Joy Of Engineering With William Lam

Turning to technical folks and their blogs is a good way to "not panic" when it comes to dealing with the trough of woe. In this episode, we'll talk to prolific technical blogger & VMware employee William Lam to get an insider's view of what happens to generate community-oriented content.

Building a GraphQL server on the edge with Cloudflare Workers

Today, we're open-sourcing an exciting project that showcases the strengths of our Cloudflare Workers platform: workers-graphql-server is a batteries-included Apollo GraphQL server, designed to get you up and running quickly with GraphQL.

Testing GraphQL queries in the GraphQL Playground

As a full-stack developer, I’m really excited about GraphQL. I love building user interfaces with React, but as a project gets more complex, it can become really difficult to manage how data is handled inside of an application. GraphQL makes that really easy: instead of having to recall the REST URL structure of your backend API, or remember where your backend server doesn't quite follow REST conventions, you just tell GraphQL what data you want, and it takes care of the rest.

Cloudflare Workers is uniquely suited to hosting a GraphQL server. Because your code is running on Cloudflare's servers around the world, the average latency for your requests is extremely low, and by using Wrangler, our open-source command line tool for building and managing Workers projects, you can deploy new versions of your GraphQL server around the world within seconds.
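As a rough illustration of that workflow, deploying the template with Wrangler could look something like the following (a sketch, not the project's official instructions; the template repository URL and project name are assumptions, so check the workers-graphql-server README for the canonical commands):

npm install -g @cloudflare/wrangler
# fetch the template into a new project directory (repository location may differ)
wrangler generate my-graphql-server https://github.com/cloudflare/workers-graphql-server
cd my-graphql-server
# add your Cloudflare account_id (and a route or workers.dev setting) to wrangler.toml, then deploy
wrangler publish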

If you'd like to try the GraphQL Continue reading

Adtran Mosaic Gets Smarter, Announces New Hardware

Adtran upgraded its Mosaic software-defined access suite with new features aimed at improving...

Read More »

sFlow-RT 3.0 released

The sFlow-RT 3.0 release has a simplified user interface that focusses on metrics needed to manage the performance of the sFlow-RT analytics software and installed applications.

Applications are available that replace features from the previous 2.3 release. The following instructions show how to install sFlow-RT 3.0 along with basic data exploration applications.

On a system with Java 1.8+ installed:
wget https://inmon.com/products/sFlow-RT/sflow-rt.tar.gz
tar -xvzf sflow-rt.tar.gz
./sflow-rt/get-app.sh sflow-rt flow-trend
./sflow-rt/get-app.sh sflow-rt browse-metrics
./sflow-rt/start.sh
On a system with Docker installed:
mkdir app
docker run -v $PWD/app:/sflow-rt/app --entrypoint /sflow-rt/get-app.sh sflow/sflow-rt sflow-rt flow-trend
docker run -v $PWD/app:/sflow-rt/app --entrypoint /sflow-rt/get-app.sh sflow/sflow-rt sflow-rt browse-metrics
docker run -v $PWD/app:/sflow-rt/app -p 6343:6343/udp -p 8008:8008 sflow/sflow-rt
The product user interface can be accessed on port 8008. The Status page, shown at the top of this article, displays key metrics about the performance of the software.
The Apps tab lists the two applications we installed, browse-metrics and flow-trend, and the green color of the buttons indicates both applications are healthy.
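The same information is also exposed through the REST API, which is handy for scripting health checks. As a quick, hedged example (assuming the default HTTP port of 8008 and the /version and /analyzer/json endpoints from the sFlow-RT REST API reference):

curl http://localhost:8008/version        # confirm the service is up and report the installed version
curl http://localhost:8008/analyzer/json  # performance metrics for the analytics engine itself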

Click on the flow-trend button to open the application and trend traffic flows in real-time. The RESTflow article describes the flow analytics capabilities of sFlow-RT in Continue reading

AMD hosts an Epyc party — and everyone wants in

Last week, AMD launched the second generation of its Epyc server processor, the Epyc 7002 series a.k.a. “Rome,” and it’s a far cry from the days when it held a release party for the Opteron and no one showed up. These days, AMD has a whole lot of friends.

Of course, it helps to deliver a part people want, and it looks like the Epyc 7002 is all that. It builds considerably upon the first generation, code-named “Naples,” delivered two years ago. One chip packs up to 64 cores and two threads per core, double the max of 32 cores in Naples. It has eight memory channels and up to 128 lanes of PCI Express Gen 4.

The Epyc 7002 achieves this massive core count through “chiplets,” eight small chips in the CPU die with eight cores each, connected by a high-speed interconnect. A single monolithic 64-core die is impractical from a manufacturing standpoint; there is so much more that can go wrong with 64 cores than with 16. Plus, AMD is manufacturing this on a 7nm process (Intel is just getting to 10nm), so Continue reading

VMware SVP Tom Corn: Security Is a Team Sport and vAdmins Play a Starring Role

Application security is changing the role of virtual administrators and expanding their job...

Read More »

On the recent HTTP/2 DoS attacks

Today, multiple Denial of Service (DoS) vulnerabilities were disclosed for a number of HTTP/2 server implementations. Cloudflare uses NGINX for HTTP/2. Customers using Cloudflare are already protected against these attacks.

The individual vulnerabilities, originally discovered by Netflix and included in this announcement, are:

As soon as we became aware of these vulnerabilities, Cloudflare’s Protocols team started working on fixing them. We first pushed a patch to detect any attack attempts and to see if any normal traffic would be affected by our mitigations. This was followed up with work to mitigate these vulnerabilities; we pushed the changes out a few weeks ago and continue to monitor for similar attacks on our stack.

If any of our customers host web services over HTTP/2 on an alternative, publicly accessible path that is not behind Cloudflare, we recommend you apply the latest security updates to your origin servers in order to protect yourselves from these HTTP/2 vulnerabilities.
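If you are not sure whether an origin exposes HTTP/2 directly, one quick, hedged way to check from a client is with curl (this assumes curl 7.50 or newer for the %{http_version} write-out variable, and example.com is a placeholder for your own hostname):

# ask for HTTP/2 and print the protocol version the server actually negotiated
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com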

We will soon follow up with more details on these vulnerabilities and how we mitigated them.

Full Continue reading

Deploying Dockerized .NET Apps Without Being a DevOps Guru

This is a guest post by Julie Lerman. She is a Docker Captain, published author, Microsoft Regional Director and a long-time Microsoft MVP who now counts her years as a coder in decades. She makes her living as a coach and consultant to software teams around the world. You can follow Julie on her blog at thedatafarm.com/blog, or on Twitter at @julielerman.
.NET Developers who use Visual Studio have access to a great extension to help them create Docker images for their apps. The Visual Studio Tools for Docker simplify the task of developing and debugging apps destined for Docker images. But what happens when you are ready to move from debugging in Visual Studio to deploying your image to a container in the cloud? This blog post will demonstrate first using the tooling to publish a simple ASP.NET Core API in an image to the Docker hub, and then creating a Linux virtual machine in Azure to host the API. It will also engage Docker Compose and Microsoft SQL Server for Linux in a Docker container, along with a Docker Volume for persistence. The goal is to create a simple test environment and a low-stress path Continue reading
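To give a flavor of the container pieces involved, here is a hedged sketch of running Microsoft SQL Server for Linux in a container with a Docker volume for persistence (the password is a placeholder you must change, and the image tag is an assumption, so check Microsoft's current image documentation; the post itself wires this up with Docker Compose):

docker volume create sqldata
docker run -d --name mssql \
  -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=ChangeMe!Str0ng" \
  -p 1433:1433 \
  -v sqldata:/var/opt/mssql \
  mcr.microsoft.com/mssql/server:2017-latest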

Exploring Batfish with Cumulus – Part 2

In Part 1 of our look into navigating Batfish with Cumulus, we explored how to get started with communicating with the pybatfish SDK, as well as getting some basic actionable topology information back. With the introduction out of the way, we’re going to take a look at some of the more advanced use cases when it comes to parsing the information we get back in response to our queries. Finally, we’re going to reference an existing CI/CD pipeline, where templates are used to dynamically generate switch configuration files, and see exactly where and how Batfish can fit in and aid in our efforts to dynamically test changes.

For a look under the covers, the examples mentioned in this series of posts are tracked in https://gitlab.com/permitanyany/cldemo2.
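If you want to follow along locally, the Batfish service itself runs nicely as a container. The following is a sketch based on the Batfish project's published Docker quickstart (the image name and port mappings are my assumptions from that quickstart and may have changed, so check the current Batfish documentation):

docker pull batfish/allinone
# 8888 serves the bundled Jupyter notebooks; 9996/9997 are the coordinator ports pybatfish talks to
docker run -d --name batfish -p 8888:8888 -p 9996:9996 -p 9997:9997 batfish/allinone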

Enforcing Policy

As you may remember, in Part 1 we gathered the expected BGP status of all our sessions via the bgpSessionStatus query and added some simple logic to tell us when any of those sessions would report back as anything but “Established”. Building on that type of policy expectation, we’re going to add a few more rules that we want to enforce in our topology.

For example:

Vodafone Ireland Activates 5G Service in 5 Cities

Fellow Irish network operators Eir and Three plan to launch their respective 5G networks before the...

Read More »

Heavy Networking 465: Looking Backward and Forward with Harry Quackenboss

Harry Quackenboss is a longtime veteran of infrastructure technology. In networking, he was a VP of Sales at Crescendo, which brought FDDI networking to the desktop and was acquired by Cisco. He later founded Woven Systems, a high-speed Ethernet company of its time, and was more recently CEO of cPlane, an SDN company now relaunched […]

CenturyLink’s Edge Strategy Starts With ‘Several Hundred Million’ Investment

While its edge services today tend to be more “on a bespoke basis,” by 2020 “we expect a huge...

Read More »

MANRS Observatory: Monitoring the State of Internet Routing Security

Routing security is vital to the future and stability of the Internet, but it’s under constant threat. That’s why we’ve launched a free online tool so that network operators can see how they’re doing and what they can improve, while anyone can see the health of the Internet at a glance. The MANRS Observatory measures networks’ adherence to MANRS – their “MANRS readiness” – a key indicator of the state of routing security and resiliency of the Internet.

Here’s what the MANRS Observatory is in a nutshell:

  • Performance Barometer: MANRS participants can easily monitor how well they adhere to the requirements of this initiative and make any necessary adjustments to their security controls.
  • Business Development: Participants can see how they and their peers are performing. They can leverage the MANRS Observatory to determine whether potential partners’ security practices are up to par.
  • Government: Policymakers can better understand the state of routing security and resilience and help improve it by calling for MANRS best practices.
  • Social Responsibility: MANRS implementation is simple, voluntary, and non-disruptive. The Observatory can help participants ensure they and their peers are keeping their networks secure, which helps improve routing security of the Internet as a whole.

Continue reading

SaaS-ifing Backup Scores Clumio $51M in Funding

The startup, founded by former VMware and Nutanix execs, built a backup service on Amazon Web...

Read More »

Magic Transit makes your network smarter, better, stronger, and cheaper to operate

Today we’re excited to announce Cloudflare Magic Transit. Magic Transit provides secure, performant, and reliable IP connectivity to the Internet. Out of the box, Magic Transit deployed in front of your on-premises network protects it from DDoS attacks and enables provisioning of a full suite of virtual network functions, including advanced packet filtering, load balancing, and traffic management tools.

Magic Transit is built on the standards and networking primitives you are familiar with, but delivered from Cloudflare’s global edge network as a service. Traffic is ingested by the Cloudflare Network with anycast and BGP, announcing your company’s IP address space and extending your network presence globally. Today, our anycast edge network spans 193 cities in more than 90 countries around the world.

Once packets hit our network, traffic is inspected for attacks, filtered, steered, accelerated, and sent onward to the origin. Magic Transit will connect back to your origin infrastructure over Generic Routing Encapsulation (GRE) tunnels, private network interconnects (PNI), or other forms of peering.
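To make the GRE option concrete, here is a rough sketch of what the origin side of such a tunnel could look like on a Linux router using iproute2 (the interface name and all addresses below are placeholders from the documentation ranges, not Cloudflare-assigned endpoints):

# create a GRE tunnel from the origin router toward the (placeholder) Cloudflare endpoint
ip tunnel add cf-gre0 mode gre local 198.51.100.10 remote 203.0.113.1 ttl 255
ip link set cf-gre0 up
# address the tunnel with a /31 so routed traffic can be steered over it
ip addr add 10.99.0.2/31 dev cf-gre0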

Enterprises are often forced to pick between performance and security when deploying IP network services. Magic Transit is designed from the ground up to minimize these trade-offs: performance and security are better together. Magic Transit deploys IP security Continue reading

Magic Transit: Network functions at Cloudflare scale

Today we announced Cloudflare Magic Transit, which makes Cloudflare’s network available to any IP traffic on the Internet. Up until now, Cloudflare has primarily operated proxy services: our servers terminate HTTP, TCP, and UDP sessions with Internet users and pass that data through new sessions they create with origin servers. With Magic Transit, we are now also operating at the IP layer: in addition to terminating sessions, our servers are applying a suite of network functions (DoS mitigation, firewalling, routing, and so on) on a packet-by-packet basis.

Over the past nine years, we’ve built a robust, scalable global network that currently spans 193 cities in over 90 countries and is ever growing. All Cloudflare customers benefit from this scale thanks to two important techniques. The first is anycast networking. Cloudflare was an early adopter of anycast, using this routing technique to distribute Internet traffic across our data centers. It means that any data center can handle any customer’s traffic, and we can spin up new data centers without needing to acquire and provision new IP addresses. The second technique is homogeneous server architecture. Every server in each of our edge data centers is capable of running every task. We Continue reading