A Letter from Matthew Prince and Michelle Zatlyn

Cloudflare's three co-founders: Michelle Zatlyn, Lee Holloway, and Matthew Prince

To our potential shareholders:

Cloudflare launched on September 27, 2010. Many great startups pivot over time. We have not. We had a plan and have been purposeful in executing it since our earliest days. While we are still in its early innings, that plan remains clear: we are helping to build a better Internet. Understanding the path we’ve taken to date will help you understand how we plan to operate going forward, and to determine whether Cloudflare is the right investment for you.

Cloudflare was formed to take advantage of a paradigm shift: the world was moving from on-premise hardware and software that you buy to services in the cloud that you rent. Paradigm shifts in technology always create significant opportunities, and we built Cloudflare to take advantage of the opportunities that arose as the world shifted to the cloud.

As we watched packaged software turn into SaaS applications, and physical servers migrate to instances in the public cloud, it was clear that it was only a matter of time before the same happened to network appliances. Firewalls, network optimizers, load balancers, and the myriad of other hardware appliances that Continue reading

How 6G will work: Terahertz-to-fiber conversion

Upcoming 6G wireless, superseding 5G and arriving possibly by 2030, is envisaged to function at hundreds of gigabits per second. Slowly, the technical advances needed are being made.

A hole in the tech development thus far has been at the interface between terahertz spectrum and hard, optical transmission lines. How does one connect terahertz (THz), which is basically through-the-air spectrum found between microwave and infrared, to the transmission lines that will be needed for longer-distance transmission? The curvature of the Earth, for one thing, limits line of sight, so hard wiring is necessary over longer distances. Short distances, too, can be impeded by environmental obstructions: blocking by objects, even rain or fog, becomes more apparent the higher in spectrum one goes, as wavelengths get shorter.

To read this article in full, please click here

An Introduction to Kustomize

kustomize is a tool designed to let users “customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is” (wording taken directly from the kustomize GitHub repository). Users can run kustomize directly, or, starting with Kubernetes 1.14, use the -k flag with kubectl (for example, kubectl apply -k) to access the functionality (although the standalone binary is newer than the functionality built into kubectl as of the Kubernetes 1.15 release). In this post, I’d like to provide an introduction to kustomize.

In its simplest form, kustomize operates on a set of resources (YAML files that define Kubernetes objects like Deployments, Services, etc.) plus a set of instructions describing the changes to be made to those resources. Similar to the way make leverages a file named Makefile to define what it does, or the way Docker uses a Dockerfile to build a container, kustomize uses a file named kustomization.yaml to store the instructions on the changes the user wants made to a set of resources.

Here’s a simple kustomization.yaml file:

resources:
- deployment.yaml
- service.yaml
namePrefix: dev-
namespace: development
commonLabels:
  environment: development
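
Assuming deployment.yaml and service.yaml sit in the same directory as this kustomization.yaml (that layout is implied by the resources list, not shown here), a minimal sketch of the typical workflow looks like this:

# Render the customized manifests to stdout without applying them
kustomize build .

# Equivalent rendering via kubectl (Kubernetes 1.14 or later)
kubectl kustomize .

# Build and apply the customized resources in one step
kubectl apply -k .

With the kustomization.yaml above, the rendered Deployment and Service would come out with the dev- name prefix, placed in the development namespace, and carrying the environment: development label, while the original deployment.yaml and service.yaml remain untouched.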

This article won’t attempt to explain all the various fields that could be Continue reading

Video: Beyond Two Nodes

In the introductory videos of the How Networks Really Work webinar, I described the mandatory elements of any networking solution and the additional challenges you have to solve when you can’t pull a cable between adjacent nodes.

It’s time for the next bit of complexity: what if we have more than two nodes connected to the same network segment? Welcome to the world of multi-access networks and data link control.

You need a free ipSpace.net subscription to watch the videos in the Overview of Networking Challenges section, or a paid ipSpace.net subscription to watch the rest of the webinar.

Declarative recursive computation on an RDBMS

Declarative recursive computation on an RDBMS… or, why you should use a database for distributed machine learning, Jankov et al., VLDB’19

If you think about a system like Procella that’s combining transactional and analytic workloads on top of a cloud-native architecture, extensions to SQL for streaming, dataflow based materialized views (see e.g. Naiad, Noria, Multiverses, and also check out what Materialize are doing here), the ability to use SQL interfaces to query over semi-structured and unstructured data, and so on, then a picture begins to emerge of a unifying large-scale data platform with a SQL query engine on top that addresses all of the data needs of an organisation in a one-stop shop. Except there’s one glaring omission from that list: handling all of the machine learning use cases.

Machine learning inside a relational database has been done before, most notably in the form of MADlib, which was integrated into Greenplum during my time at Pivotal. The Apache MADlib project is still going strong, and the recent (July 2019) 1.16 release even includes some support for deep learning.

To make that vision of a one-stop shop for all of an organisation’s data Continue reading

ClearOS Gateway on GNS3

In a previous tutorial we successfully installed ClearOS on a QEMU VM in gateway mode. At the end of that tutorial we installed several apps from the ClearOS Marketplace. These apps enhance the gateway's functionality; however, we have not tested them yet. This tutorial therefore goes further: we are going to test some of the services offered by the ClearOS apps. To do so, we will connect the ClearOS QEMU appliance to a GNS3 topology.

Our ClearOS QEMU instance is configured with two guest network cards (Picture 1). The first guest interface, ens3, is assigned the LAN role and is configured with the IP address 192.168.1.254/24. This is the IP address the management web server listens on, using port 81. All ClearOS management will be done via a web browser, using the URL https://192.168.1.254:81.

Picture 1 - Network Interfaces Configuration During ClearOS Installation
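
With the LAN addressing above in place, a quick sanity check from a Linux host on the 192.168.1.0/24 segment might look like the following (a minimal sketch based on the addressing in Picture 1; the commands are generic Linux tools, not part of the original tutorial):

# Verify basic reachability of the ClearOS LAN interface
ping -c 3 192.168.1.254

# Confirm the web UI answers on port 81 (-k accepts the self-signed certificate)
curl -k https://192.168.1.254:81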

The second guest interface, ens4, is assigned the External role, and its IP address is obtained from a DHCP server. The DHCP server runs on a SOHO router with the IP address 172.17.100.1/16 (Picture 2).

Picture 2 - Network Topology

GNS3 itself connects the second guest interface ens4 of ClearOS gateway Continue reading

BrandPost: Top 3 Misconceptions About SD-WAN

There’s little question that software-defined wide-area networks (SD-WANs) have taken off, as companies look for increased network resiliency and control. But there’s still significant confusion about SD-WAN, including some benefits that are more myth than reality.

In this post, we’ll explore three common misconceptions that surround SD-WAN, starting with what is probably the most important one.

Misconception #1: SD-WAN will replace services such as MPLS

SD-WAN doesn’t necessarily replace any existing network service, be it MPLS, broadband Internet, or anything else. In fact, it requires some kind of network service to work at all.

To read this article in full, please click here

IBM Injects Cloud Innovation Into Its z15 Mainframe

This could help the hardware vendor better compete against hyperscale public cloud providers like...

Read More »

IBM z15 mainframe amps up cloud, security features

IBM has rolled out a new generation of mainframes – the z15 – that not only bolsters the speed and power of the Big Iron but promises to integrate hybrid cloud, data privacy and security controls for modern workloads.

On the hardware side, the z15 mainframe systems ramp up performance and efficiency. For example, IBM claims 14 percent more performance per core, 25 percent more system capacity, 25 percent more memory, and 20 percent more I/O connectivity than the previous iteration, the z14 system. IBM also says the system can save customers 50 percent in costs compared with operating x86-based servers and use 40 percent less power than a comparable x86 server farm. And the z15 has the capacity to handle scalable environments such as supporting 2.4 million Docker containers on a single system.

To read this article in full, please click here

AT&T, DT, and Telefónica Trumpet Cost-Saving CO Pod

Operators moving to virtualized architectures can save 40% in capex, according to a study by Arthur...

Read More »

AnsibleFest Atlanta – Tech Deep Dives

Only one more week until AnsibleFest 2019 comes to Atlanta! We talked with Track Lead Sean Cavanaugh to learn more about the Technical Deep Dives track and the sessions within it.

Who is this track best for?

You've written playbooks. You've automated deployments. But you want to go deeper: learn new ways to use Ansible that you haven't thought of before, extend Ansible for new functionality, dig deep into new use cases. Then Tech Deep Dives is for you. This track is best suited for someone with existing Ansible knowledge and experience who already knows the nomenclature. It is best for engineers who want to learn how to take their automation journey to the next level. This track includes multiple talks from Ansible Automation developers, and it is your chance to ask them direct questions or provide feedback.

What topics will this track cover?

This track is about automation proficiency. Talks range from development and testing of modules and content to building and operationalizing automation at scale for your enterprise. Think about best practices, but then use those takeaways to leverage automation across your entire organization.



What should Continue reading

Operators Strike Realistic Edge Computing Balance

Almost every network operator agrees on the edge’s importance, but how each gets there and the...

Read More »

NEC Adds Infovista SD-WAN to Its Smart Enterprise Suite

The combination will lower costs and improve communications performance across NEC's communications...

Read More »

How InterSystems Builds an Enterprise Database at Scale with Docker Enterprise

We sat down recently with InterSystems, our partner and customer, to talk about how they deliver an enterprise database at scale to their customers. InterSystems’s software powers mission-critical applications at hospitals, banks, government agencies and other organizations.

We spoke with Joe Carroll, Product Specialist, and Todd Winey, Director of Partner Programs at InterSystems, about how containerization and Docker are helping transform their business.

Here’s what they told us. You can also catch the highlights in this 2-minute video:

On InterSystems and Enterprise Databases…

Joe Carroll: InterSystems is a 41-year-old database and data platform company. We’ve been in data storage for a very long time, and our customers tend to be traditional enterprises: healthcare, finance, shipping and logistics, as well as government agencies. Anywhere there’s mission-critical data, we tend to be around. Our customers have really important systems that impact people’s lives, and the mission-critical nature of that data characterizes who our customers are and who we are.

On Digital Transformation in Established Industries…

Todd Winey: Many of those organizations and industries have traditionally been seen as laggards in terms of technology adoption, so the speed with which they’re moving Continue reading