This blog post is probably more personal than the usual posts here. It’s about why I joined CloudFlare.
I’ve been working on DNSSEC evolution for a long time as implementor, IETF working group chair, protocol experimenter, DNS operator, consultant, and evangelist. These different perspectives allow me to look at the protocol in a holistic way.
First and foremost, it’s important to realize the exact role of DNSSEC. DNSSEC is actually a misnomer: it’s from an era when the understanding of different security technologies, and what role each plays, was not as good as it is today. Today, this protocol would be called DNSAUTH. This is because all it does is provide integrity protection for the answers from authoritative servers.
Over the years, the design of DNSSEC has changed. A number of people working on early versions of DNSSEC (myself included) didn’t know DNS all that well. Similarly, many DNS people at the time didn’t understand security, and in particular, cryptography all that well. To make things even more complex, general understanding of the DNS protocol was lacking in certain areas and needed to be clarified in order to do DNSSEC properly. This has led to three major versions of the Continue reading
The time has come, CCIE Collaboration hopefuls, to focus my blog on Quality of Service (QoS). I know, it’s everyone’s favorite subject, right? Well, you don’t have to like it; you just have to know it!
I would specifically like to focus on WAN QoS policies as they are going to be an essential piece of the lab blueprint to understand. Typically, the goal on a WAN interface is to queue traffic in such a way as to prioritize certain types of traffic over other types of traffic. Voice traffic will usually be placed in some type of expedited or prioritized queue while other types of traffic (video, signaling, web, etc.) will use other queues to provide minimum bandwidth guarantees. Policies such as this will utilize the Modular QoS Command Line Interface (MQC) for implementation.
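As a rough illustration of what such a policy can look like in Cisco IOS MQC (the class names, DSCP matches, and bandwidth figures below are purely illustrative, not values taken from the lab scenario):

class-map match-all VOICE
 match dscp ef
class-map match-all SIGNALING
 match dscp cs3
!
policy-map WAN-EDGE
 class VOICE
  priority 128
 class SIGNALING
  bandwidth 32
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output WAN-EDGE

The priority command places voice in a low-latency queue (policed to the stated rate during congestion), while the bandwidth command gives signaling a minimum guarantee without strict prioritization; everything else falls into class-default.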
To begin, let’s use our three-site topology (HQ, SB, and SC) to provide a backdrop for this example. The HQ site (R1) has a Frame Relay connection to both the SB (R2) and SC (R3) sites through the same physical Serial interface, which has a total of 1.544 Mbps of bandwidth available. Assume that both R2 and R3 have connections to R1 using Continue reading
For an introduction to DNSSEC, see our previous post.
Today is a big day for CloudFlare! We are publishing our first two DNSSEC signed zones for the community to analyze and give feedback on:
We've been testing our implementation internally for some time with great results, so now we want to hear from outside users how it’s working!
Here’s an example of what you should see if you pull the records for www.cloudflare-dnssec-auth.com.
$ dig www.cloudflare-dnssec-auth.com A +dnssec
; <<>> DiG 9.10.1-P1 <<>> www.cloudflare-dnssec-auth.com A +dnssec
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29654
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
;; QUESTION SECTION:
;www.cloudflare-dnssec-auth.com. IN A
;; ANSWER SECTION:
www.cloudflare-dnssec-auth.com. 300 IN A 104.28.29.67
www.cloudflare-dnssec-auth.com. 300 IN A 104.28.28.67
www.cloudflare-dnssec-auth.com. 300 IN RRSIG A Continue reading
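Beyond the A records shown above, you can also ask for the zone’s public keys (DNSKEY) and for the DS record published in the parent .com zone; two further illustrative queries (output omitted here):

$ dig cloudflare-dnssec-auth.com DNSKEY +dnssec
$ dig cloudflare-dnssec-auth.com DS +dnssec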
As network security engineers have attempted to categorize blocks of IP addresses associated with spam or malware for subsequent filtering at their firewalls, the bad guys have had to evolve to continue to target their victims. Since routing on the global Internet is based entirely on trust, it’s relatively easy to commandeer IP address space that belongs to someone else. In other words, if the bad guys’ IP space is blocked, well then they can just steal someone else’s and continue on as before.
In an attempt to cover their tracks, these criminals will sometimes originate routes using autonomous system numbers (ASNs) that they don’t own either. In one of the cases described below, perpetrators hijacked the victim’s ASN to originate IP address space that could have plausibly been originated by the victim. However, in this case, the traffic was misdirected to the bad guy and an unsophisticated routing analysis would have probably shown nothing amiss.
The weakness of all spoofing techniques is that, at some point, the routes cross over from the fabricated to the legitimate Internet — and, when they do, they appear quite anomalous when compared against historical data and derived business Continue reading
Cloud builders are often using my ExpertExpress service to validate their designs. Tenant onboarding into a multi-tenant (private or public) cloud infrastructure is a common problem, and tenants frequently want to retain the existing network services appliances (firewalls and load balancers).
The Combine Physical and Virtual Appliances in a Private Cloud case study describes a typical solution that combines per-tenant virtual appliances with frontend physical appliances.
As an incentive to use their service, Amazon Web Services offers new users a “free tier” of service that provides a VPS “micro-instance” at no cost for one year.
The free tier of service is fairly flexible. Amazon AWS provides enough free hours to run the micro-instance twenty-four hours a day for a year. But if a user needs more services, he or she may create multiple micro instances and run them concurrently, which multiplies the rate the user consumes hours.
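To make the hour math concrete: assuming the commonly cited free-tier allotment of 750 instance-hours per month, a single micro-instance running around the clock uses at most 31 x 24 = 744 hours and stays within the allotment, while two instances running concurrently consume roughly 1,488 hours' worth of usage and would exhaust the free allowance about halfway through the month.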
In this post, we’ll show how to set up the free server, and how to connect to it using SSH.
The first step is to create a user account on AWS. Go to the AWS Free Tier web page and click on “Sign up for AWS Account”.
Then, click on “Create a free Account”.
Follow the directions provided on the AWS web site to set up a user account. You need to have a mobile phone for identity verification.
If you already have an account on amazon.com, you can use your existing account to log into AWS services.
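Once the account exists and an instance has been launched, connecting over SSH typically looks something like the line below; the key file name, user name (ec2-user is the default on Amazon Linux images, ubuntu on Ubuntu images), and host name are placeholders you would replace with your own values:

$ ssh -i ~/.ssh/my-aws-key.pem ec2-user@ec2-54-0-0-1.compute-1.amazonaws.com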
Amazon AWS provides excellent Continue reading
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
While the term Zero Touch Provisioning (ZTP) might be increasingly common in networking, the concept of automation has existed for years in IT. At its core, ZTP is an automation solution designed to reduce errors and save time when IT needs to bring new infrastructure online.
This is particularly useful for data center servers, where scale and configuration similarities across systems make automation a necessity. In the server world, for example, Linux has revolutionized onboarding and provisioning. Rather than using command-line interfaces (CLI) to configure systems one at a time, administrators can use automation tools to roll out the operating system software, patches, and packages on new servers with a single command, or the click of a mouse.
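As one concrete illustration (the article itself does not name a specific tool), an Ansible ad-hoc command can push a package to a whole group of servers in one shot, assuming an inventory group named webservers:

$ ansible webservers -m yum -a "name=httpd state=latest" --become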
SDN's First Use Case Is the WAN, But Whither Management?
Considering how long Packet Design has been talking about the promise of SDN in the WAN, it was encouraging to see a Q&A in Network World last week on the subject. Editor John Dix interviewed Michael Elmore, IT Senior Director of the Enterprise Network Engineering Infrastructure Group at Cigna. Michael is also on the board of Open Network Users Group (ONUG).
Dix began the interview by asking why ONUG’s membership has voted the WAN as the top use case for SDN twice in a row now. In the context of the enterprise, Elmore replied that software-defined WANs (SD-WANs) can reduce both capital and operational costs. He also said that they are easier to deploy than SDN in the data center.
Elmore went on to discuss the limitations of today’s WANs (mainly the MPLS-based layer 3 VPN service offerings used by the Fortune 500) in terms of cost, scale, service quality, security, visibility, and agility/flexibility. He then outlined the benefits of SD-WANs in all those areas, saying that enterprises will be able to “take back control from service Continue reading
Several key factors are driving the change to white box or open hardware – ROI, flexibility and customizability of design, ease of implementation, and the avoidance of vendor lock-in along with the high price-tag it can bring. The rise of white box hardware started with servers and storage, and now a movement towards the adoption of open networking has gained quite a bit of traction.
The Open Compute Project (OCP) movement is driving creation of bare metal switches, such as Open Switches, that are designed to be open and disaggregated. This white box model for switching enables users to deploy, monitor, and manage networking alongside servers and storage at a much lower price-point than a traditional network switch.
With a white box switch, the OS layer is decoupled from the hardware itself, which allows users to independently select the best-of-breed components and networking software stack Continue reading
Cool news today from BigSwitch, who have taken some big steps forward with their rather awesome Big Cloud Fabric (BCF) solution.
Building on the existing features of BCF 2.0 that was announced last July (see my post on the BCF launch for more details), version 2.5 adds some pretty good new features and a surprise partner.
BCF now supports VMware vCenter. BigSwitch sees an Ethernet fabric as a complementary technology to VMware’s NSX, not a competitor; very wisely, they would like to be the underlay while NSX provides the overlay. The BCF controller integrates right into vCenter so that network configuration can be automated with the virtual environment, and the controller provides a single interface to the entire fabric.
The original BCF supported OpenStack. BCF 2.5 now has more elements of OpenStack (Juno) support and adds CloudStack support. With this and the vCenter integration, BCF has positioned itself quite nicely for full server and switch automation.
My first question when I heard about this was “What on earth is Brite Box switching?” It turns out that somebody somewhere coined the phrase Continue reading
It is just about time to get ready for Cisco Live US this year and, harking back to 2012, we are again in San Diego, CA for the June 7-11th event. I just wanted to take a few moments and share some information that I have put together. Oh, if you have not registered yet – […]
The post Cisco Live 2015 – Time to get ready! appeared first on Fryguy's Blog.
This is a continuation from Part 2.

Fast Reroute

Why Fast Reroute?

Many NSPs like ACME have traffic with tight SLAs. For instance, below is the ITU delay recommendation for voice:

One-Way Delay    Characterization of Quality
0-150 ms         Acceptable for most applications
150-400 ms       May impact some applications
Above 400 ms     Unacceptable

ITU G.114 delay recommendations

Having […]
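For readers who want a concrete anchor, requesting protection on a TE tunnel head-end in Cisco IOS looks roughly like the snippet below; the tunnel number, addresses, and the choice of IOS syntax are illustrative, not taken from the post itself:

interface Tunnel1
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 10.0.0.3
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng path-option 10 dynamic
 tunnel mpls traffic-eng fast-reroute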
The post MPLS TE Design -Part 3 appeared first on Packet Pushers Podcast and was written by Diptanshu Singh.
This is a continuation from Part 1.

Case for LDPoRSVP

As we mentioned at the very beginning, ACME provides L3VPN and L2VPN services, which require end-to-end LSPs between the PEs. But due to scaling reasons, ACME decided not to extend RSVP to the edge routers. This creates a problem as there is […]
The post MPLS TE Design -Part 2 appeared first on Packet Pushers Podcast and was written by Diptanshu Singh.
In this post we will be exploring different aspects of Traffic Engineering (RSVP-TE) from a design perspective, using a fictional ISP as a reference. The intent of the post is not necessarily to recommend a particular solution, but to bring up the different aspects involved in the design. I am assuming that the reader already has somewhat […]
The post MPLS TE Design -Part 1 appeared first on Packet Pushers Podcast and was written by Diptanshu Singh.