Archive

Category Archives for "Networking"

SDN: What Small and Mid-Sized Businesses Need to Know in 2015

Guest blogger Alex Hoff is the VP of Product Management at Auvik Networks, a cloud-based SaaS that makes it dramatically easier for small and mid-sized businesses to manage their networks. Our thanks to Auvik for sponsoring the Packet Pushers community blog today. It’s January and the network industry pundits are calling for 2015 to be the […]


Contention Delay Killed the WLAN Star

The Wi-Fi industry seems dominated by discussions of the ever-increasing bandwidth capabilities and peak speeds brought by the latest product offerings based on 802.11ac. But while industry marketing touts gigabit-capable peak speeds, the underlying factors affecting WLAN performance have changed little.

Medium contention is the true driver of a WLAN's success or failure, and we must understand its effect on WLAN performance in order to design and optimize our networks.

Read the full blog article over on the Aruba Airheads Technology Blog...

Open Networking: From Concept to Reality

When I first heard about Cumulus Networks in August 2013, I thought, “What’s the catch?” Now, after a year of working with companies deploying Cumulus Linux-based switches in their production environments, it turns out there is really no catch. Open networking is for real.

The way you buy a server is now the way you can buy a switch

An open ecosystem has supported the server business for many years: you can build servers with components from various suppliers and run your choice of operating system. The same concept applied to the networking world, a disaggregated model of switch hardware and software now called “open networking,” has long been on many wish lists.

The good news: this concept is now a reality, thanks to companies like Cumulus Networks, whose Cumulus Linux is a Debian-based distribution that serves as the operating system for open networking on bare metal switches.

No License Gotchas

With Cumulus Linux, there are no additional or “enhanced” license fees akin to what traditional vendors have charged for years. The yearly renewal license fees cost the same each year, not a penny more. The yearly or multi-year license can be ported from one switch Continue reading

IPv6 availability in New Zealand

IPv6 has been around a fair while, and we’re constantly encouraged to learn it and use it. I agree with the sentiment, but it’s hard for most users when few ISPs offer IPv6 to residential customers. Hurricane Electric offers a great free IPv6 tunnel broker service, but that’s impractical for most people. What they need is for their ISP to offer native IPv6, by default.

The ISPs in New Zealand with the largest market share don’t offer IPv6, but some of the smaller ones do. The design of the ISP market here means that users can easily switch between a large range of suppliers, and choose the mix of price/service they want. When I last changed ISP a couple of years ago, I specifically chose an ISP that offers IPv6.

Last year that ISP disabled IPv6 for a few weeks due to some technical issues, and I was disappointed with the support they offered. I wanted to evaluate my other options, but couldn’t find any good source of data showing which ISPs were offering IPv6. There’s plenty of talk out there about trials and the like, but most of it hasn’t been updated in years.

So I pulled Continue reading
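
The post is truncated here, but one simple check in this space is easy to sketch. The snippet below is an illustrative example, not the author's script: it only tests whether the machine it runs on (and therefore the ISP connection behind it) can resolve an AAAA record and open a TCP connection over IPv6. The target hostnames are placeholders.

# Illustrative sketch, not the author's script: check whether this machine
# can resolve an AAAA record and complete an IPv6 TCP connection.
# Target hosts are placeholders.
import socket

TEST_TARGETS = ["www.google.com", "ipv6.google.com"]  # placeholder targets

def has_ipv6_path(host, port=80, timeout=5):
    """Return True if an AAAA record resolves and an IPv6 TCP connect succeeds."""
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no AAAA record, or resolver failure
    for family, socktype, proto, _name, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as sock:
                sock.settimeout(timeout)
                sock.connect(sockaddr)
                return True
        except OSError:
            continue
    return False

for host in TEST_TARGETS:
    print(host, "reachable over IPv6:", has_ipv6_path(host))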

Multi-Gigabit AP Backhaul – Do you need it?

I was recently asked by a Wi-Fi engineer about the potential need for multi-gigabit backhaul from an AP with the pending release of 802.11ac Wave 2. This seems to be a point of confusion for many in the wireless industry. Here's what I told them:

Industry claims of throughput capabilities exceeding 1 Gbps are correct from a theoretical standpoint. However, real-world client mixes on almost every WLAN mean that backhaul throughput never comes close to 1 Gbps.

First, when you combine clients of varying capabilities there is no chance of exceeding 1 Gbps of backhaul. The only time you will need more than 1 Gbps of backhaul is in proof-of-concept bakeoffs between vendors, lab tests, and very low-density locations where you have only a few users on an AP radio but they are using top-of-the-line wireless laptops and applications that can push large amounts of data (I'm thinking of CAD users, for instance, who collaborate and push files of several GBs across the network and want it done fast). This is somewhat counter-intuitive, because most people would think off-hand that high-density areas are where you'll need the greater backhaul. But in high-density areas Continue reading
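
A back-of-the-envelope calculation, not taken from the post, illustrates the point. The per-client rates below are assumed, typical real-world TCP throughputs rather than measurements:

# Back-of-the-envelope sketch (not from the post): estimate aggregate AP
# throughput for an assumed client mix. Per-client rates are rough, typical
# real-world TCP numbers, not measurements.
assumed_rates_mbps = {
    "802.11ac 2-stream laptop": 300,
    "802.11ac 1-stream phone": 150,
    "802.11n 2.4 GHz tablet": 50,
    "legacy 802.11a/g device": 20,
}

rates = list(assumed_rates_mbps.values())

# With equal airtime per client, the cell's aggregate throughput is the
# arithmetic mean of the client rates; with per-frame fairness, slow clients
# consume most of the airtime and the aggregate trends toward the harmonic mean.
equal_airtime_aggregate = sum(rates) / len(rates)
per_frame_aggregate = len(rates) / sum(1.0 / r for r in rates)

print(f"Equal-airtime aggregate: {equal_airtime_aggregate:.0f} Mbps")
print(f"Per-frame-fairness aggregate: {per_frame_aggregate:.0f} Mbps")
# Either way, the wired backhaul load stays far below 1 Gbps.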

Addressing 2014

Time for another annual roundup from the world of IP addresses. What happened in 2014 and what is likely to happen in 2015? This is an update to the reports prepared at the same time in previous years, so let's see what has changed in the past 12 months in addressing the Internet, and look at how IP address allocation information can inform us of the changing nature of the network itself.

CCIE Lab or dual CCIE written preferred

A friend of mine who is looking for a job sent me this. The job description asked for "CCIE Lab or dual CCIE written".

I wonder who wrote this stuff?

CCIE written is easy. It is not a certification exam.

The exam is not intended to mean anything other than a ticket to the lab or a way to recertify an existing CCIE certification, so Cisco is not putting too much effort into it. For example, there are no simulations; everything is multiple choice, so it is easy to eliminate absurd answers.

Most if not all CCIE candidates, who are already CCNPs, are surprised by how easy it is. Many are fooled into believing that the lab is anywhere close to the same level of difficulty and depth.

If I were a CCNP, I would prefer to take the CCIE written to recertify rather than the CCNP exams.

If I were hiring, I would prefer a dual CCNP over a dual CCIE written any time. In fact, I would prefer a humble CCNP to someone who passed the written and brags about it.

DNSSEC Done Right

This blog post is probably more personal than the usual posts here. It’s about why I joined CloudFlare.

I’ve been working on DNSSEC evolution for a long time as implementor, IETF working group chair, protocol experimenter, DNS operator, consultant, and evangelist. These different perspectives allow me to look at the protocol in a holistic way.

First and foremost, it’s important to realize the exact role of DNSSEC. DNSSEC is actually a misnomer: it’s from an era when the understanding of different security technologies, and what role each plays, was not as good as it is today. Today, this protocol would be called DNSAUTH, because all it does is provide integrity protection for the answers from authoritative servers.

Over the years, the design of DNSSEC has changed. A number of people working on early versions of DNSSEC (myself included) didn’t know DNS all that well. Similarly, many DNS people at the time didn’t understand security, and in particular, cryptography all that well. To make things even more complex, general understanding of the DNS protocol was lacking in certain areas and needed to be clarified in order to do DNSSEC properly. This has led to three major versions of the Continue reading

Understanding WAN Quality of Service

The time has come, CCIE Collaboration hopefuls, to focus my blog on Quality of Service (QoS). I know, it’s everyone’s favorite subject, right? Well, you don’t have to like it; you just have to know it!

I would specifically like to focus on WAN QoS policies, as they are going to be an essential piece of the lab blueprint to understand. Typically, the goal on a WAN interface is to queue traffic in such a way as to prioritize certain types of traffic over others. Voice traffic will usually be placed in some type of expedited or priority queue, while other types of traffic (video, signaling, web, etc.) will use other queues that provide minimum bandwidth guarantees. Policies such as these are implemented using the Modular QoS Command-Line Interface (MQC).

To begin, let’s use our three-site topology (HQ, SB, and SC) to provide a backdrop for this example. The HQ site (R1) has a Frame Relay connection to both the SB (R2) and SC (R3) sites through the same physical Serial interface, which has a total of 1.544 Mbps of bandwidth available. Assume that both R2 and R3 have connections to R1 using Continue reading
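
As a rough illustration of why such policies matter on a 1.544 Mbps link, here is a quick sizing calculation for a strict-priority voice queue. The packetization and header values are common planning figures, and the 33% priority allocation is an assumption, not something taken from the article:

# Rough sizing sketch (illustrative only, not from the article): how many voice
# calls fit in a strict-priority (LLQ) queue on the 1.544 Mbps Frame Relay link
# described above. Packetization and header sizes are common planning values;
# the 33% priority allocation is an assumption.
LINK_KBPS = 1544
PRIORITY_PERCENT = 33        # assumed "priority percent" allocation
PACKETS_PER_SECOND = 50      # 20 ms voice packetization
IP_UDP_RTP_BYTES = 40        # IP + UDP + RTP headers per packet
FRAME_RELAY_BYTES = 6        # assumed Frame Relay L2 overhead per packet

def per_call_kbps(payload_bytes):
    """Per-call bandwidth including L3 headers and Frame Relay overhead."""
    total_bytes = payload_bytes + IP_UDP_RTP_BYTES + FRAME_RELAY_BYTES
    return total_bytes * 8 * PACKETS_PER_SECOND / 1000.0

priority_kbps = LINK_KBPS * PRIORITY_PERCENT / 100.0
for codec, payload in (("G.711", 160), ("G.729", 20)):
    call = per_call_kbps(payload)
    print(f"{codec}: {call:.1f} kbps per call, "
          f"{int(priority_kbps // call)} calls in a {priority_kbps:.0f} kbps priority queue")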

Help us test our DNSSEC implementation

For an introduction to DNSSEC, see our previous post

Today is a big day for CloudFlare! We are publishing our first two DNSSEC-signed zones for the community to analyze and give feedback on:

We've been testing our implementation internally for some time with great results, so now we want to hear from outside users how it’s working!

Here’s what you should see if you pull the records of, for example, www.cloudflare-dnssec-auth.com.

$ dig www.cloudflare-dnssec-auth.com A +dnssec

; <<>> DiG 9.10.1-P1 <<>> www.cloudflare-dnssec-auth.com A +dnssec
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29654
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
;; QUESTION SECTION:
;www.cloudflare-dnssec-auth.com.    IN  A

;; ANSWER SECTION:
www.cloudflare-dnssec-auth.com.    300 IN  A   104.28.29.67  
www.cloudflare-dnssec-auth.com.    300 IN  A   104.28.28.67  
www.cloudflare-dnssec-auth.com.    300 IN  RRSIG   A  Continue reading
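
Beyond eyeballing the dig output, one way to exercise a signed zone is to check an RRSIG yourself. Below is a minimal sketch using the dnspython library; it is not from the CloudFlare post, the resolver address is an assumption, and the 2015-era test zone may no longer be live.

# Minimal validation sketch, not from the CloudFlare post: fetch the A RRset,
# its RRSIG, and the zone's DNSKEY RRset, then check the signature with
# dnspython. This verifies the zone's own signature only; it does not walk
# the chain of trust from the root.
import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdataclass
import dns.rdatatype

ZONE = dns.name.from_text("cloudflare-dnssec-auth.com")
QNAME = dns.name.from_text("www.cloudflare-dnssec-auth.com")
RESOLVER = "8.8.8.8"  # assumption: any resolver that passes DNSSEC records through

def fetch(qname, rdtype):
    """Query with the DO bit set; return (rrset, rrsig) from the answer."""
    query = dns.message.make_query(qname, rdtype, want_dnssec=True)
    response = dns.query.udp(query, RESOLVER, timeout=5)
    rrset = response.get_rrset(response.answer, qname, dns.rdataclass.IN, rdtype)
    rrsig = response.get_rrset(response.answer, qname, dns.rdataclass.IN,
                               dns.rdatatype.RRSIG, rdtype)
    return rrset, rrsig  # either may be None if the record is missing

a_rrset, a_rrsig = fetch(QNAME, dns.rdatatype.A)
dnskey_rrset, _ = fetch(ZONE, dns.rdatatype.DNSKEY)

try:
    dns.dnssec.validate(a_rrset, a_rrsig, {ZONE: dnskey_rrset})
    print("RRSIG over the A records validates against the zone's DNSKEY")
except dns.dnssec.ValidationFailure as err:
    print("Validation failed:", err)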

The Vast World of Fraudulent Routing

[Figure: routing history for 188.253.79.0/24, December 2014]

As network security engineers have attempted to categorize blocks of IP addresses associated with spam or malware for subsequent filtering at their firewalls, the bad guys have had to evolve to continue to target their victims.  Since routing on the global Internet is based entirely on trust, it’s relatively easy to commandeer IP address space that belongs to someone else.  In other words, if the bad guys’ IP space is blocked, well then they can just steal someone else’s and continue on as before.

In an attempt to cover their tracks, these criminals will sometimes originate routes using autonomous system numbers (ASNs) that they don’t own either.  In one of the cases described below, perpetrators hijacked the victim’s ASN to originate IP address space that could have plausibly been originated by the victim.  However, in this case, the traffic was misdirected to the bad guy and an unsophisticated routing analysis would have probably shown nothing amiss.

The weakness of all spoofing techniques is that, at some point, the routes cross over from the fabricated to the legitimate Internet — and, when they do, they appear quite anomalous when compared against historical data and derived business Continue reading
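
As a toy illustration of the kind of historical comparison described above, an anomaly check can be as simple as the sketch below. The prefixes and AS numbers are documentation placeholders, not real observations, and a production system would consume full BGP feeds instead.

# Toy sketch of the historical comparison described above. The prefixes and
# AS numbers are documentation placeholders (RFC 5737 / RFC 5398), not real
# observations; a production system would consume full BGP feeds instead.
HISTORICAL_ORIGINS = {
    "203.0.113.0/24": {64496},
    "198.51.100.0/24": {64497, 64498},
}

observed_announcements = [
    ("203.0.113.0/24", 64511),   # origin never seen before -> suspicious
    ("198.51.100.0/24", 64497),  # matches history -> unremarkable
]

for prefix, origin_asn in observed_announcements:
    expected = HISTORICAL_ORIGINS.get(prefix, set())
    if origin_asn in expected:
        print(f"OK: {prefix} originated by AS{origin_asn}, consistent with history")
    else:
        print(f"ALERT: {prefix} originated by AS{origin_asn}, "
              f"historically seen only from {sorted(expected)}")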

Case Study: Combine Physical and Virtual Appliances in a Private Cloud

Cloud builders often use my ExpertExpress service to validate their designs. Tenant onboarding into a multi-tenant (private or public) cloud infrastructure is a common problem, and tenants frequently want to retain their existing network services appliances (firewalls and load balancers).

The Combine Physical and Virtual Appliances in a Private Cloud case study describes a typical solution that combines per-tenant virtual appliances with frontend physical appliances.

IOS server load balancing with mininet server farm

The idea is to play with the IOS server load balancing mechanism using a large number of “real” servers (50 of them) and to observe the differences in behavior between load balancing algorithms. Due to resource scarcity in the lab environment, I use Mininet to emulate the “real” servers. I will stick to the general definition of load balancing: A load balancer is a device […]
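
The excerpt is truncated, but the Mininet side of such a lab is easy to sketch. The script below is a minimal illustration, not the author's actual topology: 50 emulated hosts behind a single switch, each running a trivial HTTP server for the IOS SLB device under test to probe. Host names and addressing are assumptions.

# Minimal illustration (not the author's actual lab script): 50 Mininet hosts
# behind one switch, each running a trivial HTTP server that the IOS SLB
# device under test can be pointed at. Names and addressing are assumptions.
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.cli import CLI

NUM_SERVERS = 50

class ServerFarm(Topo):
    def build(self):
        switch = self.addSwitch('s1')
        for i in range(1, NUM_SERVERS + 1):
            host = self.addHost('h%d' % i, ip='10.0.0.%d/24' % i)
            self.addLink(host, switch)

if __name__ == '__main__':
    net = Mininet(topo=ServerFarm())
    net.start()
    for host in net.hosts:
        # serve something on port 80 so the load balancer's probes succeed
        host.cmd('python -m SimpleHTTPServer 80 &')   # use http.server on Python 3
    CLI(net)    # drop into the Mininet CLI for interactive testing
    net.stop()

Run it with root privileges on a machine with Mininet installed (for example, sudo python serverfarm.py).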


Create a free virtual private server on Amazon Web Services

As an incentive to use their service, Amazon Web Services offers new users a “free tier” of service that provides a VPS “micro-instance” at no cost for one year.


The free tier of service is fairly flexible. Amazon AWS provides enough free hours to run the micro-instance twenty-four hours a day for a year. But if a user needs more services, he or she may create multiple micro-instances and run them concurrently, which multiplies the rate at which the user consumes hours.

In this post, we’ll show how to set up the free server, and how to connect to it using SSH.

Create an AWS account

The first step is to create a user account on AWS. Go to the AWS Free Tier web page and click on “Sign up for AWS Account”.

Then, click on “Create a free Account”.


Click on the “Free Account” button.

Follow the directions provided on the AWS web site to set up a user account. You need to have a mobile phone for identity verification.

If you already have an account on amazon.com, you can use your existing account to log into AWS services.

Create a free instance

Amazon AWS provides excellent Continue reading
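
As an aside not covered in the article (which walks through the web console), the same free-tier instance can also be launched programmatically with the boto3 library. In the sketch below the region, AMI ID, key pair name, and SSH user are placeholders you would need to replace.

# Illustrative sketch only (not from the article): launch a free-tier t2.micro
# instance with boto3. Region, AMI ID, and key pair name are placeholders.
import boto3

ec2 = boto3.resource('ec2', region_name='us-east-1')   # assumed region

instances = ec2.create_instances(
    ImageId='ami-xxxxxxxx',       # placeholder: a free-tier-eligible AMI in your region
    InstanceType='t2.micro',      # free-tier-eligible instance type
    KeyName='my-keypair',         # placeholder: an existing EC2 key pair name
    MinCount=1,
    MaxCount=1,
)
instance = instances[0]
instance.wait_until_running()
instance.reload()                 # refresh attributes such as the public IP
# The SSH user depends on the AMI (e.g. ec2-user for Amazon Linux, ubuntu for Ubuntu).
print("ssh -i my-keypair.pem ec2-user@%s" % instance.public_ip_address)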

Zero Touch Provisioning can help the network world catch up to server advances

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

While the term Zero Touch Provisioning (ZTP) might be increasingly common in networking, the concept of automation has existed for years in IT. At its core, ZTP is an automation solution designed to reduce errors and save time when IT needs to bring new infrastructure online.

This is particularly useful for data center servers, where scale and configuration similarities across systems make automation a necessity. In the server world, for example, Linux has revolutionized onboarding and provisioning. Rather than using command-line interfaces (CLI) to configure systems one at a time, administrators can use automation tools to roll out operating system software, patches, and packages on new servers with a single command or the click of a mouse.

To read this article in full or to leave a comment, please click here
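
The core idea is simple enough to sketch. The snippet below is a toy illustration, not any vendor's actual ZTP agent: on first boot, a device identifies itself and pulls a staged configuration from a provisioning server. The server URL, serial number, and output path are placeholders.

# Toy illustration of the ZTP idea, not any vendor's actual agent: on first
# boot, fetch a device-specific configuration from a provisioning server.
# The server URL, serial number, and output path are placeholders.
import urllib.request

PROVISIONING_SERVER = "http://ztp.example.net"   # placeholder
SERIAL_NUMBER = "ABC12345678"                    # normally read from the device

def fetch_config(serial):
    """Download the configuration staged for this serial number."""
    url = "%s/configs/%s.conf" % (PROVISIONING_SERVER, serial)
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode()

config = fetch_config(SERIAL_NUMBER)
with open("/tmp/startup-config", "w") as f:
    f.write(config)
print("Fetched %d bytes of configuration; ready to apply" % len(config))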

SDN’s First Use Case Is the WAN, But Wither Management?



by Steve Harriman, VP of Marketing - January 28, 2015

Considering how long Packet Design has been talking about the promise of SDN in the WAN, it was encouraging to see a Q&A in Network World last week on the subject. Editor John Dix interviewed Michael Elmore, IT Senior Director of the Enterprise Network Engineering Infrastructure Group at Cigna. Michael is also on the board of the Open Networking User Group (ONUG).

Dix began the interview by asking why ONUG’s membership has voted the WAN the top use case for SDN twice in a row now. In the context of the enterprise, Elmore replied that software-defined WANs (SD-WANs) can reduce both capital and operational costs. He also said that SDN is easier to deploy in the WAN than in the data center.

Elmore went on to discuss the limitations of today’s WANs (mainly the MPLS-based layer 3 VPN service offerings used by the Fortune 500) in terms of cost, scale, service quality, security, visibility, and agility/flexibility. He then outlined the benefits of SD-WANs in all those areas, saying that enterprises will be able to “take back control from service Continue reading

Open Networking is the New Normal

The data center is in a constant state of transition. Once home to rows upon rows of proprietary and often siloed equipment based on closed-architecture designs, the modern-day data center is now filled with white box solutions serving various functions but working in a harmonious, converged manner.

Several key factors are driving the change to white box or open hardware: ROI, flexibility and customizability of design, ease of implementation, and the avoidance of vendor lock-in along with the high price tag it can bring. The rise of white box hardware started with servers and storage, and now a movement toward the adoption of open networking has gained quite a bit of traction.

The Open Compute Project (OCP) movement is driving the creation of bare metal switches, such as Open Switches, that are designed to be open and disaggregated. This white box model for switching enables users to deploy, monitor, and manage networking alongside servers and storage at a much lower price point than a traditional network switch.

Scaled Networking Simplified

With a white box switch, the OS layer is decoupled from the hardware itself, which allows users to independently select best-of-breed components and the networking software stack Continue reading