Wrapping up the Debate

What do you want to be when you grow up? Can you picture it? Close your eyes. Now give your mental self a super-hero kind of outfit. What’s emblazoned on your shirt? What job roles do you think you’d like? What technology do you think you’d like to work with?

In the past, most networkers put some cert letters or a logo on that mental super-hero selfie. The changes in the networking industry, however, mean that we need to pay more attention to building that future self-image through better professional development planning. Those plans help us reach the ideal image of where we want to be in our careers – and how we go about planning our own development has to change along with the rapid changes in the networking industry.

Wrapping the Series

This really will be the last post in this series, all of which relate in some way to our Interop debate about traditional certs vs. SDN skills development. Here’s a list of the other posts in the series:

Drupal 7 SA-CORE-2014-005 SQL Injection Protection

Yesterday the Drupal Security Team released a critical security patch for Drupal 7 that fixes a very serious SQL injection vulnerability. At the same time we pushed an update to our Drupal WAF rules to mitigate this problem. Any customer using the WAF and with the Drupal ruleset enabled will have received automatic protection.

Rule D0002 provides protection against this vulnerability. If you do not have that ruleset enabled and are using Drupal, clicking the ON button next to CloudFlare Drupal in the WAF Settings will enable protection immediately.

CloudFlare WAF protection can help mitigate vulnerabilities like this, but it is vital that Drupal 7 users upgrade to the safe version of Drupal immediately.
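SA-CORE-2014-005 involved unsafe expansion of array arguments into SQL placeholders in Drupal's database abstraction layer. The sketch below is not Drupal's code; it only illustrates the safe pattern for that class of problem, using Python and SQLite: generate one bound placeholder per value, so the values themselves never enter the SQL string.

```python
import sqlite3

# Illustrative sketch only -- NOT Drupal's code. The safe way to handle a
# variable-length IN (...) clause is one bound placeholder per value; the
# values are passed separately and never concatenated into the SQL text.

def select_users_by_ids(conn, ids):
    placeholders = ", ".join("?" for _ in ids)
    sql = f"SELECT name FROM users WHERE id IN ({placeholders}) ORDER BY id"
    return [row[0] for row in conn.execute(sql, list(ids))]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob"), (3, "eve")])

print(select_users_by_ids(conn, [1, 3]))  # ['alice', 'eve']
```

The vulnerable variant of this pattern interpolates attacker-controlled keys or values directly into the SQL string, which is exactly what a WAF rule like D0002 tries to catch on the wire.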

The network won’t fit in your head anymore

Triggered by a discussion with a customer yesterday, it occurred to me (again?) that network engineers are creatures of habit and control. We have strong beliefs about how networks should be architected, designed, and built. We have done it this way for a long time and we understand it well. We have tweaked our methods, our tools, and our configuration templates. We understand our networks inside out. We have a very clear mental view of how they behave, how packets get forwarded, and how they should be forwarded. It’s comfort, it’s habit; we feel (mostly) in control of the network because we have a clear model in our heads.

I don’t believe this is a network engineering trait per se. Software engineers want to understand algorithms inside out; they want to understand the data modeling: types, structures, and relationships.

Uncomfortable

Many of us know the feeling. Something new comes around and it’s hard to wrap your head around it. It challenges the status quo, it changes how we do things, it changes what we (think we) know. When we are given responsibility for something new, there is a desire to understand “it” inside out, as a mechanism for being able to control “it”.

I Continue reading

Automation – Is the cart before the horse?

Over the last year I’ve had the opportunity to hear about lots of new and exciting products in the network and virtualization world.  The one clear takeaway from all of these meetings has been that the vendors are putting a lot of their focus into ensuring their products can be automated.  While I agree that any new product on the market needs to have a robust interface, I’m also somewhat shocked at the way many vendors are approaching this.  Before I go further, let me clarify two points.  First, when I say ‘interface’ I’m purposefully being generic.  An interface can be a user interface, a REST interface, a Python interface, etc.  Basically, it’s any means by which I, or something else, can interact with the product.  Secondly, I’ll be the first person to tell you that any new product I look at should have a usable REST API.  Why do I want REST?  Simple: because I know that’s something most automation tools and orchestrators can consume.
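The point about REST being easy to consume can be shown with a few lines of standard-library Python. The hostname `device.example.net` and the `/api/v1/interfaces` path below are invented for illustration, not any real vendor's API; the sketch only builds the request rather than sending it, since no such device exists.

```python
import urllib.request

# Hypothetical sketch -- the host and API path are invented, not a real
# vendor API. The point is only that a REST interface is trivially
# consumable from any language or automation tool.

def build_interface_request(host, token):
    """Build (but don't send) a GET for a device's interface table."""
    return urllib.request.Request(
        url=f"https://{host}/api/v1/interfaces",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/json"},
        method="GET",
    )

req = build_interface_request("device.example.net", "s3cr3t")
print(req.full_url)      # https://device.example.net/api/v1/interfaces
print(req.get_method())  # GET
```

Because the interaction is just HTTP plus JSON, the same call works identically from an orchestrator, a CI job, or a one-off script, which is exactly why automation tooling gravitates toward REST.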

So what’s driving this?  Why are we all of a sudden consumed with the need to automate Continue reading

IPv6 to IPv4 basic setup

Lab goal

Configure Alteon to serve IPv6 clients. The servers should use IPv4.

The IPv6 VIP should be fc00:85::10.



Setup


The load balancer is Radware's Alteon VA, version 29.5.1.0.

The initial Alteon VA configuration can be found here.

Below is the IPv4 real server configuration that we will use as a base config.


/c/slb/real 1
 ena
 ipver v4
 rip 10.136.85.1
/c/slb/real 2
 ena
 ipver v4
 rip 10.136.85.2
/c/slb/real 3
 ena
 ipver v4
 rip 10.136.85.3
/c/slb/group 10
 ipver v4
 add 1
 add 2
 add 3

Alteon configuration

All we need to do is create a new virt/VIP and assign it an IPv6 address.



/c/slb/virt v6_85_10
 ena
 ipver v6
 vip fc00:85:0:0:0:0:0:10
/c/slb/virt v6_85_10/service 80 http
 group 10
 rport 80
 dbind forceproxy
/c/slb/virt v6_85_10/service 80 http/pip
 mode address
 addr v4 10.136.85.200 255.255.255.255 persist disable

Notice that we need the pip, which is a Proxy IP, a.k.a. SNAT. Since we are translating from IPv6 to IPv4, we need Alteon to act as a proxy, and for that it needs an IPv4 address to communicate with the real servers.

Test
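A typical check (illustrative only, from an IPv6-capable client that can reach the VIP) would be a plain HTTP request to the IPv6 address:

```
# -g stops curl from globbing the brackets around the literal IPv6 address
curl -g "http://[fc00:85::10]/"
```

A response from one of the 10.136.85.x real servers confirms the IPv6-to-IPv4 translation is working.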


Summary

That was really simple, wasn't it? Just change the virt/VIP to be IPv6 and we have an IPv6-to-IPv4 gateway.

Killer Apps in the Gigabit Age | Pew Research Center’s Internet & American Life Project


Very, very funny quote in the Pew Research Report: How could people benefit from a gigabit network? One expert in this study, David Weinberger, a senior researcher at Harvard’s Berkman Center for Internet & Society, predicted, “There will be full, always-on, 360-degree environmental awareness, a semantic overlay on the real world, and full-presence massive open […]

The post Killer Apps in the Gigabit Age | Pew Research Center’s Internet & American Life Project appeared first on EtherealMind.

ECDSA and DNSSEC

Yes, that's a cryptic topic, even for an article that addresses matters of the use of cryptographic algorithms, so congratulations for getting even this far! This is a report of an experiment conducted in September and October 2014 by the authors to measure the extent to which deployed DNSSEC-validating resolvers fully support the use of the Elliptic Curve Digital Signature Algorithm (ECDSA) with curve P-256.

AS-Path Filtering

Before we get into the how, let’s talk about the why. According to the CIDR Report, the global IPv4 routing table sits at about 525,000 routes; it has doubled in size since mid-2008 and continues to press upwards at an accelerating rate. This momentum, which by my estimate started around 2006, will most likely never slow down. As network engineers, what are we to do? Sure, memory is as plentiful as we could ask for, but what of TCAM? On certain platforms, like the 7600/6500 with the Sup720 and even some of the ASR1ks, we have already surpassed the limits of what they can handle (~512k routes in the FIB). While it is possible to increase the TCAM available for routing information, there are other solutions that don’t involve replacing hardware just yet.

As far as I know, adjusting TCAM partitioning on the ASR1000 is not possible at this time.

Before I get too deep into this, I should clarify as many of you (yes, I’m looking at you Fry) are asking yourselves why is an ISP running BGP on a 6500… Many of my customers are small ISPs or data centers that have little to no Continue reading
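The technique the post describes can be sketched in IOS-style configuration. The ASNs and neighbor address below are invented for illustration; the idea is to accept only routes originated by the upstream (here AS 64500) or at most one AS-hop behind it, filtering everything else to shrink the table:

```
! Illustrative IOS-style config -- ASNs and addresses are made up.
! Permit paths of exactly "64500" or "64500 <one more AS>".
ip as-path access-list 10 permit ^64500$
ip as-path access-list 10 permit ^64500_[0-9]+$
!
router bgp 64496
 neighbor 192.0.2.1 remote-as 64500
 neighbor 192.0.2.1 filter-list 10 in
```

Routes dropped by the filter are typically covered by a default route toward the upstream, trading full-table visibility for a FIB that fits in constrained TCAM.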

Why Network Automation Won’t Kill Your Job

I’ve been focusing lately on shortening the gap between traditional automation teams and network engineering. This week I was fortunate enough to attend the DevOps 4 Networks event, and though I’d like to save most of my thoughts for a post dedicated to the event, I will say I was super pleased to spend the time with the legends of this industry. There are a lot of bright people looking at this space right now, and I am really enjoying the community that is emerging.

I’ve heard plenty of excuses for NOT automating network tasks. These range from “the network is too crucial, automation too risky” to “automating the network means I, as a network engineer, will be put out of a job”.

To address the former, check out Ivan Pepelnjak’s podcast with Jeremy Schulman of Schprokits, where they discuss blast radius (regarding network automation).

I’d like to talk about that second excuse for a little bit, because I think there’s an important point to consider.

 

A Recent Example

A few years back, I was working for a small reseller helping small companies consolidate their old physical servers into a cheap cluster of virtual hosts. For every sizing discussion that Continue reading

When they throw a Cisco guy to do something with HP networking gear

…There’s a nice little PDF to get you through. HP is aware that most networking engineers start their learning process in the Cisco Networking Academy. It is a normal course of events if you want to learn networking. Cisco has the very best study materials and the best, carefully developed syllabus that is both high quality […]

Certified Application to Network Isomorphism Engineer, anyone?

There has been a recurrent debate over the last few years about the future of CCIEs, or more broadly of network engineers as we know (and love) them today. While calls for the “death of the CCIE” are certain to grab eyeballs, as always the truth is probably more nuanced and tricky to predict. Change is certainly coming, but it is important to understand the true value of the present-day network engineer and how that maps onto the future we expect.

Why would network engineers die?

Apart from a deadly virus outbreak transmitted by TCPDump, or for the true preppers out there – the distant alien race whose network engineers have all died out and who are coming to claim ours – the vast disappearance of all network engineers is most likely hyperbole. Yet it is probably fair to say that specific skills that network engineers have long used to compare or present their own value, attributes like the CCIE certification, are diminishing in the value they present to the market. The reason is pretty simple – while a CCIE (or JNCIE, etc) certification implies a thorough knowledge of overall network engineering theories and concepts and a detailed understanding Continue reading

Some POODLE notes

Heartbleed and Shellshock allowed attacks against servers (meaning websites and such). POODLE allows attacking clients (your web browser and such). If Heartbleed/Shellshock merited a 10, then this attack is only around a 5.

It requires a MitM (man-in-the-middle) position to exploit. In other words, the hacker needs to be able to tap into the wires between you and the website you are browsing, which is difficult to do. This means you are probably safe from hackers at home, because hackers can't tap backbone links. But since the NSA can tap into such links, it's probably easy for them. However, when using the local Starbucks or other unencrypted WiFi, you are in grave danger of this attack from hackers sitting at the table next to you.

It requires, in almost all cases, JavaScript running in the browser. That's because the attacker needs to MitM thousands of nearly identical connections that are allowed to fail. There are possibly rare cases where such connections happen on their own (like automated control systems), but JavaScript is nearly a requirement. That means the Twitter app on your iPhone is likely safe, as the attacker can't run JavaScript in the app.

It doesn't hack computers; it cracks encryption. It reveals previously encrypted data.

Continue reading

SSLv3 Support Disabled By Default Due to POODLE Vulnerability

SSLv3 Vulnerability

For the last week we've been tracking rumors about a new vulnerability in SSL. This specific vulnerability, which was just announced, targets SSLv3. The vulnerability allows an attacker to add padding to a request in order to recover the plaintext of a connection encrypted using the SSLv3 protocol. Effectively, this allows an attacker to compromise the encryption when the SSLv3 protocol is in use. Full details have been published by Google in a paper which dubs the bug POODLE (PDF).

Generally, modern browsers will default to a more modern encryption protocol (e.g., TLSv1.2). However, it's possible for an attacker to simulate conditions in many browsers that will cause them to fall back to SSLv3. The risk from this vulnerability is that if an attacker could force a downgrade to SSLv3 then any traffic exchanged over an encrypted connection using that protocol could be intercepted and read.

In response, CloudFlare has disabled SSLv3 across our network by default for all customers. This will have an impact on some older browsers, resulting in an SSL connection error. The biggest impact is on Internet Explorer 6 running on Windows XP or older. To quantify this, we've been tracking SSLv3 usage.

SSLv3 Continue reading
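CloudFlare applied this mitigation at its edge; readers terminating TLS on their own servers can apply the equivalent themselves. A sketch for nginx (assuming a standard nginx build against OpenSSL):

```
# In the server block terminating TLS: list only the protocols to accept,
# which drops SSLv3 and removes the downgrade target entirely.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
```

With SSLv3 off the list, a POODLE-style forced downgrade simply fails the handshake instead of falling back to the weak protocol.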

vPC order of operations

Cisco Nexus switches can be very temperamental or capricious (pick the one you prefer), and the vPC technology is no exception. There is a certain order in which to configure vPC, and we will see it in this blog post. The following topology will be used:     Enabling the feature Obviously we need to activate the […]
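The excerpt stresses that vPC must be brought up in a specific order. A minimal NX-OS sketch of that order (interface numbers, domain ID, and addresses are invented for illustration):

```
feature vpc                          ! 1. enable the feature first
!
vpc domain 10                        ! 2. create the vPC domain
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
!
interface port-channel 1             ! 3. then bring up the peer-link
  switchport mode trunk
  vpc peer-link
!
interface port-channel 20            ! 4. finally the member port-channels
  switchport mode trunk
  vpc 20
```

Skipping a step (for example, defining a vPC member before the peer-keepalive and peer-link are up) is a classic way to end up with suspended ports, which is why the order matters.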

Standards are a farce

Today (October 14) is "World Standards Day", celebrating the founding of the ISO, the "International Organization for Standardization". It's a good time to point out that people are wrong about standards.

You are reading this blog post via "Internet standards". It's important to note that throughout its early existence, the Internet was officially not a standard. Through the 1980s, the ISO was busy standardizing a competing set of internetworking standards.

What made the Internet different is that its standards were de facto, not de jure. In other words, the Internet standards body, the IETF, documented things that worked, not how they should work. Whenever somebody came up with a new protocol to replace an old one, and people started using it, the IETF would declare it "something people are using". Protocols were documented so that others could interoperate with them if they wanted, but there was no claim that they should. Internet evolution in those times was driven by rogue individualism -- people rushed to invent new things without waiting for the standards body to catch up.

The ISO's approach was different. Instead of individualism, it was based on "design by committee", where committees were Continue reading