Very, very funny quote in the Pew Research Report: How could people benefit from a gigabit network? One expert in this study, David Weinberger, a senior researcher at Harvard’s Berkman Center for Internet & Society, predicted, “There will be full, always-on, 360-degree environmental awareness, a semantic overlay on the real world, and full-presence massive open […]
The post Killer Apps in the Gigabit Age | Pew Research Center’s Internet & American Life Project appeared first on EtherealMind.
Before we get into the how, let’s talk about the why. According to the CIDR Report, the global IPv4 routing table sits at about 525,000 routes; it has doubled in size since mid-2008 and continues to grow at an accelerating rate. This momentum, which by my estimate started around 2006, will most likely never slow down. As network engineers, what are we to do? Sure, memory is as plentiful as we could ask for, but what of TCAM? On certain platforms, like the 6500/7600 with the Sup720, and even some of the ASR1ks, we have already surpassed the limits of what they can handle (~512k routes in the FIB). While it is possible to increase the TCAM available for routing information, there are other solutions that don’t involve replacing hardware just yet.
As far as I know, adjusting TCAM partitioning on the ASR1000 is not possible at this time.
Before I get too deep into this, I should clarify, as many of you (yes, I’m looking at you, Fry) are asking yourselves why an ISP is running BGP on a 6500… Many of my customers are small ISPs or data centers that have little to no Continue reading
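To get a rough sense of how much headroom a given box has left, you can pull the prefix count and compare it against the FIB ceiling. The snippet below is only a sketch, assuming netmiko is installed; the device details are placeholders, and the ~512k figure is the commonly quoted default IPv4 TCAM partition on a Sup720 rather than a number to rely on blindly.

```python
# Rough headroom check: compare the BGP prefix count against an assumed
# FIB/TCAM ceiling. Placeholder credentials; not production code.
import re
from netmiko import ConnectHandler

FIB_LIMIT = 512_000  # assumed default IPv4 FIB capacity on a Sup720-class box

device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.1",      # placeholder management address
    "username": "admin",
    "password": "changeme",
}

conn = ConnectHandler(**device)
output = conn.send_command("show ip bgp summary")
conn.disconnect()

match = re.search(r"(\d+)\s+network entries", output)
if match:
    prefixes = int(match.group(1))
    print(f"{prefixes} prefixes, {prefixes / FIB_LIMIT:.1%} of the assumed FIB limit")
else:
    print("Could not find a prefix count in the output")
```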
I’ve been focusing lately on closing the gap between traditional automation teams and network engineering. This week I was fortunate enough to attend the DevOps 4 Networks event, and though I’d like to save most of my thoughts for a post dedicated to the event, I will say I was super pleased to spend the time with the legends of this industry. There are a lot of bright people looking at this space right now, and I am really enjoying the community that is emerging.
I’ve heard plenty of excuses for NOT automating network tasks. These range from “the network is too crucial, automation too risky” to “automating the network means I, as a network engineer, will be put out of a job”.
To address the former, check out Ivan Pepelnjak’s podcast with Jeremy Schulman of Schprokits, where they discuss blast radius (regarding network automation).
I’d like to talk about that second excuse for a little bit, because I think there’s an important point to consider.
A few years back, I was working for a small reseller helping small companies consolidate their old physical servers into a cheap cluster of virtual hosts. For every sizing discussion that Continue reading
How does the internet work - We know what is networking
…There’s a nice little PDF to get you through. HP is aware that most networking engineers start their learning process in the Cisco Networking Academy. It is a normal course of events if you want to learn networking. Cisco has the very best study materials and a carefully developed syllabus that is both high quality […]
When a Cisco guy gets thrown in to do something with HP networking gear
I was listening to the Packet Pushers show #203 – an interesting high-level discussion of policies (if you happen to be interested in those things) – and unavoidably someone had to mention how networking is all broken because different devices implement the same functionality in different ways and use different CLI/API syntax. Read more ...
For the last week we've been tracking rumors about a new vulnerability in SSL. This specific vulnerability, which was just announced, targets SSLv3. It allows an attacker to add padding to a request in order to calculate the plaintext of traffic encrypted with the SSLv3 protocol. Effectively, this allows an attacker to compromise the encryption when using the SSLv3 protocol. Full details have been published by Google in a paper which dubs the bug POODLE (PDF).
Generally, modern browsers will default to a more modern encryption protocol (e.g., TLSv1.2). However, it's possible for an attacker to simulate conditions in many browsers that will cause them to fall back to SSLv3. The risk from this vulnerability is that if an attacker could force a downgrade to SSLv3 then any traffic exchanged over an encrypted connection using that protocol could be intercepted and read.
In response, CloudFlare has disabled SSLv3 across our network by default for all customers. This will have an impact on some older browsers, resulting in an SSL connection error. The biggest impact is Internet Explorer 6 running on Windows XP or older. To quantify this, we've been tracking SSLv3 usage.
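If you want to verify your own endpoints, a few lines of Python will show which protocol a client actually negotiates, and (where the local OpenSSL build still supports SSLv3) whether a server will still accept an SSLv3 handshake. The hostname here is a placeholder, and the sketch only checks protocol negotiation, not the padding attack itself.

```python
# Check the negotiated TLS version for a host, and optionally probe whether
# SSLv3 is still accepted. PROTOCOL_SSLv3 is only present when the local
# OpenSSL build supports it, so that probe is guarded.
import socket
import ssl

def negotiated_protocol(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()          # e.g. 'TLSv1.2'

def accepts_sslv3(host, port=443):
    if not hasattr(ssl, "PROTOCOL_SSLv3"):
        return None                       # local OpenSSL has SSLv3 compiled out
    ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv3)
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock):
                return True
    except (ssl.SSLError, OSError):
        return False

if __name__ == "__main__":
    host = "www.example.com"              # placeholder
    print(host, negotiated_protocol(host), "SSLv3 accepted:", accepts_sslv3(host))
```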
I use the term "invariant" quite regularly when designing networks. It sounds fancy.
The post Network Dictionary – Invariant appeared first on EtherealMind.
Some people are corporate survivors, sticking with one company for decades. Some people move around when it suits, while others would like to move but are fearful of change. Here are a few things I’ve learnt about adapting to new work environments. It’s not that scary.
We’ve all seen the people who seem to survive in a corporate environment. They seem to know everyone, and almost everything about the business. Return to a company after 10 years, and they’re still there. Somehow they survive, through mergers, acquisitions, and round after round of re-organisation. But often they seem to be doing more or less the same job for years, with little change.
There are four possible reasons for staying at a job for a long time:
What drives OpenStack’s popularity? Being open source, it puts cloud builders in charge of their own destiny, whether they choose to work with a partner or deploy it themselves. Because it is Linux-based, it is highly amenable to automation, whether you’re building out your network or running it in production. At build time, it’s great for provisioning, installing and configuring the physical resources. In production, it’s just as effective, since provisioning tenants, users, VMs, virtual networks and storage is done via self-service Web interfaces or automatable APIs. Finally, it’s always been designed to run well on commodity servers, avoiding reliance on proprietary vendor features.
Cumulus Linux fits naturally into an OpenStack cloud, because it shares a similar design and philosophy. Built on open source, Cumulus Linux is Linux, allowing common management, monitoring and configuration on both servers and switches. The same automation and provisioning tools that you commonly use for OpenStack servers can also be used unmodified on Cumulus Linux switches, giving a single Continue reading
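As a small illustration of that API-driven provisioning, here is a sketch using the openstacksdk library to create a tenant network and subnet; the cloud name, network name and CIDR are made-up values, and your clouds.yaml would supply the real credentials.

```python
# Minimal example of provisioning a tenant network through the OpenStack API
# using openstacksdk. 'mycloud' must exist in clouds.yaml; names and CIDR are
# illustrative only.
import openstack

conn = openstack.connect(cloud="mycloud")

net = conn.network.create_network(name="web-tier")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="web-tier-v4",
    ip_version=4,
    cidr="10.10.10.0/24",
)

print(f"Created network {net.id} with subnet {subnet.cidr}")
```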
It’s been far too long since the last MindshaRE post, so I decided to share a technique I’ve been playing around with to pull C2 and other configuration information out of malware that does not store all of its configuration information in a set structure or in the resource section (for a nice set of publicly available decoders check out KevTheHermit’s RATDecoders repository on GitHub). Being able to statically extract this information becomes important in the event that the malware does not run properly in your sandbox, the C2s are down or you don’t have the time / sandbox bandwidth to manually run and extract the information from network indicators.
To find C2 info, one could always just extract all hostname-/IP-/URI-/URL-like elements via string regex matching, but it’s entirely possible to end up with false positives or, in some cases, with multiple hostname and URI combinations where the information gets mismatched. In addition to that issue, there are known families of malware that include benign or junk hostnames in their disassembly that may never get referenced, or are only referenced to make false phone-homes. Manually locating references and then disassembling using a disassembler (in my case, Capstone Engine) can help Continue reading
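The rough shape of that approach looks something like the sketch below: regex out hostname-looking strings, then use Capstone to keep only the ones whose virtual addresses actually appear as instruction operands. Everything here (the image base, 32-bit x86, the naive operand match) is an assumption for the sake of illustration; a real decoder would parse the PE properly, e.g. with pefile.

```python
# Sketch: find hostname-like strings, then use Capstone to check which of
# their virtual addresses are referenced by code. Assumes 32-bit x86 and a
# caller-supplied image base; not a complete decoder.
import re
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

HOST_RE = re.compile(rb"[a-z0-9][a-z0-9.-]+\.(?:com|net|org|info|biz)", re.I)

def candidate_strings(data, base_va):
    """Map virtual address -> hostname-looking string found in raw data."""
    return {base_va + m.start(): m.group().decode() for m in HOST_RE.finditer(data)}

def referenced_candidates(code, code_va, candidates):
    """Keep candidates whose address shows up in an instruction's operands."""
    md = Cs(CS_ARCH_X86, CS_MODE_32)
    hits = set()
    for insn in md.disasm(code, code_va):
        for va, host in candidates.items():
            if format(va, "#x") in insn.op_str:   # naive immediate match
                hits.add(host)
    return hits
```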
The Current State of SDN Protocols
In last week’s blog post I started outlining the various standards needed to make SDN a reality. Here is more detail about the relevant protocols and the IETF’s progress on each one.
OpenFlow has emerged as a Layer 2 software defined networking (SDN) southbound protocol. Similarly, Path Computation Element Protocol (PCEP), BGP Link State Distribution (BGP-LS), and NetConf/YANG are becoming the de-facto SDN southbound protocols for Layer 3. The problem is that these protocols are stuck in various draft forms that are not interoperable, which limits the industry’s SDN progress.
Path Computation Element Protocol (PCEP)
PCEP is used for communicating Label Switched Paths (LSPs) between a Path Computation Client (PCC) and a Path Computation Element (PCE). PCEP has been in use since 2006. The stateful [draft-ietf-pce-stateful-pce] and PCE-initiated LSP [draft-ietf-pce-pce-initiated-lsp] extensions were added more recently and enable PCEP use for SDN deployments. The IETF drafts for both extensions have not yet advanced to “Proposed Standard” status after more than two years.
Because the drafts went through many significant revisions, vendors are struggling to keep up with the Continue reading
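For what it’s worth, the wire format itself is not the hard part. The PCEP common header from RFC 5440 is just four bytes (a 3-bit version, 5 flag bits, a message type and a 16-bit length). The sketch below packs and unpacks it; the stateful and PCE-initiated message-type codepoints (PCRpt, PCUpd, PCInitiate) are taken from the drafts, so treat those values as provisional.

```python
# Pack/unpack the 4-byte PCEP common header (RFC 5440). Message types 10-12
# come from the stateful and PCE-initiated drafts and may change.
import struct

MSG_TYPES = {1: "Open", 2: "Keepalive", 3: "PCReq", 4: "PCRep",
             5: "PCNtf", 6: "PCErr", 7: "Close",
             10: "PCRpt", 11: "PCUpd", 12: "PCInitiate"}

def pack_common_header(msg_type, length, version=1, flags=0):
    return struct.pack("!BBH", (version << 5) | flags, msg_type, length)

def unpack_common_header(data):
    first, msg_type, length = struct.unpack("!BBH", data[:4])
    return first >> 5, MSG_TYPES.get(msg_type, "unknown"), length

# A bare Keepalive is just the 4-byte header.
print(unpack_common_header(pack_common_header(2, 4)))   # -> (1, 'Keepalive', 4)
```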
It seems as though the entire tech world is splitting up. HP announced they are splitting into HP Inc., which takes the Personal Systems Group, and HP Enterprise, which keeps the rest of the enterprise business. Symantec is spinning Veritas out into a separate company as it focuses on security and leaves the backup and storage pieces to the new group. IBM completed the sale of their x86 server business to Lenovo. There are calls for EMC and Cisco to split as well. It’s like the entire tech world is breaking up right before the prom.
Acquisition Fever
The Great Tech Reaving is a logical conclusion to the acquisition rush that has been going on throughout the industry for the past few years. Companies have been amassing smaller companies like trading cards. Some of the acquisitions have been strategic. Buying a company that focuses on a line of work similar to the one you are working on makes a lot of sense. For instance, EMC buying XtremIO to help bolster flash storage.
Other acquisitions look a bit strange. Cisco buying Flip Video. Yahoo buying Tumblr. There’s always talk around these left field mergers. Is the CEO Continue reading
If you are a CloudFlare Pro or above customer you enjoy the protection of the CloudFlare WAF. If you use one of the common web platforms, such as WordPress, Drupal, Plone, WHMCS, or Joomla, then it's worth checking if the relevant CloudFlare WAF ruleset is enabled.
That's because CloudFlare pushes updates to these rules automatically when new vulnerabilities are found. If you enable the relevant ruleset for your technology then you'll be protected the moment new rules are published.
For example, here's a screenshot of the WAF Settings for a customer who uses WordPress (but doesn't use Joomla). If CloudFlare pushes rules to the WordPress set then they'll be protected automatically.
Enabling a ruleset is simple. Just click the ON/OFF button and make sure it's set to ON.
Here's a customer with the Drupal ruleset disabled. Clicking the ON/OFF button would enable that ruleset and provide protection from existing vulnerabilities and automatic protection if new rules are deployed.
For common problems we've rolled out protection across the board. For example, we rolled out Heartbleed and Shellshock protection automatically, but for technology-specific updates it's best to enable the appropriate ruleset in the WAF Settings.
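The same on/off switch is also exposed through CloudFlare's API if you prefer to script it. The sketch below targets the legacy WAF rule-group endpoints; the zone, package and group IDs plus the credentials are placeholders, and the exact paths should be confirmed against the current API documentation.

```python
# Rough sketch: turn a WAF rule group on via the CloudFlare v4 API (legacy
# WAF endpoints). All IDs and credentials below are placeholders.
import requests

API = "https://api.cloudflare.com/client/v4"
HEADERS = {
    "X-Auth-Email": "you@example.com",   # placeholder credentials
    "X-Auth-Key": "REDACTED",
    "Content-Type": "application/json",
}

zone_id = "YOUR_ZONE_ID"
package_id = "YOUR_WAF_PACKAGE_ID"       # e.g. the WordPress rule package
group_id = "YOUR_RULE_GROUP_ID"

# The API equivalent of clicking the dashboard's ON/OFF button.
resp = requests.patch(
    f"{API}/zones/{zone_id}/firewall/waf/packages/{package_id}/groups/{group_id}",
    headers=HEADERS,
    json={"mode": "on"},
)
print(resp.json())
```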
Original content from Roger's CCIE Blog, tracking the journey towards getting the ultimate Cisco certification: the Routing & Switching lab exam.
You may have heard that you should not run OSPF over DMVPN, but do you actually know why? There is no technical reason why you cannot run OSPF over DMVPN, but it does not scale very well. Why is that? The reason is that OSPF is a link-state protocol, so each spoke […]
Post taken from CCIE Blog. Original post: Should you run OSPF over DMVPN?