It seems as though the entire tech world is splitting up. HP announced it is splitting into HP, Inc., which keeps the Personal Systems Group, and HP Enterprise, which takes the rest of the enterprise business. Symantec is spinning Veritas off as a separate company as it focuses on security and leaves the backup and storage pieces to the new group. IBM completed the sale of its x86 server business to Lenovo. There are calls for EMC and Cisco to split as well. It’s like the entire tech world is breaking up right before the prom.
Acquisition Fever
The Great Tech Reaving is the logical conclusion to the acquisition rush that has swept the industry over the past few years. Companies have been amassing smaller companies like trading cards. Some of the acquisitions have been strategic: buying a company that focuses on a line of work similar to your own makes a lot of sense. For instance, EMC bought XtremIO to bolster its flash storage portfolio.
Other acquisitions look a bit strange. Cisco buying Flip Video. Yahoo buying Tumblr. There’s always talk around these left-field mergers. Is the CEO Continue reading
If you are a CloudFlare Pro or above customer you enjoy the protection of the CloudFlare WAF. If you use one of the common web platforms, such as WordPress, Drupal, Plone, WHMCS, or Joomla, then it's worth checking if the relevant CloudFlare WAF ruleset is enabled.
That's because CloudFlare pushes updates to these rules automatically when new vulnerabilities are found. If you enable the relevant ruleset for your technology then you'll be protected the moment new rules are published.
For example, here's a screenshot of the WAF Settings for a customer who uses WordPress (but doesn't use Joomla). If CloudFlare pushes rules to the WordPress set then they'll be protected automatically.
Enabling a ruleset is simple. Just click the ON/OFF button and make sure it's set to ON.
Here's a customer with the Drupal ruleset disabled. Clicking the ON/OFF button would enable that ruleset and provide protection from existing vulnerabilities and automatic protection if new rules are deployed.
For common problems we've rolled out protection across the board; Heartbleed and Shellshock protection, for example, were deployed automatically for everyone. For technology-specific updates, though, it's best to enable the appropriate ruleset in the WAF Settings.
Original content from Roger's CCIE Blog Tracking the journey towards getting the ultimate Cisco Certification. The Routing & Switching Lab Exam
You may have heard that you should not run OSPF over DMVPN, but do you actually know why? There is no technical reason why you cannot run OSPF over DMVPN; it simply does not scale very well. So why is that? OSPF is a link-state protocol, so each spoke […]
Post taken from CCIE Blog
Original post Should you run OSPF over DMVPN?
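The scaling point can be made with a back-of-the-envelope sketch (my own simplification, not from the original post): because OSPF is link-state, every spoke in the same area must carry state for every other spoke, so per-spoke state grows linearly with the size of the DMVPN cloud.

```python
# Toy model (not from the original post): link-state database size per spoke.
# In a link-state protocol every router in the area holds the full LSDB,
# so each spoke ends up tracking the hub plus every other spoke.
def lsdb_entries_per_spoke(n_spokes: int) -> int:
    return 1 + n_spokes  # one hub router LSA plus one LSA per spoke (simplified)

# With summarization, a distance-vector protocol could leave each spoke with a
# handful of routes regardless of cloud size; here state grows with every spoke:
assert lsdb_entries_per_spoke(10) == 11
assert lsdb_entries_per_spoke(2000) == 2001
```

The exact entry count depends on area design and LSA types; the linear growth is the point.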
In many companies—both large and small—getting staffing right is a challenge. Critical teams are always starved for resources, and the common “peanut butter” approach of spreading headcount evenly across teams rarely maps to an explicit strategy. In fact, over time, some teams grow disproportionately large, leaving others (system test, anyone?) struggling to keep pace.
But why is it that staffing is so difficult to get right?
In yesterday’s blog post, I referenced Eliyahu Goldratt’s seminal business book The Goal, and highlighted the Theory of Constraints. I’ll summarize the theory here (if you read yesterday, you can safely skip to the next section).
The Theory of Constraints is basically a management paradigm that promotes attention to bottleneck resources above all else. In any complex system that has multiple subsystems required to deliver something of value, there will be one or more bottleneck resources. A bottleneck resource is the limiting subsystem in the overall system. To increase the overall output of the system as a whole, you have to address the bottlenecks.
The book focuses on a manufacturing plant. Basically, there are different stations in the manufacturing process. Each makes a different part of the widget. To Continue reading
If you want to get a free copy of my SDN and OpenFlow – The Hype and the Harsh Reality book, download it now. The offer expires on October 20th.
You know it needs to be done, it could be easy… or it could get messy, and you’re sure that the world will be a better place when you’re finished.
That’s the dilemma that some of our enterprise customers have when grappling with “the cloud.”
We’ve noticed a distinct trend among customers that grew up outside the “cloud era”; they’ve been trying to bolt “cloud” onto their legacy IT blueprint and it has been a struggle. They expected to realize operational and capital efficiencies that approximate high scale Internet businesses. Unfortunately, they are missing by a long shot.
At some point along the way, these customers realize that they need to be willing to drive structural change. They need to create a “cloud blueprint” for their applications and IT infrastructure. In some cases, this means a transition to public/hosted infrastructure; in other cases, it means building new private infrastructure based on cloud principles. In many cases, it’s a mixture of both.
When private cloud is part of the answer, we’ve consistently found design patterns built on infrastructure platforms like VMware vSphere and OpenStack and big data platforms such as Hortonworks. Customers want to get these services operational quickly so they often stay with Continue reading
One of the questions I’ve encountered in talking to our customers has been “What environments are a good example of working on top of the Layer-3 Clos design?” Most engineers are familiar with the classic Layer-2 based Core/Distribution/Access/Edge model for building a data center. And while that has served us well in the older client-server, north-south traffic flow approaches and in smaller deployments, modern distributed applications stress the approach to its breaking point. Since L2 designs normally need to be built around pairs of devices, relying on individual platforms to carry 50% of your data center traffic presents a real risk at scale. On top of that, you need a long list of protocols, which can result in a brittle and operationally complex environment as you deploy tens of devices.
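To put a number on that risk, here is a toy calculation (my own illustration, not from the original post) comparing the capacity lost when one device fails in a two-switch core versus a wider Clos fabric:

```python
# Toy model: fraction of fabric capacity remaining after one device fails,
# assuming traffic is spread evenly across n equal-cost spine switches.
def capacity_after_failure(n_spines: int) -> float:
    return (n_spines - 1) / n_spines

# Classic L2 core built around a pair: losing one box halves your capacity.
assert capacity_after_failure(2) == 0.5

# A 16-spine L3 Clos fabric: losing one spine costs only 1/16 of capacity.
assert abs(capacity_after_failure(16) - 15 / 16) < 1e-12
```

Real fabrics add oversubscription and failure-domain subtleties, but the basic math is why wide, small-box designs degrade gracefully while device pairs do not.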
Hence the rise of the L3 Clos approach, which combines many small boxes, each carrying only a subset of your traffic, running industry-standard protocols with a long history of operational stability and ease of troubleshooting. And while the approach can be applied to many different problems, building a practical implementation is the best way to show Continue reading
Now that I’ve kept marketing/SEO happy, let’s dive right in. Two of the latest initiatives from the VMware marketing engine are “SDDC” (software-defined data center) and “hyper-converged.” Hype aside, these two concepts are fundamentally aligned with what hyper-scale operators have been doing for years. It boils down to having generic hardware configured for complex and varied roles by layering different software.
At Cumulus Networks, we help businesses build cost-effective networks by leveraging “white box” switches together with a Linux® operating system. We feel this is the crucial missing piece to the overall SDDC vision.
First, let’s back up and review a little history. VMware started as a hypervisor stack for abstracting compute resources from the underlying server (and before that, workstations… but I digress). It moved on to more advanced management of that newly disaggregated platform, with the rise of vCenter and everything that followed.
Then VMware turned their attention to the Continue reading
There has been an aspiration to replace Nova-net with Neutron for about four years now, but it hasn’t happened yet. The latest is that Neutron is being threatened with demotion back into “Incubation,” after promising to make itself production ready in each of the last four releases and then failing to follow through. All of the handful of production deployments of Neutron are in conjunction with Nicira/NSX-MH, which does all the heavy lifting.
The Neutron folks are optimistic that they will be production ready in Juno (the next release, Oct this year), but I’m betting on Kilo, the release early next year.
The post Nova-net is the networking component of Nova appeared first on Cumulus Networks Blog.
Man is a tool-using animal. Without tools he is nothing, with tools he is all. – Thomas Carlyle
The most satisfactory definition of man from the scientific point of view is probably Man the Tool-maker. – Kenneth Oakley
The Artist and the Scientist both agree that tools are an integral part of what it means to be human. Even human history is divided into ages based on the power and sophistication of the tools: Stone Age, Bronze Age, Iron Age and now, Information Age. Our capabilities are enormously magnified by the tools we have at hand.
A server admin has the tools to manage 100,000 servers fairly easily, while the network admin struggles to manage a small fraction of that number of network devices. In the modern data center, where scale, agility and flexibility are part of its DNA, this lack of netadmin tools is a major hurdle to operating the data center. Consequently, a big part of the transformation and turmoil in networking today is an attempt to remove this limitation.
Among the tools the server admin has at his or her disposal, automation is key. It is impossible to manually manage devices at the Continue reading
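As a trivial sketch of why automation changes the math (the template and naming scheme here are hypothetical, purely for illustration), rendering per-device configurations from a single template scales to thousands of devices as easily as to one:

```python
# Hypothetical config template; device names and addressing are made up.
TEMPLATE = """hostname {name}
interface lo0
 ip address {loopback}/32
"""

def render(name: str, loopback: str) -> str:
    """Render one device's configuration from the shared template."""
    return TEMPLATE.format(name=name, loopback=loopback)

# One loop covers four leaves or four thousand; the admin effort is the same.
configs = {f"leaf{i:03d}": render(f"leaf{i:03d}", f"10.0.0.{i}")
           for i in range(1, 5)}
assert "hostname leaf002" in configs["leaf002"]
```

Real-world tooling layers inventory, validation and deployment on top, but templated generation is the seed of the server admin’s advantage.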
Since the dawn of time people have skirted best practice and banged together networks, putting the proverbial square peg in the esoteric round hole. For example, new vendor XYZ’s solution has brought in new deployment requirements. It may seem easier to throw together a new firewall, a switch, maybe some additional routes, and of course Tom‘s favorite… NAT. But where does it stop!? As you continue to pile layer upon layer onto your uninspired network design, you will soon realize that your “beautiful network” has become an ugly duckling that you need help fixing.
That leads me to my first point. Complex systems are expensive, not only in CAPEX but in OPEX. When you design and build a network, you have to ensure that you are not building something no one else has dreamed up, or your problems will also be unique. And without the additional money to hire top-tier engineers, you could be short-staffed or, worse yet, facing the problem on your own. The more complex your network becomes, the more likely it is to fail. As I’m often quoted as saying, “The complexity required for robustness often goes Continue reading
Following the breakups of IBM and HP as they divest their low-profit divisions, and with EMC under some pressure to disband the Federation, the same question is often raised about Cisco: what could go?
The post If Cisco Could Be Split Up, What Could Go ? appeared first on EtherealMind.
This is part 16 of the Learning NSX series, in which I will show you how to configure VMware NSX to route to multiple external VLANs. This configuration will allow you to have logical routers that could be uplinked to any of the external VLANs, providing additional flexibility for consumers of NSX logical networks.
Naturally, this post builds on all the previous entries in this series, so I encourage you to visit the Learning NVP/NSX page for links to previous posts. Because I’ll specifically be discussing NSX gateways and routing, there are some posts that are more applicable than others; specifically, I strongly recommend reviewing part 6, part 9, part 14, and part 15. Additionally, I’ll assume you’re using VMware NSX with OpenStack, so reviewing part 11 and part 12 might also be helpful.
Ready? Let’s start with a very quick review.
You may recall from part 6 that the NSX gateway appliance is the piece of VMware NSX that handles traffic into or out of logical networks. As such, the NSX gateway appliance is something of a “three-legged” appliance:
In the first part of this two-part series, I talked about why it’s important to learn to write — and to learn to write effectively. But how do you become an effective writer? I started with the importance of reading, particularly difficult and regular reading across a broad array of topics. Is there anything else you can do to improve your writing skills? Yes — specifically, get yourself edited, and get some practice.
Hey — I’m a pretty good writer, why do I need to get myself edited? After all, I’ve written nine books, hundreds of articles, tens of research papers, and… But that’s just the point, isn’t it? I wrote several large papers (at least I considered them large at the time) while I was in the Air Force, but they never seemed to have the impact I thought they should have. Weren’t they well written? Weren’t they well organized? Well researched? As it turns out, no, not really. I started on my first white paper just after I’d started in the Cisco TAC, reading through the EIGRP code and writing a paper — for internal use only — based on what I could find. Done and I Continue reading
At Interop ’14 New York a few weeks ago, Ethan Banks collected four fellow CCIEs together for a panel discussion about whether we should be studying newer SDN technologies or pursuing the same old traditional certifications. I’ve been getting that kind of question for a while. This post summarizes a few points I took away from the other panelists at the show, with a promise to give some of my own thoughts in the post that follows.
We had a pretty good spread of competing ideas from the four panelists. I couldn’t sit there and furiously write down what the others were saying, for later blogging… but thankfully, there were a couple of professionals in the room! While Interop doesn’t normally post audio or video of the sessions, there have been a few trade press articles written about what was discussed in the session:
I came away with several ideas from the other panelists that either taught me something or made an existing opinion much stronger.
First, it seemed that there was general agreement that cloud, DevOps, and automation were the point. SDN, which was in the session title and Continue reading
For those of you into expanding your experience through reading, there is a foundational reference at the core of many MBA programs. The book, Eliyahu Goldratt’s The Goal, introduces a concept called the Theory of Constraints. Put simply, the Theory of Constraints is the premise that any system will tend to be limited by a very small number of constraints (or bottlenecks). By focusing primarily on the bottlenecks, you can remove limitations and increase system throughput.
The book uses this theory to talk through management paradigms as the main character works through a manufacturing problem. But the theory actually applies to all systems, making its application useful in more scenarios than management or manufacturing.
Before we get into networking applications, it is worth walking through some basics about the Theory of Constraints. Imagine a simple set of subsystems strung together in a larger system. Perhaps, for example, software development requires two development teams, a QA team, and a regression team before new code can be delivered.
If output relies on each of these subsystems, then the total output of the system as a whole is determined by the lowest-output subsystem. For instance, imagine that SW1 Continue reading
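The arithmetic behind that claim is easy to sketch (the team names and numbers below are invented for illustration):

```python
# Toy pipeline from the post: two dev teams, QA, and regression testing.
# Total system throughput is capped by the slowest subsystem.
stages = {"dev_team_a": 12, "dev_team_b": 9, "qa": 5, "regression": 8}  # units/week

bottleneck = min(stages, key=stages.get)
throughput = min(stages.values())
assert (bottleneck, throughput) == ("qa", 5)

# Speeding up a non-bottleneck team does not move total output at all:
stages["dev_team_b"] = 20
assert min(stages.values()) == 5
```

This is the whole Theory of Constraints in miniature: investment anywhere other than the bottleneck (QA here) buys you nothing until the bottleneck moves.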
Over at CircleID, Geoff Huston has a longish article on Title II regulation of the Internet, and the ideals of “net neutrality.” The reasoning is tight and strong — his conclusion a simple one: At its heart, the Internet access business really is a common carrier business. So my advice to the FCC is to […]
My “Was it bufferbloat?” blog post generated an unexpected amount of responses, most of them focusing on a side note saying “it looks like there really are service providers out there that are clueless enough to reorder packets within a TCP session”. Let’s walk through them.
Read more ...
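As a toy model (my own, not from the post) of why in-path reordering hurts TCP: a receiver emits a duplicate ACK for every out-of-order segment it sees, and three duplicate ACKs make the sender fast-retransmit a segment that was never actually lost.

```python
# Toy receiver: find the longest run of duplicate ACKs caused purely by
# segment reordering (no loss). Sequence numbers are simplified to 0, 1, 2, ...
def max_dup_acks(arrival_order):
    expected, dups, worst = 0, 0, 0
    for seq in arrival_order:
        if seq == expected:
            expected += 1      # in-order arrival: cumulative ACK advances
            dups = 0
        else:
            dups += 1          # out-of-order: duplicate ACK for `expected`
            worst = max(worst, dups)
    return worst

assert max_dup_acks([0, 1, 2, 3, 4]) == 0   # in order: no duplicate ACKs
assert max_dup_acks([0, 2, 3, 4, 1]) == 3   # reordering alone reaches the
                                            # 3-dup-ACK fast-retransmit threshold
```

The model ignores SACK and out-of-order buffering, but it captures why a path that reorders packets within a TCP session can trigger spurious retransmits and needless congestion-window reductions.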