While this is likely widely known, I figured it was worth documenting for anyone who might not have done it before. If you’ve looked at CoreOS at all, you know that in order to connect to a CoreOS server it needs to be configured with your machine’s SSH key. This is done through the cloud-config file, which we’ll cover in much greater detail in a later post. So let’s quickly walk through how to generate these keys on the three different operating systems I’m using for testing…
CentOS
You’ll need to have OpenSSH installed on the CentOS host you’re using in order for this to work. Check to see if it’s installed…
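Something like this does the trick (assuming the stock openssh packages on CentOS):

    rpm -qa | grep openssh
    # expect to see openssh, openssh-clients, and openssh-server listed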
Now let’s check to make sure that we don’t already have a key generated…
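A plain directory listing is enough for this check; for example:

    ls -la ~ | grep .ssh
    # no output means there’s no .ssh folder in your home directory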
There’s no .ssh folder listed in our home directory so let’s move on to generating the key.
Note: If there was a .ssh folder already present, there’s a good chance you already have a key generated. If you want to use this existing key, skip ahead to after we generate the new key.
Since there isn’t a .ssh folder present, let’s create a new key. Continue reading
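The rest of the walkthrough is trimmed above, but the generation step boils down to something like this (key type, size, and comment are just example choices):

    ssh-keygen -t rsa -b 4096 -C "you@example.com"
    # accept the default path (~/.ssh/id_rsa) and optionally set a passphrase
    cat ~/.ssh/id_rsa.pub
    # the public half is what goes into the CoreOS cloud-config file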
Outsourcing is complex, and there are lots of ways it can go wrong, or simply fail to deliver. I’ve put together a list of things I’ve seen go wrong with outsourcing arrangements. Of course it’s not exhaustive!
There are a few different types of outsourcing. It might mean procuring a commodity service – e.g. IaaS – or it might mean handing over your existing environment and staff to a third party. There’s also a whole range of levels in between, but the usual deal is: some part of your environment gets managed or delivered by someone else, according to the terms of a fixed agreement.
Here are a few things I’ve learnt to watch out for (NB: not all of these items apply to all types of agreements):
If your outsourcer is managing your software, the contract usually covers applying security patches and bug fixes. But what gets missed are larger upgrades – e.g. ESXi 4.1 to 5.x. Everything goes OK for a while…and then your version goes End of Support.
It then becomes a major drama to get the upgrades sorted out. For financial purposes, you may not be able to do major Continue reading
Facebook engineering recently introduced its next-generation data center network architecture, planned to be operational in the new Altoona facility: “Introducing data center fabric, the next-generation Facebook data center network.”
In this post we’ll look at the proposed design and break down some of the reasons why it was necessary, though perhaps still not ideal.
Within Facebook’s data center, the network is a critical component driving this social network’s ever-increasing appetite to connect people. The challenges unfold on two axes: the first is the explosive growth in the user population and its increasing demand and access, bolstered by modern mobile devices; the second relates to the exponential growth of machine-to-machine traffic required to aggregate and compose the information we all know and love. Due to the nature of Facebook applications, machine-to-machine communication is “several orders of magnitude larger than what goes out the Internet.”
The need to parallelize work becomes a daunting task in this environment, requiring an enormous communication fabric for internal processing.
the rate of our machine-to-machine traffic growth remains exponential, and the volume has been doubling at an interval of less Continue reading
A long-time student of INE, Neil Moore has done it again – last time becoming the world’s first 7x CCIE, and this time becoming the world’s first and only 8x CCIE. And no, he doesn’t work for Cisco.
As a side note, INE has been experiencing phenomenal growth, and tremendous passing rates for people who have been sitting our R&S, Data Center and Collaboration bootcamps. In fact, of just the bootcamps we’ve held this year, nearly all of our students have reported back to us a pass in the 3-4 weeks following their bootcamp experience. Now mind you, these folks come to us studied up and prepared for the bootcamp, but they all credit us as being the deciding factor in their pass.
We’re also adding new content all the time, including Python scripting, OpenStack and SDN topics such as OVS. Check out our Black Friday deals and grab an All Access Pass or sign up for a bootcamp and see what’s new!
Pica8 Says ‘Yes’ and Challenges the FUD
Up to this point, OpenFlow has mostly been deployed in research and higher-education environments. These early trials have shed some light on interesting use cases, what OpenFlow is good for, and of course, what OpenFlow might not be so good for.
This is important because OpenFlow and SDN adoption is only going to grow. It’s imperative that we understand these limitations – specifically, what’s real and what’s FUD.
One of these is scale.
If you’ve kicked the tires on OpenFlow, one question you may have heard is “How many flows does that switch support?” However, this question is only part of the story. It’s like asking only about a car’s top speed when you should be thinking about other things too – such as fuel efficiency and maintenance. So to figure out the right questions, we first need to go over a bit of background.
In its most basic terms, any network traffic, whether it’s Layer 2, Layer 3, or something else, is governed by a set of forwarding rules defined by a series of protocols. If it’s this MAC, do this. If it’s that IP, go Continue reading
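With OpenFlow, those rules become explicit flow entries pushed into the switch. As a rough illustration, here’s what two such rules look like using Open vSwitch’s ovs-ofctl (the bridge name and port numbers are made up):

    ovs-ofctl add-flow br0 "priority=100,dl_dst=aa:bb:cc:dd:ee:ff,actions=output:2"
    # if the destination MAC matches, forward out port 2
    ovs-ofctl add-flow br0 "priority=90,ip,nw_dst=10.0.0.0/24,actions=output:3"
    # if it's IPv4 traffic for 10.0.0.0/24, forward out port 3

Each rule like this consumes a flow-table entry – and the number of such entries a switch can hold is exactly what the “how many flows” question is counting.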
You know those times when you paste innocuous config to a router and it just freezes up on you? Even if you know you’ve done nothing wrong, it can be a few scary seconds until the router starts to respond again. While reading up on onePK I was trying to come up with a use case, and though I eventually thought of some other things that would actually be useful, the very first thing that came to mind was something to test just for fun.
Picture this: you ask a co-worker to log in to a router and shut down an interface that won’t be used anymore. Your colleague logs into the device and disables the interface, and the session hangs. Only it doesn’t just hang – it’s dead, and apparently your colleague can’t ping the device now. At this point it can be a good idea to ask your co-worker exactly what he changed.
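One old-school safety net for exactly this situation (standard practice, nothing to do with onePK) is to schedule a reload before making the change, then cancel it once you’ve confirmed you still have access:

    Router# reload in 5
    Router# configure terminal
    Router(config)# interface GigabitEthernet0/1
    Router(config-if)# shutdown
    Router(config-if)# end
    Router# reload cancel

If the change cuts you off, the router reboots to its saved configuration five minutes later and you can get back in.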
Continue reading
Last week we hosted the Open vSwitch 2014 Fall Conference, which was another great opportunity to demonstrate our continued investment in leading open source technologies. To get a sense of the energy and enthusiasm at the event, take a quick look at the video we captured with attendees.
I’ve been thinking about the key takeaways from everything I saw and everyone I spoke with.
First, there’s huge interest in Open vSwitch performance, both in terms of measurement and improvement. The talks from Rackspace and Noiro Networks/Cisco led me to believe that we’ve reached the point where Open vSwitch performance is good enough on hypervisors for most applications, and often faster than competing software solutions such as the Linux bridge.
Talks from Intel and from Luigi Rizzo at the University of Pisa demonstrated that we haven’t yet reached the limits of software forwarding performance: bypassing the kernel entirely, through DPDK and netmap respectively, yields further gains. Based on a conversation I had with Chris Wright from Red Hat, this work is helping the Linux kernel community look into reducing the kernel’s overhead, so that we can see improved performance without losing the functionality the kernel provides.
Johann Tönsing from Netronome Continue reading
The Packet Pushers team are once again packing their virtual underpants, this time heading to Spain for HP Discover in Barcelona with our “cloud studio”. Next week we will be enjoying some warm winter nerdiness on HP Networking products and strategy, looking closely at the ever-growing HP VAN strategy for SDN and also diving into the bread & […]
The post We Are At HP Discover Conference in Barcelona Next Week appeared first on Packet Pushers Podcast and was written by Greg Ferro.
Relying on a DMZ to protect your network and data is like putting money in a bank that depends on one guard and a single gate to secure its deposits. Imagine how tempting all those piles of money would be to those who had access — and how keen everyone else would be to obtain access.
But banks do not keep cash out on tables in the lobby; they stash it in security boxes inside vaults, behind locked doors, inside a building patrolled by a guard and secured by a gate. Likewise, network segmentation offers similar protection for an organization’s assets.
I recently received a note from a colleague pointing to a ZeroHedge piece (http://www.zerohedge.com/news/2014-11-21/not-so-fab-1-billion-valuation-15-million-year) that officially called the beginning of the bubble bursting, based on the untimely (or timely, depending on your perspective) demise of the startup Fab. I had never heard of Fab, but according to ZeroHedge, Fab “started out as a dating site for the gay community and then relaunched as a flash sale site for home decor – raised $150 million just over a year ago (at a $1 billion valuation), but as TechCrunch reports today, multiple sources have confirmed that Fab is in talks to sell to PCH International for $15 million in a half cash and half stock deal. Pets.com?”
It’s a fair question indeed – are we seeing the same pattern we saw in the last bubble (i.e. Dot-Com 1.0) being repeated? Certainly, crazy valuations of equally crazy or non-existent business models are a cause for concern, but more important than that are the fundamentals of what is driving the speculation in the first place. In Dot-Com 1.0 we saw simultaneous speculative investment across at least 3 major areas: internet backbone infrastructure, internet edge/access Continue reading
Leon Adato, Technical Product Marketing Manager with SolarWinds, is our guest blogger today, with a sponsored post in a four-part series on the topic of alerting. In the first part of this series, Leon explained how to answer the first of four (ok, really 5) questions that monitoring professionals are inevitably asked once they join […]
The post 4 Inevitable Questions When Joining a Monitoring Group, Pt. 2 appeared first on Packet Pushers Podcast and was written by Sponsored Blog Posts.
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
News reports show cyber attacks continue to outpace IT’s ability to protect critical data, but teams that have built systems to deliver accurate threat intelligence can often end an attack before damage is done. Threat intelligence comes from commercially available information, ongoing analysis of user behavior and native intelligence from within the organization.
In search of agility and low overhead, companies are putting as many applications as practical in the cloud. But the resulting hybrid IT environments, where certain applications remain on-premises for security or other reasons, can create data integration issues that drag down efficiency and hamper agility.
In fact, cloud integration is much more demanding than many people want to believe.
As an applications intelligence company that has built its customer-facing and internal operations primarily on cloud applications within a hybrid environment, AppDynamics has had considerable experience with cloud data integration. Here are the five essential data integration capabilities that any company serious about harnessing the cloud should have in its pocket:
Okay, finally, I’m going to answer the question. For some value of the word “answer,” anyway. I’ve spent three weeks thinking through the various questions you should be asking, along the way making three specific points:
Okay, so how do I actually decide?
First, ask: where do I want to go? Who do I want to be as a person, overall? This needs to be a “bigger life” question, not a narrow “how much money do I want to be making” question. One of the turning points in my life as an engineer was when Don S said to me one day, “When I’m gone, people aren’t going to remember me for writing a book. They are going to remember me as a father, friend, and Continue reading
I’ve come across this scenario on multiple occasions now: your company wants to set up a demo at a “customer’s” location. Your demo relies on its own router talking back to HQ to pull the data it needs. Unfortunately, the internet connection at the “demo” site sits behind a NAT. […]
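For context, the spoke side of a setup like this is typically a multipoint GRE tunnel registering with the hub over NHRP. A bare-bones sketch with made-up addresses (not the post’s exact config, and without the IPsec protection that would normally wrap it):

    interface Tunnel0
     ip address 10.0.0.2 255.255.255.0
     ip nhrp network-id 1
     ip nhrp nhs 10.0.0.1
     ip nhrp map 10.0.0.1 203.0.113.1
     ip nhrp map multicast 203.0.113.1
     tunnel source FastEthernet0/0
     tunnel mode gre multipoint
     tunnel key 1

The NAT in front of the spoke is what complicates things, and that’s what the full post walks through.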
The post Configure a DMVPN Spoke behind a Home router/modem appeared first on Packet Pushers Podcast and was written by Korey.
While the industry press deliberates the disaggregation of Arista and Cisco, and Juniper’s new CEO, Juniper launched vMX, a virtual version of its MX router, which is supposed to have up to 160 Gbps of throughput (as compared to the 10 Gbps offered by Vyatta 5600 and Cisco CSR). Can Juniper really deliver on that promise?
Read more ...

Somehow I missed this when it was announced, but the Juniper SRX-110H-VA is End of Life, and is no longer supported for new software releases.
The End of Life announcement is here, with extra detail in this PDF. The announcement was made Dec 10 2013, with a “Last software engineering support” date of Dec 20 2013.
This is now starting to take effect, with 12.1X47 not supported on this platform:
Note: Upgrading to Junos OS Release 12.1X47-D10 or later is not supported on the J Series devices or on the low-memory versions of the SRX100 and SRX200 lines. If you attempt to upgrade one of these devices to Junos OS 12.1X47-D10, installation will be aborted with the following error message:
ERROR: Unsupported platform <platform-name> for 12.1X47 and higher
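In practice, an upgrade attempt along these lines (the package file name and reported platform are illustrative) now fails up front:

    root@srx110> request system software add /var/tmp/junos-srxsme-12.1X47-D10.4-domestic.tgz
    ERROR: Unsupported platform srx110h for 12.1X47 and higher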
The replacement hardware is the SRX-110H2-VA, which has 2GB of RAM instead of 1GB. Otherwise it’s exactly the same, which seems like a missed opportunity to at least update to local 1Gb switching.
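If you’re not sure which variant you have, the chassis inventory will tell you (the serial number here is made up):

    root@srx> show chassis hardware | match Chassis
    Chassis                                AB1234567890      SRX110H-VA

The H2 models report SRX110H2-VA and can take 12.1X47; the original H models cannot.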
Michael Dale has a little more info here, along with tips for tricking a 240H into installing 12.1X47.
So I decided to see if I could work around this and trick JunOS into installing on my 240H, I Continue reading