This is part 18 of the Learning NSX blog series, in which I talk about using layer 3 (L3) routing with VMware NSX but without network address translation (NAT). This post describes a configuration that offers yet another connectivity option for OpenStack cloud administrators and operators.
In part 6, I showed you how to add a gateway appliance to your NSX installation. Part 9 leveraged the gateway appliances to create an L3 gateway service, which—as I explained in part 15—provides the functionality for logical routers in OpenStack. (Logical routing was covered in part 14.) Part 16 expanded the routing configuration to support multiple external networks. This post expands the options again by showing you how to do logical routing without NAT. Of course, it would probably be helpful to read the entire series; links to all posts can be found on the Learning NVP/NSX page.
As I mentioned, so far you’ve seen three different external connectivity options:
Routing (layer 3 connectivity) to a single external network
Routing (layer 3 connectivity) to multiple external networks using VLANs
Bridging (layer 2 connectivity) between a logical network and a physical broadcast domain
Both of the routed Continue reading
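To give a sense of what the no-NAT option looks like from the OpenStack side, here is a hypothetical sketch, assuming an NSX-backed Neutron deployment, the python-neutronclient library, and placeholder credentials and UUIDs, of creating a logical router whose external gateway has source NAT disabled:

```python
# Hypothetical sketch: a Neutron logical router that routes to an external
# network without NAT, done by disabling SNAT on the router's gateway.
# Credentials, the auth URL, and the UUIDs are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# Create the logical router (realized as an NSX logical router).
router = neutron.create_router(
    {'router': {'name': 'no-nat-router'}})['router']

# Uplink the router to the external network with SNAT turned off, so
# tenant subnets are routed to the outside rather than translated.
neutron.add_gateway_router(router['id'], {
    'network_id': 'EXTERNAL-NET-UUID',
    'enable_snat': False,
})

# Attach a tenant subnet to the router as usual.
neutron.add_interface_router(router['id'],
                             {'subnet_id': 'TENANT-SUBNET-UUID'})
```

Where the ext-gw-mode extension is available, the neutron CLI exposes the same behavior through the --disable-snat flag on router-gateway-set.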
This is part 17 of the Learning NSX blog series. In this post, I’ll show you how to add layer 2 (L2) connectivity to your NSX environment, and how to leverage that L2 connectivity in an NSX-powered OpenStack implementation. This will allow you, as an operator of an NSX-powered OpenStack cloud, to offer L2/bridged connectivity to your tenants as an additional option.
As you might expect, this post does build on content from previous posts in the series. Links to all the posts in the series are available on the Learning NVP/NSX page; in particular, this post will leverage content from part 6. Additionally, I’ll be discussing using NSX in the context of OpenStack, so reviewing part 11 and part 12 might also be helpful.
There are 4 basic steps to adding L2 connectivity to your NSX-powered OpenStack environment:
Add at least one NSX gateway appliance to your NSX implementation. (Ideally, you would add two NSX gateway appliances for redundancy.)
Create an NSX L2 gateway service.
Configure OpenStack for L2 connectivity by pointing Neutron at the L2 gateway service you just created.
Add L2 connectivity to a Neutron logical network by attaching to the L2 gateway service.
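As a rough illustration of the last step, here is a hypothetical sketch, assuming an NSX-backed Neutron deployment, a python-neutronclient build that includes the NSX network-gateway extension methods, and placeholder credentials and UUIDs, of attaching an existing logical network to the L2 gateway service:

```python
# Hypothetical sketch of step 4: bridge a Neutron logical network onto a
# physical VLAN via an NSX L2 gateway service. Assumes the NSX Neutron
# plugin and a python-neutronclient build that ships the network-gateway
# extension; credentials and UUIDs below are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# The NSX L2 gateway service appears in Neutron as a "network gateway"
# (step 3 is what makes the plugin aware of it).
neutron.connect_network_gateway('L2-GW-SERVICE-UUID', {
    'network_id': 'LOGICAL-NET-UUID',   # the Neutron network to bridge
    'segmentation_type': 'vlan',        # bridge onto a tagged VLAN...
    'segmentation_id': 100,             # ...specifically VLAN 100
})
```

The equivalent operation is exposed on the neutron CLI as a net-gateway-connect command in clients that carry the NSX extension.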
In this post, I’ll describe a technique I found for simplifying the use of multi-machine Vagrantfiles by extracting configuration data into a separate YAML file. This technique is by no means something that I invented or created, so I can’t take any credit whatsoever; this is an idea I first saw here. I wanted to share it here in the hopes that it might prove useful to a larger audience.
If you aren’t familiar with Vagrant and Vagrantfiles, you might start with my quick introduction to Vagrant.
I found this technique after trying to find a way to simplify the creation of multiple machines using Vagrant. Specifically, I was trying to create multiple instances of CoreOS along with an Ubuntu instance for testing things like etcd, fleet, Docker, etc. The Vagrantfile was getting more and more complex, and making changes (to add another CoreOS node, for example) wasn’t as straightforward as I would have liked.
So what’s the fix? As with other DSLs (domain-specific languages) such as Puppet, the fix lies in separating the data from the code. In Puppet, that means parameterizing the module or class; I needed a similar technique here with Vagrant. So, Continue reading
It’s no secret that I’m something of a photography enthusiast. To me, photography is a relaxing puzzle of how to assemble all the various pieces—setting, lighting, exposure, composition, etc.—to create just the right image. I’m not an expert, but that’s OK; I just do this for fun and to relax. If you’d like to see a small sampling of some of the photos I’ve taken, I publish some of them here on 500px.com.
I know that a fair number of folks in the IT industry are also photo enthusiasts, and so I was curious to hear some feedback from fellow enthusiasts about their photography workflows. In particular, I’m curious to know about how others answer these questions:
What formats do you use with your photos? (I’ve been shooting in RAW—NEF, specifically, since I’m a Nikon guy—then converting to Adobe DNG for use with Lightroom.)
How do you handle long-term storage of your photos? (Once I have the photos in DNG in Lightroom, then I’ve been archiving the RAW files on my Synology NAS.)
What pictures do you keep—all of them, or only the best ones? (So far, I’ve been keeping all the RAW files, archiving when Continue reading
This is part 16 of the Learning NSX series, in which I will show you how to configure VMware NSX to route to multiple external VLANs. With this configuration, logical routers can be uplinked to any of the external VLANs, providing additional flexibility for consumers of NSX logical networks.
Naturally, this post builds on all the previous entries in this series, so I encourage you to visit the Learning NVP/NSX page for links to previous posts. Because I’ll specifically be discussing NSX gateways and routing, there are some posts that are more applicable than others; specifically, I strongly recommend reviewing part 6, part 9, part 14, and part 15. Additionally, I’ll assume you’re using VMware NSX with OpenStack, so reviewing part 11 and part 12 might also be helpful.
Ready? Let’s start with a very quick review.
You may recall from part 6 that the NSX gateway appliance is the piece of VMware NSX that handles traffic into or out of logical networks. As such, the NSX gateway appliance is something of a “three-legged” appliance:
One “leg” (network interface) provides management connectivity among the gateway appliance and Continue reading
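The full post walks through the remaining two legs; as a preview of where part 16 ends up, here is a hypothetical sketch, assuming python-neutronclient, placeholder credentials and UUIDs, and the NSX plugin's l3_ext provider network type, of defining an additional external network that maps to a specific VLAN behind the L3 gateway service:

```python
# Hypothetical sketch: define a second external network that corresponds to
# a specific VLAN behind the NSX L3 gateway service. Assumes the NSX Neutron
# plugin's l3_ext provider network type; credentials and UUIDs are
# placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

neutron.create_network({'network': {
    'name': 'ext-net-vlan-20',
    'router:external': True,                        # usable as a router uplink
    'provider:network_type': 'l3_ext',              # map to the L3 gateway service
    'provider:physical_network': 'L3-GW-SERVICE-UUID',
    'provider:segmentation_id': 20,                 # external VLAN ID
}})
```

Logical routers can then be uplinked to whichever of these external networks (and therefore whichever VLAN) is appropriate.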
Welcome to Technology Short Take #45. As usual, I’ve gathered a collection of links to various articles pertaining to data center-related technologies for your enjoyment. Here’s hoping you find something useful!
Cormac Hogan has a list of a few useful NSX troubleshooting tips.
If you’re not really a networking pro and need a “gentle” introduction to VXLAN, this post might be a good place to start.
Also along those lines—perhaps you’re a VMware administrator who wants to branch into networking with NSX, or you’re a networking guru who needs to learn more about how this NSX stuff works. vBrownBag has been running a VCP-NV series covering various objectives from the VCP-NV exam. Check them out: objective 1, objective 2, objective 3, and objective 4 have been posted so far.
One of the great things about this site is the interaction I enjoy with readers. It’s always great to get comments from readers about how an article was informative, answered a question, or helped solve a problem. Knowing that what I’ve written here is helpful to others is a very large part of why I’ve been writing here for over 9 years.
Until today, I’ve left comments (and sometimes trackbacks) open on very old blog posts. Just the other day I received a comment on a 4-year-old article in which a reader shared another way to solve the same problem. Unfortunately, that has to change. Comment spam on the site has grown considerably over the last few months, despite the use of a number of plugins to help address the issue. It’s no longer just an annoyance; it’s now a problem.
As a result, starting today, all blog posts more than 3 years old will automatically have their comments and trackbacks closed. I hate to do it—really I do—but I don’t see any other solution to the increasing blog spam.
I hope that this does not adversely impact my readers’ ability to interact with me, but it is Continue reading
You may have heard of Intel Rack-Scale Architecture (RSA), a new approach to designing data center hardware. This is an idea that was discussed extensively a couple of weeks ago at Intel Developer Forum (IDF) 2014 in San Francisco, which I had the opportunity to attend. (Disclaimer: Intel paid my travel and hotel expenses to attend IDF.)
Of course, IDF 2014 wasn’t the first time I’d heard of Intel RSA; it was also discussed last year. However, this year I had the chance to really dig into what Intel is trying to accomplish through Intel RSA—note that I’ll use “Intel RSA” instead of just “RSA” to avoid any confusion with the security company—and I wanted to share some of my thoughts and conclusions here.
Intel always seems to present Intel RSA as a single entity made up of a number of other technologies and efforts; specifically, Intel RSA typically comprises:
Disaggregation of the compute, memory, and storage capacity in a rack
Silicon photonics as a low-latency, high-speed rack-scale fabric
Some software that combines disaggregated hardware capacity over a rack-scale fabric to create “pooled systems”
When you look at Intel RSA this way—and this is the way that Continue reading
This post will provide a quick introduction to a tool called Vagrant. Unless you’ve been hiding under a rock—or, more likely, been too busy doing real work in your data center to pay attention—you’ve probably heard of Vagrant. Maybe, like me, you had some ideas about what Vagrant is (or isn’t) and what it does (or doesn’t) do. Hopefully I can clear up some of the confusion in this post.
In its simplest form, Vagrant is an automation tool with a domain-specific language (DSL) that is used to automate the creation of VMs and VM environments. The idea is that a user can create a set of instructions, using Vagrant’s DSL, that will set up one or more VMs and possibly configure those VMs. Every time those instructions are run, the end result looks exactly the same. This can be beneficial for a number of use cases, including developers who want a consistent development environment or folks wanting to share a demo environment with other users.
Vagrant makes this work by using a number of different components:
Providers: These are the “back end” of Vagrant. Vagrant itself doesn’t provide any virtualization functionality; it relies on Continue reading
This is a liveblog for session DATS013, on microservers. I was running late to this session (my calendar must have been off—thought I had 15 minutes more), so I wasn’t able to capture the titles or names of the speakers.
The first speaker starts out with a review of exactly what a microserver is; Intel sees microservers as a natural evolution from rack-mounted servers to blades to microservers. Key microserver technologies include the Intel Atom C2000 family of processors, the Intel Xeon E5 v2 processor family, and the Intel Ethernet Switch FM6000 series. Microservers share some common characteristics, such as highly integrated platforms (integrated networking, for example) and a design geared toward efficiency. Efficiency might be more important than absolute performance.
Disaggregation of resources is a common platform option for microservers. (Once again this comes back to Intel’s rack-scale architecture work.) This leads the speaker to talk about a Technology Delivery Vehicle (TDV) being displayed here at the show; this is essentially a proof-of-concept product that Intel built that incorporates various microserver technologies and design patterns.
Upcoming microserver technologies that Intel has announced or is working on include:
This is a liveblog of IDF 2014 session DATS009, titled “Ceph: Open Source Storage Software Optimizations on Intel Architecture for Cloud Workloads.” (That’s a mouthful.) The speaker is Anjaneya “Reddy” Chagam, a Principal Engineer in the Intel Data Center Group.
Chagam starts by reviewing the agenda, which—as the name of the session implies—is primarily focused on Ceph. He next transitions into a review of the problem with storage in data centers today; specifically, that storage needs “are growing at a rate unsustainable with today’s infrastructure and labor costs.” Another problem, according to Chagam, is that today’s workloads end up using the same sets of data but in very different ways, and those different ways of using the data have very different performance profiles. Other problems with the “traditional” way of doing storage are that storage processing performance doesn’t scale out with capacity and that storage environments are growing increasingly complex (which in turn makes management harder).
Chagam does admit that not all workloads are suited for distributed storage solutions. If you need high availability and high performance (like for databases), then the traditional scale-up model might work better. For “cloud workloads” (no additional context/information provided to qualify what a Continue reading
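For readers who haven’t touched Ceph before, here is a minimal, hypothetical sketch, assuming a reachable cluster, a standard /etc/ceph/ceph.conf, an existing pool named rbd, and the python-rados bindings, of how a client application performs object I/O through librados:

```python
# Minimal, hypothetical example of object I/O against a Ceph cluster using
# the librados Python bindings. The config path and pool name are
# placeholders for whatever exists in your environment.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ioctx = cluster.open_ioctx('rbd')        # open an I/O context on a pool
ioctx.write_full('hello-object', b'hello ceph')
print(ioctx.read('hello-object'))        # read the object back

ioctx.close()
cluster.shutdown()
```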