Here’s a quick post for you guys. I’m in the midst of creating a follow-up to one of my other articles, and it dawned on me that I need to do this particular post first. A post within a post, or before a post, or something. In any case, I need to provide an update on configuring NFS to poke through a firewall in RHEL 7, for the purposes of RHV in a home lab, or other use cases. Read on, if you will…
Background
In some older posts, I showed you how to configure NFSv3 to use predictable ports in RHEL so that it is more iptables-friendly. You don’t want to shut your firewall down and leave your security wide open. And if your firewall is also doing other work for you, like port forwarding, then you ~really~ can’t shut it down…
So here’s the skinny: I’m in the process of setting up new systems for “RHV w/ Hosted Engine”, and I’m using an NFS server for the storage. It’s a home lab, so I’m not exactly worried about performance. I really don’t recommend using a Linux server for production NFS in virtualization, but again, this Continue reading
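For reference, here’s the shape of the configuration those older posts describe, as a minimal sketch on RHEL 7 (the port numbers are just the commonly used examples, and RHEL 7 fronts iptables with firewalld by default; the full walkthrough is in the older posts):

```
# /etc/sysconfig/nfs -- pin the normally random NFSv3 sideband ports
RPCMOUNTDOPTS="-p 20048"        # mountd
STATDARG="-p 32765 -o 32766"    # statd listening + outgoing ports
LOCKD_TCPPORT=32803             # lockd
LOCKD_UDPPORT=32769

# Then open the pinned ports plus the standard NFS services:
firewall-cmd --permanent --add-service=nfs
firewall-cmd --permanent --add-service=rpc-bind
firewall-cmd --permanent --add-service=mountd
firewall-cmd --permanent --add-port=32803/tcp --add-port=32769/udp
firewall-cmd --permanent --add-port=32765-32766/tcp --add-port=32765-32766/udp
firewall-cmd --reload
systemctl restart nfs-config nfs-server
```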
The oVirt community is made up of a diverse mix of individuals using and contributing to all aspects of the project from all over the world, and we want to make sure that the community is a safe and friendly place for everyone.
This code of conduct applies equally to founders, mentors, and those seeking help and guidance. It applies to all spaces managed by the oVirt project, including IRC, mailing lists, GitHub, Gerrit, oVirt events, and any other forums created by the project team which the community uses for communication.
While we have contribution guidelines for specific tools, we expect all members of our community to follow these general guidelines and be accountable to the community. This isn’t an exhaustive list of things that you can’t do. Rather, take it in the spirit in which it’s intended—a guide to make it easier to enrich all of us and the technical communities in which we participate.
To that end, some members of the oVirt community have put together a new Community Code of Conduct to help guide everyone through what it means to be respectful and tolerant in a global community like the oVirt Project.
We're not looking for a Continue reading
Hi folks, some time ago (years?) I wrote about how to put together High Availability for RHV-M. At the time, the configuration that I proposed was solid, if a little unorthodox. Still, it certainly left room for improvement. In this week’s post, I’m updating the configuration with something that Red Hat fully supports. They refer to the configuration as the Self-Hosted Engine.
Why Hosted Engine?
The primary benefit of using the Self-Hosted Engine, or “HE”, is that it provides a fully supported HA configuration for RHV-M, as well as a smaller overall footprint compared to a traditional deployment of RHV. Also, RHV-M is delivered as an appliance for the HE configuration, so the entire process is streamlined. Who doesn’t like that?
Let’s go back to that smaller footprint statement for a moment, though. First off, in a traditional deployment of RHV, you have RHV-M plus hosts. That RHV-M deployment may be on a bare-metal host, or it may be on a VM in a different virtualization environment. Regardless, you’re already using up resources and software subscriptions that you may not want to use. Not to mention the fact that it may cause you to cross-deploy resources across Continue reading
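To give you a feel for just how streamlined the HE route is, the first-host deployment boils down to something like this (a sketch, assuming RHV 4.x package names and an NFS storage domain; subscription and repo setup not shown):

```
# On the first bare-metal host:
yum install -y ovirt-hosted-engine-setup rhvm-appliance
hosted-engine --deploy
#   the interactive setup asks for storage (e.g. an NFS export for the
#   hosted-engine storage domain) and network details, then deploys the
#   RHV-M appliance as an HA-managed VM on that storage
hosted-engine --vm-status   # verify engine VM health and HA state afterwards
```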
After explaining the basics of Linux containers, Dinesh Dutt moved on to the basics of Docker networking, starting with an in-depth explanation of how a container communicates with other containers on the same host, with containers residing on other hosts, and the outside world.
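The same-host case is easy to try on your own; here’s a minimal sketch using a user-defined bridge network (the names demo-net and web are just examples):

```
docker network create demo-net                # user-defined bridge on this host
docker run -d --name web --network demo-net nginx
docker run --rm --network demo-net alpine \
    wget -qO- http://web                      # embedded DNS resolves "web"
```

Reaching containers on other hosts or the outside world takes more machinery (overlay networks, published ports, routing), which is exactly the ground the webinar covers.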
We did several podcasts describing how one could get stellar packet forwarding performance on x86 servers by reimplementing the whole forwarding stack outside of the kernel (Snabb Switch) or by bypassing the Linux kernel and moving the packet processing into userspace (PF_Ring).

Now let’s see if it’s possible to improve the Linux kernel forwarding performance. Thomas Graf, one of the authors of Cilium, claims it can be done and explained the intricate details in Episode 64 of Software Gone Wild.
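Cilium’s approach builds on eBPF and XDP, which let you attach packet-processing programs inside the kernel at the driver level. As a rough sketch of the moving parts (prog.o stands in for a compiled eBPF object, and the section name must match your program):

```
ip link set dev eth0 xdp obj prog.o sec xdp   # attach the XDP program
ip -details link show dev eth0                # verify the attached program
ip link set dev eth0 xdp off                  # detach
```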
Read more ...

Welcome to Technology Short Take #72. Normally, I try to publish these on Fridays, but some personal travel prevented that this time around, so I’m publishing on a Monday instead. Enough of that, though…bring on the content! As usual, here’s my random collection of links, articles, and thoughts about various data center technologies.
We got pretty far in our Data Center optimization journey. We virtualized the workload, got rid of legacy technologies, reduced the number of server uplinks, and replaced storage arrays with a distributed file system.
Final step on the journey: replace physical firewalls and load balancers with virtual appliances.
I am pleased to announce that my FREE unikernel eBook is now available from O’Reilly.
I have been giving talks about unikernels for the past 2 years at conferences throughout North America. This eBook is my attempt to present most of the information from these talks in a written form. It is not a technical HowTo book, but rather an introduction to the basic concept of unikernels and an explanation of their value.
I hope this eBook will be a useful tool for introducing people to the whys and wherefores of unikernels.
You can download your copy here: http://www.oreilly.com/webops-perf/free/unikernels.csp
In this post, I’d like to share with you some techniques I used to build a triple-provider Vagrant environment—that is, a Vagrant environment that will work unmodified with multiple backend providers. In this case, it will work (mostly) unmodified with AWS, VirtualBox, and the VMware provider (tested with Fusion, but should work with Workstation as well). I know this may not seem like a big deal, but it marks something of a milestone for me.
Since I first started using Vagrant a couple of years ago, I’ve—as expected—gotten better and better at leveraging this tool in a flexible way. You can see this in the evolution of the Vagrant environments found in my GitHub “learning-tools” repository, where I went from hard-coded data values to pulling data from external YAML files.
One thing I’d been shooting for was a Vagrantfile that would work with multiple backend providers without any modifications, and tonight I managed to build an environment that works with AWS, VirtualBox, and VMware Fusion. There are still a couple of hard-coded values, but the vast majority of information is pulled from an external YAML file.
Let’s take a look at the Vagrantfile that I created. Here’s Continue reading
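The full file is in the post itself, but the core of the multi-provider trick looks something like this minimal sketch. It assumes a machines.yml describing each VM (name, box, ram, ami, type) and the vagrant-aws plugin; the AWS provider uses a placeholder box since the AMI supplies the image:

```
# Vagrantfile (sketch): one definition, three providers
require 'yaml'

machines = YAML.load_file(File.join(File.dirname(__FILE__), 'machines.yml'))

Vagrant.configure('2') do |config|
  machines.each do |m|
    config.vm.define m['name'] do |node|
      node.vm.box = m['box']

      node.vm.provider 'virtualbox' do |vb|
        vb.memory = m['ram']
      end

      node.vm.provider 'vmware_fusion' do |vmw|
        vmw.vmx['memsize'] = m['ram']
      end

      node.vm.provider 'aws' do |aws, override|
        aws.ami           = m['ami']
        aws.instance_type = m['type']
        override.vm.box   = 'aws-dummy'   # placeholder box for the AWS provider
      end
    end
  end
end
```

With something like that in place, vagrant up defaults to VirtualBox, while vagrant up --provider=vmware_fusion or --provider=aws selects the others.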
Hi folks, the Captain is taking a few weeks off as he has a brand new instance being spun up this week, if you catch my drift… You know, the kind of instance that takes 9 months to boot… I expect to be back at the keyboard by the end of October, but don’t be alarmed if you don’t see much in the way of posts, demos, or tweets. It’s all good.
The post Temporary Time Out appeared first on Captain KVM.
One of the things I often tell people is, “Use the right tool for the job.” As technologists, we shouldn’t get so locked onto any one technology or product that we can’t see when other technologies or products might solve a particular problem more effectively. It’s for this reason that I recently made VirtualBox—not VMware Fusion—my primary virtualization provider for Vagrant environments.
I know it seems odd for a VMware employee to use/prefer a non-VMware product over a competing VMware product. I’ve been a long-time Fusion user (since 2006 when I was part of the original “friends and family” early release). Since I started working with Vagrant about two years ago, I really tried to stick it out with VMware Fusion as my primary virtualization provider. I had a ton of experience with Fusion, and—honestly—it seemed like the right thing to do. After a couple of years, though, I’ve decided to switch to using VirtualBox as my primary provider for Vagrant.
Why? There are a few different reasons:
Greater manageability: VirtualBox comes with a really powerful CLI tool, vboxmanage, that lets me do just about anything from the command line. In fact, the VirtualBox documentation refers to Continue reading
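A few representative commands give a sense of the breadth (“dev-box” is a placeholder VM name):

```
VBoxManage list vms                                  # enumerate registered VMs
VBoxManage startvm dev-box --type headless           # boot without a GUI
VBoxManage controlvm dev-box acpipowerbutton         # ask the guest to shut down
VBoxManage modifyvm dev-box --memory 2048 --cpus 2   # reconfigure (while powered off)
VBoxManage snapshot dev-box take clean-state         # take a snapshot
```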
Hi folks, I recently posted an article on one of the official Red Hat blogs about the new Neutron integration between RHV and RHOSP. I have to say it’s very cool and might change the way you look at networking capabilities in RHV, at least if you’re also using RHOSP in the same data center.
As a side note, I’ve mentioned my friend and colleague Tony James in recent posts, and he makes another appearance this week. He helped pull together the configuration steps as well as the demo that we recorded. Big kudos to “Big T”.
Back to the actual integration. If you don’t want to look at the other article, the condensed version of “why you might care” is as follows:
Those are the 3 big use cases, in a nutshell. If Continue reading
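For a taste of the plumbing, registering Neutron as an external network provider can also be done through the RHV REST API; here’s a rough sketch (hostnames and credentials are placeholders, endpoint per the oVirt v4 API):

```
curl -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' \
  -X POST 'https://rhvm.example.com/ovirt-engine/api/openstacknetworkproviders' \
  -d '<openstack_network_provider>
        <name>rhosp-neutron</name>
        <url>http://rhosp.example.com:9696</url>
        <requires_authentication>true</requires_authentication>
        <username>neutron</username>
        <password>secret</password>
      </openstack_network_provider>'
```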
Dinesh Dutt started his excellent Docker Networking webinar with an introduction to the concepts of microservices and Linux containers. You won’t find any deep dives in this part of the webinar, but all you need to do to get the details you’re looking for is to fill in the registration form.
With the record-breaking $60 billion Dell/EMC acquisition now complete, both of these companies and their customers have more options than ever before to meet evolving storage needs. Joining forces helps the newly minted Dell Technologies combine the best of both worlds to better serve customers by blending EMC storage and support with Dell pricing and procurement.
But there is some trouble in paradise. Even when sold by the same vendor, most storage systems have been designed as secluded islands of data, meaning they aren’t terribly good at talking to each other.
In fact, this silo effect is exacerbated …
Modern Storage Software Erodes Resistant Data Silos was written by Timothy Prickett Morgan at The Next Platform.
Wassim Haddad is at Ericsson Silicon Valley, where he currently works on distributed cloud infrastructure. Heikki Mahkonen and Ravi Manghirmalani work at Ericsson Research in Silicon Valley, in the advanced networking and transport labs. The Ericsson team has a diverse background in different NFV, SDN, and cloud-related R&D projects.
The Network Function Virtualization (NFV) paradigm breaks away from traditional “monolithic” approaches, which normally build network functions by tightly coupling application code to the underlying hardware. Decoupling these components offers a new approach to designing and deploying network services, one that brings a high degree of flexibility in terms of separating their lifecycle management and enabling much more efficient scaling. Moreover, the move away from specialized hardware, coupled with a “virtualize everything” trend, is fuelling operators’ and service providers’ expectations of significant cost reductions. This is undoubtedly a strong motivation behind NFV adoption.
Current NFV market trends point towards two key technologies: Cloud Orchestration (e.g., OpenStack) to provision and manage workflows, and Software Defined Networking (SDN) to enable dynamic connectivity between different workflows as well as network slicing. In parallel, there is also a strong desire to migrate from virtual machines towards microservice enablers, Continue reading
Unikernel technologies, specifically the libraries, are applicable in many ways (e.g. the recent Docker for Mac and Windows products). However, unikernels themselves can enable new categories of products. One of the most prominent products is a network security tool called CyberChaff, based on open source HaLVM unikernels. Today Formaltech, a Galois subsidiary, revealed that Reed College is one of their happy CyberChaff users!
CyberChaff is designed to detect one of the early and critical steps in a security breach: the point when an attacker pivots from their initial entry point to the juicier parts of the network. This step, the pivot, typically involves scanning the network for hosts that are better positioned, appear to have more privileges, or run critical services.

To impair this step of the attack, CyberChaff introduces hundreds (or thousands) of false, lightweight nodes on the network. These hosts are indistinguishable from real hosts when scanned by the attacker, and each is implemented as its own HaLVM unikernel. See the diagram below, where green nodes are the real hosts and the orange nodes are HaLVM CyberChaff nodes. This means that an attacker is faced with a huge Continue reading