I want to get Chef installed and running before we dive into all of the lingo required to fully understand what Chef is doing. In this post we’ll install the Chef Server, a Chef client, and a test node we’ll be testing our Chef configs on. That being said, let’s dive right into the configuration!
Installing Chef Server
The Linux servers I’ll be using are based on CentOS (the exact ISO is CentOS-6.4-x86_64-minimal.iso). The Chef server is really the brains of the operation. The other two components we’ll use in the initial lab are the client and the node, both of which interact with the server. So I’m going to assume that I’ve just installed Linux and haven’t done anything besides configuring the hostname, IP address, gateway, and name server (as a rule, I usually disable SELinux as well). We’ll SSH to the server and start from there…
The base installation of CentOS I’m running doesn’t have wget installed so the first step is to get that…
yum install wget -y
The next step is to go to the Chef website and let them tell you how to install the server. Browse Continue reading
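For a rough preview of where that leads, the server install the Chef site walks you through is typically just a package install followed by a reconfigure. The download URL and package file name below are placeholders and the exact steps on the Chef site may differ, so treat this as a hedged sketch rather than the post's exact commands:

# Download the Chef Server package for RHEL/CentOS 6 linked from the Chef site
# (URL and version below are placeholders)
wget https://example.com/chef-server-VERSION.el6.x86_64.rpm
# Install the package
rpm -Uvh chef-server-VERSION.el6.x86_64.rpm
# Generate the default configuration and start the Chef Server services
chef-server-ctl reconfigure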
I've been thinking about running Docker on CoreOS and Project Atomic lately... While the deployment model would be pretty different to what we are used to, I have 50% of the work already done in docker-ovs so I was interested to see if my containers would work on a system with the Open vSwitch kernel module loaded...
As I'm a Mac User, I use boot2docker for all my docker-related things. It's also pretty easy to change the kernel config to allow the Open vSwitch module to be loaded.
Clone my fork and check out the openvswitch branch
git checkout openvswitch
Build the iso
docker build -t boot2docker . && docker run --rm boot2docker > boot2docker.iso
Run boot2docker with the new iso
boot2docker destroy
boot2docker init --iso="`pwd`/boot2docker.iso"
boot2docker up
Load the Open vSwitch kernel module
boot2docker ssh
sudo modprobe openvswitch
exit
Run an Open vSwitch container
docker run -t -i --privileged=true davetucker/docker-ovs:2.1.2 /bin/sh
export OVS_RUNDIR=/var/run/openvswitch
sed -i s/nodaemon=true/nodaemon=false/g /etc/supervisord.conf
supervisord
Test it out
ovs-vsctl add-br br0
ovs-vsctl show
# This didn't work before
ovs-dpctl show
This isn't a thorough test. I'd like to create some traffic and see the Continue reading
Notes on the CheckPoint firewall clustering solution based on a review of the documentation in August 2014.
The post Tech Notes: CheckPoint Firewall Cluster XL in 2014 appeared first on EtherealMind.
My good friend Tiziano complained about the fact that BGP considers the next hop reachable if there’s an entry in the IP routing table even though the router cannot even ping the next hop.
That behavior is one of the fundamental aspects of IP networks: networks built with IP routing protocols rely on fate sharing between control and data planes instead of path liveliness checks.
Read more ...

Today, we will be discussing the Open vSwitch Database Management Protocol, commonly (and herein) referred to as OVSDB. This is a network configuration protocol that has been the subject of a lot of conversations pertaining to SDN. My goal in this post is to present the facts about OVSDB as they stand. If you want to know what OVSDB does, as well as does NOT do, read on.
I would like to call out a very important section, titled “OVSDB Myths”. I have encountered a lot of false information about OVSDB in the last year or so, and would like to address this specifically. Find this section at the end of this post.
If you’re new to OVSDB, it’s probably best to think of it in the same way you might think of any other configuration API like NETCONF, or maybe even proprietary vendor configuration APIs like NXAPI; its goal is to provide programmatic access to the management plane of a network device or software. However, in addition to being a published open standard, it is quite different in its operation from other network APIs.
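As a quick, hedged illustration (assuming a host with Open vSwitch running and the default ovsdb-server socket, such as the docker-ovs container earlier in this digest), the ovsdb-client utility speaks the OVSDB protocol to the local server and gives a feel for what the API exposes:

ovsdb-client list-dbs                      # list the databases the server hosts (normally Open_vSwitch)
ovsdb-client dump                          # dump the contents of the configuration database
ovsdb-client monitor Open_vSwitch Bridge   # stream change notifications for the Bridge table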
On Earth Day in 1990, New York City’s Transportation Commissioner decided to close 42d Street, which as every New Yorker knows is always congested. “Many predicted it would be doomsday,” said the Commissioner, Lucius J. Riccio. “You didn’t need to be a rocket scientist or have a sophisticated computer queuing model to […]
The post Designing Networks for Selfish Users is Hard appeared first on Packet Pushers Podcast and was written by Orhan Ergun.
The CORE Network Emulator development team released CORE version 4.7 in August 2014. I installed this new version of CORE on a newly installed Ubuntu 14.04 Linux system and tested some of the new features.
In this post, I list the new features that are most relevant to researchers who use the CORE GUI to set up and run network simulations. I also list some of the defects that I found, along with workarounds.
The following are the updates and new features most visible to users like me, who use the CORE GUI. There are many other updates and new features, so read the CORE 4.7 release notes to review all the changes in CORE 4.7.
The CORE team made some major improvements to the way link effects are implemented. This alone is worth upgrading to CORE 4.7. The changes are:
Go's "object-orientation" approach is through interfaces. Interfaces provide a way of specifying the behavior expected of an object, but rather than saying what an object itself can do, they specify what's expected of it. If any object meets the interface specification it can be used anywhere that interface is expected.
I was working on a new, small piece of software that does image compression for CloudFlare and found a nice use for interfaces when stubbing out a complex piece of code in the unit test suite. Central to this code is a collection of goroutines that run jobs. Jobs are provided from a priority queue and performed in priority order.
The jobs ask for images to be compressed in myriad ways and the actual package that does the work contained complex code for compressing JPEGs, GIFs and PNGs. It had its own unit tests that checked that the compression worked as expected.
But I wanted a way to test the part of the code that runs the jobs (and, itself, doesn't actually know what the jobs do). Because I only wanted to test whether the jobs got run correctly (and not the compression), I didn't want to have to create (and configure) the complex job type that gets used when the code really runs.
What I wanted was a DummyJob.

The Worker package actually runs jobs in a goroutine like this:
func (w *Worker) do(id int, ready chan int) {
    for {
        ready <- id
        j, ok := <-w.In
        if !ok {
            return
        }
        if err := j.Do(); err != nil {
            logger.Printf("Error performing job %v: %s", j, err)
        }
    }
}
do gets started as a goroutine passed a unique ID (the id parameter) and a channel called ready. Whenever do is able to perform work it sends a message containing its id down ready and then waits for a job on the worker w.In channel. Many such workers run concurrently and a separate goroutine pulls the IDs of workers that are ready for work from the ready channel and sends them work.

If you look at do above you'll see that the job (stored in j) is only required to offer a single method:
func (j *CompressionJob) Do() error
The worker's do just calls the job's Do function and checks for an error return. But the code originally had w.In defined like this:

w := &Worker{In: make(chan *job.CompressionJob)}

which would have required that the test suite for Worker know how to create a CompressionJob and make it runnable. Instead I defined a new interface like this:
type Job interface {
    Priority() int
    Do() error
}
The Priority method is used by the queueing mechanism to figure out the order in which jobs should be run. Then all I needed to do was change the creation of the Worker to

w := &Worker{In: make(chan job.Job)}

The w.In channel is no longer a channel of CompressionJobs, but of interfaces of type Job. This shows a really powerful aspect of Go: anything that meets the Job interface can be sent down that channel, and only a tiny amount of code had to be changed to use an interface instead of the more 'concrete' type CompressionJob.

Then in the unit test suite for Worker I was able to create a DummyJob like this:
var Done bool

type DummyJob struct {
}

func (j DummyJob) Priority() int {
    return 1
}

func (j DummyJob) Do() error {
    Done = true
    return nil
}
It sets a Done flag when the Worker's do function actually runs the DummyJob. Since DummyJob meets the Job interface it can be sent down the w.In channel to a Worker for processing.

Creating that Job interface totally isolated the interface that the Worker needs to be able to run jobs and hides any of the other details, greatly simplifying the unit test suite. Most interesting of all, no changes at all were needed to CompressionJob to achieve this.
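To make that concrete, here is a minimal, self-contained sketch of what such a test could look like. Only the Job interface, the DummyJob type, and the do loop mirror the post; the cut-down Worker struct, the use of the standard log package in place of the post's logger, and the test wiring are assumptions made so the example compiles on its own:

// worker_test.go -- a minimal, self-contained sketch of the kind of test
// described above. Worker is a cut-down stand-in for the post's Worker type.
package worker

import (
    "log"
    "testing"
)

type Job interface {
    Priority() int
    Do() error
}

type Worker struct {
    In chan Job
}

// do mirrors the loop shown in the post.
func (w *Worker) do(id int, ready chan int) {
    for {
        ready <- id
        j, ok := <-w.In
        if !ok {
            return
        }
        if err := j.Do(); err != nil {
            log.Printf("Error performing job %v: %s", j, err)
        }
    }
}

// DummyJob records that it was run, exactly as in the post.
var Done bool

type DummyJob struct{}

func (j DummyJob) Priority() int { return 1 }

func (j DummyJob) Do() error {
    Done = true
    return nil
}

func TestWorkerRunsDummyJob(t *testing.T) {
    w := &Worker{In: make(chan Job)}
    ready := make(chan int)

    go w.do(1, ready)

    <-ready            // the worker announces it is ready
    w.In <- DummyJob{} // hand it the dummy job
    <-ready            // by now DummyJob.Do has run and the worker is ready again
    close(w.In)        // no more jobs; do returns

    if !Done {
        t.Fatal("DummyJob.Do was never called")
    }
}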
So how’s this for a condescending tweet?
@tbourke @elonden @sbdewindt ; Learn what Tony doesn’t know. See why 2 * 8 != 16. (And yes, 2 * 10 < 16 also). http://t.co/1hx6RPlZ2V
— EGI Russ (@russtystorage) August 27, 2014
It’s from Russ Fellows, author of the infamous FCoE “study” (which has been widely debunked for its many hilarious errors):
Interesting article (check it out). But the sad/amusing irony is that he’s wrong. How is he wrong? Here’s what Russ Fellows doesn’t know about storage:
1, 2, 4, and 8 Gbit Fibre Channel (as he points out) uses 8b/10b encoding. That means about 20% of the available bandwidth is lost to encoding overhead. That’s why 8 Gbit Fibre Channel only provides 800 MB/s of connectivity, even though 8,000 Megabits per second would otherwise equate to 1,000 Megabytes per second (8,000 Megabits / 8 bits per byte = 1,000 Megabytes).
With this overhead in mind, Fibre Channel was designed to give 100 MB/s for every Gigabit of speed. It never increased the baud rate to make up for the overhead.
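To put numbers on that (using the same simplified figures as above): 8 Gbit/s of nominal rate with 8b/10b encoding carries 8 × 8/10 = 6.4 Gbit/s of payload, and 6.4 Gbit/s ÷ 8 bits per byte = 800 MB/s, which is exactly the 100 MB/s per nominal Gigabit that Fibre Channel was designed to deliver.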
Ethernet, on the other hand, did increase the baud rate to make up for Continue reading
I first met Elisa Jasinska when she had one of the coolest job titles I ever saw: Senior Packet Herder. Her current job title is almost as cool: Senior Network Toolsmith @ Netflix – obviously an ideal guest for the Software Gone Wild podcast.
In our short chat she described some of the tools she’s working on, including an adaptation of pmacct to environments with numerous BGP exit points (more details in her NANOG presentation).