I've been thinking about running Docker on CoreOS and Project Atomic lately... While the deployment model would be pretty different from what we are used to, I already have 50% of the work done in docker-ovs, so I was interested to see if my containers would work on a system with the Open vSwitch kernel module loaded...
Go's "object-orientation" approach is through interfaces. An interface specifies the behavior expected of an object: rather than saying what an object itself can do, it says what's expected of the object. Any object that meets the interface specification can be used anywhere that interface is expected.
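A minimal sketch shows the idea (the Speaker, Dog, and Robot types here are illustrative, not from any of the code discussed below): two unrelated types satisfy the same interface simply by having the right method, with no explicit "implements" declaration.

```go
package main

import "fmt"

// Speaker specifies behavior: any type with a Speak() string method satisfies it.
type Speaker interface {
	Speak() string
}

type Dog struct{}

func (d Dog) Speak() string { return "woof" }

type Robot struct{}

func (r Robot) Speak() string { return "beep" }

// Announce accepts any value that meets the Speaker interface.
func Announce(s Speaker) string {
	return "says: " + s.Speak()
}

func main() {
	fmt.Println(Announce(Dog{}))   // says: woof
	fmt.Println(Announce(Robot{})) // says: beep
}
```

Neither Dog nor Robot mentions Speaker anywhere; satisfying the method set is enough.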
I was working on a new, small piece of software that does image compression for CloudFlare and found a nice use for interfaces when stubbing out a complex piece of code in the unit test suite. Central to this code is a collection of goroutines that run jobs. Jobs are provided from a priority queue and performed in priority order.
The jobs ask for images to be compressed in myriad ways and the actual package that does the work contained complex code for compressing JPEGs, GIFs and PNGs. It had its own unit tests that checked that the compression worked as expected.
But I wanted a way to test the part of the code that runs the jobs (and, itself, doesn't actually know what the jobs do). Because I only wanted to test that the jobs got run correctly (and not the compression itself), I didn't want to have to create (and configure) the complex job type that gets used when the code really runs.
What I wanted was a DummyJob.
The Worker package actually runs jobs in a goroutine like this:

    func (w *Worker) do(id int, ready chan int) {
        for {
            ready <- id
            j, ok := <-w.In
            if !ok {
                return
            }
            if err := j.Do(); err != nil {
                logger.Printf("Error performing job %v: %s", j, err)
            }
        }
    }
do gets started as a goroutine and is passed a unique ID (the id parameter) and a channel called ready. Whenever do is able to perform work it sends a message containing its id down ready and then waits for a job on the worker's w.In channel. Many such workers run concurrently, and a separate goroutine pulls the IDs of workers that are ready for work from the ready channel and sends them work.
If you look at do above you'll see that the job (stored in j) is only required to offer a single method:

    func (j *CompressionJob) Do() error
The worker's do just calls the job's Do function and checks for an error return. But the code originally had w.In defined like this:

    w := &Worker{In: make(chan *job.CompressionJob)}
which would have required that the test suite for Worker know how to create a CompressionJob and make it runnable. Instead I defined a new interface like this:

    type Job interface {
        Priority() int
        Do() error
    }
The Priority method is used by the queueing mechanism to figure out the order in which jobs should be run. Then all I needed to do was change the creation of the Worker to

    w := &Worker{In: make(chan job.Job)}
The w.In channel is no longer a channel of CompressionJobs, but of interfaces of type Job. This shows a really powerful aspect of Go: anything that meets the Job interface can be sent down that channel, and only a tiny amount of code had to be changed to use an interface instead of the more 'concrete' type CompressionJob.
Then in the unit test suite for Worker I was able to create a DummyJob like this:

    var Done bool

    type DummyJob struct {
    }

    func (j DummyJob) Priority() int {
        return 1
    }

    func (j DummyJob) Do() error {
        Done = true
        return nil
    }
It sets a Done flag when the Worker's do function actually runs the DummyJob. Since DummyJob meets the Job interface it can be sent down the w.In channel to a Worker for processing.
Creating that Job interface totally isolated the interface that the Worker needs to be able to run jobs and hid all the other details, greatly simplifying the unit test suite. Most interesting of all, no changes at all were needed to CompressionJob to achieve this.
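Pulling the pieces above together, here is a self-contained sketch of the pattern. The Job interface and the worker's do loop follow the article; everything else (the single-worker plumbing, the done flag passed by pointer rather than the package-level Done variable) is simplified scaffolding assumed for the example.

```go
package main

import "fmt"

// Job is the interface from the post: the Worker only needs Priority and Do.
type Job interface {
	Priority() int
	Do() error
}

// DummyJob is the test stub: Do just records that it ran.
type DummyJob struct{ done *bool }

func (j DummyJob) Priority() int { return 1 }

func (j DummyJob) Do() error {
	*j.done = true
	return nil
}

// Worker mirrors the post's type; only the In channel matters here.
type Worker struct {
	In chan Job
}

func (w *Worker) do(id int, ready chan int) {
	for {
		ready <- id
		j, ok := <-w.In
		if !ok {
			return
		}
		if err := j.Do(); err != nil {
			fmt.Printf("Error performing job %v: %s\n", j, err)
		}
	}
}

// runDummy starts one worker, feeds it a DummyJob, and reports whether it ran.
func runDummy() bool {
	done := false
	w := &Worker{In: make(chan Job)}
	ready := make(chan int)
	go w.do(1, ready)

	<-ready                 // worker announces it is ready
	w.In <- DummyJob{&done} // hand it the stub job
	<-ready                 // ready again, so the previous job has completed
	close(w.In)             // let the worker goroutine exit
	return done
}

func main() {
	fmt.Println("DummyJob ran:", runDummy()) // DummyJob ran: true
}
```

Because the second receive from ready can only happen after the worker's Do call returns, checking the flag after it is race-free.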
So how’s this for a condescending tweet?
@tbourke @elonden @sbdewindt ; Learn what Tony doesn’t know. See why 2 * 8 != 16. (And yes, 2 * 10 < 16 also). http://t.co/1hx6RPlZ2V
— EGI Russ (@russtystorage) August 27, 2014
It’s from Russ Fellows, author of the infamous FCoE “study” (which has been widely debunked for its many hilarious errors):
Interesting article (check it out). But the sad/amusing irony is that he’s wrong. How is he wrong? Here’s what Russ Fellows doesn’t know about storage:
1, 2, 4, and 8 Gbit Fibre Channel (as he points out) use 8b/10b encoding. That means about 20% of the available bandwidth is lost to encoding overhead (as Russ pointed out). That’s why 8 Gbit Fibre Channel only provides 800 MB/s of connectivity, even though 8,000 Megabits per second equates to 1,000 Megabytes per second (8,000 Megabits / (8 bits per byte) = 1,000 Megabytes).
With this overhead in mind, Fibre Channel was designed to give 100 MB/s for every Gigabit of speed. It never increased the baud rate to make up for the overhead.
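The arithmetic above can be sketched in a few lines. This treats 8GFC's line rate as a round 8 Gbaud to match the post's numbers (the actual serial rate is 8.5 Gbaud), and uses the standard 64b/66b figure for 10G Ethernet:

```go
package main

import "fmt"

// payloadMBps converts a serial line rate (in Gbaud) and an encoding
// efficiency into usable payload bandwidth in MB/s.
// 8b/10b encoding carries 8 data bits per 10 line bits (efficiency 0.8);
// 64b/66b carries 64 per 66 (efficiency ~0.97).
func payloadMBps(lineRateGbaud, efficiency float64) float64 {
	return lineRateGbaud * 1000 * efficiency / 8 // Gbaud -> Mbit/s -> MB/s
}

func main() {
	// 1/2/4/8G Fibre Channel: 8b/10b loses ~20% of the line rate,
	// which is exactly the "100 MB/s per Gigabit" design point.
	fmt.Printf("1GFC:  %.0f MB/s\n", payloadMBps(1.0, 8.0/10.0))  // 100
	fmt.Printf("8GFC:  %.0f MB/s\n", payloadMBps(8.0, 8.0/10.0))  // 800, not 1000
	// 10G Ethernet raised the line rate to 10.3125 Gbaud and switched
	// to 64b/66b, so the full 10 Gbit/s survives as payload.
	fmt.Printf("10GbE: %.0f MB/s\n", payloadMBps(10.3125, 64.0/66.0)) // 1250
}
```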
Ethernet, on the other hand, did increase the baud rate to make up for Continue reading
I first met Elisa Jasinska when she had one of the coolest job titles I ever saw: Senior Packet Herder. Her current job title is almost as cool: Senior Network Toolsmith @ Netflix – obviously an ideal guest for the Software Gone Wild podcast.
In our short chat she described some of the tools she’s working on, including an adaptation of pmacct to environments with numerous BGP exit points (more details in her NANOG presentation).
One of the confusing aspects of Internet operation is the difference between the types of providers and the types of peering. There are three primary types of peering, and three primary types of services that service providers actually offer. The figure below illustrates the three different kinds of peering. One provider can agree to provide transit […]
“The most interesting part of building our house was choosing the brick and trim,” explains Randy Cross, Director of Product Line Management at Avaya, “but in Texas with clay soils, the most IMPORTANT element was the foundation.” This podcast explains that much of the SDN hype today centers on the outer elements of SDN – […]
The post Show 202 – Avaya & The Critical Importance of the SDN Underlay – Sponsored appeared first on Packet Pushers Podcast and was written by Ethan Banks.
Original content from Roger's CCIE Blog Tracking the journey towards getting the ultimate Cisco Certification. The Routing & Switching Lab Exam
This Cisco MPLS Tutorial will guide you through building the simple MPLS topology below. This consists of a 3 router MPLS core and two remote sites in the same VRF running OSPF as the PE=CE routing protocol. This will be quite a long post as I will be taking you through every single verification along […]
Great set of instructions for installing Kali Linux in VMware Player.
Originally posted on Cyber Warrior+:
First we need to download Kali from http://kali.org/downloads/. If you have a 64-bit capable computer (like me), then you probably will want the 64-bit version of Kali for performance reasons. Expand the drop-down menus to find the version you need. Select the 64-bit version ONLY if you have a 64-bit computer.
SDN a Mixed Bag for U.S. Enterprises
Juniper Networks recently surveyed 400 enterprise IT “decision makers” in government, education, financial services and healthcare about their SDN adoption plans. The results were split: while almost 53 percent have plans to deploy SDN, the remaining 48 percent have no plans to adopt the technology.
Nearly three-quarters of those who plan to implement SDN say they will do so within the next year. Their motivations are the perceived SDN benefits of improved network performance and efficiency (26 percent), simplified network operations (19 percent), and cost savings on operations (13 percent). The survey does not delve into how much of these enterprise networks will be SDN-enabled. Indeed, 63 percent of those surveyed said business networks in the next five years will be a mix of software-defined and traditional.
The gap between the perceived benefits and reality on the ground may be inhibiting SDN deployments. The survey respondents cited cost (50 percent), difficulty integrating with existing systems (35 percent), security concerns (34 percent), and lack of skills from existing employees (28 percent) as the top challenges to SDN adoption.
We recently published a program that we wrote in conjunction with our friends at MetaCloud: the VXLAN Flooder, or vxfld. vxfld is the basis of our Lightweight Network Virtualization feature (new with Cumulus Linux 2.2!), as well as MetaCloud’s next generation OpenStack networking. It enables easy to deploy, scalable virtual switched networks built on top of an L3 fabric.
Of course, vxfld is just the latest in a series of contributions! There are projects we’ve written from scratch, such as ONIE, the Open Network Install Environment, which we contributed to the Open Compute Project. Like Prescriptive Topology Manager, which simplifies the deployment of large L3 networks. And ifupdown2, a rewrite of Debian’s tool for configuring networks that greatly simplifies large, complicated networking configurations.
And then there are our contributions back to the upstream projects that we include in Cumulus Linux. These include (in order of decreasing number of contributions) the Quagga routing protocol suite, the Linux Continue reading
The Moscone Center in San Francisco is a popular place for technical events. Apple’s World Wide Developer Conference (WWDC) is an annual user of the space. Cisco Live and VMworld also come back every few years to keep the location lively. This year, both conferences utilized Moscone to showcase tech advances and foster community discussion. Having attended both this year in San Francisco, I think I can finally state the following with certainty.
It’s time for tech conferences to stop using the Moscone Center.
Let’s face it. If your conference has more than 10,000 attendees, you have outgrown Moscone. WWDC works in Moscone because they cap the number of attendees at 5,000. VMworld 2014 had 22,000 attendees. Cisco Live 2014 had well over 20,000 as well. Cramming four times the number of delegates into a cramped Moscone Center does not foster the kind of environment you want at your flagship conference.
The main keynote hall in Moscone North is too small to hold the large number of audience members. In an age where every keynote address is streamed live, that shouldn’t be a problem. Except that people still want to be involved and close to the event. At both Cisco Continue reading
Building a private cloud infrastructure tends to be a cumbersome process: even if you do it right, you often have to deal with four to six different components, including the orchestration system, hypervisors, servers, storage arrays, networking infrastructure, and network services appliances.
Read more ...

Last week in Chicago, at the annual SIGCOMM flagship research conference on networking, Arbor collaborators presented some exciting developments in the ongoing story of IPv6 rollout. This joint work (full paper here) between Arbor Networks, the University of Michigan, the International Computer Science Institute, Verisign Labs, and the University of Illinois highlighted how both the pace and nature of IPv6 adoption have made a pretty dramatic shift in just the last couple of years. This study is a thorough, well-researched, effective analysis and discussion of numerous published and previously unpublished measurements focused on the state of IPv6 deployment.
The study examined a decade of data reporting twelve measures drawn from ten global-scale Internet datasets, including several years of Arbor data that represents a third to a half of all interdomain traffic. This constitutes one of the longest and broadest published measurements of IPv6 adoption to date. Using this long and wide perspective, the University of Michigan, Arbor Networks, and their collaborators found that IPv6 adoption, relative to IPv4, varies by two orders of magnitude (100x!) depending on the measure one looks at and, because of this, care must really be taken when looking at individual measurements of IPv6. For example, examining only the fraction of IPv6 to IPv4 Continue reading
With the current interest in network automation, it’s imperative that the correct tools are chosen for the right tasks. It should be acknowledged that there isn’t a single golden bullet approach and the end solution will be very much based on customer requirements, customer abilities, customer desire to learn and an often overlooked fact; the abilities of the incumbent or services provider.
The best projects are always delivered with a KISS! Keep It Simple Stupid.
Note – I have used ‘playbooks’ as a generic term for a set of automation tasks; such collections are commonly known as runbooks, playbooks or recipes.
Because a services provider may have delivered an automation project using a bulky generic work-flow automation tool in the past, it does not mean that is the correct approach or skill set for current and future network automation requirements. To illustrate this, let’s create a hypothetical task of hanging a picture on the wall. We have many choices when it comes to this task, for example: Blu-Tack, Sellotape, gaffer tape, masking tape, duct tape or my favourite, a glue gun! However, the correct way would be to frame Continue reading