
Author Archives: Russ

Fabric versus Network: What’s the Difference?

We often hear about fabrics, and we often hear about networks—but on paper, and in practice, they often seem to be the same thing. Leaving aside the many realms of vendor hype, what’s really the difference? Poking around on the ‘net, I came across a couple of definitions that seemed useful, at least at first blush. For instance, SDN Search provides the following insight:

The word fabric is used as a metaphor to illustrate the idea that if someone were to document network components and their relationships on paper, the lines would weave back and forth so densely that the diagram would resemble a woven piece of cloth.

While this is interesting, it gives us more of an “on paper” answer than what might be called a functional view. The entry at Wikipedia is more operationally based:

Switched Fabric or switching fabric is a network topology in which network nodes interconnect via one or more network switches (particularly crossbar switches). Because a switched fabric network spreads network traffic across multiple physical links, it yields higher total throughput than broadcast networks, such as early Ethernet.
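To make the “spreads network traffic across multiple physical links” point a little more concrete, here is a minimal sketch of flow-based hashing, the mechanism most switched fabrics use to pick among a set of equal-cost links. The five-tuple fields, the hash, and the link count are all assumptions made for the example, not anything taken from the definitions above.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// flow is the usual five-tuple identifying one conversation; the field
// names here are illustrative, not taken from any fabric implementation.
type flow struct {
	srcIP, dstIP     string
	srcPort, dstPort uint16
	proto            uint8
}

// pickUplink hashes the flow and maps it onto one of n equal-cost links.
// Every packet in a flow takes the same link, but different flows are
// spread across all the parallel paths the fabric provides.
func pickUplink(f flow, n int) int {
	h := fnv.New32a()
	fmt.Fprintf(h, "%s|%s|%d|%d|%d", f.srcIP, f.dstIP, f.srcPort, f.dstPort, f.proto)
	return int(h.Sum32()) % n
}

func main() {
	const uplinks = 4 // say, four leaf-to-spine links
	f1 := flow{"10.1.1.10", "10.2.2.20", 33012, 443, 6}
	f2 := flow{"10.1.1.11", "10.2.2.20", 52110, 443, 6}
	fmt.Println("flow 1 -> uplink", pickUplink(f1, uplinks))
	fmt.Println("flow 2 -> uplink", pickUplink(f2, uplinks))
}
```

The point of the sketch is simply that no single flow sees more than one link’s worth of bandwidth, but the fabric as a whole carries the sum of all the links—which is where the “higher total throughput than broadcast networks” claim comes from.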

Greg has an interesting (though older) post up on the topic, Continue reading

Reaction: DNS is Part of the Stack

Over at ipspace.net, Ivan is discussing using DNS to program firewall rules—

Could you use DNS names to translate human-readable rules into packet filters? The traditional answer was “no, because I don’t trust DNS”.
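Mechanically, the translation itself is simple enough to sketch: resolve the name, then expand a name-based rule into one filter entry per address returned. The sketch below assumes an invented “permit tcp any -> host” rule syntax and uses an ordinary forward lookup; it is only an illustration of the translation step, not any real firewall’s API.

```go
package main

import (
	"fmt"
	"net"
)

// expandRule turns a human-readable, name-based rule into one packet-filter
// entry per address the name resolves to. The rule syntax is invented for
// illustration only.
func expandRule(name string, port int) ([]string, error) {
	addrs, err := net.LookupHost(name) // ordinary A/AAAA lookup
	if err != nil {
		return nil, err
	}
	rules := make([]string, 0, len(addrs))
	for _, a := range addrs {
		rules = append(rules, fmt.Sprintf("permit tcp any -> host %s port %d", a, port))
	}
	return rules, nil
}

func main() {
	rules, err := expandRule("www.example.com", 443)
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	for _, r := range rules {
		fmt.Println(r)
	}
}
```

Everything Ivan raises about trusting the answers, and about keeping the expanded entries in sync with the records as they change, still applies; the sketch only shows that the translation step itself is trivial—the hard part is the trust model.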

This has been a pet peeve of mine for some years—particularly after my time at Verisign Labs, looking at the DNS system, and its interaction with the control plane, in some detail. I’m just going to say this simply and plainly; maybe someone, somewhere, will pay attention—

The Domain Name System is a part of the IP networking stack.

Network engineers and application developers seem to treat DNS as some sort of red-headed stepchild; it’s best if we just hide it in some little corner someplace, and hope someone figures out how to make it work, but we’re not going to act like it should or will work. We’re just going to ignore it, and somehow hope it goes away so we don’t have to deal with it.

Let’s look at some of the wonderful ideas this “we’ll just ignore DNS” attitude has brought us over the years, like, “let’s embed the IP address in the packet someplace so we know who we’re talking to,” and “we Continue reading

snaproute Go BGP Code Dive (12): Moving to Established

In last week’s post, the new BGP peer we’re tracing through the snaproute BGP code moved from open to openconfirmed by receiving and processing the open message. In processing the open message, the list of AFIs this peer will support was built, and the hold timer was set and started. The next step is to move to established. RFC 4271, around page 70, describes the process as—

If the local system receives a KEEPALIVE message (KeepAliveMsg (Event 26)), the local system:
 - restarts the HoldTimer and
 - changes its state to Established.

In response to any other event (Events 9, 12-13, 20, 27-28), the local system:
 - sends a NOTIFICATION with a code of Finite State Machine Error,
 - sets the ConnectRetryTimer to zero,
 - releases all BGP resources,
 - drops the TCP connection,
 - increments the ConnectRetryCounter by 1,
 - (optionally) performs peer oscillation damping if the DampPeerOscillations attribute is set to TRUE, and
 - changes its state to Idle.
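In Go, that openconfirmed handling reduces to something like the following. This is a minimal sketch modeled directly on the RFC text above, with invented type and function names; it is not the snaproute implementation.

```go
package main

import "fmt"

// Everything below is invented to model the RFC text above; none of these
// names come from the snaproute source.
type event int

const (
	keepAliveMsg     event = 26 // KeepAliveMsg (Event 26)
	holdTimerExpired event = 10 // one of the "any other event" cases
)

type fsm struct {
	state               string
	connectRetryCounter int
}

func (f *fsm) restartHoldTimer()           { fmt.Println("hold timer restarted") }
func (f *fsm) sendNotification(msg string) { fmt.Println("NOTIFICATION:", msg) }
func (f *fsm) releaseResourcesAndDropTCP() { fmt.Println("resources released, TCP dropped") }

// processOpenConfirmed handles an event that arrives while the peer is in
// openconfirmed, following the RFC 4271 text quoted above.
func (f *fsm) processOpenConfirmed(e event) {
	switch e {
	case keepAliveMsg:
		f.restartHoldTimer()
		f.state = "established" // the session is now up
	default:
		// Anything else is a Finite State Machine Error: notify the peer,
		// tear the session down, count the retry, and fall back to idle.
		f.sendNotification("Finite State Machine Error")
		f.releaseResourcesAndDropTCP()
		f.connectRetryCounter++
		f.state = "idle"
	}
}

func main() {
	peer := &fsm{state: "openconfirmed"}
	peer.processOpenConfirmed(keepAliveMsg)
	fmt.Println("peer state:", peer.state) // established
}
```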

For a bit of review (because this series is running so long, you might forget how the state machine works): the snaproute code is written as a state machine. The way the state machine works is Continue reading
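For readers who want the general shape of such a machine in front of them, here is a minimal, event-driven state machine in Go. The states, events, and transitions are invented for illustration and do not match the snaproute types; it only shows the dispatch pattern of feeding each event to whichever state is current.

```go
package main

import "fmt"

// stateFn handles one event and returns the handler for the next state.
// The state and event names are invented for this example.
type stateFn func(ev string) stateFn

func idle(ev string) stateFn {
	if ev == "ManualStart" {
		fmt.Println("idle -> connect")
		return connectState
	}
	return idle
}

func connectState(ev string) stateFn {
	if ev == "TcpConnectionConfirmed" {
		fmt.Println("connect -> opensent")
		return openSent
	}
	return idle
}

func openSent(ev string) stateFn {
	fmt.Println("opensent got", ev)
	return openSent
}

func main() {
	// Feed a few events through the machine; the current state decides
	// how each one is handled and what the next state is.
	state := stateFn(idle)
	for _, ev := range []string{"ManualStart", "TcpConnectionConfirmed", "BGPOpen"} {
		state = state(ev)
	}
}
```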