Using the yes command to automate responses

One of the more unusual Linux commands is named “yes”. It’s a very simple tool intended to help you avoid answering the long series of questions that a script or program might ask before it can do its work.

If you type “yes” by itself at the command prompt, your screen will start filling up with just the letter “y” (one per line) until you press control-C to stop it. It’s also incredibly fast. Unlike what you see displayed below, yes will likely spit out more than a million y’s in the time it takes you to reach down and press control-C. Fortunately, that’s not all this command can do.
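A minimal sketch of both behaviors, using head to cut the otherwise endless stream short (the command being answered here is just head, standing in for any prompt-heavy program):

```shell
# Default behavior: an endless stream of "y", one per line.
# head caps it at three lines so we can see the output.
yes | head -n 3

# yes can also repeat an arbitrary string instead of "y":
yes no | head -n 2
```

The first pipeline prints three lines of "y"; the second prints "no" twice. In practice you would pipe yes into an interactive command (for example, a package installer that keeps asking for confirmation) so every prompt is answered automatically.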

Day Two Cloud 157: Highlights Of Cloud Field Day 14

Today's Day Two Cloud podcast brings you highlights from Cloud Field Day 14, where Day Two Cloud's Ned Bellavance was a delegate. The Field Day event brings together cloud vendors and tech bloggers for in-depth presentations. Ned shares highlights and impressions of presentations from companies including Weka, Alkira, and Morpheus Data.

Bigger, Faster, Better (and Cheaper!)

There has been much speculation on the evolution of the Internet. Is our future somewhere out there in the blockchains? Is it all locked up in crypto? Or will it all shatter under the pressure of fragmentation? It seems to me that all this effort is being driven by a small number of imperatives: making it bigger, faster and better. Oh, and making it cheaper as well!

I, The Braggart – A Network Fable

My boss stepped into our shared cubicle space and rested his arm on top of the fabric wall. He peered down at me. “Hey.” He always started with a quiet “hey” when he was about to ask me to do something new. I glanced at my whiteboard filled with projects and statuses, and steeled myself for the fresh request.

“Hey. I just got out of a meeting with Lewis.” I groaned inwardly. Lewis was my boss’s boss, and while Lewis was a fantastic human being, meetings with him were usually in the context of projects. Big ones. I put on a fake smile to mask creeping despair. “Oh? How did that go?”

My boss ripped off the band-aid. “Lewis wants a monthly summary from everyone of what they’ve been doing. So, on the last Friday of the month, make sure you have all your project statuses updated, including key milestones. Your whiteboard is great for you and me since we share this space, but now you’re going to need to log your statuses into the project database.” He smirked. “Like a big boy.”

I died a little inside. One of the reasons I’d left consulting Continue reading

Hedge 141: Improving WAN Router Performance

Wide area networks in large-scale cores tend to be performance choke-points—partially because of differentials between the traffic they’re receiving from data center fabrics, campuses, and other sources, and the availability of outbound bandwidth, and partially because these routers tend to be a focal point for policy implementation. Rachee Singh joins Tom Ammon, Jeff Tantsura, and Russ White to discuss “Shoofly, a tool for provisioning wide-area backbones that bypasses routers by keeping traffic in the optical domain for as long as possible.”

SSD roundup: New products deliver speed, density gains

There’s never a dull moment in the enterprise SSD market. Among the latest developments are three new products from Samsung, Micron and Kioxia. Here are the highlights.

Samsung’s new computational storage drive

Samsung unveiled a second generation of its SmartSSD, an SSD with a Xilinx FPGA and some memory for doing computational storage. Computational storage means processing data where it lies rather than moving it around the network. It’s a new concept and only practical with SSDs; there’s no way it could be done with a mechanical hard drive.

Notes from DNS OARC 38

There is still much in the way the DNS behaves that we really don't know, much we would like to do that we can't do already, and much we probably want to do better. DNS-OARC Meetings bring together a collection of people interested in all aspects of the DNS, from its design through to all aspects of its operation, and the presentations and discussions at OARC meetings touch upon the current hot topics in the DNS today.

Building and using Managed Components with WebCM

Managed Components are here to shake up the way third-party tools integrate with websites. Two months ago we announced that we’re open sourcing parts of the most innovative technologies behind Cloudflare Zaraz, making them accessible and usable to everyone online. Since then, we’ve been working hard to make sure that the code is well documented and all pieces are fun and easy to use. In this article, I want to show you how Managed Components can be useful for you right now, if you manage a website or if you’re building third-party tools. But before we dive in, let’s talk about the past.

Third-party scripts are a threat to your website

For decades, if you wanted to add an analytics tool to your site, a chat widget, a conversion pixel or any other kind of tool – you needed to include an external script. That usually meant adding some code like this to your website:

<script src="https://example.com/script.js"></script>

If you think about it, it’s a pretty terrible idea. Not only are you now asking the browser to connect to another server and fetch and execute more JavaScript code – you’re also completely giving up control over your Continue reading

3 places edge-computing challenges can lurk

How much computing power should we put at the edge of the network?

In the past, when networks weren’t supposed to be very smart, it wasn’t even a question. The answer was none. But now that it’s possible to bring often substantial amounts of computational equipment right to the very edge of the network, the right answer isn’t always so easy.

The arguments in favor are simple. When packets travel shorter distances, response time is faster. With compute, storage and networking deployed at the edge, network lags and latencies don’t slow down each trip between users and resources, and users and applications get better response times. At the same time, because more work is done at the edge, the need for bandwidth between remote sites and central data centers or the cloud will drop: less bandwidth, lower cost.

EVPN-VXLAN: Symmetrical IRB versus Asymmetrical IRB

Now that we've covered the two flavours of IRB in depth, I want to share more of a discussion piece. Technical details are interesting, sometimes even fun, but what about real-world operational considerations?

"Everyone has a plan..."

Viewing the intimidating assortment of pikes, swords, and other sharp and bashy objects in the Tower of London armoury during a recent visit, I was reminded of the Tyson quote: "Everyone has a plan until they get punched in the mouth."
My train of thought was, "being a knight riding around on horseback would be fun and all until an encounter with a big stick with a pointy metal end."
Similarly (or maybe not similar at all, but I hope you get where I'm going here), playing around with the various types of IRB for EVPN has been enlightening and, at times, fun; but what about on a real-world network with its everyday concerns and risk of outages - the pokey, hurty things in my analogy.
What works on paper might not be feasible on a live network, when the focus is primarily on reliability and deploying networks that the NetOps team can realistically support.

Symmetrical IRB - it scales, but at Continue reading

What is eBPF and what are its use cases

With the recent advancements in service delivery through containers, Linux has gained a lot of popularity in cloud computing by enabling digital businesses to expand easily regardless of their size or budget. These advancements have also brought a new wave of attacks, which are challenging to address with the same tools we have been using for non-cloud-native environments. eBPF offers a new way to interact with the Linux kernel, allowing us to reexamine possibilities that were once difficult to achieve.

In this post, I will go through a brief history of the steps that eBPF had to take to become the Swiss army knife inside the Linux kernel and point out how it can be used to achieve security in a cloud-native environment. I will also share my understanding of what happens inside the kernel that prevents BPF programs from wreaking havoc on your operating system.

BPF history

In the early days of computing, Unix was a popular solution for capturing network traffic, and the CMU/Stanford Packet Filter (CSPF), which captured packets on 64KB PDP-11 machines, was gaining popularity by the second. Without a doubt, this was pioneering work and a leap forward for its time, but like Continue reading