Category Archives for "Networking"

Post Equifax, We Need to Reconsider How to Identify People 

Victims of identity theft will tell you the experience is like having your personal life broken into, tossed around, and thrown out onto the street – an indescribable violation. Then you may discover that strangers are impersonating you, committing crimes in your name, and destroying your reputation. Unraveling the mess that follows is a long, painful, never-ending process – all because someone else was careless or willfully negligent with your data.

Even if your data was not exposed in the Equifax breach, you should be both concerned and angry. This is a potentially catastrophic breach: roughly 143 million individuals (approximately 45% of the US population) now face the prospect of identity theft.

As a society, we need to seriously rethink why and how we identify people. How did the social security number become the default identifier, especially for non-governmental functions such as credit reporting? When the Social Security Administration first issued SSNs in 1936, their “sole purpose” was to track the earning history of workers for benefits. In fact, Kaya Yurieff points out that until 1972, the bottom of the card read: “FOR SOCIAL SECURITY PURPOSES — NOT FOR IDENTIFICATION.”

Social security numbers Continue reading

A networking expert on how to experiment with containers using Mesosphere and Cumulus Linux

With the rising popularity of containers, container and network technologies intersect more and more frequently. Along with the excitement comes plenty of new terminology and technical complexity, which is why I’m so grateful for Cumulus in the Cloud. As a Sr. Consulting Engineer, part of my job is staying deeply familiar with the technologies and methodologies our customers are using, and I’ve recently been playing with Cumulus in the Cloud to learn how Mesosphere’s Marathon (running on Apache Mesos) interoperates with Cumulus Linux and NetQ.

Let me start off by saying that if you’re interested in container networking but want more information on how to do it right, we’re hosting a webinar with Mesosphere that you should most definitely check out. Our co-founder, JR Rivers, will also be hosting, and, I promise you, he’s always an engaging speaker. Of course if you’re already familiar with container networking, or you would like to learn about it in a more hands-on atmosphere, then please read on!

I’m a networking veteran, but working at Cumulus has pushed the boundaries of my networking knowledge as I’ve had to learn more about integrating networking solutions with application functionality. When I have to talk Continue reading

Pluribus Networks… Wait, where are we again?


I was privileged to visit Pluribus Networks as a delegate at Network Field Day 16 a couple of weeks ago. Somebody else paid for the trip. Details here.

Much has changed at Pluribus; I hardly recognized the place!

I quite like Pluribus (their use of Solaris under their Netvisor switching OS got me right in the feels early on) so I'm happy to report that most of what's new looks like changes for the better.

When we arrived at Pluribus HQ we were greeted by some new faces, a new logo, color scheme... Even new accent lighting in the demo area!

Gone also are the Server Switches with their monstrous control planes (though still listed on the website, they weren't mentioned in the presentation), Solaris, and a partnership with Supermicro.

In their place we found:

  • The new logo and colors
  • New faces in management and marketing
  • Netvisor running on Linux
  • Whitebox and OCP-friendly switches
  • A partnership with D-Link
  • Some Netvisor improvements

Linux

This was probably inevitable, and likely matters little to Netvisor users. When Pluribus was first getting off the ground, I was waiting for an OpenSolaris release that never happened. That Pluribus stuck with Solaris for as long Continue reading

A lack of cloud skills could cost companies money

A poll from Europe finds two in three IT decision makers say their organization is losing out on revenue because their firm lacks specific cloud expertise.

The report, compiled by cloud hosting provider Rackspace and the London School of Economics, polled 950 IT decision makers and 950 IT pros and found that nearly three quarters of IT decision makers (71 percent) believed their organizations have lost revenue due to a lack of cloud expertise. On average, this accounts for 5 percent of total global revenue, no small amount of money.

The survey also found that 65 percent believed they could bring greater innovation to their company with “the right cloud insight,” and 85 percent said greater expertise within their organization would help them recoup the return on their cloud investment.

5 Reasons Why Attending the Transform Security Track at vForum Online is a Must

If you’ve been working in IT for the past few years, you know how much the security landscape has changed recently. Application infrastructures — once hosted in on-premises data centers — now sit in highly dynamic public and private multicloud environments. With the rise of mobile devices, bring-your-own-device (BYOD) policies, and Internet of Things (IoT), end-user environments are no longer primarily about corporately managed desktops. And attackers are growing more sophisticated by the day.

In such an atmosphere, traditional network perimeter security ceases to provide adequate protection.

That’s where VMware solutions come in. At the heart of these solutions is a ubiquitous software layer across application infrastructure and endpoints that’s independent of the underlying physical infrastructure or location. To really understand how it works, you need to experience it for yourself, and the Transform Security track at vForum Online Fall 2017 on October 18th is the perfect opportunity. As our largest virtual conference, vForum Online gives IT professionals like you the chance to take a deep dive into VMware products with breakout sessions, chats with experts, and hands-on labs — all from the comfort of your own desk.

With this free half-day event just weeks away, it’s time to Continue reading

Familiarizing Africans with Internet of Things

About 70 years ago, the world was introduced to the digital computer revolution, which made the computation of millions of operations as fast and easy as 1+2. This simplified many time-consuming activities and brought about new applications that amazed the world. Then, about 40 years ago, the advent of networks and inter-networks (the Internet) revolutionized the way we work and live by connecting the hundreds of millions of computing devices that have invaded our homes and offices.

Today, we are at the beginning of a new revolution, that of the Internet of Things (IoT), the extent of which might be limited only by our imagination. The Internet of Things refers to the rapidly growing network of objects connected through the Internet. The objects can be sensors, such as a thermostat or a speed meter, or actuators that open a valve or switch a light or a motor on and off. These devices are embedded in our everyday home and workplace equipment (refrigerators, machines, cars, road infrastructure, etc.) or even the human body. Connected to powerful computers in the “cloud,” these devices might change our world in a way that few of us can imagine today. It is estimated that Continue reading

Response: The Worm Has Turned Against Tech Companies

For the past decade, the loudest arguments waged regarding the Four (Amazon, Apple, Facebook, and Google) were about which CEO was more Jesus-like or should run for president. These platforms brought down autocrats, were going to cure death and put a man on Mars, because they are just so awesome. Media outlets were coopted into […]

The post Response: The Worm Has Turned Against Tech Companies appeared first on EtherealMind.

Hitachi reorganizes to focus on IoT

In yet another sign of how the Internet of Things (IoT) is rearranging the international corporate landscape, Japanese manufacturing giant Hitachi is reorganizing to challenge the global leaders in the IoT market.

The conglomerate is combining three of its Bay Area divisions into a single $4 billion unit responsible for growing Hitachi's IoT operations in more than 130 countries. The new IoT-centric operation will combine Hitachi Data Systems (data center infrastructure), Hitachi Insight Group (big data software) and Pentaho (analytics) into a wholly owned subsidiary called Hitachi Vantara, which will employ some 7,000 people (about a third of Hitachi’s IT workforce) out of its Santa Clara, California, headquarters.

Getting Linux to ignore pings

The ping command sends one or more requests to a system asking for a response. It’s typically used to check that a system is up and running, verify an IP address, or prove that the sending system can reach the remote one (i.e., verify the route). The ping command is also one that network intruders often use as a first step in identifying systems on a network that they might next want to attack. In this post, we’re going to take a quick look at how ping works and then examine options for configuring systems to ignore these requests.

How ping works

The name “ping” came about because the ping command works in a way that is similar to sonar echo-location, which uses sound propagation for navigation; the sound pulses are called “pings.” The ping command on Unix and other systems sends an ICMP ECHO_REQUEST to a specified computer, which is then expected to send back an ECHO_REPLY. The requests and replies are very small packets.
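The standard Linux knob for this is the kernel setting net.ipv4.icmp_echo_ignore_all, reachable through sysctl or /proc. As a minimal sketch of the idea, assuming a Linux host, root privileges, and the usual /proc/sys layout, the following Python snippet reads and flips that setting (the full article examines the broader range of options):

```python
#!/usr/bin/env python3
"""Toggle whether this Linux host answers ICMP echo (ping) requests.

Sketch only: assumes Linux, root privileges, and the standard
/proc/sys/net/ipv4/icmp_echo_ignore_all interface. The change does not
persist across reboots unless it is also added to /etc/sysctl.conf.
"""
import sys
from pathlib import Path

SETTING = Path("/proc/sys/net/ipv4/icmp_echo_ignore_all")

def get_ignore():
    # "1" means the kernel silently drops ECHO_REQUESTs; "0" means it replies.
    return SETTING.read_text().strip() == "1"

def set_ignore(ignore):
    SETTING.write_text("1\n" if ignore else "0\n")

if __name__ == "__main__":
    if len(sys.argv) == 2 and sys.argv[1] in ("on", "off"):
        set_ignore(sys.argv[1] == "on")
    print("pings are currently", "ignored" if get_ignore() else "answered")
```

The same change can be made directly with sysctl -w net.ipv4.icmp_echo_ignore_all=1; dropping ICMP echo-requests at the firewall is the other common approach.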

Handling A10 PCAP Files Using Automator in MacOS

I’m not a big user of Apple’s Automator tool, but sometimes it’s very useful. For example, A10 Networks load balancers make it pretty easy for administrators to capture packets without having to remember the syntax and appropriate command flags for a tcpdump command in the shell. Downloading the .pcap file is pretty easy too (especially using the web interface), but what gets downloaded is not just a single file; instead, it’s a gzip file containing a tar file which in turn contains (for the hardware I use) seventeen packet capture files. In this post I’ll explain what these files are, why it’s annoying, and how I work around this in MacOS.

Sixteen Candles

If you’re wondering how one packet capture turned into sixteen PCAP files, that’s perfectly reasonable and the answer is simple in its own way. The hardware I use has sixteen CPU cores, fifteen of which are used by default to process traffic, and inbound flows are spread across those cores. Thus when taking a packet capture, the system actually requests each core to dump the flows matching the filter specification. Each core effectively has awareness of both the client and server sides of any connection, so both Continue reading
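The post goes on to describe an Automator workflow for this; purely as an illustration of the unpacking step it automates, here is a rough Python sketch under the assumption that the download is a single .tar.gz whose members are the per-core .pcap files (the archive and directory names here are hypothetical). Once extracted, Wireshark's mergecap can stitch the per-core captures back into one trace.

```python
#!/usr/bin/env python3
"""Unpack an A10 capture download (a gzipped tar of per-core .pcap files).

Illustrative sketch only, not the Automator workflow from the post:
the archive name and member layout are assumptions about a typical download.
"""
import tarfile
from pathlib import Path

def extract_pcaps(archive, dest="a10_capture"):
    out = Path(dest)
    out.mkdir(exist_ok=True)
    extracted = []
    # "r:gz" handles the gzip and tar layers in one pass.
    with tarfile.open(archive, "r:gz") as tar:
        for member in tar.getmembers():
            if member.isfile() and member.name.endswith(".pcap"):
                tar.extract(member, path=out)
                extracted.append(out / member.name)
    return extracted

if __name__ == "__main__":
    files = extract_pcaps("capture.tar.gz")   # hypothetical filename
    print(f"extracted {len(files)} per-core capture files:")
    for f in files:
        print(" ", f)
    # To merge them into one trace, Wireshark's mergecap works well:
    #   mergecap -w combined.pcap a10_capture/*.pcap
```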

Geo Key Manager: How It Works

Today we announced Geo Key Manager, a feature that gives customers unprecedented control over where their private keys are stored when uploaded to Cloudflare. This feature builds on a previous Cloudflare innovation called Keyless SSL and a novel cryptographic access control mechanism based on both identity-based encryption and broadcast encryption. In this post we’ll explain the technical details of this feature, the first of its kind in the industry, and how Cloudflare leveraged its existing network and technologies to build it.

Keys in different area codes

Cloudflare launched Keyless SSL three years ago to wide acclaim. With Keyless SSL, customers are able to take advantage of the full benefits of Cloudflare’s network while keeping their HTTPS private keys inside their own infrastructure. Keyless SSL has been popular with customers in industries with regulations around the control of access to private keys, such as the financial industry. Keyless SSL adoption has been slower outside these regulated industries, partly because it requires customers to run custom software (the key server) inside their infrastructure.

Standard Configuration

Keyless SSL

One of the motivating use cases for Keyless SSL was the expectation that customers may not trust a third party like Cloudflare with their Continue reading
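To make the split of responsibilities concrete: with Keyless SSL the private key stays on a key server inside the customer's network, and the edge only asks that server to perform individual signing operations during TLS handshakes. The sketch below is not Cloudflare's actual keyless protocol (which runs over mutually authenticated TLS with its own framing); it is a hypothetical stand-in, using the third-party cryptography package, that shows a key server signing a pre-hashed handshake digest it receives while the key itself never leaves the box.

```python
#!/usr/bin/env python3
"""Toy "key server": the private key never leaves this process.

Not Cloudflare's keyless protocol -- just a hypothetical illustration of
the split: a remote edge sends a 32-byte SHA-256 handshake digest, and
this server returns an RSA signature made with the locally held key.
Assumes an RSA key in key.pem and the third-party 'cryptography' package.
"""
import socketserver

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, utils

with open("key.pem", "rb") as f:          # the key that never leaves home
    PRIVATE_KEY = serialization.load_pem_private_key(f.read(), password=None)

class SignHandler(socketserver.StreamRequestHandler):
    def handle(self):
        digest = self.rfile.read(32)      # pre-hashed handshake transcript
        if len(digest) != 32:
            return
        signature = PRIVATE_KEY.sign(
            digest,
            padding.PKCS1v15(),
            utils.Prehashed(hashes.SHA256()),
        )
        self.wfile.write(signature)

if __name__ == "__main__":
    # In a real deployment this channel would itself be mutually
    # authenticated TLS; here it is a bare TCP socket for brevity.
    with socketserver.TCPServer(("0.0.0.0", 8443), SignHandler) as srv:
        srv.serve_forever()
```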

Introducing the Cloudflare Geo Key Manager

Cloudflare’s customers recognize that they need to protect the confidentiality and integrity of communications with their web visitors. The widely accepted solution to this problem is to use the SSL/TLS protocol to establish an encrypted HTTPS session, over which secure requests can then be sent. Eavesdropping is prevented because only those who have access to the “private key” can legitimately identify themselves to browsers and decrypt encrypted requests.

Today, more than half of all traffic on the web uses HTTPS—but this was not always the case. In the early days of SSL, the protocol was viewed as slow because each encrypted request required two round trips between the user’s browser and the web server. Companies like Cloudflare solved this problem by putting web servers close to end users and utilizing session resumption to eliminate those round trips for all but the very first request.
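Session resumption is what lets a returning visitor skip the expensive key exchange: the client caches the session state from its first full handshake and offers it again on later connections. As a rough client-side illustration using Python's standard ssl module (example.com is just a placeholder, and whether resumption actually happens depends on the server and on TLS 1.3 ticket handling), it looks like this:

```python
#!/usr/bin/env python3
"""Client-side TLS session resumption with Python's ssl module.

Illustrative sketch only: example.com stands in for any TLS server, and
whether the second connection is resumed depends on that server honoring
session tickets/IDs (the mechanics also differ under TLS 1.3).
"""
import socket
import ssl

HOST = "example.com"
ctx = ssl.create_default_context()

# First connection: a full handshake, then cache the resulting session.
with socket.create_connection((HOST, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        cached_session = tls.session
        print("first connection resumed?", tls.session_reused)   # False

# Second connection: offer the cached session to skip the key exchange.
with socket.create_connection((HOST, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST, session=cached_session) as tls:
        print("second connection resumed?", tls.session_reused)
```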

Expanding footprint meets geopolitical concerns

As Internet adoption grew around the world, with companies increasingly serving global and more remote audiences, providers like Cloudflare had to continue expanding their physical footprint to keep up with demand. As of the date this blog post was published, Cloudflare has data centers in over 55 countries, and we continue Continue reading