
Author Archives: Katherine Gorham

Taking the ‘hands-on’ out of data center administration

Discussions about networking in a work-from-home world often focus on employees and endpoints, but how can network administrators do more than just keep the lights on if they can’t go to the data center? Maintaining what exists isn’t enough, especially as the entire world is redefining the future of work. Organizations need to be able to adapt to change, so how is that possible when administrators can’t go hands-on?

There are any number of remote administration options available today, and just as many ways to compare them. Deciding between them is all about finding the right balance between cost, capability, and the labor required to implement them. In other words, they’re subject to all of the same considerations as any other technology implementation.

To dispense with the network administration 101 portion of the discussion: yes, networking is mostly a matter of remote administration anyway. If you can remote into something that has access to the management network, you can use SSH, HTTPS, or what-have-you to administer networks just as you would if you were in the office. That’s maintenance, not change.
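
For the maintenance side, a minimal sketch might look like the following, assuming the paramiko SSH library is installed; the hostname, username, and key path are hypothetical placeholders:

```python
# Run a read-only show command on a switch reachable via the management network.
import os
import paramiko

MGMT_HOST = "switch01.mgmt.example.com"  # hypothetical management address

def run_show_command(command: str) -> str:
    """Open an SSH session to the management interface and return the output."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab-only shortcut
    client.connect(
        MGMT_HOST,
        username="netadmin",
        key_filename=os.path.expanduser("~/.ssh/id_ed25519"),
    )
    try:
        _stdin, stdout, _stderr = client.exec_command(command)
        return stdout.read().decode()
    finally:
        client.close()

if __name__ == "__main__":
    print(run_show_command("show interfaces"))
```

That keeps the lights on; it does nothing, on its own, to help the network change.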

Accomplishing change remotely and at scale requires automation and orchestration. In practice, this is heavily dependent upon virtualization and/or Continue reading

Split-tunnel VPNs—friend or foe?

Virtual private networks (VPNs) provide security when remote workers access corporate networks, but they’re notoriously slow. Backhauling all traffic for all remote users through the corporate data center just isn’t practical when work from home really starts to scale. Fortunately, VPNs can be configured to operate in more than one way.

Today, most organizations—regardless of size—use some combination of on-premises and public cloud computing. This means that some requests need to go to one or more corporate data centers, while some need to find their way to the Internet.

Traditional VPNs send all requests—both corporate-bound and Internet-bound—through the corporate network because that’s where the corporate information security defenses are located. Today, this approach is causing significant performance problems.
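
To make the alternative concrete, here’s a minimal sketch of the split-tunnel routing decision itself, using hypothetical corporate prefixes; real VPN clients implement this with pushed routes or policies rather than application code:

```python
# Only corporate-bound traffic goes through the VPN; everything else egresses directly.
from ipaddress import ip_address, ip_network

CORPORATE_PREFIXES = [
    ip_network("10.0.0.0/8"),      # on-premises data centers (hypothetical)
    ip_network("172.16.0.0/12"),   # internal services (hypothetical)
]

def next_hop(destination: str) -> str:
    """Return 'vpn-tunnel' for corporate destinations, 'local-internet' otherwise."""
    dst = ip_address(destination)
    if any(dst in prefix for prefix in CORPORATE_PREFIXES):
        return "vpn-tunnel"
    return "local-internet"

assert next_hop("10.1.20.5") == "vpn-tunnel"          # intranet app, backhauled
assert next_hop("142.250.80.46") == "local-internet"  # public SaaS, goes direct
```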

Scaling …

The most popular traditional solution to VPN performance problems was simply to buy a bigger router or firewall. The throughput overhead of the VPN tunnel isn’t that large, and many traditional corporate applications weren’t latency sensitive. This meant that performance problems usually occurred because the device where the VPNs terminated—the router or firewall—just didn’t have enough processing power to handle the required number of concurrent sessions at the required throughput.
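
A back-of-envelope example (with entirely made-up figures) shows how quickly work-from-home scale outruns a fixed appliance:

```python
# Capacity check for a VPN concentrator. All figures are hypothetical
# illustrations, not vendor specifications.
appliance_throughput_gbps = 10.0   # encrypted throughput the device can sustain
appliance_max_sessions = 5_000     # concurrent tunnels it can terminate

remote_workers = 8_000             # everyone working from home
avg_per_user_mbps = 4.0            # video calls, SaaS, file sync

required_gbps = remote_workers * avg_per_user_mbps / 1_000
print(f"Required throughput: {required_gbps:.1f} Gbps vs {appliance_throughput_gbps} Gbps")
print(f"Required sessions:   {remote_workers} vs {appliance_max_sessions}")
# Both limits are exceeded: the traditional fix was a bigger box, but split
# tunneling reduces what the box has to carry in the first place.
```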

Times have changed, Continue reading

Modular networking in a volatile business environment

Organizational change, growth, and environmental diversity are all challenges for IT teams, and they’re going to be a part of everyday life for the foreseeable future. As the number of device models and network architectures increases, so, too, does management complexity. Coping with 2020’s ongoing gift of unpredictability requires technological agility, something Cumulus Networks, acquired by NVIDIA, can help you with.

It’s easy to worry about the consequences of our collective, rapidly changing economic circumstances as though the problems presented are somehow novel. They’re not.

2020 has increased uncertainty, leading to an increased velocity of change, but change is the only constant in life, and the need for agile networking has been obvious to many in the industry for some time. Even setting aside problems like rapidly figuring out how to cope with large chunks of the workforce working from home, change-responsive networking has challenged growing organizations for decades, and many continue to struggle with it today.

At a practical level, one of the biggest problems with rapid change is that it quickly leads to a dilemma: precisely meet the needs of the moment, resulting in a significant uptick in equipment diversity, or deploy Continue reading

As networks grow, web-scale automation is the only way to keep up

Networks just keep growing, don’t they? They’ve evolved from a few machines on a LAN to the introduction of Wi-Fi—and with the Internet of Things (IoT), we’ve now got a whole new class of devices. Throw in the rise of smartphones and tablets, cloud and edge computing, and network management starts to get a little unwieldy. Managing a network with 300 devices manually might be possible—300,000 devices, not so much.

What is web-scale automation?

Network automation has been around for a while now, under various names from various vendors, using a number of proprietary protocols. The key word is “proprietary.” Many traditional network vendors design a well-functioning network automation system, but enforce vendor lock-in by ensuring that the associated automation stack, and its requisite protocols, only run on their hardware.

Web-scale automation is different. It relies on open, extensible standards like HTTPS, JSON, and NETCONF, supported by an ever-increasing number of systems and solutions. With web-scale automation in your organization, network management can, over time, become a background function: something that only notifies you in exceptional circumstances.
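
As a rough sketch of what “background function” can mean in practice, the following polls each device’s HTTPS/JSON interface and stays silent unless something is wrong. The URL path, JSON shape, and credentials are hypothetical, and the requests library is assumed to be installed:

```python
# Poll device state over HTTPS/JSON and alert only on exceptional conditions.
import requests

DEVICES = ["leaf01.mgmt.example.com", "leaf02.mgmt.example.com"]  # hypothetical

def check_interfaces(host: str) -> list:
    """Return interfaces that are admin-up but operationally down."""
    resp = requests.get(
        f"https://{host}/api/v1/interfaces",   # hypothetical endpoint
        auth=("netops", "s3cret"),             # placeholder credentials
        timeout=5,
    )
    resp.raise_for_status()
    problems = []
    for name, state in resp.json().items():    # assumed {name: {...}} shape
        if state.get("admin") == "up" and state.get("oper") != "up":
            problems.append(name)
    return problems

for host in DEVICES:
    down = check_interfaces(host)
    if down:                                   # only notify in exceptional cases
        print(f"ALERT {host}: links down: {', '.join(down)}")
```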

This does not, in any way, reduce the need for those who know networks to be employed at your organization—it simply reduces the amount Continue reading

Virtual Data Centers, SDN, and Multitenancy

When you aren’t the size of Netflix, you may not be guaranteed dedicated infrastructure within a data center; you have to share. Even in larger organizations, multitenancy may be required to solve regulatory compliance issues. So what is multitenancy, how does it differ from other forms of resource division, and what role do networks play?

Gartner Inc. defines multitenancy as “a reference to the mode of operation of software where multiple independent instances of one or multiple applications operate in a shared environment. The instances (tenants) are logically isolated, but physically integrated.” This is basically a fancy way of saying “cutting up IT infrastructure so that more than one user, department, organization, and so on can share the same physical IT infrastructure, without being able to see one another’s data.”

That “without being able to see one another’s data” is the critical bit. Allowing multiple users to use a single computer has been possible for decades. Multi-user operating systems, for example, can allow multiple users to log in to a single computer at the same time. While this approach does allow multiple users to share a physical piece of IT infrastructure, it isn’t multitenancy.
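
To make the distinction concrete, here’s a conceptual sketch (not any particular product) of logical isolation on shared infrastructure, where each tenant gets its own routing context, much like a VRF or a VXLAN VNI; all names and numbers are hypothetical:

```python
# One physical fabric, many logically isolated tenant tables.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Tenant:
    name: str
    vni: int                                    # VXLAN network identifier
    routes: dict = field(default_factory=dict)  # prefix -> next hop

class SharedFabric:
    def __init__(self) -> None:
        self._tenants: dict = {}

    def add_tenant(self, tenant: Tenant) -> None:
        self._tenants[tenant.vni] = tenant

    def lookup(self, vni: int, prefix: str) -> Optional[str]:
        # Lookups are scoped to a single tenant's table; nothing in this
        # interface can reach another tenant's routes.
        return self._tenants[vni].routes.get(prefix)

fabric = SharedFabric()
fabric.add_tenant(Tenant("engineering", vni=10010, routes={"10.0.0.0/24": "leaf01"}))
fabric.add_tenant(Tenant("finance", vni=10020, routes={"10.0.0.0/24": "leaf02"}))

# The same prefix exists in both tenants without colliding: isolation on shared gear.
assert fabric.lookup(10010, "10.0.0.0/24") == "leaf01"
assert fabric.lookup(10020, "10.0.0.0/24") == "leaf02"
```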

In a multi-user OS, the multiple users Continue reading

Automate, orchestrate, survive: treating your network as a holistic entity

Organizations need to learn to think about networks as holistic entities. Networks are more than core routers or top-of-rack (ToR) switches. They’re composed of numerous connectivity options, all of which must play nice with one another. What role does automation play in making network heterogeneity viable? And does getting all the pieces from a single vendor really make management easier if that vendor has 15 different operating systems spread across their lineup of network devices?

Most network administrators are used to thinking about their networks in terms of tiers. Access is different from branch, which is different from campus, and so forth. The data center is something different again, and then there’s virtual networking complicating everything.

With networks so big and sprawling that they frequently span multiple teams, it’s easy to focus on only one area at a time. Looking at the network holistically—both as it exists and as it’s likely to evolve—is a much more complicated process, and an increasingly important one.
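
One way to make that holistic view tractable is an abstraction layer over a heterogeneous inventory, so the same intent can be expressed once and dispatched per platform. The sketch below is illustrative only; device names, platform strings, and commands are hypothetical placeholders:

```python
# One inventory, one intent ("collect interface state"), dispatched per platform.
INVENTORY = {
    "spine01": {"platform": "cumulus", "mgmt": "10.10.0.1"},
    "edge-fw": {"platform": "vendor-x", "mgmt": "10.10.0.2"},
    "campus1": {"platform": "vendor-y", "mgmt": "10.10.0.3"},
}

# Per-platform command needed to express the same intent.
SHOW_INTERFACES = {
    "cumulus": "nv show interface",
    "vendor-x": "show interface brief",
    "vendor-y": "display interface summary",
}

def commands_for_intent(intent_table: dict) -> dict:
    """Map each device to the platform-appropriate command for one intent."""
    return {
        name: intent_table[attrs["platform"]]
        for name, attrs in INVENTORY.items()
    }

print(commands_for_intent(SHOW_INTERFACES))
# Automation tooling (Ansible, Nornir, and the like) is essentially this idea
# with transport, credentials, and error handling layered on top.
```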

Networks grow, evolve, and change. Some of this is organic: growth of the organization necessitates the acquisition of new equipment. Other times change is more unmanaged, something that’s especially common with mergers and acquisitions (M&As).

Regardless of reason, change in Continue reading

Why edge computing needs open networking

Edge computing deployments need to be compact, efficient, and easy to administer. Hyperconverged infrastructure (HCI) has proven to be a natural choice for handling compute and storage at the edge, but what considerations are there for networking?

To talk about edge computing, it helps to define it. Edge computing is currently in a state very similar to “cloud computing” in 2009: if you asked five different technologists to define it, you’d get back eight different answers. Just as cloud computing incorporated both emerging technologies and a limited set of established practices, edge computing does the same.

The broadest definition of edge computing is any situation in which an organization places workloads inside someone else’s infrastructure that isn’t one of the major public clouds. This comes with the caveat that the major public cloud providers are, of course, heavily investing in edge computing offerings of their own, muddying the waters.

Traditional IT practices that fall into the realm of edge computing today include colocation, content delivery networks (CDNs), most things involving geographically remote locations and so forth—the “edge” of modern networks. But edge computing also covers the emerging practices of using mobile networks (among others) for Internet of Continue reading

It’s a fact: choosing your own hardware means lower TCO

An essential part of open networking is the ability to choose your own hardware. This allows you to customize your network to suit business needs, and it can also dramatically reduce your Total Cost of Ownership (TCO). On average, open networking with Cumulus helps customers reduce their capital expenditures (CapEx) by about 45% and their operational expenditures (OpEx) by roughly 50% to 75%.
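
As a worked example, here’s what those percentages look like when applied to a hypothetical baseline spend; the dollar figures are made up, and only the percentage ranges come from the paragraph above:

```python
# Apply the quoted savings ranges to a hypothetical baseline budget.
capex_baseline = 1_000_000   # hypothetical annual network hardware spend
opex_baseline = 600_000      # hypothetical annual operations spend

capex_savings = capex_baseline * 0.45
opex_savings_low = opex_baseline * 0.50
opex_savings_high = opex_baseline * 0.75

print(f"CapEx saved: ${capex_savings:,.0f}")
print(f"OpEx saved:  ${opex_savings_low:,.0f} to ${opex_savings_high:,.0f}")
# -> CapEx saved: $450,000
# -> OpEx saved:  $300,000 to $450,000
```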

Choosing the right hardware is a big part of these savings. If you compare bare-metal networking equipment with a similar product from a proprietary networking vendor, you’ll quickly find that bare-metal hardware is much less expensive. One reason for this is competition between hardware vendors in the open networking space.

Open networking is a multi-vendor ecosystem. More than 100 switches are certified to work with Cumulus Linux; they’re manufactured by vendors such as Dell, HPE, Mellanox, Supermicro, and others. Unlike with proprietary switches, there’s no vendor lock-in creating a monopoly situation. In the open networking space, vendors compete for sales, and this keeps costs down.

Another factor in lowering costs is the degree of customization available when you have many products to choose from. Choosing your own hardware means buying what you need—and only Continue reading

Prevent lateral compromise with microsegmentation

It’s an unfortunate reality of information security: Eventually, everyone gets compromised. Manufacturers, banks, tech support companies, retail giants, power plants, municipal governments … these are just some of the sectors that have been affected by high-profile data breaches in recent months. Everyone gets hacked. You will, too.

This isn’t cause for despair. It simply means that effective security has to focus on more than just intrusion prevention. Hackers will eventually get into any network, if they’re willing to spend enough time and money doing so. But whether or not they get anything useful once they’ve gained entry—that’s another story.

Good network design can minimize the damage incurred during an attack. There are more ways to approach this than will fit in a single article, so this blog will focus only on network segmentation and its smaller sibling, microsegmentation.

What is network segmentation?

Network segmentation is the practice of dividing a network into one or more subsections. Each subsection usually contains different kinds of resources and has different policies about who has access to that segment. There are a variety of ways to accomplish the division.
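
A minimal sketch of what such a policy boils down to, using hypothetical segment names and a default-deny rule set:

```python
# Traffic between segments is denied unless a rule explicitly allows it.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier"): {"tcp/8443"},
    ("app-tier", "db-tier"): {"tcp/5432"},
    ("mgmt", "web-tier"): {"tcp/22"},
}

def is_allowed(src_segment: str, dst_segment: str, service: str) -> bool:
    """Default deny: permit only explicitly listed segment-to-segment services."""
    return service in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

assert is_allowed("web-tier", "app-tier", "tcp/8443")      # permitted path
assert not is_allowed("web-tier", "db-tier", "tcp/5432")   # no direct web -> db
assert not is_allowed("app-tier", "web-tier", "tcp/8443")  # rules are not symmetric
# Microsegmentation applies the same idea to individual workloads rather than
# whole subnets.
```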

Network segmentation runs along a spectrum from the purely physical to the purely logical. The Continue reading

The case for open standards: an M&A perspective

Very few organizations use IT equipment supplied by a single vendor. Where heterogeneous IT environments exist, interoperability is key to achieving maximum value from existing investments. Open networking is the most cost-effective way to ensure interoperability between devices on a network.

Unless your organization was formed very recently, chances are that your organization’s IT has evolved over time. Even small hardware upgrades are disruptive to an organization’s operations, making network-wide “lift and shift” upgrades nearly unheard of.

While loyalty to a single vendor can persist through regular organic growth and upgrade cycles, organizations regularly undergo mergers and acquisitions (M&As). M&As almost always introduce some level of heterogeneity into a network, meaning that any organization of modest size is almost guaranteed to have to integrate IT from multiple vendors.

While every new type of device from every different vendor imposes operational management overhead, the impact of heterogeneous IT isn’t universal across device types. The level of automation within an organization for different device classes, as well as the ubiquity and ease of use of management abstraction layers, both play a role in determining the impact of heterogeneity.

The Impact of Standards

Consider, for a moment, the average x86 server. Each Continue reading

How open standards help with defense in depth

If you ask an ordinary person about information security, they’ll probably talk to you about endpoints. Most people are aware of virus scanners for notebooks or PCs, and may have encountered some kind of mobile device management on a work-provided phone. These endpoint solutions naturally come to mind if someone mentions cyber security. However, this is backward from the way that infosec professionals think about the issue.

Someone who works in infosec will tell you that the endpoint should be the absolute last line of defense. If a virus scanner finds malware on your work notebook, the malware should have had to defeat a long list of other security precautions in order to get that far. This layered approach to security is known as defense in depth.

The term “defense in depth” originally was applied to military strategy. It described the practice of trying to slow an enemy down, disperse their attack, and cause casualties, rather than trying to stop the attack at a single, heavily fortified point. The enemy might breach the first layer of defenses, but would find additional layers beyond. While they struggled to advance, they could be surrounded and then counter-attacked.

Infosec in Depth

The information Continue reading