Author Archives: Trevor Pott
How do you defend what you don’t know exists? In IT, this is more than just an existential question, or fuel for a philosophical debate. The existence of a complete network inventory—or the lack thereof—has a real-world impact on an organization’s ability to secure their network. Establishing and maintaining a network inventory is both a technological and a business process problem, and serves as an excellent example of the importance of open standards to a modern organization.
Consider for a moment NASA’s Jet Propulsion Laboratory (JPL). In April 2018 the JPL experienced a cybersecurity event. Upon investigation, it was determined that this was caused by someone smuggling an unauthorized Raspberry Pi onto the premises and connecting it to the network.
This incident triggered a security audit, and the findings of the resulting June 2019 report were, though not unexpected, still rather disappointing. The auditors’ biggest concern was that the JPL didn’t have a comprehensive, accurate picture of which devices were on its networks, nor did it know whether those devices were authorized to be there.
It was this lack of an up-to-date, automated network inventory that allowed the unauthorized Raspberry Pi to be connected, and the JPL hacked, without detection. Some… Continue reading
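The gap the auditors flagged can be illustrated with a minimal sketch: compare the devices actually observed on the network against an authorized inventory, and flag anything unknown. The MAC addresses and hostnames below are invented sample data, not real JPL assets.

```python
# Minimal sketch of an automated inventory check: devices observed on the
# network (e.g. harvested from ARP/ND tables or a discovery scan) are
# compared against an authorized-asset list, and unknowns are flagged.
# All addresses below are made-up sample data.

AUTHORIZED = {
    "3c:52:82:aa:01:10": "fileserver-01",
    "3c:52:82:aa:01:11": "workstation-07",
}

def audit(observed_macs):
    """Return the set of observed MACs with no inventory record."""
    return {mac for mac in observed_macs if mac not in AUTHORIZED}

# b8:27:eb is a Raspberry Pi Foundation OUI prefix.
observed = ["3c:52:82:aa:01:10", "b8:27:eb:55:44:33"]
for mac in sorted(audit(observed)):
    print(f"unauthorized device: {mac}")
```

The real work, of course, is keeping the authorized list current and the observation step continuous; the set difference itself is the easy part.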
Taking full advantage of all that IT automation and orchestration have to offer frequently involves combining IT infrastructure automation with in-house application development. To this end, open source software is often used to speed development. Unfortunately, incorporating third-party software into your application means incorporating that third-party software’s vulnerabilities, too.
Scanning for, identifying, and patching open source dependencies in an application’s codebase is known as dependency management, and it’s increasingly considered a critical part of modern development. A recent report found that 60% of the open source programs audited contained a vulnerability for which a patch had already been released but not applied. With 96% of codebases making use of open source libraries, this is a problem that affects everyone.
There are many dependency management products available; too many to list in a single blog post. That said, we’ll look at some examples of well-known dependency management products that fall into three broad categories: free, open source software; commercial software with a free tier; and commercial software without a free tier.
Some dependency management products rely on open source vulnerability lists (the most famous of which is supplied by the National Institute of Standards and Technology [NIST]). Some products are commercial, and use closed databases (often in combination with the… Continue reading
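At its core, every one of these products performs the same match: declared dependency versions against a vulnerability feed. A minimal sketch of that matching step, with an invented advisory list standing in for a real feed such as NVD or OSV:

```python
# Sketch of the core of dependency management: match an application's
# declared dependencies against a known-vulnerability feed. Real tools
# pull advisories from NVD, OSV, or commercial databases; the entries
# below are invented for illustration.

# package -> first fixed version (hypothetical advisories)
ADVISORIES = {
    "libexample": (1, 4, 2),   # fixed in 1.4.2
    "webwidget": (2, 0, 0),    # fixed in 2.0.0
}

def parse(version):
    """Turn '1.3.9' into a comparable tuple (1, 3, 9)."""
    return tuple(int(part) for part in version.split("."))

def vulnerable(dependencies):
    """Yield (name, version) pairs older than the first fixed release."""
    for name, version in dependencies.items():
        fixed = ADVISORIES.get(name)
        if fixed and parse(version) < fixed:
            yield name, version

deps = {"libexample": "1.3.9", "webwidget": "2.1.0", "unlisted": "0.1.0"}
for name, version in vulnerable(deps):
    print(f"{name} {version} has a known, already-patched vulnerability")
```

Real version schemes (pre-releases, epochs, distro suffixes) are messier than dotted integers, which is a large part of why dedicated tools exist.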
Continuous Integration and Continuous Delivery (CI/CD) and containers are both at the heart of modern software development. CI/CD developers regularly break applications up into microservices, each running in its own container. Individual microservices can be updated independently of one another, and CI/CD developers aim to make those updates frequently.
This approach to application development has serious implications for networking.
There are a lot of things to consider when talking about the networking implications of CI/CD, containers, microservices and other modern approaches to application development. For starters, containers offer more density than virtual machines (VMs); you can stuff more containers into a given server than is possible with VMs.
Meanwhile, containers have networking requirements just like VMs do, so more workloads per server means more networking resources required per server: more MAC addresses, IPs, DNS entries, load balancers, monitoring, intrusion detection, and so forth. The network plumbing itself hasn’t changed, so more workloads means more plumbing to instantiate and keep track of.
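That plumbing math scales quickly. As a rough, illustrative sketch (the density figures here are assumptions, not measurements), here is how workload density translates into addressing requirements across a group of servers:

```python
# Back-of-the-envelope addressing math for VM vs container density.
# The per-server density figures are assumed for illustration.
import math

def subnet_prefix(addresses_needed):
    """Smallest IPv4 prefix length whose host range fits the given
    workload count (plus network and broadcast addresses)."""
    return 32 - math.ceil(math.log2(addresses_needed + 2))

servers = 40
densities = [("VMs", 25), ("containers", 400)]  # assumed workloads/server

for label, per_server in densities:
    total = servers * per_server
    print(f"{label}: {total} workloads -> /{subnet_prefix(total)} subnet, "
          f"{total} MACs, IPs, and DNS entries to track")
```

With these assumed figures, the same 40 servers jump from a /22’s worth of addresses to a /18’s worth simply by switching workload type: same plumbing, an order of magnitude more of it.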
Containers can live inside a VM or on a physical server, which means they may have different networking requirements than other workloads (talking only to other containers within the same VM, for example). … Continue reading
Who controls containers: developers, or operations teams? While this might seem like something of an academic discussion, the question has very serious implications for the future of IT in any organization. IT infrastructure is not made up of islands; each component interacts with, and depends on, others. Tying all components of all infrastructures together is the network.
If operations teams control containers, they can carefully review the impact that the creation of those containers will have on all the rest of an organization’s infrastructure. They can carefully plan for the consequences of new workloads, assign and/or reserve resources, map out lifecycle, and plan for the retirement of the workload, including the return of those resources.
If developers control containers, they typically don’t have the training to see how one small piece fits into the wider puzzle, and almost certainly don’t have administrative access to all the other pieces of that puzzle to gain the insight. Given the above, it might seem like a no-brainer to let operations teams control containers. Yet in most organizations deploying containers, developers are responsible for the creation and destruction of containers, which they do as they see fit.
This is not as irrational as it… Continue reading
Containers are unlike any other compute infrastructure. Prior to containers, compute infrastructure was composed of a set of brittle technologies that often took weeks to deploy. Containers made the automation of workload deployment mainstream, and brought workload deployment down to minutes, if not seconds.
Now, to be perfectly clear, containers themselves aren’t some sort of magical automation sauce that changed everything. Containers are something of a totem for IT operations automation, for a few different reasons.
Unlike the Virtual Machines (VMs) that preceded them, containers don’t require a full operating system for every workload. A single operating system can host hundreds or even thousands of containers, shrinking the per-workload RAM overhead from several gigabytes to a few dozen megabytes. Similarly, containerized workloads share certain basic functions – libraries, for instance – with the host operating system, which can make maintaining key aspects of the container operating environment easier: when you update the underlying host, you update those shared components for every container running on it.
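The density difference is easy to see with some back-of-the-envelope arithmetic. The overhead figures below are illustrative assumptions in line with the rough numbers above (gigabytes per VM guest OS, a few dozen megabytes per container), not benchmarks:

```python
# Illustrative comparison of workload density on one host.
# All figures are assumed round numbers, not measurements.

VM_OS_OVERHEAD_MB = 2048      # assumed guest OS footprint per VM
CONTAINER_OVERHEAD_MB = 48    # assumed per-container runtime footprint
HOST_RAM_MB = 256 * 1024      # a 256 GB host

def max_workloads(overhead_mb, app_mb=512):
    """How many workloads fit if each application itself needs app_mb."""
    return HOST_RAM_MB // (overhead_mb + app_mb)

print("VMs per host:", max_workloads(VM_OS_OVERHEAD_MB))
print("containers per host:", max_workloads(CONTAINER_OVERHEAD_MB))
```

Even with these generous assumptions for the VM side, the shared-kernel model fits several times as many workloads into the same RAM.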
Unlike VMs, however, containers are feature-poor. For example, they have no built-in resiliency: traditional vMotion-like workload migration doesn’t exist, and we’re only just now – several years after containers went mainstream – starting to get decent persistent… Continue reading
Systems administrators are the heart of any IT team. Since IT is arguably what keeps most modern organizations operating, in some ways sysadmins are the heart of modern organizations themselves. Of course network automation can make the lives of network engineers easier, and it can benefit enterprises as a whole. Yet here’s an interesting quandary: does network automation even benefit systems administrators?
Sysadmins shepherd hardware, virtualization platforms, Operating System Environments (OSEs), and more. They must master multiple disciplines within IT, and are under constant pressure to learn even more. The life of a systems administrator is one of carefully balancing the solving of immediate problems against investing in the future via automation to prevent those problems from recurring, all while keeping up with lifelong learning.
As a profession, systems administration has been cyclical. In one part of the cycle the generalist sysadmin is championed. In the next, specialization is all the rage. The past decade has seen the generalist brought once more to the fore, as specialties such as “storage administrator” are automated away by clever software.
However, throughout the decades physical networks have largely remained the jurisdiction of dedicated network administrators. If the networks belong to… Continue reading