Welcome to part four of this series. In this final part, we will explore our options for networking a composed application, whether it's built from a de-composed monolith or a set of microservices.
Here is a logical set of options:
Proxy: a network kernel, ADC or proxy for every component handles the implementation of the service chain. Sidecars solve the problem quickly, but they double the component count within a mesh. Proxies work well in public and private clouds, but for commercial applications they may incur license costs as well as the higher resource utilisation needed to run the sidecar container.
Language-specific libraries: these wrap your application packets in an NSH-handling outer encapsulation. No sidecar required, no modification of a host. This adds complexity to software development in the form of modified socket libraries, but a well designed and implemented library does not expose that complexity: all your code has to do is accept connections through the modified socket library (see the sketch after this list). This works in the cloud providing security policies and routing domains allow it.
Overlay: add flow data to forwarding entities. Let's face it, this isn't going to happen in a cloud environment unless you've implemented a full overlay. An Open vSwitch (OVS) overlay network would Continue reading
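To make the library option a little more concrete, here's a minimal sketch of what a modified socket library could look like from the application's point of view; the function names are made up and the NSH encapsulation is stubbed, not implemented.

```python
# A minimal sketch of the "modified socket library" idea. Purely illustrative:
# the encapsulation step is a stub, not a real NSH implementation.
import socket

def mesh_listen(port: int) -> socket.socket:
    """Return a listening socket; a real library would also register the
    service with the mesh control-plane here."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    return srv

def mesh_accept(srv: socket.socket):
    """Accept a connection; a real library would strip the NSH outer
    encapsulation before handing bytes to the application."""
    conn, addr = srv.accept()
    return conn, addr

# Application code only ever sees the familiar socket API:
if __name__ == "__main__":
    srv = mesh_listen(8080)
    conn, addr = mesh_accept(srv)
    print("connection from", addr)
    conn.close()
```

The point being, the application code stays boring; the mesh plumbing lives inside the library.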
Applications are ever evolving and so are the architecture patterns:
MONOLITH -> MICROSERVICES -> FUNCTIONS + FLOWS
Monoliths were easy. Route to them and send the returned packets back to their source.
Microservices (MS) sees a monolith or a new application reduced to smaller self-contained parts, which may talk east-west or north-south. It's quite common to see a proxy handle inbound connections, with internal communication between components hidden from external interactions. Internal communication is typically either point-to-point (possibly through a load balancer/proxy) or via a message bus of some description.
Functions & Flows makes life even more interesting. We further break down the components of microservices to individual functions that deliver pages, computation and web application components etc. More flow information exists on the whole and the number of points involved in an interaction with an application increase with every de-aggregated component deployed.
For brevity, I’m going to call Functions & Flows, F2. I’ve never seen it shortened to this, so if you see it elsewhere, let me know!
To add to this, MS and F2 components may reside on different infrastructure, separated by the internet and by differing policies, and therefore by different underlying IP capabilities.
OpenFlow (OF) adoption failed due to the scalability of forwarding tables on ASICs, not-so-great controllers, a lack of applications and a non-existent community. OpenFlow, however, is still useful today for overriding forwarding decisions on a hop-by-hop basis and handling exceptions to what would otherwise be a normal steady-state forwarding decision. A classic use case is bypassing limited-throughput devices such as DPI nodes for large, known file transfers: beyond simple authentication (maybe), we don't care who the client is, so take our file and don't consume resources doing it.
OpenFlow presents flow state to an ASIC, state that can be granular. If we use it for forwarding equivalence classes (FECs) then it's no different to normal routing and frame forwarding. That wasn't the goal, and so it added to the list of failure reasons. A controller programs flows via an OpenFlow interface on a network element, flows which can time out automatically or be long-lived, requiring the controller to remove them. Also, flows can be programmed proactively from a network design, or reactively when the controller receives the first packet of a flow and decides what to do with it. Vendors naturally added to Continue reading
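To illustrate the reactive model, here's a hedged sketch using the Ryu controller with OpenFlow 1.3: the controller receives a packet-in, then programs a flow with idle and hard timeouts that steers the traffic around a notional DPI node. The match fields, output port and timeouts are placeholders, not a recommendation.

```python
# A hedged sketch of reactive flow programming with Ryu (OpenFlow 1.3).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class FlowExceptionApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        ofproto = dp.ofproto

        # Reactive decision: steer this flow straight out of port 2,
        # bypassing the DPI node. Match fields are placeholders.
        match = parser.OFPMatch(eth_type=0x0800, ipv4_dst="192.0.2.10")
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]

        # idle_timeout lets the switch age the flow out on its own;
        # hard_timeout caps its lifetime regardless of traffic.
        mod = parser.OFPFlowMod(datapath=dp, priority=100, match=match,
                                instructions=inst,
                                idle_timeout=30, hard_timeout=300)
        dp.send_msg(mod)
```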
This is part one of a series of posts on Application Composition within Network Service Meshes, otherwise known as Service Function Chaining, but at L7 and not L3/L4.
In Network Service Meshes (NSM), steering L7 requests and responses through the correct network of components is a complex affair. The current approach at the time of writing (November 27th 2019) is to accept requests on a proxy entity and couple that proxy to an application component through a data-plane. Ideally the model works in both private on-premises and cloud deployment models.
For the sake of building a mental image, this is a graph network that has both control-plane and data-plane attributes on nodes and edges.
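If it helps with the mental image, here's a tiny sketch of that graph using networkx; the topology and attribute names are invented for illustration and not taken from any real NSM data model.

```python
# A minimal sketch of the mental image: nodes carry control-plane
# attributes, edges carry data-plane attributes. Illustrative only.
import networkx as nx

mesh = nx.DiGraph()

# Nodes: identity and policy role (control-plane view).
mesh.add_node("ingress-proxy", role="proxy", policy="public")
mesh.add_node("auth-svc", role="service", policy="internal")
mesh.add_node("orders-svc", role="service", policy="internal")

# Edges: how traffic is actually carried (data-plane view).
mesh.add_edge("ingress-proxy", "auth-svc", transport="mTLS", port=8443)
mesh.add_edge("auth-svc", "orders-svc", transport="mTLS", port=8443)

# A service chain is then just a path through the graph.
print(nx.shortest_path(mesh, "ingress-proxy", "orders-svc"))
```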
In IP networking, IP packets are routed to their destination and returned to their source based on the destination IP header field; when policy requires it, we can also use other fields such as source IP, protocol and port numbers. In large networks (like the internet), it's the destination field in the IP header that does the work. In both IPv4 and IPv6 there exists a means to steer packets through a network based on additional fields being present at the point of ingress to a network edge and Continue reading
From the days of old, setting fire to a large torch would signal to a neighbouring town that something was going on. On the Great Wall of China, reports of signals reaching some 470 miles can be read on Wikipedia! Back to the future and modern-day times: signals are transmitted and received as part of every application we touch. Signals underpin a system's communications, irrespective of what that system is. Software gives off a wide variety of signals in normal operation, and through signal correlation we can yield useful events. Signals can also be used to achieve an outcome in a remote system, as well as direct application API calls.
Being a fan of systems that have a natural synergy to them, I also look for ways to tie application functionality into natural system interactions.
For this post, I want to talk about the separation of concerns between an application's functionality, exposed via its primary operational interface (likely an API of some sort), versus the application's operational configuration, which allows it to start on the correct TCP/IP port and consume the correct credential information.
Why not just get the application to refresh its configuration through the operational interface? The best way Continue reading
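To make the split concrete, here's a minimal sketch, assuming the common Unix convention of reloading operational configuration on SIGHUP while leaving the application's functional API alone. The file name and keys are placeholders for the example.

```python
# A minimal sketch: operational config reloaded via a signal, separate from
# the application's primary (API) interface. File name/keys are illustrative.
import json
import signal

CONFIG_PATH = "app-config.json"
config = {}

def load_config(path: str) -> dict:
    try:
        with open(path) as fh:
            return json.load(fh)
    except FileNotFoundError:
        return {}

def handle_sighup(signum, frame):
    # The signal achieves the outcome (re-read port/credentials) without
    # touching the application's functional API.
    global config
    config = load_config(CONFIG_PATH)
    print("configuration reloaded, listening port:", config.get("port"))

signal.signal(signal.SIGHUP, handle_sighup)

if __name__ == "__main__":
    config = load_config(CONFIG_PATH)
    signal.pause()  # wait for signals; `kill -HUP <pid>` triggers the reload
```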
Network engineers for the last twenty years have created networks from composable logical constructs, which result in a network of some structure. We call these constructs “OSPF” and “MPLS”, but they all inter-work to some degree to give us a desired outcome. Network vendors have contributed to this composability and network engineers have come to expect it by default. It is absolute power from both a design and an implementation perspective, but it’s also opinionated. For instance, spanning-tree has node level opinions on how it should participate in a spanning-tree and thus how a spanning-tree forms, but it might not be the one you desire without some tweaks to the tie-breaker conditions for the root bridge persona.
Moving to the automated world primarily means carrying your existing understanding forward, adding a sprinkle of APIs to gain access to those features programmatically and then running a workflow, task or business process engine to compose a graph of those features to build your desired networks in a deterministic way.
This is where things get interesting in my opinion. Take Cisco’s ACI platform. It’s closed and proprietary in the sense of you can’t change the way it works internally. You’re lumped with a Continue reading
For the last five or six years, I’ve not really done any networking and have focussed on software, automation and the mechanisation of processes so that they may be manifested as network driving workflows. I try to keep up with networking technology and working for Juniper has really made me level up in this aspect. I’m lucky to be surrounded by an army of real experts and it’s humbling. What’s still a thorn in my side is the beginner expert community around automation, and I’m working to bring awareness to this through providing questions and insight with methodologies to bootstrap the journey. More on that another time. This paragraph is to position some emotions for what’s about to follow!
To get to the crux of this post, now shift your view to your every day life. How many times a day does an app crash on your phone, laptop or tablet? When was the last time a feature wasn’t available on your TV because you didn’t upgrade to the latest version of software? Right at the beginning of my career, I worked in real time electronics. Machinery that should not die randomly, or just become obsolete because of the hardware Continue reading
In this post I'll explore replacing the heart of a network operating system's configuration mechanism with the software developer's take on version control. It can be argued that network operating systems, or at least good ones, already have a version control system. It's that very system that allows you to roll back and carry out operations like commit-confirmed. More specifically, this is a version control system like Git, though not necessarily Git itself.
As my day job rotates around Junos, I'll concentrate on that. So why would anyone want to rip out the heart of Junos and replace it with a git-backed directory full of configuration snippets? Software developers, and now automation-skilled engineers, want the advantages of being able to treat the network like any other service-delivering node. Imagine committing human-readable configuration snippets to a network configuration directory and having the network check it out and do something with it.
Junos already has a configuration engine capable of rollbacks and provides sanity through semantic and syntax commit time checks. Mgd (the service you interact with) provides mechanisms to render interfaces through YANG models and generates the very configuration tree you interact with. You could say mgd takes Continue reading
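For a flavour of what "having the network check it out and do something with it" could look like from the outside, here's a hedged sketch using PyEZ: snippets from a git working copy are merged into the candidate configuration and committed. The host, paths and credentials are placeholders, and this is an external approach rather than a change to mgd itself.

```python
# A hedged sketch: configuration snippets checked out from git are merged
# into a Junos candidate config with PyEZ and committed.
from glob import glob

from jnpr.junos import Device
from jnpr.junos.utils.config import Config

SNIPPET_DIR = "/var/network-config/router1"  # a git working copy (placeholder)

with Device(host="192.0.2.1", user="automation") as dev:
    with Config(dev, mode="exclusive") as cu:
        for snippet in sorted(glob(f"{SNIPPET_DIR}/*.conf")):
            # Merge each human-readable snippet into the candidate config.
            cu.load(path=snippet, format="text", merge=True)
        if cu.commit_check():
            cu.commit(comment="applied snippets from git checkout")
```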
The week of 24th June 2019 was interesting. We had #ferrogate, which made a lot of network engineers very unhappy, and also an ongoing social media thread on code comments. For this discussion, I'm going with the title of "leaving comments in code expressed artefacts" because code represents more than writing software. I feel quite passionately about this, having been on the raw end of no code comments and also being guilty of leaving plenty of crappy and unhelpful comments too.
Let’s set a scene. You’ve had a long day and you’re buckled in for what can only be described as a mentally exhausting night. The system architecture is clearly formed in your head and you’re beginning to see issues ahead of time. You can’t quite justify any premature optimisation, but you know this current design has a ceiling. You also know there are system wide intricacies that are not obvious at the component level.
The norm in these scenarios is to insert context-based comments, which make perfect sense at 2am, but at 9am the next day an exhausted you may be confused as to what on earth happened in the early hours. We've all been there.
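As a contrived illustration of what tends to separate the 2am comment from the one that still helps at 9am, here's a tiny sketch; the scenario and variable names are made up.

```python
# A made-up example contrasting a "what" comment with a comment that
# carries the system-level context forward.
retries = 0

# Bad: restates the code and carries none of the 2am context.
# increment retries
retries += 1

# Better: records intent and the system-wide intricacy that isn't obvious
# at the component level.
# The upstream inventory API rate-limits aggressively; back off here rather
# than in each caller so every consumer of this client inherits the limit.
retries += 1
```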
There are multiple trains Continue reading
Workflows vary from seriously simple to notoriously complex and, as humans, we might not even consciously observe the subtleties of what a workflow comprises. Workflows are the source of control semantics and comprise many elements, some obvious, some not so. This post is a primer, drawn from my experiences, to help you think about the kinds of workflows you encounter, and it offers a view with conviction backed by experience.
To set the tone: workflows have logical flow and temporal behaviour; they consume and transmit data, process triggers, act on decision points and return states. Since the 1970s, I believe we haven't actually come that far from a workflow orchestration standpoint. Atomic units of code exist that do one thing well, a real win for the 1970s, and good automation systems understand how to instantiate these atomic blobs of logic, feed them data and grab their exit state and content. On a *nix system, it's possible to use bash to create a single chain of tasks using the | operator. One blob of logic effectively feeds its output to the next blob of logic. Who needs an orchestrator? It's sensible to include detection logic within each blob of code to Continue reading
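Here's a minimal Python sketch of that pipe-style chain, with each blob's detection logic expressed through its exit status; the commands in the chain are illustrative only.

```python
# A minimal sketch of pipe-style chaining: each blob's output feeds the next,
# and a non-zero exit state stops the chain. Commands are illustrative.
import subprocess

def run_chain(commands: list[list[str]]) -> bool:
    data = b""
    for cmd in commands:
        result = subprocess.run(cmd, input=data, capture_output=True)
        if result.returncode != 0:
            # The blob detected a problem; stop the chain here.
            print(f"{cmd[0]} failed: {result.stderr.decode().strip()}")
            return False
        data = result.stdout  # one blob's output feeds the next blob
    print(data.decode(), end="")
    return True

if __name__ == "__main__":
    run_chain([["echo", "eth0 eth1 eth2"], ["tr", " ", "\n"], ["sort"]])
```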
JSNAPy is an open source tool released by Juniper Networks circa 2015 that is the Python version of the Juniper Snapshot Administrator. In the simplest sense, this tool gives us the ability to have unit tests when working with Junos, much in the same way a developer would write tests against their code. JSNAPy creates snapshots of a device's operational or configuration state, the content of which depends on your tests. JSNAPy can then diff and check these snapshots, which, when combined with your test logic, means you can detect when things change or don't change as desired. It's a simple but effective tool when working with Junos. In fact, if you have another system to take the snapshot, JSNAPy is really an XML snippet checking tool and thus it can be used for multi-vendor environments!
JSNAPy is a great tool not only for dealing with operational changes, but also for steady-state change operations, through the use of both pre and post tests and the logical operators JSNAPy supports. It's worth mentioning you can call the snaps and tests anything you want. Bob and Alice are both valid examples of a snap name, but the advice Continue reading
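For a flavour of driving JSNAPy from Python rather than the CLI, here's a hedged sketch; config_check.yml is assumed to be an ordinary JSNAPy config file naming the target devices and test files.

```python
# A hedged sketch of driving JSNAPy programmatically.
from jnpr.jsnapy import SnapAdmin

js = SnapAdmin()

# Take a snapshot named "pre" and evaluate the tests against it in one step.
results = js.snapcheck("config_check.yml", "pre")

for device_result in results:
    print(device_result.device, device_result.result,
          "passed:", device_result.no_passed,
          "failed:", device_result.no_failed)
```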
There is still an ongoing debate over the need for network engineers to pick up some software skills. Everything network engineers touch in more recent times has some programmatic means of control, and these interfaces can be used to scale out engineer workflows or be driven by abstract systems. The bottom-up view is to write scripts, or use tools like Terraform or Ansible, to drive them. In engineer-driven workflows, I see regular usage of SaltStack as an abstraction layer over the top of a target group of devices, used for very human tasks! The latter use case is interesting because it follows a very basic system rule of high gain from abstraction. In this instance, the programmatic interfaces are used to amplify human capabilities. If that's the bottom-up view, the top-down view is to embrace the world of RPA (Robotic Process Automation). We've been calling this "big button" automation for years now and we can view it as human-driven tasks, mechanised to run on a platform or framework. It's a case of "Back to the Future" and it comes straight out of the 1970s.
When a network engineer goes on a Python course to Continue reading
When it comes to expressing intent in automation workflows, there is merit both in using a task or workflow engine and in knocking something together with scripting in some language. I try not to get involved in tool or language wars, but quite honestly sometimes I can't help myself. I've even been known to throw fuel on the fire and get the marshmallows out.
Sometimes a framework or tool can feel constraining, and by design it can force you to work in a way that is computable. Let's take what Ansible or Mistral does: a set of ordered tasks, an entry point, some input variables that "flow" through the list of tasks and some calls to modules that deal with outputs. I can understand why network engineers dislike some of these approaches, because it feels like dynamic feedback is missing from the engineering. Testing, through both verification and validation phases, is supposed to replace that immediate dynamic feedback, and it can take some time to get used to.
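Boiled down to code, that shape looks something like the sketch below: an ordered list of tasks with a context of input variables flowing through them. Task names and logic are made up for illustration.

```python
# A minimal sketch of the "ordered tasks + flowing variables" shape that
# engines like Ansible or Mistral impose. Tasks are made-up placeholders.
def gather_facts(ctx: dict) -> dict:
    ctx["hostname"] = "edge-router-1"
    return ctx

def render_config(ctx: dict) -> dict:
    ctx["config"] = f"set system host-name {ctx['hostname']}"
    return ctx

def push_config(ctx: dict) -> dict:
    print("pushing:", ctx["config"])
    ctx["changed"] = True
    return ctx

# The entry point: an ordered list of tasks; the context dict holds the
# input variables "flowing" through the list.
WORKFLOW = [gather_facts, render_config, push_config]

def run(workflow, ctx):
    for task in workflow:
        ctx = task(ctx)
    return ctx

if __name__ == "__main__":
    print(run(WORKFLOW, {"user": "automation"}))
```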
These kinds of automation tools require installation and also the correct modules for integration against the networking components. The tool build can also be automated and Continue reading
Event-Driven automation is an umbrella term much like "coffee" (also see here; it turns out I've used coffee anecdotes way too much). How many times do you go to a popular chain and just blurt out "coffee"? At 5am it might be the nonsensical, mysterious noise automagically leaving one's mouth, but once we decide it's bean time, we get to the specifics.
There are multiple tools that give you different capabilities. Some are easier to get started with than others and some are feature rich and return high yields of capability against invested time.
Friendly dictator advice: try not to get wrapped up in the message bus used or the data encapsulation methodologies. Nerdy fun, but fairly pointless when it comes to convincing any person or organisation to make a fundamental shift.
Event-Driven is about receiving a WebHook and annoying people on Slack
This is a terrible measure and one we needed to have dropped yesterday. In more programming languages than I can remember, I’ve written the infamous "Hello World" and played with such variables, struct instances and objects as the infamous "foo" and the much revered "bar". Using an automation platform to receive an HTTP post and updating a support Continue reading
For years, thanks to the gift of misaligned perception, I’ve been mentally blocked. I’ve avoided things like Machine Learning because my perceived skill with mathematics is weak, avoided programming languages like C# because the perceived uphill hike to get familiar is high and avoided front end web development because of the perceived browser nightmares.
Technology has come a long way since I last touched C# and web development and there are some great ML libraries out there which minimize the requirement for hardcore mathematical skill sets. My perceived problems have remained yet the actual blockers have moved and morphed. I’ve lived on old ideas without re-grouping and forming a refreshed attack. More on my foolish ways later.
For many people and organizations, it pains me to admit, the perception of network automation is also misplaced. It spans from "Ansible is the answer, sorry, what were you asking?" to "Python will save the day", followed by "The automation is the design!".
Ivan Pepelnjak, as usual, has written some great content on this topic. Read this post for a rather targeted view on expert beginners. TL;DR: "I got hello-world working for one tool, me now expert".
Currently I also Continue reading
Time is the enemy of everything in the field of IT. It doesn’t matter whether you are a designer or operator. Time is your sworn enemy.
In all the training and certifications I’ve ever done, all that is missing is a knighting ceremony in which a sword is laid on your shoulders and you’re sworn in to be an enemy of the phenomena.
Time is a speculative investment, time is relative, highly subjective and makes us emotional. It offers us unsolvable yet predictable challenges. We only ever hear "we need more time". Managers demand it and engineers beg for it. Everything costs time and nothing will give us time back. Given that last statement, high return time investments are key.
In software, we've moved to agile, which lets us split software releases up into super tiny chunks. Instead of a huge development cycle followed by a huge deployment and troubleshooting window, we've moved to a tiny-slice model, in which we do a tiny amount of design, a tiny amount of coding and a tiny amount of deployment and troubleshooting. This move allows us to target the highest priorities quicker and more accurately, which results in appearing to Continue reading
Sometimes during exploration or projects, I want to take a YANG model and convert it, along with related dependencies, to a Swagger format (think OpenAPI if you're not familiar with this) so I can create a REST or RESTCONF API interface. OpenDaylight does something very similar for its Swagger-based North Bound Interface (NBI), more information here, and just being able to look at the model this way is sometimes helpful. If you're wondering how helpful this could be, think about developing a client. Using this approach, it's possible to create stub client and server code for a software implementation, leaving just the logic of what to do when a POST is made or a GET is requested etc.
You may be familiar enough with YANG to know that it is a modeling language with its own extensible type system. These YANG models are mostly used to model the programmatic interfaces that control features on routers and switches. More recently, thanks to the wave of automation sweeping across the globe, YANG models are also used for modeling services, which in turn are rendered over one or more nodes by something else. We're not going to cover Continue reading
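As a small, hedged example of a first step in that workflow, here's pyang being invoked to validate a module and print its tree representation; the module name is hypothetical, and the Swagger/OpenAPI generation itself is assumed to be handled by a separate pyang plugin or converter tool.

```python
# A small sketch: validate a YANG module and render its tree with pyang
# before any Swagger/OpenAPI conversion. The module path is a placeholder.
import subprocess

MODULE = "example-service.yang"  # hypothetical module

result = subprocess.run(["pyang", "-f", "tree", MODULE],
                        capture_output=True, text=True)
if result.returncode != 0:
    print("module failed validation:", result.stderr)
else:
    print(result.stdout)
```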
Human beings that we are, we sometimes struggle to think multi-dimensionally about tasks. Our brains seem to have a conscious layer and a sub-conscious layer. Whether you think in words, noise or images, your brain is a single-threaded engine with a silent co-processor that can either assist or annoy. Experience has shown that we look at network automation challenges through this shaped lens and try to solve things in ways that make sense to humans, but not necessarily for mechanized processes.
In an attempt not to lose my own thread, I'll try to explain some different viewpoints through examples.
Making a cup of tea is a very English thing to do and the process of making one will suffice for this example.
Let’s look at the process involved:
// { type: activity}
(Start)-><a>[kettle empty]->(Fill Kettle)->|b|
<a>-(note: Kettle activities)
<a>[kettle full]->|b|->(Boil Kettle)->|c|
|b|->(Add Tea Bag)-><d>[Sugar: yes]->(Add Sugar)->(Add Milk)
<d>[Sugar: no]->(Add Milk)
<d>-(note: Sweet tooth?)
(Add Milk)->|c|->(Pour Boiled Water)
(Pour Boiled Water)->(Enjoy)->(Stop)
Fig.1
This makes us a relatively standard cup of English breakfast tea.
Let’s assume macros exist for milk and sugar quantity and the dealing of a mug or best china Continue reading
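As a toy illustration only, here's Fig.1 mechanised as a single function; the steps are deliberately trivial placeholders, but it shows how quickly even a "simple" human workflow accumulates decision points.

```python
# A toy sketch of Fig.1 as code. Steps are trivial placeholders; the milk
# and sugar "macros" are reduced to a boolean for brevity.
def make_tea(kettle_full: bool, sugar: bool) -> list[str]:
    steps = []
    if not kettle_full:
        steps.append("fill kettle")
    steps.append("boil kettle")
    steps.append("add tea bag")
    if sugar:
        steps.append("add sugar")
    steps.append("add milk")
    steps.append("pour boiled water")
    steps.append("enjoy")
    return steps

if __name__ == "__main__":
    print(make_tea(kettle_full=False, sugar=True))
```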
This post acts as the introduction for two other posts, which cover Junos data collection tools for Kafka and InfluxDB. The code is open-sourced and licensed under MIT. Both applications are ready for release and I’ve spent considerable spare time building and testing both pieces of software.
Back in yesteryear, I used to be a C developer and enthusiast. Thanks to the infamous K&R C book, it made perfect sense when I needed a language that provided what I thought of at the time as syntax one level higher than assembly.
Roll the clock forwards a decade and Go has become my ‘go to’ (multi-pun intended) language. Its power comes from its simplicity; it's easy to debug and super easy to observe when things aren't going as you planned. The concurrency capabilities seem to make perfect sense and development cycles are short thanks to the powerful “batteries included” tool-chain. Building binaries couldn't be easier and building containers for the likes of Docker is a piece of cake. Thanks also to the language's popularity, tools like Travis-CI are easy to work with. Powerful enough to do almost anything, easy enough to learn in days and offers Continue reading
The world changes. The hit novel “Who Moved My Cheese?”, written twenty years ago, has sold over 25 million copies by helping people experiencing change. For those who work with networking technology, we're experiencing seismic activity in the world of change, and new continents are forming from scattered islands. Some of these continents, so to speak, are uncharted and misunderstood. This generation of engineers are the explorers of the new world and the lands are ripe for pillaging.
Common feedback around learning includes:
Some of this feedback has led me to write and publish this article, based on my own sanity-saving methodology.
The relationship between change and progress is interesting. Not all change is progress, but all progress is change. In IT, sometimes we’ve played both polar opposite parts in the “Change for change’s sake” murder novel.
Change, rate of change, disruption Continue reading