This is a guest blog post by Dave Crown, Lead Data Center Engineer at the State of Delaware. He can be found automating things when he's not in meetings or fighting technical debt.
Over the course of the last year or so, I’ve been working on building a solution to deploy and manage Cisco’s ACI using Ansible and Git, with Python to spackle in the cracks. The goal I started with was to take the plain-text description of our network from a Git server, pull in any requirements, use the solution to configure the fabric, and lastly update our IPAM, NetBox. All of this without using the GUI or CLI to make changes. Most importantly, I wanted it to run with a simple invocation so that others could run it and it could be moved into Ansible Tower when ready.
Read more ...
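The "simple invocation" goal above lends itself to a thin wrapper around ansible-playbook. The sketch below is a hypothetical illustration, not Dave's actual tooling: the playbook and inventory names are made up, and it assumes the network description has already been pulled from Git.

```python
import subprocess

def build_playbook_command(playbook, inventory, extra_vars=None, check=False):
    """Assemble an ansible-playbook invocation as an argument list.

    Keeping command construction separate from execution makes the
    wrapper easy to test, and easy to retire once the job moves into
    Ansible Tower.
    """
    cmd = ["ansible-playbook", playbook, "-i", inventory]
    for key, value in (extra_vars or {}).items():
        cmd += ["-e", f"{key}={value}"]
    if check:
        cmd.append("--check")  # dry run: report changes without applying them
    return cmd

def run_playbook(playbook, inventory, **kwargs):
    """Run the playbook and raise CalledProcessError on failure."""
    subprocess.run(build_playbook_command(playbook, inventory, **kwargs), check=True)
```

Running it in `--check` mode first gives operators a safe preview before the fabric is touched.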
The network operator is positioning the marketplace for two distinct use cases: turnkey...
This blog covers three quick and effective ways to connect your existing Ansible inventory into Ansible Tower:
If you don’t have Ansible Tower yet and want to download and try it out, please visit: https://www.ansible.com/products/tower
If you’re using dynamic inventory, you don't need to import your inventory into Ansible Tower. Dynamic inventory retrieves your inventory from an Continue reading
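A dynamic inventory source is just an executable that prints JSON in Ansible's inventory format when called with --list. A minimal sketch, with hard-coded hosts standing in for whatever external system (cloud API, CMDB, NetBox) a real script would query:

```python
#!/usr/bin/env python3
"""Minimal Ansible dynamic inventory script: prints JSON on --list."""
import json
import sys

def build_inventory():
    # Hard-coded for illustration; a real script would query an
    # external source of truth here.
    return {
        "web": {"hosts": ["web01.example.com", "web02.example.com"]},
        "db": {"hosts": ["db01.example.com"]},
        # _meta lets Ansible skip per-host --host calls entirely.
        "_meta": {"hostvars": {"db01.example.com": {"ansible_port": 22}}},
    }

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
    else:
        print(json.dumps({}))  # --host lookups are served from _meta above
```

Pointing Tower (or plain ansible-playbook with -i) at such a script means inventory is fetched fresh on every run rather than imported once.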
The DockerCon Agenda Builder is live! So grab a seat and a cup of coffee and take a look at the session lineup coming to San Francisco April 29th – May 2nd. This year’s DockerCon delivers the latest updates from the Docker product team, lots of how-to sessions for developers and for IT infrastructure and ops, and customer use cases. Search talks by track to build your agenda today.
Use the agenda builder to select the sessions that work for you:
As expected, Intel will be the prime contractor for the first exascale supercomputer in the United States, which Argonne National Laboratory expects to be operational and capable of sustained exaflops performance by the end of 2021. …
Intel To Take On OpenPower For Exascale Dominance With Aurora was written by Nicole Hemsoth at The Next Platform.
Microsoft contributed the open source Software for Open Networking in the Cloud (SONiC) to OCP in...
The practice of HTTPS interception continues to be commonplace on the Internet. HTTPS interception has encountered scrutiny, most notably in the 2017 study “The Security Impact of HTTPS Interception” and the United States Computer Emergency Readiness Team (US-CERT) warning that the technique weakens security. In this blog post, we provide a brief recap of HTTPS interception and introduce two new tools:
In a basic HTTPS connection, a browser (client) establishes a TLS connection directly to an origin server to send requests and download content. However, many connections on the Internet are not directly from a browser to the server serving the website, but instead traverse through some type of proxy or middlebox (a “monster-in-the-middle” or MITM). There are many reasons for this behavior, both malicious and benign.
One common class of HTTPS interceptor is the TLS-terminating forward proxy. (These are a subset of all forward proxies; non-TLS-terminating forward proxies pass TLS connections through without any ability to inspect the encrypted traffic.) A TLS-terminating forward proxy sits Continue reading
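One way servers detect such interception is by fingerprinting: if the TLS ClientHello observed on the wire doesn't match the handshake that the browser named in the User-Agent header would normally send, a middlebox has likely re-originated the connection. The following toy sketch illustrates the idea only; the signature table is invented, not real browser fingerprint data.

```python
# Hypothetical browser -> expected-cipher-order signatures (made up for
# illustration; real detection uses full ClientHello fingerprints).
KNOWN_SIGNATURES = {
    "Firefox/66": ("TLS_AES_128_GCM_SHA256", "TLS_CHACHA20_POLY1305_SHA256"),
    "Chrome/73": ("TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384"),
}

def likely_intercepted(claimed_browser, observed_ciphers):
    """Compare the handshake we saw against the one this browser should send.

    Returns True if they differ (interception likely), False if they
    match, or None when the browser is unknown and no verdict is possible.
    """
    expected = KNOWN_SIGNATURES.get(claimed_browser)
    if expected is None:
        return None
    return tuple(observed_ciphers) != expected
```

A mismatch is a strong hint, not proof: browser updates and unusual client configurations can also shift the fingerprint.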
The update includes additional application intelligence capabilities, a cloud-based orchestrator,...
Jason Foster is an IT Manager at the Center for Advanced Public Safety at the University of Alabama. The Center for Advanced Public Safety (CAPS) originally developed crash-reporting and data-analytics software for the State of Alabama. Today, CAPS specializes in custom software, mostly in the realm of law enforcement and public safety. They have created systems for many states and government agencies across the country.
Bryan Salek, Networking and Security Staff Systems Engineer, spoke with Jason about network virtualization, what led the Center for Advanced Public Safety to choose VMware NSX Data Center, and what the future holds for their IT transformation.
As part of a large data center modernization initiative, the forward-thinking CAPS IT team began to investigate micro-segmentation. Security is a primary focus at CAPS because the organization develops large software packages for various state agencies. The applications that CAPS writes and builds are hosted together, but contain confidential information and need to be segmented from one another.
Once CAPS rolled out the micro-segmentation use-case, the IT team decided to leverage NSX Data Center for disaster recovery purposes as Continue reading
To understand how far natural language processing (NLP) has progressed in the past decade and how fast it is evolving now, we need to update Alan Turing’s thought experiment on how to test an AI for conversational intelligence to a 21st Century context and methodology. …
Modernizing The Turing Test For 21st Century AI was written by Paul Teich at The Next Platform.
Until about 2017, the cloud was going to replace all on-premises data centers. As it turns out, however, the cloud has not replaced all on-premises data centers. Why not? Based on the paper under review, one potential answer is that containers in the cloud are still too much like “serverfull” computing. Developers must still create and manage what appear to be virtual machines, including:
Serverless solves these problems by placing applications directly onto the cloud, or rather onto a set of libraries within the cloud.
The authors define serverless by contrasting it with serverfull computing. In serverless computing, software runs in response to an event; in serverfull computing, it runs until stopped. While an application has no maximum run time in a serverfull environment, there is some maximum set by the provider in a serverless Continue reading
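The event-driven contrast above is easiest to see in code. A serverless function is just a handler the provider invokes once per event, under a provider-set time limit, with no server for the developer to manage. This is a minimal Lambda-style sketch with invented names, not an excerpt from the paper:

```python
def handler(event, context=None):
    """Hypothetical thumbnail-sizing function, invoked once per event.

    It is a pure function of its input: no server to provision, no
    process that "runs until stopped" — the platform starts it for each
    event and tears it down afterward.
    """
    width = event.get("width", 100)
    height = event.get("height", 100)
    return {"statusCode": 200, "body": {"thumbnail": f"{width}x{height}"}}
```

The serverfull equivalent would be a long-running process listening on a socket; here that loop, and the machine under it, belong to the provider.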
Competing visions: The World Economic Forum’s blog looks at four competing visions of the Internet that it sees emerging. These include Silicon Valley’s open Internet, Beijing’s paternal Internet, Brussels’ bourgeois Internet, and Washington’s commercial Internet. Will one vision win out?
Searching for fakes: WhatsApp, the popular messaging app owned by Facebook, is testing reverse image search in its efforts to battle fake news, TheNextWeb reports. The chat app may use Google APIs to compare the targeted image with similar pictures as a way to filter out doctored images.
Working against itself: An artificial intelligence that can write fake news articles may also be useful for spotting them, the MIT Technology Review says. Recently, OpenAI withheld the release of its new language model on fears that it could be used to spread misinformation, but researchers say the tool may be useful for the opposite effect.
Privacy laundering: Lawfareblog.com takes a hard look at Facebook’s recent announcement that it was moving to end-to-end encryption. The social media giant won’t fix its privacy problems with the move, however, the article says. “Facebook’s business model is the quintessential example of ‘surveillance capitalism,’ with user data serving as the main product that Facebook sells to Continue reading
Today's Network Break discusses new Facebook switches released through the Open Compute Project, examines two significant acquisitions (F5 buying NGINX and NVIDIA ponying up for Mellanox), and reviews more tech and IT news.
The post Network Break 226: Facebook Announces New Open Compute Switches; F5 Buys NGINX appeared first on Packet Pushers.
When Intel starts shipping its “Cascade Lake” Xeons in volume soon, it will mark a turning point in the server space. …
Researchers Scrutinize Optane Memory Performance was written by Michael Feldman at The Next Platform.
The post Web Applications compromise detection using Flow Data appeared first on Noction.
TL&DR: We ran two workshops in Zurich last week – a quick peek into using Ansible for network automation and updated Building Private Cloud Infrastructure. You can access workshop materials with any paid ipSpace.net subscription.
Now for the fun part…
Read more ...
Datacenter RPCs can be general and fast, Kalia et al., NSDI’19
We’ve seen a lot of exciting work exploiting combinations of RDMA, FPGAs, and programmable network switches in the quest for high-performance distributed systems. I’m as guilty as anyone of getting excited about all of that. The wonderful thing about today’s paper, for which Kalia et al. won a best paper award at NSDI this year, is that it shows in many cases we don’t actually need to take on that extra complexity. Or to put it another way, it seriously raises the bar for when we should.
eRPC (efficient RPC) is a new general-purpose remote procedure call (RPC) library that offers performance comparable to specialized systems, while running on commodity CPUs in traditional datacenter networks based on either lossy Ethernet or lossless fabrics… We port a production grade implementation of Raft state machine replication to eRPC without modifying the core Raft source code. We achieve 5.5 µs of replication latency on lossy Ethernet, which is faster than or comparable to specialized replication systems that use programmable switches, FPGAs, or RDMA.
eRPC just needs good old UDP. Lossy Ethernet is just fine (no need for fancy lossless Continue reading
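The primitive eRPC builds on really is this simple: a request datagram out, a response datagram back. The sketch below is a toy illustration of UDP request/response, not eRPC itself; eRPC's contribution is everything layered on top (retransmission, congestion control, and zero-copy buffer management) that makes this primitive fast and reliable on commodity hardware.

```python
import socket

def rpc_server(sock):
    """Serve exactly one request on a bound UDP socket: upper-case the payload."""
    data, addr = sock.recvfrom(2048)
    sock.sendto(data.upper(), addr)

def rpc_call(server_addr, payload, timeout=2.0):
    """One RPC over plain UDP: send a datagram, block for the reply.

    No connection setup, no kernel TCP state — which is exactly why a
    carefully engineered userspace library can make this path so fast.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(payload, server_addr)
        reply, _ = sock.recvfrom(2048)
        return reply
```

On loopback this round trip completes in microseconds; the paper's achievement is keeping latency in that range across a real, lossy datacenter network.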
As the industry grapples with the unfulfilled potential of 5G and waits for more distinct use cases...