BrandPost: Deploying highly secure, easy-to-deploy, and cost-effective Micro Data Centers

Industry trends such as the Internet of Things (IoT) and content distribution networks (CDNs) are driving the need for edge computing, because these solutions often require low latency, high bandwidth, greater reliability, and strong security. It’s a tall order meant for Micro Data Centers (MDCs) to fulfill. An MDC is a self-contained data center architecture that offers complete IT infrastructure within a stand-alone, secure enclosure.

MDCs offer a number of key advantages over today’s server rooms and traditional data centers. First, because they’re distributed closer to bandwidth-intensive content, MDCs can significantly reduce latency and lower costs. Second, it’s easy and cost-effective to add data center capacity anywhere and anytime it is needed – in both IT rooms and non-climate-controlled environments. And because MDC IT equipment is pre-installed in a self-contained, secure enclosure before shipment, it provides physical security and protection for critical business applications.

Sponsored Post: Software Buyers Council, InMemory.Net, Triplebyte, Etleap, Stream, Scalyr

Who's Hiring? 


  • Triplebyte lets exceptional software engineers skip screening steps at hundreds of top tech companies like Apple, Dropbox, Mixpanel, and Instacart. Make your job search O(1), not O(n). Apply here.

  • Need excellent people? Advertise your job here! 

Fun and Informative Events

  • Join Etleap, an Amazon Redshift ETL tool, to learn the latest trends in designing a modern analytics infrastructure. Learn what has changed in the analytics landscape and how to avoid the major pitfalls that can hinder your organization's growth. Watch a demo and learn how Etleap can save you engineering hours and decrease your time to value for your Amazon Redshift analytics projects. Register for the webinar today.

  • Advertise your event here!

Cool Products and Services

  • Shape the future of software in your industry. The Software Buyers Council is a panel of engineers and managers who want to share expert knowledge, contribute to the improvement of software, and help startups in their industry. Receive occasional invitations to chat for 30 minutes about your area of expertise and software usage. No obligations, no marketing emails or sales calls. Upcoming topics include infrastructure and application monitoring, AI/ML platforms, and more.

Test Driving transmission for multi-site file sync

As the industry moves toward more distributed deployment of services, syncing files across multiple locations is a problem that often needs to be solved. In the world of file syncing, two algorithms stand out. One is rsync, which is a very efficient tool for syncing files. It works great when you have …
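
rsync's efficiency comes from its delta-transfer algorithm, which pairs a cheap rolling checksum with a strong hash so only changed blocks cross the wire. Below is a minimal Python sketch of the rolling-checksum idea; the modulus and function names are illustrative, not rsync's exact implementation.

```python
# Sketch of the weak rolling checksum behind rsync-style delta transfer.
# The window can slide one byte at a time in O(1), which is what lets
# rsync scan a whole file cheaply for matching blocks.

M = 1 << 16  # illustrative modulus


def weak_checksum(block: bytes) -> tuple[int, int]:
    """Compute the two-part weak checksum of a block from scratch."""
    a = sum(block) % M
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % M
    return a, b


def roll(a: int, b: int, out_byte: int, in_byte: int, block_len: int) -> tuple[int, int]:
    """Slide the window one byte: drop out_byte, add in_byte, in O(1)."""
    a = (a - out_byte + in_byte) % M
    b = (b - block_len * out_byte + a) % M
    return a, b
```

Sliding with `roll` gives the same result as recomputing `weak_checksum` on the shifted window, which is the property the delta algorithm relies on.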

DoE plans world’s fastest supercomputer

The U.S. Department of Energy says it is working on a supercomputer that will break the target of exaFLOP computation – a quintillion (10^18) floating-point computations per second – in order to handle high-performance computing and artificial intelligence. Being built in conjunction with Intel and Cray Computing, the Aurora supercomputer will cost more than half a billion dollars and be turned over to Argonne National Laboratory in Chicago in 2021, according to a statement by the DoE.

Concluding the IETF Rough Guide, Long Live the IETF Blog

For many years we have produced a series of blog posts as a Rough Guide to each upcoming IETF meeting usually in the week prior to the meeting. The Rough Guides were intended to provide a snapshot of IETF activity of interest to the Internet Society because of programmatic activity that we were engaged in. They were also an opportunity to highlight the activities sponsored directly by the Internet Society that were happening adjacent to the upcoming IETF meeting.

Rough Guides were intended to help guide a non-specialist but technically minded audience to the hot topics and debates of interest at each upcoming IETF meeting, with pointers to the agenda and remote participation possibilities. Originally intended to help spur meeting attendance by those interested in the key topics, they became a way to highlight important discussions taking place and ways to get involved in person or remotely.

As we are now less than a week away from the IETF 104 meeting in Prague, it seemed like the right time to share an update regarding our plans for writing about IETF activity. We have decided to discontinue producing the Rough Guides. Instead, we will be helping to supply relevant, high-quality content.

3 companies developing wearable tech for the enterprise

Earlier this month, I wrote that “even as smartwatch shipments continue to grow, significant industrial and business use cases for these internet-connected devices have yet to appear.” And then a few days later, as if on cue, International Data Corporation (IDC) put out a press release about the latest edition of the Worldwide Quarterly Wearable Device Tracker. The release quoted Ramon T. Llamas, research director for IDC's Wearables team, saying, "Two major drivers for the wearables market are healthcare and enterprise adoption.”

Automating Cisco ACI Environment with Python and Ansible

This is a guest blog post by Dave Crown, Lead Data Center Engineer at the State of Delaware. He can be found automating things when he's not in meetings or fighting technical debt.


Over the course of the last year or so, I’ve been working on building a solution to deploy and manage Cisco’s ACI using Ansible and Git, with Python to spackle in cracks. The goal I started with was to take the plain-text description of our network from a Git server, pull in any requirements, and use the solution to configure the fabric, and lastly, update our IPAM, Netbox. All this without using the GUI or CLI to make changes. Most importantly, I want to run it with a simple invocation so that others can run it and it could be moved into Ansible Tower when ready.
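
The final IPAM step can be as simple as a POST to Netbox's REST API once the fabric config is applied. Here is a hedged Python sketch using only the standard library; the URL and token are placeholders, the payload is the minimal case, and a real run would need error handling.

```python
# Hypothetical sketch of the "update our IPAM, Netbox" step: after the
# fabric is configured, record new addresses via Netbox's REST API.
# NETBOX_URL and NETBOX_TOKEN are placeholders for your installation.
import json
import urllib.request

NETBOX_URL = "https://netbox.example.com"
NETBOX_TOKEN = "0123456789abcdef"


def ip_payload(address: str, description: str) -> dict:
    """Build the JSON body for a new Netbox IP address object."""
    return {"address": address, "description": description}


def register_ip(address: str, description: str) -> None:
    """POST the address to Netbox's IPAM ip-addresses endpoint."""
    req = urllib.request.Request(
        f"{NETBOX_URL}/api/ipam/ip-addresses/",
        data=json.dumps(ip_payload(address, description)).encode(),
        headers={
            "Authorization": f"Token {NETBOX_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()
```

Keeping the payload construction separate from the HTTP call makes the interesting part easy to test without a live Netbox instance.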


BrandPost: SD-WAN Without WAN Optimization is Like Peanut Butter Without Jelly

SD-WAN vs. WAN Optimization

Optimization of applications and data traffic has been an integral part of the WAN since its inception. WAN optimization accelerates application traffic by overcoming latency and reducing the amount of data traversing the WAN, applying techniques like protocol acceleration, deduplication, compression, and caching to dramatically increase the effective available bandwidth.

Today, enterprises are rapidly adopting SD-WAN as a preferred solution when rearchitecting their WANs. SD-WAN is transforming the way networks support enterprise applications, dramatically increasing application performance by intelligently controlling and more efficiently utilizing all available WAN transport resources.
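
The payoff of two of those techniques, deduplication and compression, is easy to demonstrate in a toy Python sketch (real WAN-optimization appliances operate on in-flight byte streams with far more sophistication):

```python
# Toy illustration of deduplication plus compression: send each unique
# chunk once, then compress the remaining stream before it hits the WAN.
import zlib


def dedupe_and_compress(chunks: list[bytes]) -> bytes:
    """Drop repeated chunks (dedup), then compress what's left."""
    # dict.fromkeys preserves order while removing duplicate chunks
    unique = b"".join(dict.fromkeys(chunks))
    return zlib.compress(unique)
```

On repetitive traffic, the combined output is a small fraction of the raw byte count, which is exactly the bandwidth win the article describes.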

Three quick ways to move your Ansible inventory into Red Hat Ansible Tower

If you’ve been using Ansible at the command line for a while, you probably have a lot of servers, network devices, and other target nodes listed in your inventory. You know that Red Hat Ansible Tower makes it easier for everyone on your team to run your Ansible Playbooks. So you’ve thought about using Ansible Tower to take your automation to the next level, but you want to retain all the data and variables in your existing inventory file or directory. Are you worried about transferring your inventory from command-line use to Ansible Tower? Let me show you how easy it is to import your existing Ansible inventory into Ansible Tower!


This blog covers three quick and effective ways to connect your existing Ansible inventory into Ansible Tower:

  1. Migrating an inventory file from the Ansible Tower control node (awx-manage)
  2. Migrating an inventory file from anywhere with a playbook
  3. Setting Tower to access a git source-controlled inventory file

If you don’t have Ansible Tower yet and want to download and try it out, please visit: https://www.ansible.com/products/tower

If you’re using dynamic inventory, you don't need to import your inventory into Ansible Tower. Dynamic inventory retrieves your inventory from an external source at runtime.
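
Method 1 above can also be scripted if you want to drive the import from automation on the Tower control node. A hedged Python sketch follows; the `awx-manage inventory_import` flags shown are the commonly documented ones, so verify them against your Tower version, and note the command assumes the target inventory already exists in Tower.

```python
# Sketch: wrapping "awx-manage inventory_import" (method 1) so the
# command can be built, inspected, and run from a script on the Tower
# control node. Verify flag names against your Tower version.
import subprocess


def import_command(name: str, source_path: str) -> list[str]:
    """Build the awx-manage invocation for importing a flat-file inventory."""
    return [
        "awx-manage", "inventory_import",
        "--inventory-name", name,
        "--source", source_path,
    ]


def import_inventory(name: str, source_path: str) -> None:
    """Run the import; must be executed on the Tower control node."""
    subprocess.run(import_command(name, source_path), check=True)
```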

Build your Agenda for DockerCon 2019

The DockerCon Agenda builder is live! So grab a seat and a cup of coffee and take a look at the session lineup coming to San Francisco April 29th – May 2nd. This year’s DockerCon delivers the latest updates from the Docker product team, plenty of how-to sessions for developers and for IT infrastructure and ops teams, and customer use cases. Search talks by track to build your agenda today.

Build Your Agenda

Use the agenda builder to select the sessions that work for you:

  • Using Docker for Developers: How-to talks for developers, from beginner to intermediate. You’ll get practical advice on how to implement Docker into your current deployment.
  • Using Docker for IT Infrastructure and Ops: Practical sessions for IT teams and enterprise architects looking for how best to design and architect your Docker container platform environment.
  • Docker Tech Talks: Delivered by the Docker team, these talks share the latest tech on the Docker Platform. You’ll learn about new features, the product roadmap, and more.
  • Customer Case Studies: Looking to learn from companies who have been there and learned a few things along the way? In this track, industry leaders share how Docker transformed their organization – from business use cases to technical details.

Monsters in the Middleboxes: Introducing Two New Tools for Detecting HTTPS Interception

The practice of HTTPS interception continues to be commonplace on the Internet. HTTPS interception has come under scrutiny, most notably in the 2017 study “The Security Impact of HTTPS Interception” and in the United States Computer Emergency Readiness Team (US-CERT) warning that the technique weakens security. In this blog post, we provide a brief recap of HTTPS interception and introduce two new tools:

  1. MITMEngine, an open-source library for HTTPS interception detection, and
  2. MALCOLM, a dashboard displaying metrics about HTTPS interception we observe on Cloudflare’s network.

In a basic HTTPS connection, a browser (client) establishes a TLS connection directly to an origin server to send requests and download content. However, many connections on the Internet are not directly from a browser to the server serving the website, but instead traverse through some type of proxy or middlebox (a “monster-in-the-middle” or MITM). There are many reasons for this behavior, both malicious and benign.
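
The core detection idea behind tools like MITMEngine is a mismatch check: does the TLS Client Hello a client actually sends match what the browser named in its User-Agent header is known to send? Here is a toy Python sketch with a made-up fingerprint table; real detection relies on large corpora of browser TLS signatures.

```python
# Illustrative heuristic for HTTPS interception detection: if the
# observed TLS fingerprint doesn't match the claimed browser's known
# fingerprints, something in the middle likely terminated the connection.

# Hypothetical table: browser family -> known Client Hello fingerprints.
KNOWN_FINGERPRINTS: dict[str, set[str]] = {
    "Chrome/72": {"c02b-c02f-cca9", "c02b-c02f"},
    "Firefox/65": {"c02c-c030"},
}


def looks_intercepted(user_agent_family: str, observed_fingerprint: str) -> bool:
    """True if the observed fingerprint doesn't match the claimed browser."""
    known = KNOWN_FINGERPRINTS.get(user_agent_family)
    if known is None:
        return False  # unknown browser: we can't judge either way
    return observed_fingerprint not in known
```

Note the deliberately conservative default: an unrecognized browser yields "not intercepted" rather than a false positive.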

Types of HTTPS Interception, as Demonstrated by Various Monsters in the Middle

One common HTTPS interceptor is the TLS-terminating forward proxy. (These are a subset of all forward proxies; non-TLS-terminating forward proxies forward TLS connections without any ability to inspect encrypted traffic.) A TLS-terminating forward proxy sits between the client and the origin server.