Getting Started: Workflow Job Templates

Welcome to another post in the Getting Started series! Today we’re going to get into the topic of Workflow Job Templates. If you don’t know what regular Job Templates are in Red Hat Ansible Tower, please read the previously published article that describes them. It’ll provide you with some technical details that’ll be a useful jumping-off point for the topic of workflows.

Once you’re familiar with the basics, read on! We’ll be covering what exactly Workflow Job Templates are, what makes them useful, how to generate/edit one, and a few extra pointers as well as best practices to make the most out of this great tool.

What is a Workflow Job Template?

The word “workflow” says it all. This particular feature in Ansible Tower (available as of version 3.1) enables users to create sequences consisting of any combination of job templates, project syncs, and inventory syncs that are linked together in order to execute them as a single unit. Because of this, workflows can help you organize playbooks and job templates into separate groups.
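To make the idea concrete, here is a rough sketch of how a workflow could be assembled through the Tower REST API (v2): create the workflow job template, add one node per existing job template, then link the nodes so the second runs only when the first succeeds. The endpoint paths, field names, and IDs below are assumptions based on Tower 3.x and should be checked against your own instance's /api/v2/ browser; this is an illustration of the concept, not official Tower documentation.

```typescript
// Hedged sketch: chaining two existing job templates into a workflow via the
// Tower REST API (v2). Endpoints/fields are assumptions; verify on your instance.
// Assumes a runtime with global fetch (e.g. Node 18+) and a TOWER_TOKEN env var.
const TOWER_URL = "https://tower.example.com"; // placeholder host

const headers = {
  Authorization: `Bearer ${process.env.TOWER_TOKEN}`,
  "Content-Type": "application/json",
};

async function post(path: string, body: object): Promise<any> {
  const res = await fetch(`${TOWER_URL}${path}`, {
    method: "POST",
    headers,
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`${path}: HTTP ${res.status}`);
  return res.json();
}

async function buildWorkflow() {
  // 1. Create the workflow job template itself.
  const wf = await post("/api/v2/workflow_job_templates/", {
    name: "provision-then-configure",
    organization: 1, // assumed organization ID
  });

  // 2. Add two nodes, each pointing at an existing job template.
  const provision = await post(
    `/api/v2/workflow_job_templates/${wf.id}/workflow_nodes/`,
    { unified_job_template: 42 } // assumed "provision" job template ID
  );
  const configure = await post(
    `/api/v2/workflow_job_templates/${wf.id}/workflow_nodes/`,
    { unified_job_template: 43 } // assumed "configure" job template ID
  );

  // 3. Run "configure" only when "provision" succeeds.
  await post(
    `/api/v2/workflow_job_template_nodes/${provision.id}/success_nodes/`,
    { id: configure.id }
  );
}

buildWorkflow().catch(console.error);
```

The same structure can be built entirely in the Tower UI's workflow visualizer; the API form is shown here only because it makes the "nodes linked by success/failure edges" model explicit.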

Why are Workflows Useful?

By utilizing this feature, you can set up ordered structures for different teams to use. For example, two different environments (i. Continue reading

Research: P Fat Trees

Link speeds in data center fabrics continue to climb, with 10g, 25g, 40g, and 100g widely available, and 400g promised in just a few short years. What isn’t so obvious is how these higher speeds are being reached. A 100g link, for instance, is really four 25g links bundled as a single link at the physical layer. If the optics are increasing in speed, and the processors are increasing in their ability to switch traffic, why are these higher speed links being built in this way? According to the paper under investigation today, the reason is the speed of the chips that serialize traffic onto and deserialize traffic off the optical medium. The development of the complementary metal–oxide–semiconductor, or CMOS, chips required to build ever faster optical interfaces seems to have stalled out at around 25g, which means faster speeds must be achieved by bundling multiple lower speed links.
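As a back-of-the-envelope illustration of that bundling argument, the sketch below just does the lane arithmetic: with a per-lane SerDes ceiling assumed at 25g, a 100g interface needs four lanes and a 400g interface sixteen. The 25g constant and the helper are mine, taken from the post's premise; real interfaces may use different lane counts as signalling improves.

```typescript
// Illustrative arithmetic only: if SerDes speed tops out near 25 Gb/s per lane
// (the premise discussed above), higher link speeds come from bundling lanes.
const SERDES_LANE_GBPS = 25; // assumed per-lane ceiling

function lanesNeeded(linkGbps: number): number {
  return Math.ceil(linkGbps / SERDES_LANE_GBPS);
}

for (const speed of [25, 100, 400]) {
  console.log(`${speed}g link -> ${lanesNeeded(speed)} x ${SERDES_LANE_GBPS}g lanes`);
}
// 25g -> 1 lane, 100g -> 4 lanes, 400g -> 16 lanes (actual optics may differ)
```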

Mellette, William M., Alex C. Snoeren, and George Porter. “P-FatTree: A Multi-Channel Datacenter Network Topology.” In Proceedings of the 15th ACM Workshop on Hot Topics in Networks, 78–84. HotNets ’16. New York, NY, USA: ACM, 2016. https://doi.org/10.1145/3005745.3005746.

The authors then point out that many data operators Continue reading

Qualcomm/Facebook gigabit Wi-Fi field trials to start in 2019

How should a company develop when its growth depends on the availability of the internet? Building out the internet is probably the answer, and that’s just what Facebook intends to do. The social network has just nabbed Qualcomm to help build its 2016-announced 60GHz urban Wi-Fi network, says Qualcomm. The chip maker recently announced that the companies intend to start trials of the high-speed broadband solution sometime around mid-2019. “This terrestrial connectivity system aims to improve the speed, efficiency, and quality of internet connectivity around the world at only a fraction of the cost of fiber,” Qualcomm says in its release. To read this article in full, please click here

BrandPost: What is Fiber Densification?

Ciena Byline: Helen Xenos, Senior Director, Portfolio Marketing. There is a new term that is increasingly cropping up in networking conversations: densification. Ciena’s Helen Xenos explains what this is and how it is elevating the end user experience. The term “network densification” is being used more often in relation to wireless network deployments, and more recently, “fiber densification” has become a hot topic of discussion. So, what exactly is densification? To read this article in full, please click here

While no one was looking, California passed its own GDPR

The European Union’s General Data Protection Regulation (GDPR) is widely viewed as a massively expensive and burdensome privacy regulation that can be a major headache and pitfall for American firms doing business in Europe. Many firms, including Facebook, have sought ways around the law to avoid having to deal with the burden of compliance. Well, there is no weaseling out now. Last week, with no fanfare, California Governor Jerry Brown signed into law AB375, the California Consumer Privacy Act of 2018, the California equivalent of GDPR that mirrors the EU law in many ways. To read this article in full, please click here

Debugging Serverless Apps

The Workers team has already done an amazing job of creating a functional, familiar edit-and-debug tooling experience in the Workers IDE. It's Chrome Developer Tools, fully integrated into Workers.

console.log in your Worker goes straight to the console, just as if you were debugging locally! Furthermore, errors and even log lines come complete with call-site info, so you can click and navigate straight to the relevant line.
In this blog post I’m going to show a small and powerful technique I use to make debugging serverless apps simple and quick.

There is a comprehensive guide to common debugging approaches, and I'm going to focus on returning debug information in a header. This is a great tip, and one that I use to capture debug information when I'm using curl or Postman, or running integration tests. It was a little finicky to get right the first time, so let me save you some trouble.
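To give a flavour of the approach, here is a minimal sketch of a Worker that collects a little debug state while handling a request and attaches it to an X-Debug-Info response header. The header name, the DebugInfo shape, and the timing/cache fields are my own illustrative choices, not the code from this series, and the types assume the @cloudflare/workers-types package is installed.

```typescript
// Minimal sketch (not the series' framework): gather some debug state during
// request handling and return it in a response header for curl/Postman/tests.

interface DebugInfo {
  cacheStatus?: string; // illustrative fields only
  timingMs?: number;
}

addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handle(event.request));
});

async function handle(request: Request): Promise<Response> {
  const debug: DebugInfo = {};
  const started = Date.now();

  // Normal routing / origin fetch would happen here.
  const upstream = await fetch(request);
  debug.cacheStatus = upstream.headers.get("cf-cache-status") ?? "unknown";
  debug.timingMs = Date.now() - started;

  // Headers on a fetched Response are immutable in Workers; clone into a new
  // Response to get a mutable copy before adding the debug header.
  const response = new Response(upstream.body, upstream);
  response.headers.set("X-Debug-Info", JSON.stringify(debug));
  return response;
}
```

Once deployed, the header shows up alongside the normal response, so curl's -i flag or Postman's headers pane is enough to read it without touching the response body.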

If you've followed part 1 or part 2 of my Workers series, you'll know I'm using TypeScript, but the approach applies equally to JavaScript. In the rest of this example, I'll be using the routing framework I created in part 2.

Requesting Debug Info

I Continue reading

When Firepower Management Center Goes Offline

A typical Firepower deployment consists of a management component and a managed device. The management component is known as Firepower Management Center (FMC). The managed device is the NGIPS or NGFW itself and runs either the Firepower or the Firepower Threat Defense (FTD) operating system. Both layers of the topology include provisions for redundant deployments. Firepower Management Center is available in a two-node HA configuration. Firepower Threat Defense, the NGFW managed device, can be either HA or clustered.

One question that often comes up is, “What happens when FMC goes offline?” The general answer is that traffic continues to flow, but the managed device cannot be managed. While this is not a good position to be in, it does provide an opportunity to assess the impact of waiting for a maintenance window (or a replacement).

TL;DR

  • Firepower continues to pass traffic when FMC is offline
  • Events captured on the Firepower device are passed to the FMC when it becomes available again
  • Event storage on the managed device is finite; events may be lost during an extended outage
  • Malware Cloud Lookup/Block functionality depends on FMC, so plan HA and File Policy accordingly
  • The Firepower managed device cannot be managed until FMC is available

Continue reading

Tracking DNSSEC: See the Deployment Maps

Did you know the Internet Society Deploy360 Programme provides a weekly view into global DNSSEC deployment? Each Monday, we generate new maps and send them to a public DNSSEC-Maps mailing list. We also update the DNSSEC Deployment Maps page periodically, usually in advance of ICANN meetings.

DNS Security Extensions — commonly known as DNSSEC — allow us to have more confidence in our online activities at work, home, and school. DNSSEC acts like tamper-proof packaging for domain name data, helping to ensure that you are communicating with the correct website or service. However, DNSSEC must be deployed at each step in the lookup from the root zone to the final domain name. Signing the root zone, generic Top Level Domains (gTLDs) and country code Top Level Domains (ccTLDs) is vital to this overall process. These maps help show what progress the Internet technical community is making toward the overall goal of full DNSSEC deployment.
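As a quick way to see that chain of validation in practice, the sketch below asks a validating resolver, over DNS-over-HTTPS, whether a name validated by checking the AD (Authenticated Data) flag in the JSON answer. It assumes Cloudflare's 1.1.1.1 JSON API and a runtime with global fetch; the domain queried is just an example, and other resolvers expose the same flag through slightly different APIs.

```typescript
// Hedged sketch: check whether a validating resolver set the AD flag for a name,
// i.e. whether every zone in the chain from the root down to this name was
// signed and validated (the step-by-step deployment the post describes).
async function dnssecValidated(name: string): Promise<boolean> {
  const url = `https://cloudflare-dns.com/dns-query?name=${encodeURIComponent(name)}&type=A`;
  const res = await fetch(url, { headers: { accept: "application/dns-json" } });
  const answer = await res.json();
  return answer.AD === true; // AD = Authenticated Data flag
}

dnssecValidated("internetsociety.org").then((ok) =>
  console.log(ok ? "DNSSEC chain validated" : "not validated (or unsigned)")
);
```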

These maps are a bit different from other DNSSEC statistics sites in that they contain both factual, observed information and also information based on news reports, presentations, and other collected data. For more information about how we track the deployment status of TLDs, please read our page Continue reading

EnclaveDB: a secure database using SGX

EnclaveDB: A secure database using SGX, Priebe et al., IEEE Security & Privacy 2018

This is a really interesting paper (if you’re into this kind of thing I guess!) bringing together the security properties of Intel’s SGX enclaves with the Hekaton SQL Server database engine. The result is a secure database environment with impressive runtime performance. (In the read-mostly TATP benchmarks, overheads are down around 15%, which is amazing for this level of encryption and security). The paper does a great job showing us all of the things that needed to be considered to make EnclaveDB work so well in this environment.

One of my favourite takeaways is that we don’t always have to think of performance and security as trade-offs:

In this paper, we show that the principles behind the design of a high performance database engine are aligned with security. Specifically, in-memory tables and indexes are ideal data structures for securely hosting and querying sensitive data in enclaves.

Motivation and threat model

We host databases in all sorts of untrusted environments, potentially with unknown database administrators, server administrators, operating systems, and hypervisors. How can we guarantee data security and integrity in such a world? Or even how Continue reading