Since Docker democratized software containers four years ago, a whole ecosystem has grown around containerization, and in that compressed time period it has gone through two distinct phases of growth. In each phase, the model for producing container systems evolved to adapt to the size and needs of the user community, the project, and the growing contributor ecosystem.
The Moby Project is a new open-source project to advance the software containerization movement and help the ecosystem take containers mainstream. It provides a library of components, a framework for assembling them into custom container-based systems and a place for all container enthusiasts to experiment and exchange ideas.
Let’s review how we got to where we are today. In 2013-2014, pioneers started to use containers and collaborated in a monolithic open-source codebase, Docker, and a few other projects to help the tools mature.
Then in 2015-2016, containers were massively adopted in production for cloud-native applications. In this phase, the user community grew to support tens of thousands of deployments, backed by hundreds of ecosystem projects and thousands of contributors. It was during this phase that Docker evolved its production model to an open, component-based approach. In Continue reading
Network virtualization is making microsegmentation possible and allowing networks to isolate security breaches.
Microsegmentation of virtual networks spanning both private and public clouds is critical to Deluxe’s return on investment.
We’ve been working with registrars and registries in the IETF on making DNSSEC easier for domain owners, and over the next two weeks we’ll be starting out by enabling DNSSEC automatically for .dk domains.
Before we get into the details of how we've improved the DNSSEC experience, we should explain why DNSSEC is important and the function it plays in keeping the web safe.
DNSSEC’s role is to verify the integrity of DNS answers. When DNS was designed in the early 1980s, there were only a few researchers and academics on the internet. They all knew and trusted each other, and couldn’t imagine a world in which someone malicious would try to operate online. As a result, DNS relies on trust to operate. When a client asks for the address of a hostname like www.cloudflare.com, without DNSSEC it will trust essentially any server that returns a response, even if it isn’t the server it originally asked. With DNSSEC, every DNS answer is signed, so clients can verify that answers haven’t been manipulated in transit.
If DNSSEC is so important, why do so few domains support it? First, for a domain to Continue reading
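To make the verify-before-trust idea concrete, here is a stdlib-only toy sketch. Real DNSSEC uses public-key signatures (RRSIG/DNSKEY records) rather than a shared-secret HMAC, and the key and addresses below are invented for illustration; the point is only that a resolver checks a signature before believing an answer:

```python
import hmac
import hashlib

# Toy stand-in for DNSSEC: the zone "signs" each answer and the resolver
# verifies the signature before trusting it. Real DNSSEC uses public-key
# RRSIG/DNSKEY records, not a shared-secret HMAC.
ZONE_KEY = b"example-zone-key"  # invented for illustration


def sign_answer(name: str, address: str) -> bytes:
    """Produce a signature over a (name, address) DNS answer."""
    msg = f"{name}={address}".encode()
    return hmac.new(ZONE_KEY, msg, hashlib.sha256).digest()


def verify_answer(name: str, address: str, sig: bytes) -> bool:
    """Accept the answer only if the signature checks out."""
    return hmac.compare_digest(sign_answer(name, address), sig)


sig = sign_answer("www.cloudflare.com", "198.41.214.162")
assert verify_answer("www.cloudflare.com", "198.41.214.162", sig)
# A spoofed answer fails verification, so the resolver rejects it:
assert not verify_answer("www.cloudflare.com", "6.6.6.6", sig)
```

Without the signature check, the resolver would have no way to tell the spoofed answer from the real one.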
Fortinet's SD-WAN security was built in-house.
I’ve been reading a lot about the repeal of the rules putting the FCC in charge of privacy for access providers in the US recently—a lot of it rising to the level of hysteria and “the end is near” level. As you have probably been reading these stories, as well, I thought it worthwhile to take a moment and point out two pieces that seem to be the most balanced and thought through out there.
Essentially—yes, privacy is still a concern, and no, the sky is not falling. The first is by Nick Feamster, who I’ve worked with in the past, and has always seemed to have a reasonable take on things. The second is by Shelly Palmer, who I don’t always agree with, but in this case I think his analysis is correct.
Last week, the House and Senate both passed a joint resolution that prevents the new privacy rules from the Federal Communications Commission (FCC) from taking effect; the rules, released by the FCC last November, would have bound Internet Service Providers (ISPs) in the United States to a set of practices concerning the collection and sharing of data about consumers. The rules were widely heralded Continue reading
Network professionals are the front line in cyber-defence, defining and operating the perimeter. While the perimeter is only a first layer of static defence, it’s well worth understanding the wider threat landscape you are defending against. Many companies publish regular reports; this one is from McAfee.
McAfee Labs Threats Report – April 2017 – Direct Link
Landing page is https://secure.mcafee.com/us/security-awareness/articles/mcafee-labs-threats-report-mar-2017.aspx
Note: Intel has spun McAfee out to a private VC firm in the last few weeks, so it’s possible that we will see a resurgence of the McAfee brand. I’m doubtful that McAfee can re-emerge, but let’s wait and see.
Some points I observed when reading this report:
This article is the fourth in the Layer 2 security series. We will discuss a very common Layer 2 attack, MAC flooding, and its mitigation, port security MAC limiting. If you didn’t read the previous three articles (DHCP snooping, Dynamic ARP Inspection, and IP Source Guard), I recommend that you take a quick […]
The post MAC Flooding Attack, Port Security and Deployment Considerations appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.
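As a rough sketch of the mitigation the article describes, an IOS-style port-security configuration that caps the number of learned MAC addresses on an access port might look like the following. The interface name, maximum, and violation mode are placeholders for illustration; verify the exact syntax against your platform:

```
interface GigabitEthernet0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 2
 switchport port-security violation restrict
 switchport port-security mac-address sticky
```

With a low maximum and a `restrict` or `shutdown` violation mode, a flood of forged source MACs can no longer exhaust the CAM table through this port.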
The University of Pittsburgh is one of the oldest institutions in the country, dating to 1787 when it was founded as the Pittsburgh Academy. The University has produced the pioneers of the MRI and the television, winners of Nobel and Pulitzer prizes, Super Bowl and NBA champions, and best-selling authors.
As with many businesses today, the University continues to digitize its organization to keep up with the demands of over 35,000 students, 5,000 faculty, and 7,000 staff across four campuses. While the first thing that comes to mind may be core facilities such as classrooms, this also includes keeping up with the evolving technology on the business side of things, such as point-of-sale (POS) systems. When a student buys a coffee before studying or a branded sweatshirt for mom using their student ID, those transactions must be facilitated and secured by the University.
What does it mean to secure financial transactions? For one, just as with a retail store operation, the University must achieve PCI compliance to facilitate financial transactions for its customers. What does this mean? Among other tasks, PCI demands that the data used by these systems is completely isolated from other IT operations. However, locking everything down Continue reading
Welcome to Technology Short Take #81! I have another collection of links, articles, and thoughts about key data center technologies, and hopefully I’ve managed to include something here that will prove useful or thought-provoking. Enjoy!
Docker Store is the place to discover and procure trusted, enterprise-ready containerized software – free, open source and commercial.
Docker Store is the evolution of the Docker Hub, which is the world’s largest container registry, catering to millions of users. As of March 1, 2017, we crossed 11 billion pulls from the public registry! Docker Store leverages the public registry’s massive user base and ensures our customers – developers, operators and enterprise Docker users – get what they ask for. The Official Images program was developed to create a set of curated and trusted content that developers could use as a foundation for building containerized software. Building on those lessons learned and best practices, Docker recently launched a certification program that enables ISVs around the world to take advantage of Store in offering great software, packaged to operate optimally on the Docker platform.
The Docker Store is designed to bring Docker users and ecosystem partners together with
Organizations across industries are embarking on their journey of Digital Transformation. Time-to-market has become very crucial to the bottom-line and companies need to accelerate their application/services delivery and go from concept to production in record time.
Organizations are embracing containers, microservice-based architectures, and Continuous Delivery and Integration tools as they fundamentally change how they develop, deploy and deliver applications.
However, moving from monolith application architectures to microservices-based ones is no ordinary feat.
Many of these organizations leverage Pivotal’s expertise to deliver a modern application development environment. Pivotal’s flagship cloud-native platform, Pivotal Cloud Foundry, provides a modern app-centric environment that lets developers focus on delivering applications with speed and frequency. To find out more, see Pivotal Cloud Foundry and the now generally available Pivotal Cloud Foundry 1.10.
Pivotal Cloud Foundry abstracts the underlying IaaS layer so that developers get a modern self-service application development environment without worrying about the infrastructure. The BOSH vSphere CPI plugin does a good job of consuming pre-created networks.
However, the truth is – “someone” always needs to do some provisioning – networks need to be carved out, load-balancers need to be configured, NAT rules need to be defined, reachability needs to Continue reading
The new version gives developers and operators a shared set of facts.
Portworx saves and stores container data.
The use of TLS interception by outbound proxy servers is causing serious problems in updating the TLS standard to Version 1.3.
At the same time, middlebox and antivirus products increasingly intercept (i.e., terminate and re-initiate) HTTPS connections in an attempt to detect and block malicious content that uses the protocol to avoid inspection. Previous work has found that some specific HTTPS interception products dramatically reduce connection security; however, the broader security impact of such interception remains unclear. In this paper, we conduct the first comprehensive study of HTTPS interception in the wild, quantifying both its prevalence in traffic to major services and its effects on real-world security.
This is the same problem that middleboxes cause anywhere on the Internet: firewalls, NAT gateways, inspection devices, QoS shapers, DPI. Because these complex devices are rarely updated and hard to maintain, they create failures in new protocols. IPv6 rollout has been slowed by difficult upgrades, and the same problem is now happening with TLS. It’s undesirable to fall back to older TLS versions that “work” but are less secure.
The business need for proxy servers or protocol interception covers only a small range of activities.
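The downgrade dynamic above can be sketched as a toy model (this is not a real TLS implementation; the version numbers and function names are invented for illustration). Both endpoints support the newest version, but an intercepting middlebox caps the connection at the highest version it understands:

```python
# Toy model of protocol version negotiation through a middlebox.
# The integers stand in for TLS versions; names are invented for
# illustration and nothing here is real TLS.
TLS_1_2, TLS_1_3 = 12, 13


def negotiate(client_max, server_max, middlebox_max=None):
    """Return the protocol version the connection ends up using."""
    best = min(client_max, server_max)
    if middlebox_max is not None:
        # An intercepting middlebox terminates and re-initiates the
        # connection, so it caps the version at what *it* implements,
        # even when both endpoints support something newer.
        best = min(best, middlebox_max)
    return best


# Direct connection: both endpoints negotiate TLS 1.3.
assert negotiate(TLS_1_3, TLS_1_3) == TLS_1_3
# Through an out-of-date middlebox: silently downgraded to TLS 1.2.
assert negotiate(TLS_1_3, TLS_1_3, middlebox_max=TLS_1_2) == TLS_1_2
```

The endpoints never see an error; they simply end up on the older protocol, which is exactly why unmaintained middleboxes stall deployment of new TLS versions.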
CloudLens is available on AWS, and Azure is coming soon.
“Micro-Segmentation provides a way to build a zero-trust network – where all networks, perimeters and applications are inherently untrusted,” declared Forrester Consulting in 2015 in their white paper Leveraging Micro-Segmentation to Build a Zero-Trust Model. The last mile in creating a truly zero-trust network means trusting neither individual applications nor the tiers within an application (Figure 1). To complete that last mile, network, security and risk professionals are increasingly looking for tools to understand application communication patterns and to provide access controls around them. With version 6.3.0, NSX has unveiled two new tools, Application Rule Manager (ARM) and Endpoint Monitoring (EM), to help professionals understand application patterns.
Figure 1: Zero-Trust Model using NSX
From Theory to Practice
Micro-segmenting each application requires an understanding of its communication patterns. Users should allow the flows required by the application and, to accomplish zero trust, close all unwanted flows and ports. Figure 2 shows a sample practical firewall policy model to achieve that. In this model, ARM/EM provides application patterns and a one-click conversion of those patterns into distributed firewall rules, covering both inter- and intra-application rules.
Figure 2: Firewall Policy Model
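Conceptually, converting observed flows into an allow-list with a default deny looks something like the sketch below. The flow data and rule format are invented for illustration; NSX ARM’s actual data model and output differ:

```python
# Toy sketch of micro-segmentation planning: turn observed application
# flows into explicit allow rules plus a default deny. The tuples and
# rule dictionaries are invented for illustration.
observed_flows = [
    ("web-01", "app-01", 8443),
    ("app-01", "db-01", 3306),
    ("web-01", "app-01", 8443),   # duplicate observation of the same flow
]


def flows_to_rules(flows):
    """Deduplicate observed flows and emit allow rules, then default deny."""
    rules = []
    for src, dst, port in sorted(set(flows)):
        rules.append({"src": src, "dst": dst, "port": port, "action": "allow"})
    # Zero trust: any flow not explicitly observed and allowed is denied.
    rules.append({"src": "any", "dst": "any", "port": "any", "action": "deny"})
    return rules


rules = flows_to_rules(observed_flows)
assert len(rules) == 3            # two unique allows plus the default deny
assert rules[-1]["action"] == "deny"
```

The essential design choice is the same one the policy model in Figure 2 makes: rules are derived from what the application actually does, and everything else is closed by default.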
Generating Distributed Firewall Rules Rapidly
Any application in the datacenter can be Continue reading
This post originally appears as part of a series of VMware NSX in Healthcare blogs on Geoff Wilmington’s blog, vWilmo. To read more about VMware NSX and its applications in healthcare, check out Geoff’s blog series.
Originally this series on micro-segmentation was only going to cover Log Insight, vRealize Network Insight (vRNI), and VMware NSX. With the release of VMware NSX 6.3, there is a new toolset within NSX that can be leveraged for quick micro-segmentation planning. The Application Rule Manager (ARM) within NSX provides a new way to create security rulesets quickly for new or existing applications, on a bigger scale than Log Insight but a smaller scale than vRNI. With that in mind, we’re going to take the previous post using Log Insight and perform the same procedures with ARM in NSX to create our rulesets using the same basic methodologies.
The Application Rule Manager in VMware NSX leverages real-time flow information to discover the communications into, out of, and between application workloads so that a security model can be built around the application. ARM can monitor up to 30 VMs in one session, and up to five sessions can run at a time. Continue reading