Businesses Rethinking Security Software Strategy, Gartner Says
AI, machine learning driving new security strategies.
At DockerCon 2017 we introduced LinuxKit: a toolkit for building secure, lean, and portable Linux subsystems. Here are the key principles and motivations behind the project.
For this Online Meetup, Docker Technical Staff member Rolf Neugebauer gave an introduction to LinuxKit, explained the rationale behind its development and gave a demo on how to get started using it.
Below is a list of additional questions asked by attendees at the end of the online meetup:
You said the ONBOOT containers are run sequentially. Does it wait for one to finish before it Continue reading
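For readers who have not yet looked at LinuxKit, here is a minimal sketch of the YAML format the talk refers to, showing how onboot containers differ from long-running services. The image names, placeholder tags, and the dhcpcd/getty choices are illustrative assumptions, not an excerpt from the demo.

```yaml
# Minimal sketch of a LinuxKit YAML file. Image tags are placeholders
# and the component choices (dhcpcd, getty) are only examples.
kernel:
  image: "linuxkit/kernel:4.9.x"
  cmdline: "console=tty0 console=ttyS0"
init:
  - "linuxkit/init:<tag>"
  - "linuxkit/runc:<tag>"
  - "linuxkit/containerd:<tag>"
onboot:
  # onboot containers are intended as one-shot setup steps started in
  # order at boot, such as bringing up networking.
  - name: dhcpcd
    image: "linuxkit/dhcpcd:<tag>"
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  # services are long-running containers supervised for the life of the system.
  - name: getty
    image: "linuxkit/getty:<tag>"
    env:
      - INSECURE=true
```

A file like this is fed to the LinuxKit build tooling to produce a bootable image; see the project's README for the exact build command for your version.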
Presentation on using ntopng as a security tool
The post Research: Network Security Using ntopng appeared first on EtherealMind.
ESG analysts offer guidance on hyperconverged infrastructure at Interop ITX.
Find out what users have to say about products in the emerging SDS market.
Modern Wi-Fi networks are complex beasts. Despite all the fancy new features in products, the technology is only becoming more complex and the demands on the network are only growing. Wi-Fi is the most heavily used method of transporting user data today, eclipsing cellular and LAN traffic volumes according to multiple reports from firms including Cisco, Ofcom, Mobidia, Ovum, and others. Meanwhile, the technical complexity contained within the IEEE 802.11 standard results in a document that is over 3,200 pages long! All of this means that deploying a network correctly is no easy task.
One of the most difficult aspects to get right when deploying a Wi-Fi network is understanding capacity requirements. It is not sufficient to use rule-of-thumb guidelines based on the number of clients per access point or the number of access points per square foot or meter, since these often result in networks that do not adequately meet actual end-user demand and perform poorly. More rigor is required, while maintaining simplicity of use, so that most network administrators can be confident of a successful outcome.
Essential to wireless network performance and capacity planning is understanding the interaction between access point capabilities, network configuration, client device capabilities, and the RF Continue reading
Neil Anderson collected career advice from 111 IT industry gurus (just getting all of them to respond must have been a monumental effort). Well worth reading ;)
Have you struggled to find information on our current website? Have you found it difficult to know what actions you can take on important issues such as connecting the unconnected and building trust on the Internet?
You are not alone.
In one of the most visible and important changes we are making this year, we are working hard on giving our website a deep refresh. We are building it to be a direct vehicle for action. We are redesigning it from the ground up to help us achieve our objective of connecting everyone, everywhere to a globally connected, trusted Internet.
It will look different, it will feel different, it will be more accessible and will be more aligned with this strategic goal.
I’ve written several prior blog posts on multi-site solutions with NSX-V, discussing topics such as fundamentals, design options, multi-site security, and disaster recovery; see the links below to review some of the prior material. In this post, I’ll discuss how VMware NSX-V and F5 BIG-IP DNS (previously known as F5 GTM) can be used together for Active/Active solutions where an application spans multiple sites and site-local ingress/egress for the application is desired. F5 offers both virtual and physical appliances; in this post I demonstrate using only the virtual (VE) F5 appliances. Big thanks to my friend Kent Munson at F5 Networks for helping with the F5 deployment in my lab and for providing some of the details for this blog post. This is the first of several blog posts to come on this topic. Continue reading
Back in early March of this year, I wrote a post on customizing the Docker Engine on CentOS Atomic Host. In that post, I showed how you could use systemd constructs like drop-in units to customize the behavior of the Docker Engine when running on CentOS Atomic Host. In this post, I’m going to build on that information to show how this can be done using cloud-init on a public cloud provider (AWS, in this case).
Although I haven’t really blogged about it, I’d already taken the information in that first post and written some Ansible playbooks to do the same thing (see here for more information). Thus, one could use Ansible to do this when running CentOS Atomic Host on a public cloud provider. However, much like the original post, I wanted to find a very “cloud-native” way of doing this, and cloud-init seemed like a pretty good candidate.
All in all, it was pretty straightforward—with one significant exception. As I was testing this, I ran into an issue where the Docker daemon wouldn’t start after cloud-init had finished. Convinced I’d done something wrong, I kept going over the files, testing and re-testing (I’ve been working on this, off Continue reading
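As a concrete illustration of the approach described above, here is a minimal sketch of a cloud-init user-data file that writes a systemd drop-in for the Docker service and then restarts it. The drop-in filename, binary path, and daemon flags are assumptions for illustration, not the values used in the original post.

```yaml
#cloud-config
# Minimal sketch: write a systemd drop-in that overrides the Docker
# service, then reload systemd and restart Docker so it takes effect.
write_files:
  - path: /etc/systemd/system/docker.service.d/10-custom.conf
    owner: root:root
    permissions: '0644'
    content: |
      [Service]
      # Clear the packaged ExecStart, then provide a customized one.
      # The binary path and flags below are illustrative only.
      ExecStart=
      ExecStart=/usr/bin/dockerd-current --log-driver=journald

runcmd:
  - systemctl daemon-reload
  - systemctl restart docker.service
```

User data like this is supplied when the instance is launched (for example, in the EC2 instance's user data field), and cloud-init applies it on first boot.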