Three years ago, Michael DeHaan started the Ansible open source project. Michael has worked tirelessly and done a great job leading the Ansible vision of simple IT automation, and his efforts led to some amazing achievements. Ansible is now a mature open source project and vibrant community, with over 900 contributors (a new contributor almost every day!), thousands of users and millions of downloads. Ansible was recently named a Top 10 Open Source project for 2014, alongside projects like Hadoop, Docker, and OpenStack.
As of today, Michael will be transitioning from his daily operational involvement with Ansible, Inc. to an advisory capacity supporting the community and the Ansible team as needed. You can read more about Michael’s thoughts on the transition here.
As for Ansible, we are grateful for Michael’s vision and efforts and look forward to his continued contributions. He and the Ansible community have set a new standard for simple, agentless automation, and we will continue to build great things on that strong foundation.

Nigel Poulton recently posted an article titled “ESXi vs. Hyper-V - Could Docker Support Be Significant,” in which he contemplates whether Microsoft’s announcement of Docker support on Windows will be a factor in the hypervisor battle between ESXi and Hyper-V. His post got me thinking—which is a good thing, I’d say—and I wanted to share my thoughts on the topic here.
Naturally, it’s worth pointing out that I work for VMware, although I do work in a business unit that makes a multi-hypervisor product (NSX).
Nigel makes a few key points in his article:
To be completely fair, the article fully admits that all of this is speculation and just thinking out loud (his statement, not a play on the title of this post). As I said, I think it’s a good thing to Continue reading
We are back after the Christmas Break with the Networking News.
The post Network Break 26 appeared first on Packet Pushers Podcast and was written by Greg Ferro.
Guest blogger Alex Hoff is the VP of Product Management at Auvik Networks, a cloud-based SaaS that makes it dramatically easier for small and mid-sized businesses to manage their networks. Our thanks to Auvik for sponsoring the Packet Pushers community blog today. It’s January and the network industry pundits are calling for 2015 to be the […]
The post SDN: What Small and Mid-Sized Businesses Need to Know in 2015 appeared first on Packet Pushers Podcast and was written by Sponsored Blog Posts.
The Wi-Fi industry seems dominated by discussions of the ever-increasing bandwidth capabilities and peak speeds brought by the latest product offerings based on 802.11ac. But while industry marketing touts Gigabit-capable peak speeds, the underlying factors affecting WLAN performance have changed little.
Medium contention is the true driver of the success or failure of a WLAN, and we must understand its effect on WLAN performance in order to design and optimize our networks.
Read the full blog article over on the Aruba Airheads Technology Blog...
An open ecosystem has supported the server business for many years: you can build servers from components sourced from multiple suppliers and run the operating system of your choice. The same concept for the networking world, now called “open networking” for the disaggregated model of switch hardware and software, has long been on many wish lists.
The good news: this concept is now a reality, thanks to companies like Cumulus Networks, whose Cumulus Linux is a Debian-based distribution that serves as the OS for open networking on bare-metal switches.
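Because Cumulus Linux really is Debian underneath, day-to-day operations look like ordinary Linux administration. A minimal sketch, assuming standard tooling (the swp port name is a Cumulus convention; the address is made up for illustration):

# Install packages from an apt repository, as on any Debian box:
cumulus@switch:~$ sudo apt-get update && sudo apt-get install tcpdump

# Address a front-panel port by editing /etc/network/interfaces:
auto swp1
iface swp1
    address 10.0.0.1/30

# Bring it up with the standard ifupdown workflow:
cumulus@switch:~$ sudo ifup swp1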
With Cumulus Linux, there are no additional or “enhanced” license fees akin to what traditional vendors have charged for years. The yearly renewal license costs the same each year – not a penny more – and the yearly or multi-year license can be ported from one switch Continue reading
IPv6 has been around a fair while, and we’re constantly encouraged to learn it and use it. I agree with the sentiment, but it’s hard for most people when few ISPs offer IPv6 to residential users. Hurricane Electric offers a great free IPv6 tunnel broker service, but that’s impractical for most. What they need is for their ISP to offer native IPv6, by default.
The ISPs in New Zealand with the largest market share don’t offer IPv6, but some of the smaller ones do. The design of the ISP market here means that users can easily switch between a large range of suppliers, and choose the mix of price/service they want. When I last changed ISP a couple of years ago, I specifically chose an ISP that offers IPv6.
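If you’re weighing up ISPs (or checking the one you have), it’s easy to verify whether you actually have native IPv6. A quick sketch from a Linux machine, assuming standard tools (ipv6.google.com is just a convenient IPv6-only test target):

$ ip -6 addr show scope global      # do we have a global IPv6 address at all?
$ ping6 -c 3 ipv6.google.com        # can we actually reach the IPv6 Internet?
$ dig AAAA www.google.com +short    # does our resolver hand back AAAA records?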
Last year that ISP disabled IPv6 for a few weeks due to some technical issues, and I was disappointed with the support they offered. I wanted to evaluate my other options, but couldn’t find any good source of data that showed which ISPs were offering IPv6. There’s plenty of talk out there about trials, and the like, but most of that hasn’t been updated in years.
So I pulled Continue reading
When we started planning a VMware NSX-focused podcast episode with Dmitri Kalintsev, I asked my readers what topics they’d like to see covered. Two comments that we really liked were “how do I get started with VMware NSX?” and “how do I troubleshoot this stuff?”
Read more ...

I was recently asked by a Wi-Fi engineer about the potential need for multi-gigabit backhaul from an AP with the pending release of 802.11ac Wave 2. This seems to be a point of confusion for many in the wireless industry. Here's what I told them:
Industry claims of throughput capabilities exceeding 1 Gbps are correct from a theoretical standpoint. However, the real-world client mix on almost every WLAN means that backhaul demand never comes close to 1 Gbps of throughput.
First, when you combine clients of varying capabilities, there is no chance of exceeding 1 Gbps of backhaul. Even a top-end 2x2 802.11ac client at 80 MHz tops out at an 867 Mbps PHY rate, which in practice yields perhaps half that in real throughput, and that airtime is shared with every slower client on the radio. The only time you will need more than 1 Gbps of backhaul is in proof-of-concept (POC) bakeoffs between vendors, lab tests, and very low-density locations where you have only a few users on an AP radio but they are using top-of-the-line wireless laptops and applications that can push large amounts of data (I'm thinking of CAD users, for instance, who collaborate and push files of several GBs across the network and want it done fast). This is somewhat counter-intuitive, because most people would assume off-hand that high-density areas are where you'll need the greater backhaul. But in high-density areas Continue reading
This blog post is probably more personal than the usual posts here. It’s about why I joined CloudFlare.
I’ve been working on DNSSEC evolution for a long time as implementor, IETF working group chair, protocol experimenter, DNS operator, consultant, and evangelist. These different perspectives allow me to look at the protocol in a holistic way.
First and foremost, it’s important to understand the exact role of DNSSEC. DNSSEC is actually a misnomer: it’s from an era when the understanding of different security technologies, and the role each plays, was not as good as it is today. Today, this protocol would be called DNSAUTH, because all it does is provide integrity protection to the answers from authoritative servers.
Over the years, the design of DNSSEC has changed. A number of people working on early versions of DNSSEC (myself included) didn’t know DNS all that well. Similarly, many DNS people at the time didn’t understand security, and in particular cryptography, all that well. To make things even more complex, general understanding of the DNS protocol was lacking in certain areas and needed to be clarified in order to do DNSSEC properly. This has led to three major versions of the Continue reading
The time has come, CCIE Collaboration hopefuls, to focus my blog on Quality of Service (QoS). I know, it’s everyone’s favorite subject, right? Well, you don’t have to like it; you just have to know it!
I would specifically like to focus on WAN QoS policies, as they are going to be an essential piece of the lab blueprint to understand. Typically, the goal on a WAN interface is to queue traffic in such a way as to prioritize certain types of traffic over others. Voice traffic will usually be placed in some type of expedited or priority queue, while other types of traffic (video, signaling, web, etc.) will use other queues that provide minimum bandwidth guarantees. Policies like these are implemented with the Modular QoS Command Line Interface (MQC).
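As a rough sketch of what such a policy looks like in MQC (the class names, match criteria, interface, and bandwidth figures below are hypothetical illustrations, not a lab solution), voice gets a strict-priority queue while other classes get minimum bandwidth guarantees:

class-map match-all VOICE
 match dscp ef
class-map match-all SIGNALING
 match dscp cs3
!
policy-map WAN-EDGE
 class VOICE
  priority 384
 class SIGNALING
  bandwidth 64
 class class-default
  fair-queue
!
interface Serial0/0/0
 service-policy output WAN-EDGE

The priority command gives voice low-latency treatment but polices it to 384 kbps under congestion, while bandwidth reserves a minimum for signaling; everything else falls into class-default.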
To begin, let’s use our three-site topology (HQ, SB, and SC) to provide a backdrop for this example. The HQ site (R1) has a Frame Relay connection to both the SB (R2) and SC (R3) sites through the same physical Serial interface, which has a total of 1.544 Mbps of bandwidth available. Assume that both R2 and R3 have connections to R1 using Continue reading
As many US Government programs look to adopt DevOps and agile development methodologies, there’s a need for tools to manage the application lifecycle, and make it easier and more predictable to deploy and manage entire application environments.
So why do Government customers choose Ansible?
Agentless
Ansible does not require a software agent to be running on the remote hosts it manages. Instead, it relies on the trusted management ports you’re already using on a daily basis to log into your servers: secure shell (SSH) on Linux, and Windows Remote Management (WinRM) on Microsoft-based systems. This means that you don’t need to change existing firewall port-filtering rules, which removes a large barrier to entry that agent-based tools impose.
Additionally, agentless management means that there is little likelihood of a library conflict. What happens when a management tool agent requires one version of a library, but your application requires another?
Finally, Ansible’s agentless model does not increase your system’s security footprint or attack profile. Ansible relies on the operating system’s encryption tooling, and ensures that there are no separate agents that require vulnerability patching.
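To make the agentless point concrete, here’s a minimal sketch (the group names and login user are made up for illustration). Nothing is installed on the targets; Ansible connects over the ports you already have open and runs its modules remotely:

# Ad-hoc connectivity check of a group of Linux hosts over plain SSH:
$ ansible webservers -m ping -u deploy

# The same idea against Windows hosts over WinRM (the connection type is
# usually set in inventory, e.g. ansible_connection=winrm):
$ ansible winservers -m win_ping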
More Than Just CM
Configuration Management in the Government space is nothing new. Continue reading
For an introduction to DNSSEC, see our previous post
Today is a big day for CloudFlare! We are publishing our first two DNSSEC signed zones for the community to analyze and give feedback on:
We’ve been testing our implementation internally for some time with great results, so now we want to hear from outside users how it’s working!
Here’s what you should see if you pull the records of, for example, www.cloudflare-dnssec-auth.com.
$ dig www.cloudflare-dnssec-auth.com A +dnssec
; <<>> DiG 9.10.1-P1 <<>> www.cloudflare-dnssec-auth.com A +dnssec
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29654
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
;; QUESTION SECTION:
;www.cloudflare-dnssec-auth.com. IN A
;; ANSWER SECTION:
www.cloudflare-dnssec-auth.com. 300 IN A 104.28.29.67
www.cloudflare-dnssec-auth.com. 300 IN A 104.28.28.67
www.cloudflare-dnssec-auth.com. 300 IN RRSIG A Continue reading
As network security engineers have attempted to categorize blocks of IP addresses associated with spam or malware for subsequent filtering at their firewalls, the bad guys have had to evolve to continue to target their victims. Since routing on the global Internet is based entirely on trust, it’s relatively easy to commandeer IP address space that belongs to someone else. In other words, if the bad guys’ IP space is blocked, well then they can just steal someone else’s and continue on as before.
In an attempt to cover their tracks, these criminals will sometimes originate routes using autonomous system numbers (ASNs) that they don’t own either. In one of the cases described below, perpetrators hijacked the victim’s ASN to originate IP address space that could have plausibly been originated by the victim. However, in this case, the traffic was misdirected to the bad guy and an unsophisticated routing analysis would have probably shown nothing amiss.
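As a trivial sketch of the raw data a routing analyst works from, Team Cymru’s public whois service will tell you which ASN currently originates the prefix covering a given address (8.8.8.8 is an arbitrary example; the output line is illustrative):

$ whois -h whois.cymru.com " -v 8.8.8.8"
AS    | IP      | BGP Prefix | CC | Registry | Allocated | AS Name
15169 | 8.8.8.8 | 8.8.8.0/24 | US | arin     | ...       | GOOGLE - Google Inc., US

Collect answers like this over time and you have the beginnings of the historical baseline discussed below.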
The weakness of all spoofing techniques is that, at some point, the routes cross over from the fabricated to the legitimate Internet — and, when they do, they appear quite anomalous when compared against historical data and derived business Continue reading
Cloud builders often use my ExpertExpress service to validate their designs. Tenant onboarding into a multi-tenant (private or public) cloud infrastructure is a common problem, and tenants frequently want to retain their existing network services appliances (firewalls and load balancers).
The Combine Physical and Virtual Appliances in a Private Cloud case study describes a typical solution that combines per-tenant virtual appliances with frontend physical appliances.