Some Thoughts on the Docker-Kubernetes Announcement

Today at DockerCon EU, Docker announced that the next version of Docker (and its upstream open source project, the Moby Project) will feature integration with Kubernetes (see my liveblog of the day 1 general session). Customers will be able to choose whether they leverage Swarm or Kubernetes for container orchestration. In this post, I’ll share a few thoughts on this move by Docker.

First off, you may find it useful to review some details of the announcement via Docker’s blog post.

Done reviewing the announcement? Here are some thoughts; some of them are mine, some of them are from others around the Internet.

  • It probably goes without saying that this announcement was largely anticipated (see this TechCrunch article, for example). So while the details of how Docker would go about adding Kubernetes support were not clear, many people expected some form of announcement around Kubernetes at the conference. I’m not sure that folks expected this level of integration, or that the integration would take this particular shape/form.
  • In looking back on the announcement and the demos from today’s general session and in thinking about the forces that drove Docker to provide Kubernetes integration, it occurs to me that Continue reading

Cheap Stuff is not Cheap

I have often fallen for the temptation of buying cheap instead of buying quality. This might be a saw, a drill, a lawnmower or just about anything imaginable. When I look at what professionals use I see them buying well-known and commercial grade products. For example, I wouldn’t expect to see my lawn care team buying a consumer lawnmower at Evil Big Box Store. They actually buy expensive commercial grade zero turn models that are roughly eight to ten times the cost of any mower I would consider.

My lawn care professionals mow lawns to make money, so what gives? Some might assume that these commercial grade products simply allow them to do their jobs faster. In nearly all cases, that is only half of the story. These products last much longer and hold up under the extremes of daily use. Their decks are heavy duty and the blades are less susceptible to being bent. The bottom line is that these units mow faster AND they last longer. They spend less time in the shop and do the job they were purchased to do.

I find these quality issues with many consumer grade products. They’re basically cheap and disposable. The end result is Continue reading

IDG Contributor Network: 6 AI ingredients every wireless networking strategy needs

Artificial intelligence is all the rage these days. There’s broad consensus that AI is the next game-changing technology, poised to impact virtually every aspect of our lives in the coming years, from transportation to medical care to financial services. Gartner predicts that by 2020, AI will be pervasive in almost every new software product and service and the technology will be a top five investment priority for more than 30 percent of CIOs. An area where AI is already showing enormous value is wireless networking. The use of machine learning can transform WLANs into neural networks that simplify operations, expedite troubleshooting and provide unprecedented visibility into the user experience.

Western Digital plans 40TB drives, but it’s still not enough

Hard disk makers are using capacity as their chief bulwark against the rise of solid-state drives (SSDs), since they certainly can’t argue on performance, and Western Digital — the king of the hard drive vendors — has shown off a new technology that could lead to 40TB drives. Western Digital already has the largest-capacity drive on the market. It recently introduced a 14TB drive, filled with helium to reduce drag on the spinning platters. But thanks to a new technology called microwave-assisted magnetic recording (MAMR), the company hopes to reach 40TB by 2025. The company promised engineering samples of the drive by mid-2018. MAMR technology is a new method of cramming more data onto the disk. Western Digital’s chief rival, Seagate, is working on a competitive product called HAMR, or heat-assisted magnetic recording. I’ll leave it to propeller heads like AnandTech to explain the electrical engineering of it all. What matters to the end user is that it should ship sometime in 2019, and that’s after 13 years of research and development.

Arista EOS CloudVision

Arista EOS® CloudVision® provides a centralized point of visibility, configuration and control for Arista devices. The CloudVision controller is available as a virtual machine or physical appliance.


Fabric Visibility on Arista EOS Central describes how to use industry standard sFlow instrumentation in Arista switches to deliver real-time flow analytics. This article describes the steps needed to integrate flow analytics into CloudVision.

Log into the CloudVision node and run the following cvp_install_fabricview.sh script as root:
#!/bin/sh
# Install Fabric View on CloudVision Portal (CVP)

VER=`wget -qO - http://inmon.com/products/sFlow-RT/latest.txt`
wget http://www.inmon.com/products/sFlow-RT/sflow-rt-$VER.noarch.rpm
rpm --nodeps -ivh sflow-rt-$VER.noarch.rpm
/usr/local/sflow-rt/get-app.sh sflow-rt fabric-view

ln -s /cvpi/jdk/bin/java /usr/bin/java

sed -i '/^# http.hostname=/s/^# //' /usr/local/sflow-rt/conf.d/sflow-rt.conf
echo "http.html.redirect=./app/fabric-view/html/" >> /usr/local/sflow-rt/conf.d/sflow-rt.conf

cat <<EOT > /etc/nginx/conf.d/locations/sflow-rt.https.conf
location /sflow-rt/ {
auth_request /aeris/auth;
proxy_buffering off;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Prefix /sflow-rt/;
proxy_set_header Host \$host;
proxy_pass http://localhost:8008/;
proxy_redirect ~^http://[^/]+(/.+)\$ /sflow-rt\$1;
}
EOT

systemctl restart nginx.service

firewall-cmd --zone public --add-port=6343/udp --permanent
firewall-cmd --reload

systemctl enable sflow-rt.service
systemctl start sflow-rt.service

wget http://www.inmon.com/products/sFlow-RT/cvp-eapi-topology.py
chmod +x cvp-eapi-topology.py

echo "configure and run cvp-eapi-topology.py"
Edit the cvp-eapi-topology.py script to Continue reading
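Once Fabric View is running on the CloudVision node, each Arista switch needs to be told to export sFlow to it on UDP port 6343 (the port opened in the firewall above). A minimal EOS configuration sketch follows; the collector address, source interface, and sampling rate here are illustrative values, not part of the original article:

```
sflow sample 16384
sflow polling-interval 20
sflow destination 192.0.2.10 6343
sflow source-interface Management1
sflow run
```

The sampling rate should normally be chosen based on link speed; consult the Arista EOS documentation for recommended values.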

Real world use cases for NSX and Pivotal Cloud Foundry

Pivotal Cloud Foundry (PCF) is the leading PaaS solution for enterprise customers today, providing a fast way to convert their ideas from conception to production. This is achieved by providing a platform to run their code in any cloud and any language taking care of all the infrastructure “stuff” for them.

From building the container image, compiling it with the required runtime, deploying it in a highly available mode, and connecting it to the required services, PCF allows dev shops to concentrate on developing their code.
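That developer-facing workflow boils down to a handful of cf CLI commands. The sketch below is illustrative only; the app name, service offering, and plan are hypothetical placeholders, not details from the original post:

```
cf push my-app                         # stage the code with a buildpack and deploy it
cf scale my-app -i 3                   # run three instances for high availability
cf create-service p-mysql 100mb my-db  # provision a backing service from the marketplace
cf bind-service my-app my-db           # inject the service credentials into the app
cf restage my-app                      # restage so the binding takes effect
```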

While the platform provides developers with the most simplified experience conceivable, under the hood there are many moving parts that make that happen, and plumbing all these parts together can be complex. That’s where customers are really enjoying the power of VMware’s SDDC, and the glue between the PaaS and SDDC layers is NSX; it is the enabler that makes it all work.

In this blog post I detail some of the main use cases in which customers have already deployed NSX for PCF on top of vSphere, and how PCF and NSX are much better together in the real world.

The use cases customers are deploying with NSX for PCF are varied and ill Continue reading

What is hybrid cloud computing?

Hybrid cloud: Many believe it’s the eventual state that most businesses will operate in – some infrastructure resources on premises, others in the public cloud. Others believe it’s a term that has been muddled by varied definitions from a range of vendors, diminishing it to something vague and nebulous. So, what does hybrid cloud really mean, and how can users implement it? What is hybrid cloud computing? While there is no single agreed-upon definition of hybrid cloud computing, perhaps the closest we have is from the National Institute of Standards and Technology (NIST): (Hybrid) cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability.

Container-Relevant Kernel Developments

This is a liveblog of a Black Belt track session at DockerCon EU in Copenhagen. The session is named “Container-Relevant Kernel Developments,” and the presenter is Tycho Andersen.

Andersen first presents a disclaimer that the presentation is mostly a brain dump, and that he’s not personally responsible for a lot of the work presented here. In fact, all of the work Andersen will talk about is not yet merged upstream in the Linux kernel, and he doesn’t expect that it will be accepted upstream and become available to average users anytime soon.

The first technology Andersen talks about is IMA (the Integrity Measurement Architecture), which prevents user space from even opening files if they have been tampered with or modified in some fashion that violates policy. IMA is also responsible for allowing the Linux kernel to take advantage of a system’s Trusted Platform Module (TPM).
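For context on what IMA actually records: on an IMA-enabled kernel, the measurement log is exposed via securityfs at /sys/kernel/security/ima/ascii_runtime_measurements. The snippet below (not from the talk) parses one record in the ima-ng template format; the digest values are fabricated for illustration:

```shell
# Fields of an ima-ng record: PCR index, template digest, template name,
# file digest (algo:hash), file path. Digests below are made up.
line="10 91f34b5c671d73504b274a919661cf80dab1e127 ima-ng sha256:0f5dc2b60b254a64ef89b231f3db3a384a13e6c3ab51f6c176a8ab8b7d4a2c11 /usr/bin/bash"

set -- $line              # split the record on whitespace into $1..$5
pcr=$1                    # TPM PCR the measurement was extended into
template=$3               # template name (ima-ng here)
algo=${4%%:*}             # hash algorithm from the algo:hash field
path=$5                   # measured file
echo "PCR $pcr: $template/$algo measurement of $path"
# → PCR 10: ima-ng/sha256 measurement of /usr/bin/bash
```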

Pertinent to containers, Andersen talks about work that’s happening within the kernel development community around namespacing IMA. There are a number of challenges here, not all of which have been addressed or resolved yet, and Andersen refers attendees to the Linux Kernel mailing list (LKML) for more information.

Next, Andersen talks about the Linux audit log. Continue reading