MetLife is a 150-year-old company in the business of securing promises and managing the information of over 100 million customers and their insurance policies. As a global company, MetLife delivers promises to every corner of the world – some of them built to last a lifetime. With this rich legacy comes a diverse portfolio of IT infrastructure to maintain those promises.
In April, Aaron Aedes from MetLife spoke about their first foray into Docker containerization with a new application, GSSP, delivered through Azure. Six months later, MetLife returned to the DockerCon stage to share their journey since then: the initial deployment motivated them to find other ways to leverage Docker Enterprise Edition [EE] within MetLife.
Jeff Murr, Director of Engineering for Containers and Open Source at MetLife, spoke in the Day 1 DockerCon keynote session about how they are looking to scale containerization with Docker as the company grows. He states that new technology typically adds more cost and overhead to an already taxed IT budget, but the Docker Modernize Traditional Apps [MTA] Program presented an opportunity to reduce the costs of their existing applications.
The MTA project at MetLife started with a single Linux Java-based application that handled the “Do Not Continue reading
SAN JOSE, California — Quanta Cloud Technology’s (QCT) latest data center technology brings the benefits of disaggregation and composable infrastructure to cloud service providers and telco service providers, said Mike Yang, president of QCT at today’s Q.synergy 2017 event. The Rackgo R portfolio is based on the Intel Rack Scale Design (RSD) software framework, which... Read more →
I had a great time this week recording the first episode of a new series with my co-worker Rich Stroffolino. The Gestalt IT Rundown is hopefully the start of some fun news stories with a hint of snark and humor thrown in.
One of the things I discussed in this episode was my belief that no data is truly secure any more. Thanks to recent attacks like WannaCry and Bad Rabbit and the rise of other state-sponsored hacking and malware attacks, I’m totally behind the idea that soon everyone will know everything about me and there’s nothing that anyone can do about it.
Personal data is important. Some pieces of personal data are sacrificed for the greater good. Anyone who is in IT or works in an area where they deal with spam emails and robocalls has probably paused for a moment before putting contact information down on a form. I have an old Hotmail address I use to catch spam if I’m relatively certain that something looks shady. I give out my home phone number freely because I never answer it. These pieces of personal data have been sacrificed in order to provide me Continue reading
Introduction
When Bob Dylan wrote back in the ’60s that “the times they are a-changin’,” it’s very possible he knew how true that would still be today. Last week, we saw a few things announced in the container technology space during the DockerCon event in Copenhagen – but one thing that I believe came as a surprise to many was Docker’s announcement that it will begin including Kubernetes in Docker Enterprise Edition sometime in early 2018. This doesn’t concede or mark the death of Docker’s own scheduling and orchestration platform, Docker Swarm, but it does underscore what we’ve heard from many of our customers for quite some time now – almost every IT organization that is using or evaluating containers has jumped on the Kubernetes bandwagon. In fact, many of you are probably already familiar with the integration supported today between NSX-T 2.0 and Kubernetes from the post that Yves did earlier in the year…
In the past few years, we’ve heard a lot about this idea of digital transformation and what it means for today’s enterprise. Typically, a part of this transformation is something called infrastructure modernization, and this happens because most IT environments today have some hurdles that need to Continue reading
Welcome to Technology Short Take 89! I have a collection of newer materials and some older materials this time around, but hopefully all of them are still useful. (I needed to do some housekeeping on my Instapaper account, which is where I bookmark stuff that frequently lands here.) Enjoy!
Hmm…I didn’t find anything again this time around. Perhaps I should remove this section?
firewall-cmd, and found this article to Continue reading
Fig 1.1 - Inter-AS option A
In oVirt 4.2 we have introduced support for the Link Layer Discovery Protocol (LLDP). It is used by network devices to advertise their identity and capabilities to neighbors on a LAN. The information gathered by the protocol can be used for better network configuration. Learn more about LLDP.
When adding a host to an oVirt cluster, the network administrator usually needs to attach various networks to it. However, a modern host can have multiple interfaces, each with a non-descriptive name.
In the screenshot below, taken from the Administration Portal, a network administrator has to know to which interface to attach the network named m2 with VLAN_ID 162. Should it be interface enp4s0, ens2f0, or even ens2f1? With oVirt 4.2, the administrator can hover over enp4s0, see that this interface is connected to peer switch rack01-sw03-lab4, and learn that this peer switch does not support VLAN 162 on this interface. By looking at every interface, the administrator can choose which interface is the right option for network m2.
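The selection logic described above can be sketched in a few lines of Python. The data structures and the `pick_interface` helper below are invented for illustration – they are not the oVirt API’s actual format – but they capture the idea of matching a network’s VLAN against what each interface’s LLDP peer advertises:

```python
# Sketch: choose a host interface whose LLDP peer supports a given VLAN.
# Interface names and LLDP fields are illustrative, not real oVirt output.

def pick_interface(interfaces, vlan_id):
    """Return the name of the first interface whose peer advertises vlan_id."""
    for nic in interfaces:
        lldp = nic.get("lldp", {})
        if vlan_id in lldp.get("supported_vlans", []):
            return nic["name"]
    return None  # no peer advertises this VLAN

interfaces = [
    {"name": "enp4s0",
     "lldp": {"peer": "rack01-sw03-lab4", "supported_vlans": [100, 200]}},
    {"name": "ens2f0",
     "lldp": {"peer": "rack01-sw02-lab4", "supported_vlans": [162, 163]}},
]

print(pick_interface(interfaces, 162))  # ens2f0
```

In the Administration Portal this comparison happens visually via the hover tooltip; the sketch just shows the same decision as data.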

A similar situation arises with the configuration of mode 4 bonding (LACP). Configuring LACP usually starts with the network administrator defining a port group Continue reading
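For reference, the host side of a mode 4 (802.3ad/LACP) bond can be sketched with nmcli. The interface names are illustrative, and this is a generic Linux sketch rather than oVirt’s own workflow, which normally configures bonds through the Administration Portal:

```shell
# Create a mode 4 (LACP) bond and enslave two interfaces (names assumed)
nmcli con add type bond ifname bond0 bond.options "mode=802.3ad,miimon=100"
nmcli con add type ethernet ifname ens2f0 master bond0
nmcli con add type ethernet ifname ens2f1 master bond0
nmcli con up bond-bond0
```

The switch-side port group (port channel) must be configured to match, which is exactly where LLDP information about the peer switch helps.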
During Cisco Live Berlin 2017, Peter Jones (chair of several IEEE task forces) and I went on a journey through 40 years of Ethernet history (and Token Bus and a few other choice technologies).
The sound quality is what you could expect from something recorded on a show floor with pigeons flying around, but I hope you’ll still enjoy our chat.
Seymour Cray loved vector supercomputers, and made the second part of that term a household word because of it. NEC, the last of the pure vector supercomputer makers, is so excited about its new “Aurora” SX-10+ vector processor and the “Tsubasa” supercomputer that will use it that it forgot to announce the processor to the world when it previewed the system this week.
Here at The Next Platform, we easily forgive such putting of carts before horses – so long as someone eventually explains the horse to us before the cart starts shipping for real. NEC is expected to …
Can Vector Supercomputing Be Revived? was written by Timothy Prickett Morgan at The Next Platform.
The recent bug in WPA2 has a worst-case outcome that is the same as using wifi without a password: people can sniff, maybe inject… it’s not great, but you connect to open wifi at Starbucks anyway, and you’re fine with that because you visit sites with HTTPS and SSH. Eventually your client will get a fix too, so the whole thing is pretty “meh”.
But there’s a reason I call it “the WPA2 bug” while I call the recent issue with Infineon key generation “the Infineon disaster”. It’s much bigger. It seems like the whole of Estonia needs to re-issue ID cards, and several years’ worth of PC, smartcard, Yubikey, and other production has been generating bad keys. And these keys will stick around.
From now until forever when you generate, use, or accept RSA keys you have to check for these weak keys. I assume OpenSSH will if it hasn’t already.
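One way an application might act on that advice is a simple fingerprint blacklist check when accepting keys. This is a minimal sketch with a made-up blacklist – real detection of the Infineon (ROCA) weakness fingerprints the structure of the modulus itself, not a hash of the key:

```python
# Sketch: refuse RSA public keys on a known-weak-key blacklist.
# The blacklist entry below is fabricated for illustration.
import hashlib

WEAK_KEY_FINGERPRINTS = {
    "d2f0deadbeef-example-not-a-real-fingerprint",
}

def fingerprint(pubkey_bytes):
    """SHA-256 hex digest of the raw public key bytes."""
    return hashlib.sha256(pubkey_bytes).hexdigest()

def accept_key(pubkey_bytes):
    """Return False for keys whose fingerprint is blacklisted."""
    return fingerprint(pubkey_bytes) not in WEAK_KEY_FINGERPRINTS

print(accept_key(b"ssh-rsa AAAA...fake-key..."))  # True: not blacklisted
```

A blacklist only helps for keys you already know are bad; the harder problem, as the next paragraph notes, is what servers should do about keys already deployed.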
But then what? It’s not like servers can just reject these keys, or it’ll lock people out. And it’s not clear that an adversary even has your public key for SSH. And you can’t crack the key if you don’t have the public half. Maybe a Continue reading
The private-equity firm will purchase Gigamon at a premium of 21 percent.
It must be tough for the hyperscalers that are expanding into public cloud – and for the public cloud builders that also use their datacenters to run their own businesses – to decide whether to hoard all of the new technologies they can get their hands on for their own benefit, or to make money selling that capacity to others.
For any new, and usually constrained, kind of capacity, such as shiny new “Skylake” Xeon SP processors from Intel or “Volta” Tesla GPU accelerators from Nvidia, it has to be a hard call for Google, Amazon, Microsoft, Baidu, Tencent, and Alibaba to …
AWS First Up With Volta GPUs In The Cloud was written by Timothy Prickett Morgan at The Next Platform.
It needed to tease apart state versus compute.