In this SDxCentral eBrief, we look at the types of security threats that are becoming more prevalent and examine some of the latest techniques and tools that enterprises are employing to make sure that their business assets in the cloud are secure.

Last month the Linux Foundation announced the results of the 2018 Open Container Initiative (OCI) Technical Oversight Board (TOB) election. Members of the TOB then voted to elect our very own Michael Crosby as the new Chairman. The result of the election should not come as a surprise to anyone in the community, given Michael’s extensive contributions to the container ecosystem.
Back in February 2014, Michael led the development of libcontainer, a Go library developed to access the kernel’s container APIs directly, without any other dependencies. If you look at the first commit of libcontainer, you’ll see that the JSON spec is very similar to the latest version of the 1.0 runtime specification.
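To give a feel for what that specification looks like today, here is a minimal sketch of a container configuration in the style of the OCI runtime spec's config.json. The field names (ociVersion, process, root) come from the published 1.0 specification; the values are illustrative only, and this is not the libcontainer commit's original format.

```python
import json

# Minimal, illustrative container configuration in the style of the
# OCI runtime specification (config.json). Field names follow the
# published 1.0 spec; values are placeholders, not a real container.
config = {
    "ociVersion": "1.0.0",
    "process": {
        "terminal": False,
        "user": {"uid": 0, "gid": 0},
        "args": ["sh"],          # command run inside the container
        "cwd": "/",
    },
    "root": {"path": "rootfs", "readonly": True},
    "hostname": "demo",
}

print(json.dumps(config, indent=2))
```

A runtime implementing the spec reads a file like this from the bundle directory and creates the container accordingly.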
In the interview below, we take a closer look at Michael’s contributions to OCI, his vision for the future and how this benefits all Docker users.
I think that it is important to be part of the TOB to ensure that the specifications that have been created are generally useful and not specific to any one use case. I also feel it is important to ensure that the specifications are stable so that Continue reading

A deep, comprehensive review of BPF
Encryption is an important technical building block for Internet trust. It secures our infrastructure, enables e-commerce, ensures the confidentiality of our data and communications, and much more. Yet, because bad actors can also use encryption to hide their activities, it can present challenges for law enforcement.
How, or even if, law enforcement should gain access to encrypted content has remained a divisive issue for the last twenty years. Yet, even as encryption tools have grown in variety and use, the public debate has become over-simplified into a battle between those for and against encryption. That public debate often fails to address the nuances of the digital-communications and data-storage landscape, or how it has evolved. With both sides largely talking at each other, rather than listening to one another, there has been little headway towards a solution, or set of solutions, that is acceptable to all.
In October of 2017, the Internet Society and Chatham House convened an experts roundtable under the Chatham House Rule to deconstruct the encryption debate. They explored ways to bridge two important societal objectives: the security of infrastructure, devices, data, and communications; and the needs of law enforcement. The roundtable brought together a diverse set of Continue reading
Thanks to all who joined us for the Dell EMC webinar, Putting NFV Into Production with Ease – A Service Provider Perspective.
New application architectures like microservices and containers will drive new network architectures to enable automation.
Last year Cisco announced that it would revise its certifications more often and in smaller increments, instead of relying solely on major revisions, which struggled to keep pace with the industry.
This is exactly what they are now doing with the CCIE Datacenter certification, which is being updated from version 2.0 to 2.1.
The full list of changes can be seen in this link.
Some highlights of the change below:
It is clear that ACI and cloud are important going forward, and some older technologies had to be removed to make room for the new additions. Seems like a good update to me. I’m happy to see these minor revisions coming in instead of the major ones, which usually took place only every four years or so.
The post CCIE Datacenter Updated to Version 2.1 appeared first on Daniels Networking Blog.
The network automation evangelists love to tell you that automation is more than just device configuration management. They’re absolutely right… but it’s nonetheless amazing how much good you could do with simple tools solving simple problems.
Here’s what I got from Nicky Davey:
Read more ...
Watching for software inefficiencies with Witch, Wen et al., ASPLOS’18
(The link above is to the ACM Digital Library, if you don’t have membership you should still be able to access the paper pdf by following the link from The Morning Paper blog post directly.)
Inefficiencies abound in complex, layered software.
These inefficiencies can arise during design (poor choice of algorithm), implementation, or translation (e.g., compiler optimisations or lack thereof). At the level of the hardware, inefficiencies involving the memory subsystem are some of the most costly…
Repeated initialization, register spill and restore on hot paths, lack of inlining hot functions, missed optimization opportunities due to aliasing, computing and storing already computed or sparingly changing values, and contention and false sharing (in multi-threaded codes), are some of the common prodigal uses of the memory subsystem.
Coarse-grained profilers (e.g., gprof) have comparatively little overhead and can detect hotspots, but fail to distinguish between efficient and inefficient resource usage. Fine-grained profilers (e.g., DeadSpy) can detect inefficiencies, but typically introduce high overheads (10-80x slowdown and 6-100x extra memory). These high overheads prevent such tools from being widely used. Witch is a fine-grained inefficiency detection Continue reading
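One of the patterns these fine-grained tools flag is the dead store: a write to a memory location that is overwritten before it is ever read. The toy sketch below detects dead stores in a simplified trace of (operation, address) events; the trace format is a simplifying assumption for illustration, not the actual input of DeadSpy or Witch, which observe real memory accesses via instrumentation or hardware debug registers.

```python
# Toy dead-store detector over a simplified memory-access trace.
# A "dead store" is a write that is overwritten by a later write
# with no intervening read -- one of the wasteful memory-subsystem
# patterns mentioned above (e.g., repeated initialization).

def find_dead_stores(trace):
    last_write = {}  # address -> index of most recent not-yet-read write
    dead = []
    for i, (op, addr) in enumerate(trace):
        if op == "W":
            if addr in last_write:
                dead.append(last_write[addr])  # overwritten before any read
            last_write[addr] = i
        elif op == "R":
            last_write.pop(addr, None)  # the pending write was actually used
    return dead

trace = [("W", 0x10), ("W", 0x10),               # first write is dead
         ("W", 0x20), ("R", 0x20), ("W", 0x20)]  # read in between: not dead
print(find_dead_stores(trace))  # -> [0]
```

Doing this exhaustively for every access is what makes instrumentation-based tools so slow; Witch's contribution is getting similar insight from cheap hardware-assisted sampling.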
Boston City Hospital and Boston University Medical Center Hospital merged in 1996 to form Boston Medical Center (BMC). This 497-bed teaching hospital in the South End of Boston provides primary and critical care to a diverse population and houses the largest Level 1 trauma center in New England.
As a 24-hour hub for surgeries and life-sustaining medical care, BMC relies heavily on technology to support all operations, from appointment scheduling to vital health monitoring and imaging systems. Boston Medical Center has standardized on vSphere as the virtualization platform for its data centers. With its server infrastructure almost 90% virtualized, BMC uses VMware vCloud Suite, Site Recovery Manager, and vRealize Operations Manager, and has recently added NSX to better secure its Epic Electronic Medical Records platform.
In 2015, BMC implemented the Dell DRIVE system, including VMware, to consolidate and digitize medical records storage and delivery on Epic. While the Epic records must be constantly accessible to health care providers, who require immediate access to essential patient information throughout the hospital system, those same records must also be protected from intrusion or misuse. According to David Bass, SDDC Engineer at Boston Medical Center, “The type of data that Continue reading
Background:
One of the most widely used protocols for authentication of user connections is PPPoE (Point-to-Point Protocol over Ethernet). Traditionally, PPPoE was used in DSL deployments, but it became one of the most widely adopted forms of customer device authentication in many networks. Often paired with an AAA system such as RADIUS, PPPoE owes much of its appeal to the ability to authenticate, authorize, and account for customer connections.
The protocol itself resides at the data link layer (OSI Layer 2) and provides control mechanisms between the connection endpoints. Within this process lie several other moving parts; if you would like to read more, this wiki page explains PPPoE rather well (https://en.wikipedia.org/wiki/Point-to-Point_Protocol_over_Ethernet). For the purpose of this article, though, I will be sticking to one very specific problem that arises: how to build redundancy when using PPPoE.
PPPoE is a layer 2 connection protocol widely used in service provider networks. Connections initiated from a client terminate on what is known as a BRAS (Broadband Remote Access Server), or Access Concentrator (AC) hereafter. The function of the AC is to negotiate the link parameters between itself and the client and Continue reading
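For context on how the client finds an AC in the first place: the PPPoE discovery stage (RFC 2516) begins with the client broadcasting a PADI packet, to which any available AC answers with a PADO. The sketch below builds the discovery-stage payload of a PADI; Ethernet framing (EtherType 0x8863) and the subsequent PADR/PADS exchange are omitted, so this is illustrative, not a working client.

```python
import struct

# Build the PPPoE discovery-stage payload for a PADI packet (RFC 2516).
# Header: version=1 and type=1 packed into one byte (0x11), a one-byte
# code (PADI = 0x09), a two-byte session id (0 during discovery), and
# a two-byte payload length, followed by TLV-style tags.

PADI_CODE = 0x09
TAG_SERVICE_NAME = 0x0101  # a zero-length Service-Name means "any service"

def build_padi(service_name=b""):
    tag = struct.pack("!HH", TAG_SERVICE_NAME, len(service_name)) + service_name
    header = struct.pack("!BBHH", 0x11, PADI_CODE, 0x0000, len(tag))
    return header + tag

pkt = build_padi()
print(pkt.hex())  # -> 11090000000401010000
```

The redundancy problem the article is building toward arises exactly here: whichever AC answers the broadcast first typically wins the session, so failover between concentrators needs careful design.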