Automated localization for unreproducible builds

Automated localization for unreproducible builds – Ren et al., ICSE’18

Reproducible builds are an important component of integrity in the software supply chain. Attacks against package repositories and build environments may compromise binaries and produce packages with backdoors (see this report for a recent prominent example of compromised packages on DockerHub). If the same source files always lead to the same binary packages, then an infected binary can be much more easily detected. Unfortunately, reproducible builds have not traditionally been the norm. Non-determinism creeping into build processes means that rebuilding an application from the exact same source, even within a secure build environment, can often lead to a different binary.
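
To make that concrete, here is a minimal sketch (my own illustration, not taken from the paper) of how a single embedded build timestamp breaks bit-for-bit reproducibility, and how pinning it via the Reproducible Builds project’s SOURCE_DATE_EPOCH convention restores determinism. The toy build() function is hypothetical; real build tools embed timestamps in much the same way by default.

```python
import hashlib
import os
import time


def build(source: str) -> bytes:
    """Toy 'build': bundle the source with a build timestamp, as many
    real build tools do by default (illustrative only)."""
    # Respect SOURCE_DATE_EPOCH (the Reproducible Builds convention) if set,
    # otherwise fall back to the current wall-clock time.
    timestamp = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
    return f"built-at:{timestamp}\n{source}".encode()


source = "print('hello')"

# Two builds a second apart embed different timestamps -> different hashes.
a = hashlib.sha256(build(source)).hexdigest()
time.sleep(1)
b = hashlib.sha256(build(source)).hexdigest()
print("without SOURCE_DATE_EPOCH:", a == b)   # False: not reproducible

# Pinning the timestamp makes the output deterministic again.
os.environ["SOURCE_DATE_EPOCH"] = "1530000000"
a = hashlib.sha256(build(source)).hexdigest()
b = hashlib.sha256(build(source)).hexdigest()
print("with SOURCE_DATE_EPOCH:   ", a == b)   # True: bit-for-bit identical
```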

Given these benefits, many open-source projects have begun validating that their builds are reproducible. These include GNU/Linux distributions such as Debian and Guix, as well as software systems like Bitcoin.

If you have a non-reproducible build, finding out why can be non-trivial. It takes time and a lot of effort to hunt down and eradicate the causes. For example, Debian unstable for AMD64 still had 2,342 packages with non-reproducible builds as of August 2017. (The number today as I’m writing this is 2,826.) You can see a stubbornly persistent…
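
A natural first step toward localizing the cause, and roughly what tools like diffoscope automate far more thoroughly, is to build twice and report which output files differ. The sketch below is illustrative only: the script name, paths, and output format are my own, and real tooling also diffs inside archives, object files, and metadata rather than stopping at whole-file hashes.

```python
import hashlib
import sys
from pathlib import Path


def sha256(path: Path) -> str:
    """Hash one file in streaming fashion."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def diff_trees(first: Path, second: Path) -> list[str]:
    """Return relative paths of files that differ between two build outputs."""
    differing = []
    for file_a in first.rglob("*"):
        if not file_a.is_file():
            continue
        rel = file_a.relative_to(first)
        file_b = second / rel
        if not file_b.exists() or sha256(file_a) != sha256(file_b):
            differing.append(str(rel))
    return differing


if __name__ == "__main__":
    # Usage: python diff_builds.py <output-of-build-1> <output-of-build-2>
    mismatches = diff_trees(Path(sys.argv[1]), Path(sys.argv[2]))
    print(f"{len(mismatches)} file(s) differ between the two builds")
    for name in mismatches:
        print(" ", name)
```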

Another 10 Years Later

The evolutionary path of any technology can often take strange and unanticipated turns and twists. At some points simplicity and minimalism can be replaced by complexity and ornamentation, while at other times a dramatic cut-through exposes the core concepts of the technology and removes layers of superfluous additions. The evolution of the Internet appears to be no exception and contains these same forms of unanticipated turns and twists. In thinking about the technology of the Internet over the last ten years, it appears that it’s been a very mixed story about what’s changed and what’s stayed the same.

SharkFest 2018!! Woot woot! So Excited!

On Saturday I leave North Carolina to head to Sunnyvale, California for… (insert drumroll here)… SharkFest!  I am so pumped and excited!  I have wanted to attend SharkFest since 2009 when I first learned about it!  I’m finally going!  Woot woot!

It is not uncommon that I find myself having to explain what SharkFest is… even to diehard Wireshark users and enthusiasts.  So let me take a step back and explain what SharkFest is.

What is SharkFest?

SharkFest™, launched in 2008, is a series of annual educational conferences staged in various parts of the globe and focused on sharing knowledge, experience and best practices among the Wireshark® developer and user communities.

SharkFest attendees hone their skills in the art of packet analysis by attending lecture and lab-based sessions delivered by the most seasoned experts in the industry. Wireshark core code contributors also gather during the conference days to enrich and evolve the tool to maintain its relevance in ensuring the productivity of modern networks.

https://sharkfestus.wireshark.org/about

Teehee.  So basically it is a major Wireshark geek fest!!!!  And I am STOKED!  Who wouldn’t be?  Just look at the classes I’ve…

NSX Cloud Blog Series: Part 1

On the heels of announcing general availability of NSX Cloud on June 5th, we’re pleased to announce that NSX Cloud was selected as Best of Show runner-up in the cloud computing category at Interop Tokyo. The full list of winners is available here. For those unfamiliar, Interop Tokyo is a major event with over 140,000 attendees, and the award requires an hour-long presentation, including a demo and Q&A, to the Interop Committee, so it came with some scrutiny and we’re proud to have received it.

Let’s dive a little deeper into what NSX Cloud is all about. As enterprises make the transition to a hybrid cloud model, new challenges arise in managing it: how to extend enterprise network policies seamlessly to the cloud, how to gain complete operational visibility into traffic flows across your hybrid environment, and how to maintain a consistent security policy across private and public clouds. These are key concerns for network and security administrators as well as cloud architects, and NSX Cloud is designed to address them.

NSX Cloud Model

NSX Cloud, together with NSX Data Center, provides a uniform operational model across public cloud and on-premises…

BiB 046: Arista 7170 Multi-function Programmable Network Switches

Ethan Banks attended a technical webinar held by Arista Networks about its recently announced 7170 series of multi-function programmable network switches. In the webinar, Arista explained what the new 7170 switch line is all about; the central reason this line exists is programmability.

The post BiB 046: Arista 7170 Multi-function Programmable Network Switches appeared first on Packet Pushers.

Details Emerge On Post-K Exascale System With First Prototype

Japanese computer maker Fujitsu has four different processors under development at the same time, aimed at different workloads in the datacenter (five if you count its quantum-inspired Digital Annealer chip). The company has now unveiled some of the details about the future Arm processor, as yet unnamed, that is being created for the Post-K exascale supercomputer at RIKEN, the research and development arm of the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT).

Details Emerge On Post-K Exascale System With First Prototype was written by Timothy Prickett Morgan at The Next Platform.

Microsoft launches undersea, free-cooling data center

A free supply of already-cooled deep-sea water is among the benefits of locating pre-packaged data centers underwater, believes Microsoft, which recently announced the successful launch of a submarine-like data center off the coast of the Orkney Islands in Scotland. The shipping-container-sized, self-contained server room, called Project Natick, submerged earlier this month on a rock shelf 117 feet below the water’s surface, also has the benefit of potentially taking advantage of bargain-basement real estate near population centers: there’s no rent in the open sea. “Project Natick is an out-of-the-box idea to accommodate exponential growth in demand for cloud computing infrastructure near population centers,” John Roach writes on Microsoft’s website.

Disruption from Within: Driving Innovation at Franklin American with Docker EE

When you think of mortgage companies, you think of paperwork. You probably don’t think of them as agile and responsive to customers. But Franklin American Mortgage wanted to disrupt that model. With their investments in innovation, microservices and Docker Enterprise Edition, they’ve been able to quickly create a platform that makes technology core to their success.

Don Bauer, the DevOps lead at Franklin American, is part of an innovation team the mortgage company established last year to challenge the status quo. Franklin American was doing amazing things and transforming their business, all with Docker Enterprise Edition as the foundation.

Don presented at DockerCon 2018 along with Franklin American’s VP of Innovation, Sharon Frazier. They’ve been able to quickly build a DevOps culture around four pillars: Visibility, Simplification, Standardization and Experimentation. Experimentation is key. It lets them fail fast, and fearlessly.

In an interview at DockerCon, Don explained how they use Docker Enterprise Edition to drive innovation.

“Docker has allowed us to fail fearlessly. We can test new things easily and quickly and if they work, awesome. But if they don’t, we didn’t spend weeks or months on it.” – Don Bauer, DevOps Lead 
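
As an illustration of that “fail fearlessly” loop (a sketch of the general pattern, not Franklin American’s actual tooling), a throwaway experiment can be run in a disposable container and cleaned up in seconds. The example below uses the Docker SDK for Python with a placeholder image and command.

```python
# A disposable experiment: run a candidate change in a throwaway container,
# inspect the result, then delete every trace of it.
# Requires the Docker SDK for Python (`pip install docker`) and a running daemon.
import docker

client = docker.from_env()

# The image and command are placeholders for whatever is being trialled.
container = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('experiment result: 42')"],
    detach=True,
)

exit_code = container.wait()["StatusCode"]   # block until the experiment finishes
print(container.logs().decode().strip())     # capture its output
container.remove()                           # nothing left behind either way

if exit_code != 0:
    print("Experiment failed; nothing lost but a few seconds.")
```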

For the company, innovation is about…

More signs the Qualcomm Centriq is in trouble

Last month there were rumors that Qualcomm was looking to exit the data center business and abandon the Centriq processor, an ARM-based 48-core chip designed to take on Intel in the enterprise server market. The news seemed surprising, given Qualcomm had put years of work into the chip and had only just launched it a few months earlier. Now Bloomberg adds further fuel to the fire with a report that the company is preparing to lay off almost 280 employees, most of them in the data center group. Bloomberg got wind of the layoffs via filings with the state governments in North Carolina and California, which require advance notice of significant layoffs.
