Announcing Four NDSS 2018 Workshops on Binary Analysis, IoT, DNS Privacy, and Security

The Internet Society is excited to announce that four workshops will be held in conjunction with the upcoming Network and Distributed System Security (NDSS) Symposium on 18 February 2018 in San Diego, CA. The workshop topics this year are binary analysis, IoT, DNS privacy, and security.

A quick overview of each of the workshops is provided below. Submissions are currently being accepted for emerging research in each of these areas. Watch for the final program details in early January!

The first workshop is a new one this year on Binary Analysis Research (BAR). It explores the reinvigorated field of binary code analysis in light of the proliferation of interconnected embedded devices. In recent years there has been a rush to develop binary analysis frameworks, but this has occurred in a mostly uncoordinated manner, with researchers meeting on an ad-hoc basis or working in obscurity and isolation. As a result, there is little sharing of results or solution reuse among tools. Formalized and properly vetted methods and tools for binary code analysis are essential if the field is to keep pace with the growth of these interconnected embedded devices.
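To ground what a binary analysis framework does at its lowest level, here is a minimal sketch using the Capstone disassembly engine. The library is real and widely used, but its appearance here is my own illustration rather than anything named in the workshop call, and the sample bytes are hand-picked for the example.

```python
# Minimal sketch: disassembling a few raw x86-64 bytes with Capstone
# (pip install capstone). The byte string is a tiny hand-picked function.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# push rbp; mov rbp, rsp; pop rbp; ret
CODE = b"\x55\x48\x89\xe5\x5d\xc3"

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(CODE, 0x1000):
    print(f"0x{insn.address:x}\t{insn.mnemonic}\t{insn.op_str}")
```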

Rough Guide to IETF 100: Internet Infrastructure Resilience

As we approach IETF 100 in Singapore next week, this post in the Rough Guide to IETF 100 has much progress to report in the world of Internet Infrastructure Resilience. After several years of hard work, the last major deliverable of the Secure Inter-Domain Routing (SIDR) WG is done: RFC 8205, the BGPsec Protocol Specification, was published in September 2017 as a standard. BGPsec is an extension to the Border Gateway Protocol (BGP) that provides security for the path of autonomous systems (ASes) through which a BGP update message propagates.

There are seven RFCs in the suite of BGPsec specifications:

  • RFC 8205 (was draft-ietf-sidr-bgpsec-protocol) – BGPsec Protocol Specification
  • RFC 8206 (was draft-ietf-sidr-as-migration) – BGPsec Considerations for Autonomous System (AS) Migration
  • RFC 8207 (was draft-ietf-sidr-bgpsec-ops) – BGPsec Operational Considerations
  • RFC 8208 (was draft-ietf-sidr-bgpsec-algs) – BGPsec Algorithms, Key Formats, and Signature Formats
  • RFC 8209 (was draft-ietf-sidr-bgpsec-pki-profiles) – A Profile for BGPsec Router Certificates, Certificate Revocation Lists, and Certification Requests
  • RFC 8210 (was draft-ietf-sidr-rpki-rtr-rfc6810-bis) – The Resource Public Key Infrastructure (RPKI) to Router Protocol, Version 1
  • RFC 8211 (was draft-ietf-sidr-adverse-actions) – Adverse Actions by a Certification Authority (CA) or Repository Manager in the Resource Public Key Infrastructure (RPKI)
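The core idea behind BGPsec is that each AS on the path signs the update before forwarding it, so a validator can check the whole chain of signatures. The toy Python below sketches that chain using the cryptography library; it is a conceptual illustration only, not the RFC 8205 wire format, and the prefix, ASNs, and helper names are all made up for the example.

```python
# Toy illustration of the BGPsec idea: each AS signs the prefix plus the
# AS path as it looked when that AS forwarded the update; a validator
# re-checks every signature. Not the real RFC 8205 encoding.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec


def sign_segment(private_key, prefix: str, as_path: list) -> bytes:
    data = (prefix + "|" + "-".join(map(str, as_path))).encode()
    return private_key.sign(data, ec.ECDSA(hashes.SHA256()))


def verify_path(prefix: str, as_path: list, sigs: list, pubkeys: list) -> bool:
    for i, (sig, pub) in enumerate(zip(sigs, pubkeys)):
        data = (prefix + "|" + "-".join(map(str, as_path[: i + 1]))).encode()
        try:
            pub.verify(sig, data, ec.ECDSA(hashes.SHA256()))
        except InvalidSignature:
            return False
    return True


# Example: a three-AS path 64500 -> 64501 -> 64502 announcing 192.0.2.0/24.
keys = [ec.generate_private_key(ec.SECP256R1()) for _ in range(3)]
path = [64500, 64501, 64502]
sigs = [sign_segment(keys[i], "192.0.2.0/24", path[: i + 1]) for i in range(3)]
assert verify_path("192.0.2.0/24", path, sigs, [k.public_key() for k in keys])
```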


HPE’s Superdome Gets An SGI NUMAlink Makeover

When Hewlett Packard Enterprise bought supercomputer maker SGI back in August 2016 for $275 million, it had already invested years in creating its own “DragonHawk” chipset to build the big-memory Superdome X systems that were to be the follow-ons to its PA-RISC and Itanium Superdome systems. The Superdome X machines did not support HPE’s own VMS or HP-UX operating systems, but the venerable Tandem NonStop fault-tolerant distributed database platform was put on the road to Intel’s Xeon processors four years ago.

Now, HPE is making another leap, as we suspected it would, and anointing the SGI UV-300 platform as its…

HPE’s Superdome Gets An SGI NUMAlink Makeover was written by Timothy Prickett Morgan at The Next Platform.

Dell EMC Strengthens and Expands All-Flash Midrange Storage, Backing it with Industry-Leading Satisfaction Guarantee

News summary:

  • Dell EMC expands its No. 1 market-leading midrange storage portfolio with new SC All-Flash appliances offering premium performance and data services to help accelerate data center modernization
  • Free software upgrades for Dell EMC Unity customers, including inline data deduplication, synchronous file replication, and the ability to perform online “data-in-place” storage controller upgrades
  • New Future-Proof…

How to Deploy 800 Servers in 8 Hours

This is a liveblog of the session titled “How to deploy 800 nodes in 8 hours automatically”, presented by Tao Chen with T2Cloud (Tencent).

Chen takes a few minutes, as is customary, to provide an overview of his employer, T2cloud, before getting into the core of the session’s content. Chen explains that the need to deploy such a large number of servers was driven largely by the Spring Festival travel rush, an annual event that creates high traffic load for about 40 days.

The “800 servers” count included 3 controller nodes, 117 storage nodes, and 601 compute nodes, along with some additional bare metal nodes supporting Big Data workloads. All these nodes needed to be deployed in 8 hours or less in order to allow enough time for T2cloud’s customer, China Railway Corporation, to test and deploy applications to handle the Spring Festival travel rush.

To help with the deployment, T2cloud developed a “DevOps” platform consisting of six subsystems: CMDB, OS installation, OpenStack deployment, task management, automation testing, and health check/monitoring. Chen doesn’t go into great detail about any of these subsystems, but the slide he shows does give away some information.
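To make the flow concrete, here is a minimal sketch of how such a six-stage pipeline might be chained together. It is illustrative only, not T2cloud’s actual tooling, and every function name is hypothetical.

```python
# Illustrative only: a six-stage deployment pipeline mirroring the
# subsystems named above. All function names are hypothetical.
from typing import Callable, List

def cmdb_sync(nodes: List[str]) -> None:
    print(f"registering {len(nodes)} nodes in the CMDB")

def install_os(nodes: List[str]) -> None:
    print("PXE-booting nodes and installing the base OS")

def deploy_openstack(nodes: List[str]) -> None:
    print("rolling out OpenStack control-plane and compute services")

def task_management(nodes: List[str]) -> None:
    print("scheduling and tracking post-install tasks")

def automation_testing(nodes: List[str]) -> None:
    print("running the automated test suite")

def health_check(nodes: List[str]) -> None:
    print("final health check and monitoring hookup")

PIPELINE: List[Callable[[List[str]], None]] = [
    cmdb_sync, install_os, deploy_openstack,
    task_management, automation_testing, health_check,
]

def deploy(nodes: List[str]) -> None:
    # In practice each stage would run in parallel batches with retries;
    # here the stages simply run in order across the whole inventory.
    for stage in PIPELINE:
        stage(nodes)

deploy([f"node-{i:03d}" for i in range(800)])
```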

Lunduke’s Theory of Computer Mockery — no technology is sacred

“Any sufficiently popular, or important, computer technology will be mercilessly mocked 20 years later.” I call that Lunduke’s Theory of Computer Mockery. (Yes, I named it after myself. Because… why not?) The more important the technology, the more ruthlessly and brutally it will be mocked. It helps if the technology was, itself, a bit flawed when new. But even when a piece of tech is well received initially, 20 years later it will be fully brutalized. Let’s take a look at some examples. Windows 95: Would you use Windows 95 in 2017? Of course not. Would you make fun of it without regard for its feelings? Of course you would.

IPv6 Primer for Deployments

This is a liveblog of the OpenStack Summit Sydney session titled “IPv6 Primer for Deployments”, led by Trent Lloyd from Canonical. IPv6 is a topic with which I know I need to get more familiar, so attending this session seemed like a reasonable approach.

Lloyd starts with some history. IPv4 was released in 1980 and uses 32-bit addresses (a total address space of around 4 billion); as most people know, it still carries the majority of Internet traffic. IPv6 was released in 1998 and uses 128-bit addresses (a theoretical total address space of about 3.4 x 10^38). IPv5 was an experimental protocol, which is why the IETF used IPv6 as the version number for the next production version of the IP protocol.
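As a quick sanity check on those numbers, Python’s standard-library ipaddress module makes the two address spaces easy to compare. This is my own example, not something from the talk.

```python
# Back-of-the-envelope on the IPv4 vs. IPv6 address-space sizes,
# using only the standard-library ipaddress module.
import ipaddress

print(2 ** 32)          # IPv4: 4,294,967,296 addresses (~4 billion)
print(float(2 ** 128))  # IPv6: ~3.4e38 addresses

# A documentation-range IPv6 prefix and one address inside it.
net = ipaddress.ip_network("2001:db8::/64")
print(net.num_addresses)  # 2**64 addresses in a single /64
print(ipaddress.ip_address("2001:db8::1").exploded)
# -> 2001:0db8:0000:0000:0000:0000:0000:0001
```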

Lloyd shows a graph of the depletion of the IPv4 address space, to help attendees better understand the situation with IPv4 address allocation. The next graph Lloyd shows illustrates IPv6 adoption, which, according to Google, is now running around 20% or so. (Lloyd shared that he had naively estimated IPv4 would be deprecated by 2010.) In Australia it’s still pretty difficult to get IPv6 support, according to Lloyd.

Next, Lloyd reviews decimal and…

Fujitsu, NetApp partner for hyperconverged systems

Smaller players in the IT hardware space can often be overlooked because the dominant players cast such a long shadow. So, as someone who roots for the underdog, I enjoy shining a little light on an overlooked bit of news. Fujitsu is not the first name in data center hardware here in the U.S.; its primary place of business is its native Japan. For example, it built the RIKEN supercomputer, one of the 10 fastest in the world. But it has some good hardware offerings, such as its Primergy and Primequest server lines, and now the company has partnered with NetApp to offer converged and hyperconverged systems. Converged and hyperconverged infrastructure (CI/HCI) is a fancy way of saying tightly integrated systems that combine compute, networking, and storage into pre-tested, pre-configured stacks sold as a single turnkey solution, rather than components you buy and assemble yourself.

Battle Scars from OpenStack Deployments

This is the first liveblog from day 2 of the OpenStack Summit in Sydney, Australia. The title of the session is “Battle Scars from OpenStack Deployments.” The speakers are Anupriya Ramraj, Rick Mathot, and Farhad Sayeed (two vendors and an end-user, respectively, if my information is correct). I’m hoping for some useful, practical, real-world information out of this session.

Ramraj starts the session, introducing the speakers and setting some context for the presentation. Ramraj and Mathot are with DXC, a managed services provider. Ramraj starts with a quick review of some of the tough battles in OpenStack deployments:

  • Months to deploy OpenStack at scale
  • Chaos during incidents due to lack of OpenStack skills and knowledge
  • Customers spend lengthy periods working with support to troubleshoot basic issues
  • Applications do not get onboarded to OpenStack
  • Marooned on an earlier version of OpenStack
  • OpenStack skills are hard to recruit and retain

Ramraj recommends using an OpenStack distribution rather than “pure” upstream OpenStack, and recommends using new-ish hardware as opposed to older hardware. Given the last bullet above, the scarcity of OpenStack skills complicates rolling out OpenStack and resolving OpenStack issues. A lack of DevOps skills and a lack of understanding of the OpenStack APIs can also impede the process of porting applications…
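For teams in that position, even a few lines against the official openstacksdk show how approachable the APIs are. This is my own minimal sketch, not anything shown in the session, and it assumes a cloud entry named “mycloud” already exists in clouds.yaml.

```python
# Minimal sketch: exploring an OpenStack cloud with openstacksdk
# (pip install openstacksdk). Assumes "mycloud" is defined in clouds.yaml.
import openstack

conn = openstack.connect(cloud="mycloud")

# List compute instances the way an application-porting team might
# start exploring the environment.
for server in conn.compute.servers():
    print(server.name, server.status)

# List available images.
for image in conn.image.images():
    print(image.name)
```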