Why no cyber 9/11 for 15 years?

This article in The Atlantic asks why there hasn't been a cyber-terrorist attack in the last 15 years, or as it phrases it:
National-security experts have been warning of terrorist cyberattacks for 15 years. Why hasn’t one happened yet?
As a pen-tester who has broken into power grids and found 0days in control center systems, I thought I'd write up some comments.


Instead of asking why one hasn't happened yet, maybe we should instead ask why national-security experts keep warning about them.

One possible answer is that national-security experts are ignorant. I get the sense that "national" security experts have very little expertise in "cyber" security. That's why I include a brief resume at the top of this article: I've actually broken into a power grid and found 0days in critical power grid products (specifically, the ABB implementation of ICCP on AIX -- it's a rather obvious buffer overflow, *cough* ASN.1 *cough*, and I don't know if they ever fixed it).
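To illustrate the general class of bug (not the actual ABB code, which I haven't seen): ASN.1-style decoders read a length field from attacker-controlled data, and a decoder that trusts that length without checking it against the remaining buffer will read, or in C write, out of bounds. A minimal Python sketch of a Tag-Length-Value parser with the check in place:

```python
def parse_tlv(buf: bytes):
    """Parse one Tag-Length-Value record, validating the length field.

    Illustrative only: the check below is exactly what a vulnerable
    ASN.1 decoder omits before copying `length` bytes.
    """
    if len(buf) < 2:
        raise ValueError("truncated header")
    tag, length = buf[0], buf[1]
    # The critical bounds check a vulnerable decoder skips:
    if length > len(buf) - 2:
        raise ValueError("declared length exceeds available data")
    return tag, buf[2:2 + length]

# A well-formed record parses cleanly...
print(parse_tlv(b"\x04\x03abc"))  # (4, b'abc')

# ...while a lying length field is rejected instead of over-read.
try:
    parse_tlv(b"\x04\xffshort")
except ValueError as e:
    print(e)
```

In a memory-unsafe language the same missing check turns into a classic buffer overflow rather than a clean exception, which is why ASN.1 parsers have such a long CVE history.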

Another possibility is that they are fear-mongering in order to support their agenda. That's the problem with "experts": they get their expertise by being employed to achieve some goal. The ones who know most about an issue are simultaneously the Continue reading

The FuzzyLog: a partially ordered shared log

The FuzzyLog: a partially ordered shared log Lockerman et al., OSDI’18

If you want to build a distributed system then having a distributed shared log as an abstraction to build upon — one that gives you an agreed upon total order for all events — is such a big help that it’s practically cheating! (See the “Can’t we all just agree” mini-series of posts for some of the background on consensus).

Services built over a shared log are simple, compact layers that map a high-level API to append/read operations on the shared log, which acts as the source of strong consistency, durability, failure atomicity, and transactional isolation. For example, a shared log version of ZooKeeper needs only about 1K lines of code, an order of magnitude fewer than the original system.

There’s a catch of course. System-wide total orders are expensive to maintain. Sometimes it may be impossible (e.g. in the event of a network partition). But perhaps we don’t always need a total ordering. Oftentimes for example causal consistency is strong enough. FuzzyLog aims to provide the simplicity of a shared log without imposing a total order: it provides partial ordering instead. It’s designed for a world Continue reading

The Internet Society’s Hot Topics at IETF 103

The 103rd meeting of the IETF starts tomorrow in Bangkok, which is the first time an IETF meeting has been held in the city.

The Internet Society’s Internet Technology Team is, as always, highlighting the latest IPv6, DNSSEC, Securing BGP, TLS, and IoT related developments, and from now on we’ll also be covering DNS Privacy and NTP Security. This is discussed in detail in our Rough Guide to IETF 103, but we’ll also be bringing you daily previews of what’s happening as the week progresses.

Below are the sessions that we’ll be covering in the coming week. Note that this post was written in advance, so please check the official IETF 103 agenda for any updates, room changes, or final details.

Monday, 5 November 2018

Tuesday, 6 November 2018

Large-scale network simulations in Kubernetes, Part 1 – Building a CNI plugin

Building virtualised network topologies has been one of the best ways to learn new technologies and to test new designs before implementing them on a production network. There are plenty of tools that can help build arbitrary network topologies, some with an interactive GUI (e.g. GNS3 or EVE-NG/Unetlab) and some “headless”, with text-based configuration files (e.g. vrnetlab or topology-converter). All of these tools work by spinning up multiple instances of virtual devices and interconnecting them according to a user-defined topology.

Problem statement

Most of these tools were primarily designed to work on a single host. This may work well for a relatively small topology but becomes a problem as the number of virtual devices grows. Let’s take Juniper vMX as an example. According to the official hardware requirements page, the smallest vMX instance requires:

  • 2 VMs - one for control and one for data plane
  • 2 vCPUs - one for each of the VMs
  • 8 GB of RAM - 2 GB for the VCP (control plane) and 6 GB for the VFP (forwarding plane)

This does not include the resources consumed by the underlying hypervisor, which can easily eat up another vCPU + 2GB of RAM. It’s easy to imagine how quickly Continue reading
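Working through the arithmetic makes the scaling problem concrete. A quick back-of-the-envelope calculator, using the per-instance minimums above plus the roughly 1 vCPU + 2 GB of hypervisor overhead mentioned (the host sizes below are just hypothetical examples):

```python
def vmx_topology_resources(n_devices, host_vcpus, host_ram_gb):
    """Estimate whether n minimal Juniper vMX instances fit on a host.

    Per the figures in the post: each vMX needs 2 vCPUs and 8 GB RAM,
    and the hypervisor itself eats roughly 1 vCPU and 2 GB on top.
    """
    need_vcpus = n_devices * 2 + 1
    need_ram_gb = n_devices * 8 + 2
    fits = need_vcpus <= host_vcpus and need_ram_gb <= host_ram_gb
    return need_vcpus, need_ram_gb, fits

# Even a fairly beefy 16-core / 64 GB lab host tops out around
# 7 minimal vMX instances:
for n in (4, 7, 8):
    print(n, vmx_topology_resources(n, host_vcpus=16, host_ram_gb=64))
```

So a topology of even a dozen minimal vMX routers already exceeds a single commodity server, which is exactly the motivation for spreading the load across a Kubernetes cluster.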

7 Guiding Principles for Leading Data Center Networks

Whether you’re starting out on a fresh playing field or diving into a mud pool of decades-old complexity, designing and deploying a new or modernized data center is a rewarding endeavor, not just for the engineers and architects, but also for the businesses that reap the benefits of agility, scalability, and performance that come along with it.

And the first step on that road is to talk. The initial conversations with thought leaders, business strategists, and technical architects are the most pivotal in the discovery phase of any large project. It is at this phase that the box is forming, and questions must be asked outside of it to shape its dimensions. To transform the network, you must be prepared to ask challenging questions that drive conversations around open networking, automation, modularity, scalability, segmentation and re-usability. Before vendor selection, it is essential to compile a list of business and technical requirements founded upon a set of guiding principles.

Here are seven to keep in your pocket:
1. The network architecture should use standards-based protocols and services
2. The network should be serviceable without downtime
3. The network architecture should promote automation
4. The network should be consumable
5. Physical boundaries Continue reading

What to expect from Wi-Fi 6 in 2019

Wi-Fi 6 – aka 802.11ax – will begin to make its way into new installations in 2019, bringing with it a host of technological upgrades aimed at simplifying wireless-network problems.

The first and most notable feature of the standard is that it’s designed to operate in today’s increasingly congested radio environments. It supports multi-user, multiple-input, multiple-output (MU-MIMO) technology, meaning that a given access point can handle traffic from up to eight users at the same time and at the same speed. Previous-generation APs still divide their attention and bandwidth among simultaneous users.

Better still is orthogonal frequency division multiple access (OFDMA), a technology borrowed from the licensed, carrier-driven half of the wireless world. It subdivides each of the independent channels available on a given AP by a further factor of four, meaning even less slowdown for APs servicing up to a couple dozen clients at the same time.
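The "couple dozen clients" figure follows directly from the article's flat factor-of-four split. A purely illustrative calculation (real 802.11ax resource-unit sizing is more granular than a flat factor of four, and the channel count below is a hypothetical example):

```python
def ofdma_subchannels(independent_channels, split_factor=4):
    """Clients an AP can service without time-slicing, per the
    article's simplification: OFDMA subdivides each independent
    channel by a further factor of four."""
    return independent_channels * split_factor

# An AP with, say, 6 usable independent channels:
print(ofdma_subchannels(6))  # 24 -- "a couple dozen clients"
```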

Ohio Supercomputer Center Picks CPUs, GPUs, and Liquid Cooling For Pitzer Cluster

The Ohio Supercomputer Center’s mission to supply supercomputing capabilities to educational institutions and companies throughout the state is about to get a significant boost in the form of a powerful and highly efficient cluster based on Dell EMC servers and leveraging liquid-cooling technology from CoolIT.

Ohio Supercomputer Center Picks CPUs, GPUs, and Liquid Cooling For Pitzer Cluster was written by Jeffrey Burt.