
Author Archives: adriancolyer

SafeKeeper: protecting web passwords using trusted execution environments

SafeKeeper: protecting web passwords using trusted execution environments Krawiecka et al., WWW’18

(If you don’t have ACM Digital Library access, the paper can be accessed either by following the link above directly from The Morning Paper blog site, or from the WWW 2018 proceedings page).

Today’s paper is all about password management for password-protected web sites and applications. Even if we assume that passwords are salted and hashed in accordance with best practice (NIST’s June 2017 digital identity guidelines now mandate the use of keyed one-way functions such as CMAC), an adversary that can obtain a copy of the back-end database containing the per-user salts and the hash values can still mount brute-force guessing attacks against individual passwords.
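
To make the brute-force concern concrete, here’s a minimal sketch of standard salted-and-keyed password hashing (background only, not SafeKeeper’s mechanism; the key name and parameters are illustrative assumptions). The crucial point is that if the keyed secret sits anywhere the attacker can reach, database access is enough to guess low-entropy passwords offline:

```python
import hashlib
import hmac
import os

# Illustrative assumption: in a real deployment this key would live outside the
# database (e.g. in an HSM or a trusted execution environment), never beside the hashes.
SERVER_KEY = os.urandom(32)

def store(password: str) -> tuple[bytes, bytes]:
    """Return (salt, tag) for the password database."""
    salt = os.urandom(16)
    tag = hmac.new(SERVER_KEY, salt + password.encode(), hashlib.sha256).digest()
    return salt, tag

def verify(password: str, salt: bytes, tag: bytes) -> bool:
    candidate = hmac.new(SERVER_KEY, salt + password.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(candidate, tag)

salt, tag = store("correct horse battery staple")
assert verify("correct horse battery staple", salt, tag)
```

SafeKeeper’s angle, per the title and the threat model above, is to keep the keyed computation inside a trusted execution environment so that access to the database alone gains the attacker nothing useful to guess against.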

SafeKeeper goes a lot further in its protection of passwords. What really stands out is the threat model. SafeKeeper keeps end user passwords safe even when we assume that an adversary has unrestricted access to the password database. Not only that, the adversary is able to modify the content sent to the user from the web site (including active content such as client-side scripts). And not only that! The adversary is also able to read all…

Semantics and complexity of GraphQL

Semantics and complexity of GraphQL Hartig & Pérez, WWW’18

(If you don’t have ACM Digital Library access, the paper can be accessed either by following the link above directly from The Morning Paper blog site, or from the WWW 2018 proceedings page).

GraphQL has been gathering good momentum since Facebook open sourced it in 2015, so I was very interested to see this paper from Hartig and Pérez exploring its properties.

One of the main advantages (of GraphQL) is its ability to define precisely the data you want, replacing multiple REST requests with a single call…

One of the most interesting questions here is: what happens if you make a public-facing GraphQL-based API (as e.g. GitHub have done), and the data that people ask for turns out to be very expensive to compute in space and time?

Here’s a simple GraphQL query to GitHub asking for the login names of the owners of the first two repositories where ‘danbri’ is an owner.
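
As a rough reconstruction of that query (the exact field names in the paper’s example may differ, and the token below is a placeholder), here’s how it could be sent to GitHub’s GraphQL endpoint from Python:

```python
import requests

TOKEN = "..."  # hypothetical personal access token; GitHub's GraphQL API requires auth

query = """
query {
  user(login: "danbri") {
    repositories(first: 2) {
      nodes {
        owner { login }
      }
    }
  }
}
"""

resp = requests.post(
    "https://api.github.com/graphql",
    json={"query": query},
    headers={"Authorization": f"bearer {TOKEN}"},
)
print(resp.json())
```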

From here there are two directions we can go in to expand the set of results returned: we can increase the breadth by asking for more repositories to be considered (i.e., changing first:2…

Re-coding Black Mirror, Part V

This is the final part of our tour through the papers from the Re-coding Black Mirror workshop exploring future technology scenarios and their social and ethical implications.

(If you don’t have ACM Digital Library access, all of the papers in this workshop can be accessed either by following the links above directly from The Morning Paper blog site, or from the WWW 2018 proceedings page).

Towards trust-based decentralized ad-hoc social networks

Koidl argues that we have a ‘crisis of trust’ in social media, which manifests in filter bubbles, fake news, and echo chambers.

  • “Filter bubbles are the result of engagement-based content filtering. The underlying principle is to show the user content that relates to the content the user has previously engaged on. The result is a content stream that lacks diversification of topics and opinions.”
  • “Echo chambers are the result of content recommendations that are based on interests of friends and peers. This results in a content feed that is strongly biased towards grouped opinion (e.g. Group Think).”
  • “Fake news, and related expressions of the same, such as alternative facts, is…

Re-coding Black Mirror, Part IV

This is part IV of our tour through the papers from the Re-coding Black Mirror workshop exploring future technology scenarios and their social and ethical implications.

(If you don’t have ACM Digital Library access, all of the papers in this workshop can be accessed either by following the links above directly from The Morning Paper blog site, or from the WWW 2018 proceedings page).

Is this the era of misinformation yet? Combining social bots and fake news to deceive the masses

In 2016, the world witnessed the storming of social media by social bots spreading fake news during the US Presidential elections… researchers collected Twitter data over four weeks preceding the final ballot to estimate the magnitude of this phenomenon. Their results showed that social bots were behind 15% of all accounts and produced roughly 19% of all tweets… What would happen if social media were to get so contaminated by fake news that trustworthy information hardly reaches us anymore?

Fake news and hoaxes have been…

Re-coding Black Mirror, Part III

This is part III of our tour through the papers from the Re-coding Black Mirror workshop exploring future technology scenarios and their social and ethical implications.

(If you don’t have ACM Digital Library access, all of the papers in this workshop can be accessed either by following the links above directly from The Morning Paper blog site, or from the WWW 2018 proceedings page).

Shut up and run: the never-ending quest for social fitness

In this paper we explore possible negative drawbacks in the use of wearable sensors, i.e., wearable devices used to detect different kinds of activity, e.g., from step and calories counting to heart rate and sleep monitoring.

The core of the paper consists of three explored scenarios: Alice’s insurance, Bob’s mortgage, and Charlie’s problem.

Alice is looking to buy health insurance, which requires completing a screening process with potential insurers. Company A scanned Alice’s social media, found out that her mother has diabetes, adjusted its risk estimate upwards, and hence offered a costly plan beyond what Alice can afford. Company…

Re-coding Black Mirror, Part II

We’ll be looking at a couple more papers from the Re-coding Black Mirror workshop today:

(If you don’t have ACM Digital Library access, all of the papers in this workshop can be accessed either by following the links above directly from The Morning Paper blog site, or from the WWW 2018 proceedings page).

Pitfalls of affective computing

It’s possible to recognise emotions from a variety of signals including facial expressions, gestures and voices, using wearables or remote sensors, and so on.

In the current paper we envision a future in which such technologies perform with high accuracy and are widespread, so that people’s emotions can typically be seen by others.

Clearly, this could reveal information that people do not wish to disclose. Emotions can be leaked through facial micro-expressions and body language, making concealment very difficult. It could also weaken social skills if it is believed that there is no need to speak or move to convey emotions. “White lies” might become impossible, removing a person’s responsibility to be compassionate. It could also lead to physical harm:

The ability…

Re-coding Black Mirror, Part I

In looking through the WWW’18 proceedings, I came across the co-located ‘Re-coding Black Mirror’ workshop.

Re-coding Black Mirror is a full-day workshop which explores how the widespread adoption of web technologies, principles and practices could lead to potential societal and ethical challenges such as the ones depicted in Black Mirror’s episodes, and how research related to those technologies could help minimise or even prevent the risks of those issues arising.

The workshop has ten short papers exploring either existing episodes or Black Mirror-esque scenarios in which technology can go astray. As food for thought, we’ll be looking at a selection of those papers this week. At the MIT Media Lab, Black Mirror episodes are assigned viewing for new graduate students in the Fluid Interfaces research group.

Today we’ll be looking at:

(If you don’t have ACM Digital Library access, all of the papers in this workshop can be accessed either by following the links above directly from The Morning Paper blog site, or from the WWW 2018 proceedings page).

Both papers pick…

Inaudible voice commands: the long-range attack and defense

Inaudible voice commands: the long-range attack and defense Roy et al., NSDI’18

Although you can’t hear them, I’m sure you heard about the inaudible ultrasound attacks on always-on voice-based systems such as Amazon Echo, Google Home, and Siri. This short video shows a ‘DolphinAttack’ in action:

To remain inaudible, the attack only works from close range (about 5ft). And it can work at up to about 10ft when partially audible. Things would get a whole lot more interesting if we could conduct inaudible attacks over a longer range. For example, getting all phones in a crowded area to start dialling your premium number, or targeting every device in an open plan office, or parking your car on the road and controlling all voice-enabled devices in the area. “Alexa, open my garage door…”. In today’s paper, Roy et al. show us how to significantly extend the range of inaudible voice command attacks. Their experiments are limited by the power of their amplifier, but succeed at up to 25ft (7.6m). Fortunately, the authors also demonstrate how we can construct software-only defences against the attacks.

We test our attack prototype with 984 commands to Amazon Echo and 200 commands to smartphones…

Progressive growing of GANs for improved quality, stability, and variation

Progressive growing of GANs for improved quality, stability, and variation Karras et al., ICLR’18

Let’s play “spot the celebrity”! (Not your usual #themorningpaper fodder I know, but bear with me…)

In each row, one of these is a photo of a real person and the other image is entirely created by a GAN. But which is which?

The man on the left, and the woman on the right, are both figments of a computer’s imagination.

In today’s paper, Karras et al. demonstrate a technique for producing high-resolution (e.g. 1024×1024) realistic looking images using GANs:

The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality.
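
As a rough sketch of the fade-in mechanics (my illustration of the general idea, not the authors’ implementation; the module names are made up), a newly added higher-resolution block is blended with an upsampled copy of the previous stage’s output, with the blend weight ramping from 0 to 1 as training proceeds:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FadeInStage(nn.Module):
    """Blend a freshly added high-resolution block with the old, upsampled output."""
    def __init__(self, old_to_rgb: nn.Module, new_block: nn.Module, new_to_rgb: nn.Module):
        super().__init__()
        self.old_to_rgb = old_to_rgb  # old features -> image at the old resolution
        self.new_block = new_block    # new layers; assumed to upsample features by 2x
        self.new_to_rgb = new_to_rgb  # new features -> image at the new resolution

    def forward(self, features: torch.Tensor, alpha: float) -> torch.Tensor:
        old_img = F.interpolate(self.old_to_rgb(features), scale_factor=2, mode="nearest")
        new_img = self.new_to_rgb(self.new_block(features))
        # Fade the new block in as alpha goes from 0 to 1 during training.
        return alpha * new_img + (1 - alpha) * old_img

# Toy usage, just to exercise the shapes:
stage = FadeInStage(
    old_to_rgb=nn.Conv2d(8, 3, kernel_size=1),
    new_block=nn.Sequential(nn.Upsample(scale_factor=2), nn.Conv2d(8, 8, 3, padding=1)),
    new_to_rgb=nn.Conv2d(8, 3, kernel_size=1),
)
img = stage(torch.randn(1, 8, 16, 16), alpha=0.3)  # shape: (1, 3, 32, 32)
```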

You can find all of the code, links to plenty of generated images, and videos of image interpolation here: https://github.com/tkarras/progressive_growing_of_gans. This six-minute results video really showcases the work in a way that is hard to describe without seeing it. Well worth the time if this topic interests you.

Progression

Recall that in a GAN setup we pitch a…

Photo-realistic single image super-resolution using a generative adversarial network

Photo-realistic single image super-resolution using a generative adversarial network Ledig et al., arXiv’16

Today’s paper choice also addresses an image-to-image translation problem, but here we’re interested in one specific challenge: super-resolution. In super-resolution we take as input a low resolution image like this:

And produce as output an estimation of a higher-resolution up-scaled version:

For the example above, here’s the ground-truth high-resolution image from which the low-res input was initially generated:

Especially challenging, of course, is to recover/generate realistic-looking finer texture details when super-resolving at large upscaling factors. (Look at the detail around the hat band and neckline in the figures above, for example).

In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors.

In a mean-opinion score test, the scores obtained by SRGAN are closer to those of the original high-resolution images than those obtained by any other state-of-the-art method.

Here’s an example of the fine-detail SRGAN can create, even when upscaling by a factor of 4. Note how close it is to the original.
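
The heading below refers to the loss that makes this possible. For reference, the perceptual loss the paper optimises combines a content term (e.g. a VGG feature-space loss) with a small adversarial term; the weighting shown is the one reported in the paper, but check the original for details:

```latex
l^{SR} \;=\; \underbrace{l^{SR}_{X}}_{\text{content loss}} \;+\; \underbrace{10^{-3}\, l^{SR}_{\text{Gen}}}_{\text{adversarial loss}}
```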

Your Loss is my GA(i)N

A…

Image-to-image translation with conditional adversarial networks

Image-to-image translation with conditional adversarial networks Isola et al., CVPR’17

It’s time we looked at some machine learning papers again! Over the next few days I’ve selected a few papers that demonstrate the exciting capabilities being developed around images. I find it simultaneously amazing to see what can be done, and troubling to think about a ‘post-reality’ society in which audio, images, and videos can all be cheaply synthesised to tell any story, with increasing realism. Will our brains really be able to hold the required degree of skepticism? It’s true that we have a saying “Don’t believe everything you hear,” but we also say “It must be true, I’ve seen it with my own eyes…”.

Anyway, back to the research! The common name for the system described in today’s paper is pix2pix. You can find the code and more details online at https://github.com/phillipi/pix2pix. The name ‘pix2pix’ comes from the fact that the network is trained to map from input pictures (images) to output pictures (images), where the output is some translation of the input. Lots of image problems can be formulated this way, and the figure below shows six examples:
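
Alongside those examples, it’s worth keeping in mind the objective pix2pix optimises, as formulated in the paper: a conditional adversarial loss paired with an L1 reconstruction term (notation follows the paper):

```latex
G^{*} \;=\; \arg\min_{G}\,\max_{D}\; \mathcal{L}_{cGAN}(G, D) \;+\; \lambda\, \mathcal{L}_{L1}(G)
\quad\text{where}\quad
\mathcal{L}_{L1}(G) \;=\; \mathbb{E}_{x,y,z}\big[\lVert y - G(x, z) \rVert_{1}\big]
```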

The really fascinating part about pix2pix…

Equality of opportunity in supervised learning

Equality of opportunity in supervised learning Hardt et al., NIPS’16

With thanks to Rob Harrop for highlighting this paper to me.

There is a lot of concern about discrimination and bias entering our machine learning models. Today’s paper choice introduces two notions of fairness, equalised odds and equal opportunity, and shows how to construct predictors that are fair under these criteria. One very appealing feature of the model is that, in the case of uncertainty caused by under-representation in the training data, the cost of less accurate decision making in that demographic is moved from the protected class (who might otherwise, for example, not be offered loans) to the decision maker. I’m going to approach the paper backwards, and start with the case study, as I find a motivating example really helps with the intuition.
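
For reference, the two criteria can be written down for a binary predictor Ŷ, protected attribute A, and true outcome Y (the paper’s definitions, restated):

```latex
\text{Equalised odds:}\quad
\Pr(\hat{Y} = 1 \mid A = a,\, Y = y) \;=\; \Pr(\hat{Y} = 1 \mid A = a',\, Y = y)
\quad \forall\, a, a',\; y \in \{0, 1\}

\text{Equal opportunity:}\quad
\Pr(\hat{Y} = 1 \mid A = a,\, Y = 1) \;=\; \Pr(\hat{Y} = 1 \mid A = a',\, Y = 1)
\quad \forall\, a, a'
```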

Loans, race, and FICO scores

We examine various fairness measures in the context of FICO scores with the protected attribute of race. FICO scores are a proprietary classifier widely used in the United States to predict credit worthiness. Our FICO data is based on a sample of 301,536 TransUnion TransRisk scores from 2003.

We’re interested in comparing scores, the…

Performance analysis of cloud applications

Performance analysis of cloud applications Ardelean et al., NSDI’18

Today’s choice gives us an insight into how Google measure and analyse the performance of large user-facing services such as Gmail (from which most of the data in the paper is taken). It’s a paper in two halves. The first part of the paper demonstrates through an analysis of traffic and load patterns why the only real way to analyse production performance is using live production systems. The second part of the paper shares two techniques that Google use for doing so: coordinated bursty tracing and vertical context injection.
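
As a loose sketch of what ‘coordinated bursty tracing’ suggests (my reading of the general idea, not the paper’s exact scheme; the constants are made up): if every layer and machine decides when to trace as a pure function of wall-clock time, their bursts line up without any explicit coordination protocol:

```python
import time

BURST_PERIOD_S = 64  # hypothetical length of one on/off cycle
BURST_LENGTH_S = 4   # hypothetical duration tracing stays on within each cycle

def tracing_enabled(now=None):
    """Each process calls this independently; because the answer depends only on
    wall-clock time, bursts overlap across layers and machines for free."""
    if now is None:
        now = time.time()
    return (now % BURST_PERIOD_S) < BURST_LENGTH_S
```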

(Un)predictable load

Let’s start out just by considering Gmail requests explicitly generated by users (called ‘user visible requests,’ or UVRs, in the paper). These are requests generated by mail clients due to clicking on messages, sending messages, and background syncing (e.g., IMAP).

You can see a clear diurnal cycle here, with the highest QPS when both North America and Europe are active in the early morning, and lower QPS at weekends. (All charts are rescaled using some unknown factor, to protect Google information).

Request response sizes vary by about 20% over time. Two contributing factors are bulk mail senders, …

Stateless datacenter load-balancing with Beamer

Stateless datacenter load-balancing with Beamer Olteanu et al., NSDI’18

We’ve spent the last couple of days looking at datacenter network infrastructure, but we didn’t touch on the topic of load balancing. For a single TCP connection, you want all of the packets to end up at the same destination. Logically, a load balancer (a.k.a. ‘mux’) needs to keep some state somewhere to remember the mapping.

Existing load balancer solutions can load balance TCP and UDP traffic at datacenter scale at different price points. However, they all keep per-flow state; after a load balancer decides which server should handle a connection, that decision is “remembered” locally and used to handle future packets of the same connection. Keeping per-flow state should ensure that ongoing connections do not break when servers and muxes come or go…

There are two issues with keeping this state though. Firstly, it can sometimes end up incomplete or out of date (especially under periods of rapid network change, such as during scale out and scale in). Secondly, there’s only a finite amount of resource to back that state, which opens the door to denial of service attacks such as SYN flood attacks.
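
To see why simply dropping the state isn’t trivial, consider the naive stateless alternative: hash each connection’s 5-tuple directly over the current server set (a toy sketch, not Beamer’s mechanism). The moment the server set changes, many ongoing connections hash to a different server and break; that is precisely the problem Beamer has to solve without keeping per-flow state.

```python
import hashlib

def pick_server(five_tuple, servers):
    """Naive stateless choice: hash the connection 5-tuple over the live server list."""
    h = int(hashlib.sha256(repr(five_tuple).encode()).hexdigest(), 16)
    return servers[h % len(servers)]

flow = ("10.0.0.7", 51334, "192.0.2.10", 443, "tcp")
print(pick_server(flow, ["s1", "s2", "s3"]))        # lands on some server
print(pick_server(flow, ["s1", "s2", "s3", "s4"]))  # likely a different one: the connection breaks
```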

Beamer is…

Andromeda: performance, isolation, and velocity at scale in cloud network virtualization

Andromeda: performance, isolation, and velocity at scale in cloud network virtualization Dalton et al., NSDI’18

Yesterday we took a look at the Microsoft Azure networking stack; today it’s the turn of the Google Cloud Platform. (It’s a very handy coincidence to have two such experience and system-design reports appearing side by side, so that we can compare them.) Andromeda has similar design goals to AccelNet: performance close to hardware, serviceability, and the flexibility and velocity of a software-based architecture. The Google team solve those challenges in a very different way though, being prepared to make use of host cores (which you’ll recall the Azure team wanted to avoid).

We opted for a high-performance software-based architecture instead of a hardware-only solution like SR-IOV because software enables flexible, high-velocity feature deployment… Andromeda consumes a few percent of the CPU and memory on-host. One physical CPU core is reserved for the Andromeda dataplane… In the future, we plan to increase the dataplane CPU reservation to two physical cores on newer hosts with faster physical NICs and more CPU cores in order to improve VM network throughput.

High-level design

Both the control plane and data plane use a hierarchical structure. The control…

Azure accelerated networking: SmartNICs in the public cloud

Azure accelerated networking: SmartNICs in the public cloud Firestone et al., NSDI’18

We’re still on the ‘beyond CPUs’ theme today, with a great paper from Microsoft detailing their use of FPGAs to accelerate networking in Azure. Microsoft have been doing this since 2015, and hence this paper also serves as a wonderful experience report documenting the thought processes that led to an FPGA-based design, and lessons learned transitioning an all-software team to include hardware components.

There’s another reminder here too of the scale at which cloud vendors operate, which makes doing a project like this viable. The bulk purchase of FPGAs keeps their cost low, and the scale of the project makes the development investment worthwhile.

One question we are often asked is if FPGAs are ready to serve as SmartNICs more broadly outside Microsoft… We’ve observed that necessary tooling, basic IP blocks, and general support have dramatically improved over the last few years. But this would still be a daunting task for a new team… The scale of Azure is large enough to justify the massive development efforts — we achieved a level of performance and efficiency simply not possible with CPUs, and programmability far beyond an ASIC, …

NetChain: Scale-free sub-RTT coordination

NetChain: Scale-free sub-RTT coordination Jin et al., NSDI’18

NetChain won a best paper award at NSDI 2018 earlier this month. By thinking outside of the box (in this case, the box is the chassis containing the server), Jin et al. have demonstrated how to build a coordination service (think Apache ZooKeeper) with incredibly low latency and high throughput. We’re talking 9.7 microseconds for both reads and writes, with scalability on the order of tens of billions of operations per second. Similarly to KV-Direct, which we looked at last year, NetChain achieves this stunning performance by moving the system implementation into the network. Whereas KV-Direct used programmable NICs though, NetChain takes advantage of programmable switches, and can be incrementally deployed in existing datacenters.

We expect a lightning fast coordination system like NetChain can open the door for designing a new generation of distributed systems beyond distributed transactions.

It’s really exciting to watch all of the performance leaps being made by moving compute and storage around (accelerators, taking advantage of storage pockets, e.g. processing-in-memory, non-volatile memory, in-network processing, and so on). The sheer processing power we’ll have at our disposal as all of these become mainstream is staggering to…

SmoothOperator: reducing power fragmentation and improving power utilization in large-scale datacenters

SmoothOperator: reducing power fragmentation and improving power utilization in large-scale datacenters Hsu et al., ASPLOS’18

What do you do when your theory of constraints analysis reveals that power has become your major limiting factor? That is, you can’t add more servers to your existing datacenter(s) without blowing your power budget, and you don’t want to build a new datacenter just for that? In this paper, Hsu et al. analyse power utilisation in Facebook datacenters and find that overall power budget utilisation can be comparatively low, even while peak requirements are at capacity. We can’t easily smooth the workload (that’s driven by business and end-user requirements), but maybe we can do something to smooth the power usage.

Our experiments based on real production workload and power traces show that we are able to host up to 13% more machines in production, without changing the underlying power infrastructure. Utilizing the unleashed power headroom with dynamic reshaping, we achieve up to an estimated total of 15% and 11% throughput improvement for latency-critical service and batch service respectively at the same time, with up to 44% of energy slack reduction.
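
The intuition behind those headline numbers is easy to see with two synthetic load traces whose peaks don’t coincide (a toy illustration with made-up numbers, not Facebook’s traces): the peak of the mixed workload sits well below the sum of the individual peaks, and that gap is the headroom being reclaimed.

```python
import math

hours = range(24)
# Hypothetical diurnal power draw (kW) for two services peaking at different times of day.
service_a = [60 + 40 * math.sin(2 * math.pi * (h - 6) / 24) for h in hours]
service_b = [60 + 40 * math.sin(2 * math.pi * (h - 18) / 24) for h in hours]

peak_separate = max(service_a) + max(service_b)                # provisioning for each peak separately
peak_mixed = max(a + b for a, b in zip(service_a, service_b))  # co-locating them behind one budget

print(f"sum of peaks: {peak_separate:.0f} kW, peak of the mix: {peak_mixed:.0f} kW")
```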

No more headroom and low utilisation…

There’s a maximum safe amount of power…

Skyway: connecting managed heaps in distributed big data systems

Skyway: connecting managed heaps in distributed big data systems Nguyen et al., ASPLOS’18

Yesterday we saw how to make Java objects persistent using NVM-backed heaps with Espresso. One of the drawbacks of using that as a persistence mechanism is that objects are then only stored in the memory of a single node. If only there were some way to create a cluster of JVMs, and efficiently copy objects across remote heaps in the cluster… Meet Skyway!

Skyway is aimed at JVM-based big data systems (think Spark, Flink) that end up spending a lot of their time serializing and deserializing objects to move them around the cluster (e.g., to and from workers – see ‘Making sense of performance in data analytics frameworks’). Java comes with a default serialization mechanism, and there are also many third-party libraries. Kryo is the recommended library for use with Spark.
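
For reference, switching Spark from Java serialization to Kryo is a one-line configuration change; here’s a minimal PySpark sketch (the setting controls serialization on the JVM side):

```python
from pyspark import SparkConf
from pyspark.sql import SparkSession

# Swap the default Java serializer for Kryo.
conf = SparkConf().set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")

spark = SparkSession.builder.config(conf=conf).appName("kryo-example").getOrCreate()
```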

Consider a small Spark cluster (3 worker nodes, each with a 20 GB heap) running a triangle counting algorithm over the LiveJournal graph (about 1.2GB). With both the standard Java serializers and Kryo, serialization and deserialization combined account for a significant portion of the overall execution time (more than 30%).

Where…

Espresso: brewing Java for more non-volatility with non-volatile memory

Espresso: brewing Java for more non-volatility with non-volatile memory Wu et al., ASPLOS’18

What happens when you introduce non-volatile memory (NVM) to the world of Java? In theory, with a heap backed by NVM, shouldn’t we get persistence for free? It’s not quite that straightforward of course, but Espresso gets you pretty close. There are a few things to consider, for example:

  • we probably don’t want all of our objects backed by persistent memory, as it still has higher latency than DRAM
  • we don’t want to make intrusive changes to existing code, and ideally would be able to continue using JPA (but why go through an expensive ORM mapping if we’re not targeting a relational store?)
  • we need to ensure any persistent data structures remain consistent after a crash

Espresso adds a new type of heap, a persistent Java heap (PJH) backed by NVM, and a persistent Java object (PJO) programming abstraction which is backwards compatible with JPA. PJO gives a 3.24x speedup even over JPA backed by H2.

JPA, PCJ, and NVM

JPA is the standard Java Persistence API. Java classes are decorated with persistence annotations describing their mapping to an underlying relational database. It’s an…