Category Archives for "Security"

Fear the reaper: characterization and fast detection of card skimmers

Fear the reaper: characterization and fast detection of card skimmers Scaife et al., USENIX Security 2018

Until I can get my hands on a Skim Reaper I’m not sure I’ll ever trust an ATM or other exposed card reading device (e.g., at garages) again!

Scaife et al. conduct a study of skimming devices found by the NYPD Financial Crimes Task Force over a 16 month period. The bad news is that you and I don’t really have much chance of detecting a deployed card skimming device (most of the folk wisdom about how to do so doesn’t really work). The good news is that the Skim Reaper detection device developed in this research project was able to effectively detect 100% of the devices supplied by the NYPD. That’s great if you happen to have a Skim Reaper handy to test with before using an ATM. The NYPD are now actively using a set of such devices in the field.

Card skimmers and why they’re so hard for end-users to detect

Almost as well-known as (credit and debit) cards themselves is the ease with which fraud can be committed against them. Attackers often acquire card data using skimmers Continue reading

CLKscrew: Another side channel you didn’t know about

Network engineers focus on protocols and software, but somehow all of this work must connect to the hardware on which packets are switched, and data is processed. A big part of the physical side of what networks “do” is power—how it is used, and how it is managed. The availability of power is one of the points driving centralization; power is not universally available at a single price. If cloud is cheaper, it’s probably not because of the infrastructure, but rather because of the power and real estate costs.

A second factor in processing is the amount of heat produced in processing. Data center designers expend a lot of energy in dealing with heat problems. Heat production is directly related to power usage; each increase in power consumption for processing shows up as heat somewhere—heat which must be removed from the equipment and the environment.

It is important, therefore, to optimize power usage. To do this, many processors today have power management interfaces allowing software to control the speed at which a processor runs. For instance, Kevin Myers (who blogs here) recently posted an experiment with pings running while a laptop is plugged in and on battery—

Reply from 2607:f498:4109::867:5309:  Continue reading
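On Linux, these power-management knobs are typically exposed through the cpufreq sysfs interface. A minimal sketch of querying the DVFS (dynamic voltage and frequency scaling) state — the sysfs path below is the standard one, but it may be absent on VMs and some systems, so the function degrades gracefully:

```shell
# Report the frequency-scaling state for a given cpufreq sysfs directory.
cpufreq_report() {
    dir="$1"
    if [ -r "$dir/scaling_governor" ]; then
        # The governor is the policy (e.g. "performance", "powersave")
        # that decides how fast the core is allowed to run.
        echo "governor: $(cat "$dir/scaling_governor")"
        echo "current:  $(cat "$dir/scaling_cur_freq") kHz"
    else
        echo "cpufreq not available"
    fi
}

# On a real Linux box, CPU 0's knobs usually live here:
cpufreq_report /sys/devices/system/cpu/cpu0/cpufreq
```

Writing a governor name or frequency back into the same directory (as root) is exactly the software-controlled speed adjustment described above — and the interface CLKscrew-style attacks abuse.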

Are Containers Replacing Virtual Machines?

With 20,000 partners and attendees converging at VMworld in Las Vegas this week, we often get asked if containers are replacing virtual machines (VMs). Many of our Docker Enterprise customers run their containers on virtualized infrastructure, while others run them on bare metal. Docker gives IT and operators the choice of where to run their applications – in a virtual machine, on bare metal, or in the cloud. In this blog we’ll share a few thoughts on the relationship between VMs and containers.

Containers versus Virtual Machines

Point #1: Containers Are More Agile than VMs

At this stage of container maturity, there is very little doubt that containers give both developers and operators more agility. Containers deploy quickly, deliver immutable infrastructure and solve the age-old “works on my machine” problem. They also replace the traditional patching process, allowing organizations to respond to issues faster and making applications easier to maintain.
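The immutability point can be made concrete with a minimal, purely illustrative Dockerfile (the base image, file names, and entry point are placeholders): the built image is a fixed artifact, and “patching” means building and rolling out a new image rather than mutating a running server.

```dockerfile
# Illustrative only: a small, self-contained image definition.
FROM python:3.7-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application itself; any change here produces a new image.
COPY . .
CMD ["python", "app.py"]
```

Because every change yields a new, versioned image, “works on my machine” disappears: the artifact that ran in dev is byte-for-byte the one deployed to production.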

Point #2: Containers Enable Hybrid and Multi-Cloud Adoption

Once containerized, applications can be deployed on any infrastructure – on virtual machines, on bare metal, and on various public clouds running different hypervisors. Many organizations start with running containers on their virtualized infrastructure and find it easier to then migrate to Continue reading

Debunking Trump’s claim of Google’s SOTU bias

Today, Trump posted this video, which purports to prove that Google promoted all of Obama's "State of the Union" (SotU) speeches but none of his own. In this post, I debunk this claim. The short answer is this: it's not Google's fault but Trump's, for not having a sophisticated social media team.


The evidence still exists at the Internet Archive (a.k.a. the "Wayback Machine"), which archives copies of websites; that is probably how the Trump video was created. We can indeed see that Google promoted Obama's SotU speeches, such as this example of his January 12, 2016 speech:


And indeed, if we check for Trump's January 30, 2018 speech, there's no such promotion on Google's homepage:

But wait a minute, Google claims they did promote it, and there's even a screenshot on Reddit proving Google is telling the truth. Doesn't this disprove Trump?

No, it actually doesn't, at least not yet. It's comparing two different things. In the Obama example, Google promoted the upcoming event hours ahead of time. In the Trump example, they didn't do that. Only once the event went Continue reading

Adaptive Micro-segmentation – Built-in Application Security from the Network to the Workload

A zero trust, or least-privileged, security model has long been held as the best way to secure applications and data. At its core, a zero trust security model is based on having a whitelist of known good behaviors for an environment and enforcing this whitelist. This model is preferable to one that depends on identifying attacks in progress because attack methods are always changing, giving attackers the upper hand and leaving defenders a step behind.
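As a toy sketch of the whitelist idea (not VMware's actual enforcement mechanism), the core logic is just "look the flow up in a list of known-good entries and deny everything else":

```shell
# Toy default-deny check: a flow is allowed only if it appears in the whitelist.
# Whitelist format, one known-good flow per line: <source> <destination> <port>
flow_allowed() {
    whitelist="$1"; src="$2"; dst="$3"; port="$4"
    if grep -qx "$src $dst $port" "$whitelist"; then
        echo "allow"
    else
        echo "deny"   # default deny: anything not whitelisted is blocked
    fi
}
```

Real systems enforce this at distributed control points (hypervisor vSwitch, host firewall) rather than with grep, but the default-deny semantics are the same — and so is the operational challenge: keeping the whitelist accurate as the application changes.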

The problem for IT and InfoSec teams has always been how to operationalize a zero trust model effectively. As applications become increasingly distributed across hybrid environments, and as new application frameworks allow for constant change, the lack of comprehensive application visibility and consistent security control points grows worse, making a zero trust model even harder to achieve.

A modern application is not a piece of software running on a single machine — it’s a distributed system: different pieces of software running on different workloads, networked together. And we have thousands of them, all commingled on a common infrastructure or, more lately, spanning multiple data centers and clouds. Our internal networks have evolved to be relatively flat — a decision designed to facilitate organic growth. But Continue reading

NSX Portfolio Realizing the Virtual Cloud Network for Customers

If you’re already in Las Vegas or heading there, we are excited to welcome you into the Virtual Cloud Network Experience at VMworld US 2018!

First, why is the networking and security business unit at VMware calling this a “Virtual Cloud Network Experience”? Announced May 1, the Virtual Cloud Network is the network model for the digital era. It is also VMware’s vision for the future of networking: empowering customers to connect and protect applications and data, regardless of where they sit – from edge to edge.

At VMworld this year we’re making some announcements that are helping turn the Virtual Cloud Network vision into reality and showcasing customers that have embraced virtual cloud networking.

With that, here’s what’s new:

Public Cloud, Bare Metal, and Containers

NSX is only for VMs, right? Wrong! We’ve added support for native AWS and Azure workloads with NSX Cloud, support for applications running on bare metal servers (no hypervisor!), and increased support for containers (including containers running on bare metal). There’s much to get up to speed on, so check out the can’t-miss 100-level sessions below; plus there are a bunch of 200- and 300-level sessions covering the Continue reading

Provisioning a headless Raspberry Pi

The typical way of installing a fresh Raspberry Pi is to attach power, keyboard, mouse, and an HDMI monitor. This is a pain, especially for the diminutive RPi Zero. This blogpost describes a number of options for headless setup, including Ethernet, Ethernet gadget, WiFi, and serial connection. The examples use a Macbook; maybe I'll get around to a blogpost describing this from Windows.

Burning micro SD card

We are going to edit the SD card before booting, so for completeness, I thought I'd describe the process of burning an SD card.

We are going to download the latest "raspbian" operating system. I download the "lite" version because I'm not using the desktop features. It comes as a compressed .zip file which we need to extract into an .img file. Just double-click on the .zip on Windows or Mac.

The next step is to burn the image to an SD card. On Windows I use Win32DiskImager. On Mac I use the following command-line steps:

$ sudo -s                        # get a root prompt
# mount                          # list mounted volumes to identify the SD card's device
# diskutil unmount /dev/disk2s1  # unmount the card's partition (the device stays attached)
# dd bs=1m if=~/Downloads/2018-06-27-raspbian-stretch-lite.img of=/dev/disk2 conv=sync

First, I need a root prompt. I then use the Continue reading
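After burning, the WiFi and SSH options mentioned above mostly come down to dropping two files onto the card's boot partition before first boot. A sketch, assuming the partition's mount point is passed in (on a Mac it typically appears at /Volumes/boot) and using placeholder WiFi credentials:

```shell
# Pre-configure a freshly burned Raspbian SD card for headless use.
provision_boot() {
    boot="$1"   # mount point of the card's FAT boot partition

    # An empty file named "ssh" tells Raspbian to start the SSH server on boot.
    touch "$boot/ssh"

    # Raspbian copies this file into place on first boot and joins the network.
    cat > "$boot/wpa_supplicant.conf" <<'EOF'
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YOUR-SSID"
    psk="YOUR-PASSPHRASE"
}
EOF
}

# On a Mac, after re-inserting the card:
# provision_boot /Volumes/boot
```

Eject the card, boot the Pi, and it should come up on your WiFi with SSH enabled — no monitor or keyboard required.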

Deploying TLS 1.3

Last week saw the formal publication of the TLS 1.3 specification as RFC 8446. It’s been a long time coming – in fact it’s exactly 10 years since TLS 1.2 was published back in 2008 – but it represents a substantial step forward in making the Internet a more secure and trusted place.

What is TLS and why is it needed?

Transport Layer Security (TLS) is widely used to encrypt data transmitted between Internet hosts, with the most popular use being for secure web browser connections (adding the ‘S’ to HTTP). It is also commonly (although less visibly) used to encrypt data sent to and from mail servers (using STARTTLS with SMTP, IMAP/POP, etc.), but can be used in conjunction with many other Internet protocols (e.g. DNS-over-TLS, FTPS) where secure connections are required. For more information about how TLS works and why you should use it, please see our TLS Basics guide.

The term TLS is often used interchangeably with SSL (Secure Sockets Layer), the Netscape-developed protocol that predates TLS as an IETF standard, and many Certification Authorities (CAs) still market the X.509 certificates used by TLS as ‘SSL certificates’ due to their familiarity with Continue reading
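A quick way to check whether your local OpenSSL build can speak the new version — the -tls1_3 option only appears in OpenSSL 1.1.1 and later, so older builds will report it as unsupported:

```shell
# Check whether the installed openssl knows about TLS 1.3.
tls13_check() {
    if openssl s_client -help 2>&1 | grep -q -- '-tls1_3'; then
        echo "TLS 1.3 supported"
        # To test against a live server, you could then run e.g.:
        # openssl s_client -connect example.com:443 -tls1_3
    else
        echo "TLS 1.3 not supported"
    fi
}

tls13_check
```

If your build is too old, upgrading OpenSSL (or using a distribution that ships 1.1.1+) is the first step toward deploying TLS 1.3.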
