The company acquired NAC vendor Bradford Networks earlier this summer. Today it’s essentially rebranding Bradford’s technology as FortiNAC.
Fear the reaper: characterization and fast detection of card skimmers Scaife et al., USENIX Security 2018
Until I can get my hands on a Skim Reaper I’m not sure I’ll ever trust an ATM or other exposed card reading device (e.g., at garages) again!
Scaife et al. conduct a study of skimming devices found by the NYPD Financial Crimes Task Force over a 16-month period. The bad news is that you and I don’t really have much chance of detecting a deployed card skimming device (most of the folk wisdom about how to do so doesn’t really work). The good news is that the Skim Reaper detection device developed in this research project detected 100% of the devices supplied by the NYPD. That’s great if you happen to have a Skim Reaper handy to test with before using an ATM. The NYPD is now actively using a set of such devices in the field.
Almost as well-known as (credit and debit) cards themselves is the ease with which fraud can be committed against them. Attackers often acquire card data using skimmers…
SDxCentral spoke with Nutanix CEO Dheeraj Pandey immediately after the company’s fourth quarter fiscal 2018 earnings call. Nutanix posted revenue of $303.7 million, up from $252.5 million a year ago.
It's common for hardware to have bugs. It's up to the kernel to provide mitigation.
The startup uses artificial intelligence and automation to detect and respond to security threats and ensure compliance in cloud environments.
Network engineers focus on protocols and software, but somehow all of this work must connect to the hardware on which packets are switched and data is processed. A big part of the physical side of what networks “do” is power—how it is used, and how it is managed. The availability of power is one of the points driving centralization; power is not universally available at a single price. If cloud is cheaper, it’s probably not because of the infrastructure, but rather because of the power and real estate costs.
A second factor in processing is the amount of heat produced in processing. Data center designers expend a lot of energy in dealing with heat problems. Heat production is directly related to power usage; each increase in power consumption for processing shows up as heat somewhere—heat which must be removed from the equipment and the environment.
It is important, therefore, to optimize power usage. To do this, many processors today have power management interfaces allowing software to control the speed at which a processor runs. For instance, Kevin Myers (who blogs here) recently posted an experiment comparing pings run while a laptop is plugged in and while it is on battery—
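On Linux, the power management interfaces mentioned above are exposed through the cpufreq sysfs tree, so you can poke at them yourself; paths and available governors vary by kernel and hardware, so treat these as illustrative commands rather than a universal recipe:

```shell
# Show the governor currently controlling CPU 0's clock speed
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# List the governors this kernel/hardware combination supports
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors

# Watch the actual clock speeds shift as load changes or the laptop
# moves from AC power to battery
watch -n1 "grep MHz /proc/cpuinfo"
```

Switching the governor (e.g., from `powersave` to `performance`) with root privileges is one way to reproduce the kind of latency difference the ping experiment shows.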
Reply from 2607:f498:4109::867:5309: …
With 20,000 partners and attendees converging at VMworld in Las Vegas this week, we often get asked if containers are replacing virtual machines (VMs). Many of our Docker Enterprise customers do run their containers on virtualized infrastructure, while others run them on bare metal. Docker provides IT and operators choice on where to run their applications – in a virtual machine, on bare metal, or in the cloud. In this blog we’ll provide a few thoughts on the relationship between VMs and containers.
At this stage of container maturity, there is very little doubt that containers give both developers and operators more agility. Containers deploy quickly, deliver immutable infrastructure and solve the age-old “works on my machine” problem. They also replace the traditional patching process, allowing organizations to respond to issues faster and making applications easier to maintain.
Once containerized, applications can be deployed on any infrastructure – on virtual machines, on bare metal, and on various public clouds running different hypervisors. Many organizations start with running containers on their virtualized infrastructure and find it easier to then migrate to …
At VMworld and at home this week, all four of the top hyperconverged infrastructure vendors made news with their HCI platforms and partnerships.
This white paper looks at a new breed of modern, web-scale data protection solution – and examines how it makes data protection more manageable, reliable and affordable than legacy approaches.
“Proprietary is not a word in our dictionary,” said Andy Bechtolsheim, founder, chief development officer, and chairman at Arista.
In addition to scooping up a cloud-monitoring startup and developing an edge strategy, VMware CEO Pat Gelsinger took some time to get a new tattoo before VMworld.
The Lavelle Networks SD-WAN software appliance sits within an NFV container in a Microsoft Windows environment for greater control and enhanced network management.
Updates to the hybrid cloud platform include deeper integration with NSX networking and security capabilities and a high-capacity storage option via integration with Amazon Elastic Block Store (EBS).
A zero-trust, or least-privileged, security model has long been held as the best way to secure applications and data. At its core, a zero trust security model is based on having a whitelist of known good behaviors for an environment and enforcing this whitelist. This model is preferable to one that depends on identifying attacks in progress because attack methods are always changing, giving attackers the upper hand and leaving defenders a step behind.
The problem for IT and InfoSec teams has always been operationalizing a zero trust model effectively. As applications become increasingly distributed across hybrid environments, and as new application frameworks allow for constant change, the lack of comprehensive application visibility and consistent security control points grows worse, making a zero trust model even harder to achieve.
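The whitelist-enforcement idea at the heart of zero trust can be sketched in a few lines. This is a toy default-deny policy check, not any vendor’s actual model; the tier names and ports are hypothetical:

```python
# Minimal zero-trust flow whitelist: anything not explicitly allowed is denied.
# The (source, destination, port) tuples below are illustrative only.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier", 8443),   # web servers may call the app tier
    ("app-tier", "db-tier", 5432),    # app servers may query the database
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny: a flow is permitted only if it is on the whitelist."""
    return (src, dst, port) in ALLOWED_FLOWS

# A known-good flow passes; an unexpected lateral move is denied.
print(is_allowed("web-tier", "app-tier", 8443))  # True
print(is_allowed("web-tier", "db-tier", 5432))   # False
```

The hard part in practice is not the check itself but discovering and maintaining the whitelist as applications change — which is exactly the visibility problem described above.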
A modern application is not a piece of software running on a single machine — it’s a distributed system. Different pieces of software running on different workloads, networked together. And we have thousands of them, all commingled on a common infrastructure or, more lately, spanning multiple data centers and clouds. Our internal networks have evolved to be relatively flat — a decision designed to facilitate organic growth. But …
If you’re already in Las Vegas or heading there, we are excited to welcome you into the Virtual Cloud Network Experience at VMworld US 2018!
First, why is the networking and security business unit at VMware calling this a “Virtual Cloud Network Experience”? Announced May 1, the Virtual Cloud Network is the network model for the digital era. It is also the vision of VMware for the future of networking to empower customers to connect and protect applications and data, regardless of where they sit – from edge to edge.
At VMworld this year we’re making some announcements that are helping turn the Virtual Cloud Network vision into reality and showcasing customers that have embraced virtual cloud networking.
With that, here’s what’s new:
Public Cloud, Bare Metal, and Containers
NSX is only for VMs, right? Wrong! We’ve added support for native AWS and Azure workloads with NSX Cloud, support for applications running on bare metal servers (no hypervisor!), and increased support for containers (including containers running on bare metal). There’s much to get up to speed on, so check out the can’t-miss 100-level sessions below, plus a bunch of 200- and 300-level sessions covering the …
Last week saw the formal publication of the TLS 1.3 specification as RFC 8446. It’s been a long time coming – in fact, it’s exactly 10 years since TLS 1.2 was published back in 2008 – but it represents a substantial step forward in making the Internet a more secure and trusted place.
What is TLS and why is it needed?
Transport Layer Security (TLS) is widely used to encrypt data transmitted between Internet hosts, with the most popular use being for secure web browser connections (adding the ‘S’ to HTTP). It is also commonly (although less visibly) used to encrypt data sent to and from mail servers (using STARTTLS with SMTP and IMAP/POP, etc.), but can be used in conjunction with many other Internet protocols (e.g. DNS-over-TLS, FTPS) where secure connections are required. For more information about how TLS works and why you should use it, please see our TLS Basics guide.
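To see TLS from the application side, here is a minimal sketch using Python’s standard-library ssl module (which negotiates TLS 1.3 when built against OpenSSL 1.1.1 or later); the host name in the comment is a placeholder:

```python
import ssl

# A client context with hostname checking and certificate
# verification enabled by default
ctx = ssl.create_default_context()

# Refuse anything older than TLS 1.2; 1.3 is negotiated automatically
# when both endpoints support it
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Wrapping a TCP socket with this context yields the encrypted channel, e.g.:
#   with socket.create_connection(("example.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#           print(tls.version())  # the negotiated protocol, e.g. "TLSv1.3"
print(ctx.minimum_version)
```

The same context works for any TLS-wrapped protocol, which is why mail clients, DNS-over-TLS resolvers, and web clients can all share one well-audited library.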
TLS is often used interchangeably with SSL (Secure Sockets Layer), which was developed by Netscape and predates TLS as an IETF standard, and many Certification Authorities (CAs) still market the X.509 certificates used by TLS as ‘SSL certificates’ due to their familiarity with …
Borrowing from the astronomical meaning, the Goldilocks Zone refers to the space where organizations have the right amount of resources and combination of components to support network life.
The company’s open source blockchain-based security platform is working with enterprises to secure their IoT data and devices.