The University of Pittsburgh is one of the oldest institutions in the country, dating to 1787 when it was founded as the Pittsburgh Academy. The University has produced the pioneers of the MRI and the television, winners of Nobel and Pulitzer prizes, Super Bowl and NBA champions, and best-selling authors.
As with many businesses today, the University continues to digitize its organization to keep up with the demands of over 35,000 students, 5,000 faculty, and 7,000 staff across four campuses. While the first thing that comes to mind may be core facilities such as classrooms, this also means keeping up with evolving technology on the business side, such as point-of-sale (POS) systems. When a student uses their student ID to buy a coffee before studying or a branded sweatshirt for mom, those transactions must be facilitated and secured by the University.
What does it mean to secure financial transactions? For one, just as with a retail store operation, the University must achieve PCI compliance to facilitate financial transactions for its customers. What does this mean? Among other tasks, PCI demands that the data used by these systems be completely isolated from other IT operations. However, locking everything down Continue reading
Welcome to Technology Short Take #81! I have another collection of links, articles, and thoughts about key data center technologies, and hopefully I’ve managed to include something here that will prove useful or thought-provoking. Enjoy!
Docker Store is the place to discover and procure trusted, enterprise-ready containerized software – free, open source and commercial.
Docker Store is the evolution of Docker Hub, which is the world’s largest container registry, catering to millions of users. As of March 1, 2017, we crossed 11 billion pulls from the public registry! Docker Store leverages the public registry’s massive user base and ensures our customers – developers, operators, and enterprise Docker users – get what they ask for. The Official Images program was developed to create a set of curated and trusted content that developers could use as a foundation for building containerized software. Building on those lessons learned and best practices, Docker recently launched a certification program that enables ISVs around the world to take advantage of the Store, offering great software packaged to operate optimally on the Docker platform.
The Docker Store is designed to bring Docker users and ecosystem partners together with
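As a quick illustration of consuming this curated content, here is a minimal sketch using the Docker SDK for Python; it assumes a local Docker Engine and the `docker` package are available, and the image name and tag are just examples of an Official Image.

```python
# Minimal sketch: pull an Official Image and run a throwaway container from it.
# Assumes a local Docker Engine and the "docker" Python SDK (pip install docker).
import docker

client = docker.from_env()

# Official Images such as "python" are curated, trusted base content;
# the tag here is only an example.
image = client.images.pull("python", tag="3.6-alpine")
print("Pulled:", image.tags)

# Run a short-lived container from the pulled image and capture its output.
output = client.containers.run(
    "python:3.6-alpine",
    ["python", "-c", "print('hello from an Official Image')"],
    remove=True,
)
print(output.decode().strip())
```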
Organizations across industries are embarking on their journey of Digital Transformation. Time to market has become crucial to the bottom line, and companies need to accelerate their application and services delivery to go from concept to production in record time.
Organizations are embracing containers, microservices-based architectures, and Continuous Integration and Continuous Delivery tools as they try to fundamentally change how they develop, deploy, and deliver applications.
However, moving from monolith application architectures to microservices-based ones is no ordinary feat.
Many of these organizations leverage Pivotal’s expertise to deliver a modern application development environment. Pivotal’s flagship cloud-native platform, Pivotal Cloud Foundry, provides a modern app-centric environment that lets developers focus on delivering applications with speed and frequency. Find out more about Pivotal Cloud Foundry and the now generally available Pivotal Cloud Foundry 1.10.
Pivotal Cloud Foundry abstracts the underlying IaaS layer so that developers get a modern self-service application development environment without worrying about the infrastructure. The BOSH vSphere CPI plugin does a good job of consuming pre-created networks.
However, the truth is – “someone” always needs to do some provisioning – networks need to be carved out, load-balancers need to be configured, NAT rules need to be defined, reachability needs to Continue reading
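To make the “pre-created networks” point concrete, here is a hedged sketch of what a BOSH cloud-config network entry for the vSphere CPI might look like, rendered from Python; the port-group name, subnet, DNS, and reserved ranges are purely illustrative, not taken from any real deployment.

```python
# Hedged sketch: render a BOSH cloud-config fragment that consumes a
# pre-created vSphere port group. All names and addresses are illustrative.
import yaml  # pip install pyyaml

cloud_config = {
    "networks": [
        {
            "name": "pcf-services",
            "type": "manual",
            "subnets": [
                {
                    "range": "10.0.16.0/24",           # subnet carved out in advance
                    "gateway": "10.0.16.1",
                    "dns": ["10.0.0.2"],
                    "reserved": ["10.0.16.1-10.0.16.10"],
                    "cloud_properties": {
                        # The vSphere CPI simply points at a port group that
                        # "someone" has already provisioned on the network side.
                        "name": "pcf-services-portgroup"
                    },
                }
            ],
        }
    ]
}

print(yaml.safe_dump(cloud_config, default_flow_style=False))
```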
The new version gives developers and operators a shared set of facts.
Portworx saves and stores container data.
The use of TLS interception by outbound proxy servers is causing serious problems in updating the TLS standard to Version 1.3.
At the same time, middlebox and antivirus products increasingly intercept (i.e., terminate and re-initiate) HTTPS connections in an attempt to detect and block malicious content that uses the protocol to avoid inspection. Previous work has found that some specific HTTPS interception products dramatically reduce connection security; however, the broader security impact of such interception remains unclear. In this paper, we conduct the first comprehensive study of HTTPS interception in the wild, quantifying both its prevalence in traffic to major services and its effects on real-world security.
This is the same problem that middleboxes cause anywhere on the Internet – firewalls, NAT gateways, inspection, QoS, DPI. Because these complex devices are rarely updated and hard to maintain, they break new protocols. IPv6 rollout has been slowed by difficult upgrades, and the same problem is now happening with TLS. It's undesirable to fall back to older TLS versions that “work” but are insecure.
The business need for proxy servers or protocol interception covers only a small range of activities.
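One way to spot interception from the client side is to look at who actually issued the certificate you receive, since an intercepting proxy re-signs the connection with its own CA. Below is a rough heuristic sketch using Python's standard ssl module; the allow-list of issuer organizations and the target host are hypothetical and would need to reflect your own environment.

```python
# Rough heuristic: if the certificate presented for a well-known site was not
# issued by a public CA you expect, an intercepting proxy may be re-signing it.
# Note: if the proxy's CA is not in the local trust store, the handshake will
# simply fail with a verification error instead.
import socket
import ssl

# Hypothetical allow-list; a real check would be far more careful.
EXPECTED_ISSUER_ORGS = {"DigiCert Inc", "Let's Encrypt", "GlobalSign"}

def issuer_org(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # cert["issuer"] is a tuple of RDN tuples; flatten it into a dict.
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return issuer.get("organizationName", "unknown")

if __name__ == "__main__":
    org = issuer_org("www.example.com")
    verdict = "looks direct" if org in EXPECTED_ISSUER_ORGS else "possibly intercepted"
    print(f"Issuer organization: {org} ({verdict})")
```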
CloudLens is available on AWS, and Azure is coming soon.
“Micro-Segmentation provides a way to build a zero-trust network – where all networks, perimeters, and applications are inherently untrusted,” declared Forrester Consulting in 2015 in its white paper Leveraging Micro-Segmentation to Build a Zero Trust Network. The last mile in creating a truly zero-trust network means trusting neither individual applications nor the tiers within an application (Figure 1). To complete that last mile, network, security, and risk professionals are increasingly looking for tools to understand application communication patterns and to provide access controls around them. With version 6.3.0, NSX has unveiled two new tools, Application Rule Manager (ARM) and Endpoint Monitoring (EM), to help professionals understand application patterns.
Figure 1: Zero-Trust Model using NSX
From Theory to Practice
Micro-segmenting each application requires an understanding of its communication patterns. Users should allow only the flows the application requires and, to accomplish zero trust, close all unwanted flows and ports. Figure 2 shows a sample, practical firewall policy model to achieve this. In this model, ARM/EM provides the application patterns and a one-click conversion of those patterns into distributed firewall rules, yielding both inter- and intra-application rules.
Figure 2: Firewall Policy Model
Generating Distributed Firewall Rules Rapidly
Any application in the datacenter can be Continue reading
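As a rough illustration of the policy model in Figure 2 – allow the observed flows, then deny everything else – here is a hedged Python sketch that turns a list of observed flows into candidate distributed firewall rules. The flow records and the rule structure are simplified stand-ins for what ARM actually produces, not the NSX data model.

```python
# Hedged sketch: convert observed application flows into candidate allow rules
# plus a default deny, mirroring the "allow what the app needs, block the rest"
# policy model. Flow records and the rule structure are simplified stand-ins.
from collections import namedtuple

Flow = namedtuple("Flow", "src dst port proto")

observed_flows = [                      # e.g. as reported by a monitoring session
    Flow("web-01", "app-01", 8443, "TCP"),
    Flow("app-01", "db-01", 3306, "TCP"),
]

def flows_to_rules(flows, app_name):
    rules = []
    for i, f in enumerate(flows, start=1):
        rules.append({
            "name": f"{app_name}-allow-{i}",
            "source": f.src,
            "destination": f.dst,
            "service": f"{f.proto}/{f.port}",
            "action": "allow",
        })
    # Zero trust: anything not explicitly observed and approved is blocked.
    rules.append({
        "name": f"{app_name}-default-deny",
        "source": "any",
        "destination": f"{app_name}-security-group",
        "service": "any",
        "action": "block",
    })
    return rules

for rule in flows_to_rules(observed_flows, "payroll-app"):
    print(rule)
```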
This post originally appears as part of a series of VMware NSX in Healthcare blogs on Geoff Wilmington’s blog, vWilmo. To read more about VMware NSX and its applications in healthcare, check out Geoff’s blog series.
Originally this series on micro-segmentation was only going to cover Log Insight, vRealize Network Insight (vRNI), and VMware NSX. With the release of VMware NSX 6.3, there is a new toolset within NSX that can be leveraged for quick micro-segmentation planning. The Application Rule Manager (ARM) within NSX provides a new way to help create security rulesets quickly for new or existing applications, at a larger scale than Log Insight but a smaller scale than vRNI. With that in mind, we’re going to take the previous post using Log Insight and perform the same procedures with ARM in NSX to create our rulesets using the same basic methodologies.
The Application Rule Manager in VMware NSX leverages real-time flow information to discover the communications into, out of, and between the workloads of an application, so a security model can be built around the application. ARM can monitor up to 30 VMs in one session and can have five sessions running at a time. Continue reading
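Given those stated limits (up to 30 VMs per session, five sessions at a time), a hedged sketch of batching an application's VMs into monitoring sessions might look like the following; the VM names and counts are purely illustrative.

```python
# Hedged sketch: split an application's VMs into ARM-sized monitoring batches,
# respecting the documented limits of 30 VMs per session and 5 concurrent sessions.
MAX_VMS_PER_SESSION = 30
MAX_CONCURRENT_SESSIONS = 5

def plan_sessions(vms):
    batches = [vms[i:i + MAX_VMS_PER_SESSION]
               for i in range(0, len(vms), MAX_VMS_PER_SESSION)]
    if len(batches) > MAX_CONCURRENT_SESSIONS:
        # Larger estates have to be monitored in waves (or handed to vRNI).
        print(f"{len(batches)} sessions needed; run at most "
              f"{MAX_CONCURRENT_SESSIONS} at a time.")
    return batches

vms = [f"app-vm-{n:03d}" for n in range(1, 76)]   # illustrative 75-VM application
for idx, batch in enumerate(plan_sessions(vms), start=1):
    print(f"Session {idx}: {len(batch)} VMs")
```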
Report derived from the annual Global State of Information Security® Survey performed by PWC.
Good for managers and executives who can't speak technology, introducing them to the ideas around cloud-based data analytics and how it's taking over the security infrastructure market.
When it comes to threat intelligence and information sharing, the cloud platform provides a centralized foundation for constructing, integrating and accessing a modern threat program.
See what I mean? Obvious stuff.
This graphic stood out because it highlights the lack of real IT security tools in place.
Few capabilities are more fundamental to proactive threat intelligence than real-time monitoring and analytics. This year, more than half (51%) of respondents say they actively monitor and analyze threat intelligence to help detect risks and incidents.
Wowser. More than half, that’s real progress!!!
It's a good read for about 10 minutes and worth passing up to the higher layers. They might learn something.
Link: Key Findings from The Global State of Information Security® Survey 2017 – PWC http://www.pwc.com/gx/en/issues/cyber-security/information-security-survey/assets/gsiss-report-cybersecurity-privacy-possibilities.pdf
The post Research: Toward new possibilities in threat management – PWC appeared first on EtherealMind.
SD-WAN, security, and monitoring are components of SDx+M.
The new features are meant to be attractive to larger customers.
But fixed network infections declined.
Niki Vonderwell kindly invited me to Troopers 2017 and I decided to talk about security and reliability aspects of network automation.
The presentation is available on my web site, and I’ll post the link to the video when they upload it. An extended version of the presentation will eventually become part of Network Automation Use Cases webinar.
Many years ago, when multicast was still a “thing” everyone expected to spread throughout the Internet itself, a lot of work went into specifying not only IP multicast control planes, but also IP multicast control planes for interdomain use (between autonomous systems). BGP was modified to support IP multicast, for instance, in order to connect IP multicast groups from sender to receiver across the entire ‘net. One of these various efforts was a protocol called the Distance Vector Multicast Routing Protocol, or DVMRP. The general idea behind DVMRP was to extend many of the already well-known mechanisms for signaling IP multicast with interdomain counterparts. Specifically, this meant extending IGMP to operate across provider networks, rather than within a single network.
As you can imagine, one problem with any sort of interdomain effort is troubleshooting – how will an operator be able to troubleshoot problems with interdomain IGMP messages sourced from outside their network? There is no way to log into another provider’s network (some silliness around competition, I would imagine), so something else was needed. Hence the idea of being able to query a router for its connected interfaces, multicast neighbors, and other information was written up in draft-ietf-idmr-dvmrp-v3-11 (which Continue reading
This post is a starting point for anyone who wants to use 802.1X authentication with Aerohive APs and Microsoft NPS. I will provide configuration screen shots for both of Aerohive’s management platforms and for NPS running on Microsoft Windows 2008 Server. It is not intended to be an exhaustive guide, but should be a decent starting point. Every implementation will be different in some respect, and some of these steps may not be the exact manner in which you configure Microsoft NPS. The steps for Aerohive may also be different depending on what you are trying to accomplish. I’ll make sure to note my particular scenario when appropriate.
Versions Used:
HiveManager Classic/HM6/HMOL – 6.8r7a
HiveManager NG – 11.19.99.0 (March 2017)
Microsoft Windows 2008 Server
Assumptions:
Scenario
Company XYZ wants to authenticate Continue reading
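Before wiring up EAP on the APs, it can help to confirm that NPS answers RADIUS at all. Below is a hedged sketch using the pyrad library to send a plain Access-Request to the NPS server; the server address, shared secret, test account, and dictionary file are all assumptions, and NPS would need a network policy permitting PAP for this simple test (the real Aerohive deployment uses 802.1X/EAP).

```python
# Hedged sanity check: send a basic RADIUS Access-Request to the NPS server.
# Assumes pyrad (pip install pyrad), a FreeRADIUS-style "dictionary" file in the
# working directory, and an NPS network policy that allows PAP for this test user.
from pyrad.client import Client
from pyrad.dictionary import Dictionary
from pyrad import packet

NPS_SERVER = "192.0.2.10"        # illustrative NPS address
SHARED_SECRET = b"testing123"    # must match the RADIUS client entry in NPS

client = Client(server=NPS_SERVER, secret=SHARED_SECRET,
                dict=Dictionary("dictionary"))

req = client.CreateAuthPacket(code=packet.AccessRequest,
                              User_Name="xyz\\testuser")
req["User-Password"] = req.PwCrypt("TestPassword1!")

reply = client.SendPacket(req)
if reply.code == packet.AccessAccept:
    print("NPS accepted the test request - RADIUS reachability and policy look OK.")
else:
    print("NPS rejected the request; check the shared secret and network policy.")
```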
Serious and easily exploited flaws in older Cisco IOS software, on commonly used but old switches found in campus and SME data centres. Serious problem.
Thoughts:
The Cluster Management Protocol utilizes Telnet internally as a signaling and command protocol between cluster members. The vulnerability is due to the combination of two factors:
- The failure to restrict the use of CMP-specific Telnet options only to internal, local communications between cluster members and instead accept and process such options over any Telnet connection to an affected device, and
- The incorrect processing of malformed CMP-specific Telnet Continue reading
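While the real fix is patching and disabling Telnet in favour of SSH, a quick hedged sketch like the one below can help find management addresses that still answer on TCP/23; the address range is illustrative, and an open port only shows exposure, not whether a device is actually vulnerable.

```python
# Hedged sketch: find addresses on a management subnet that still accept Telnet.
# An open TCP/23 port does not prove a device is vulnerable to the CMP bug, but
# it is a good prompt to move that device to SSH and plan the IOS upgrade.
import ipaddress
import socket

MGMT_SUBNET = "192.0.2.0/28"   # illustrative management range

def telnet_open(host: str, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, 23), timeout=timeout):
            return True
    except OSError:
        return False

for ip in ipaddress.ip_network(MGMT_SUBNET).hosts():
    if telnet_open(str(ip)):
        print(f"{ip} still accepts Telnet - candidate for SSH migration/patching")
```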
It handles security from the chip to the cloud.