Latest Ransomware Techniques Show Need for Layered Security

I think everyone who works in security has had multiple conversations about the hardened edge and soft center commonly found in networks. These discussions usually touch on the overlapping concepts of defense in depth, layered security, and security ecosystems. Many of the recent exploits have used a C2 (command-and-control) connection for instructions. In those cases, assuming a perfect NGFW product and configuration actually existed that caught 100% of the malicious traffic, it would be able to disrupt those attacks.
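The NGFW-catches-C2 idea can be sketched as a simple egress filter. This is a hypothetical illustration, not any real product's API; the blocklist entries and function names are invented for the example.

```python
# Hypothetical sketch: an egress rule in the spirit of an NGFW policy that
# blocks outbound connections to known C2 infrastructure. Hostnames and
# helper names are illustrative only.

KNOWN_C2_HOSTS = {"evil-c2.example", "beacon.baddomain.example"}

def allow_outbound(dest_host: str) -> bool:
    """Return False if the destination matches a known C2 host."""
    return dest_host not in KNOWN_C2_HOSTS

# A perfect filter would cut off the malware's instruction channel:
print(allow_outbound("updates.vendor.example"))  # True
print(allow_outbound("evil-c2.example"))         # False
```

Of course, the point that follows is that no such perfect filter exists, and some attacks never open a C2 channel at all.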

However, on June 27, Cisco Talos published an article about a ransomware variant known as Nyetya. As of today, Talos has found no evidence of the more common initial infection vectors. Both Cisco and Microsoft have cited the update process for a tax accounting package as the initial point of infection.

Per Cisco Talos:

The identification of the initial vector is still under investigation. We have observed no use of email or Office documents as a delivery mechanism for this malware. We believe that infections are associated with software update systems for a Ukrainian tax accounting package called MeDoc. Talos is investigating this currently.

So what does this mean to the majority of the world that Continue reading

InfiniBand And Proprietary Networks Still Rule Real HPC

With the network comprising as much as a quarter of the cost of a high performance computing system and being absolutely central to the performance of applications running on parallel systems, it is fair to say that the choice of network is at least as important as the choice of compute engine and storage hierarchy. That’s why we like to take a deep dive into the networking trends present in each iteration of the Top 500 supercomputer rankings as they come out.

It has been a long time since the Top 500 gave a snapshot of pure HPC centers that

InfiniBand And Proprietary Networks Still Rule Real HPC was written by Timothy Prickett Morgan at The Next Platform.

Fix Your NAS With Metadata

Enterprises are purchasing storage by the truckload to support an explosion of data in the datacenter. IDC reports that in the first quarter of 2017, total capacity shipments were up 41.4 percent year-over-year and reached 50.1 exabytes of storage capacity shipped. As IT departments continue to increase their spending on capacity, few realize that their existing storage is a pile of gold that can be fully utilized once enterprises overcome the inefficiencies created by storage silos.

A metadata engine can virtualize the view of data for an application by separating the data (physical) path from the metadata (logical) path. This
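The separation of the logical path from the physical path can be sketched in a few lines. This is a toy model of the concept, assuming nothing about any specific vendor's product; class and method names are invented for illustration.

```python
# Minimal sketch of the metadata-engine idea: a logical namespace maps the
# paths applications see to physical locations on storage, so data can move
# between silos without the logical path ever changing. Names are illustrative.

class MetadataEngine:
    def __init__(self):
        self._map = {}  # logical path -> physical location

    def register(self, logical, physical):
        self._map[logical] = physical

    def resolve(self, logical):
        """Lookups go through the metadata (logical) path; I/O then targets
        the data (physical) path returned here."""
        return self._map[logical]

    def migrate(self, logical, new_physical):
        # Data moves to a different silo; the application-facing path is unchanged.
        self._map[logical] = new_physical

engine = MetadataEngine()
engine.register("/projects/report.dat", "nas1:/vol3/report.dat")
engine.migrate("/projects/report.dat", "nas2:/vol1/report.dat")
print(engine.resolve("/projects/report.dat"))  # nas2:/vol1/report.dat
```

The application keeps reading `/projects/report.dat` throughout; only the physical location behind it changed, which is what lets stranded capacity in existing silos be put to use.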

Fix Your NAS With Metadata was written by Timothy Prickett Morgan at The Next Platform.

Not The Cisco of John Chambers Anymore

I just got back from Cisco Live 2017 last night and I had a blast at the show. There was a lot of discussion about new architectures, new licensing models, and of course, Tech Field Day Extra. However, one of the most interesting topics went largely under the radar. I think we’re fully in the transition of Cisco away from being the Company of John Chambers.

Steering A Tall Ship

John Chambers wasn’t the first CEO of Cisco. But he’s the one most people would recognize. He transformed the company into the juggernaut that it is today. He watched Cisco ascend to leadership of the networking space and helped it transform into a company that embraced voice, security, and even servers and compute as new business models.

John’s Cisco is a unique animal. It’s not a single company. It’s a collection of many independent companies with their own structures and goals, all competing with each other for resources. If John decided that UCS was more important to his goals this quarter, he shifted some of the support assets to focus on that business unit. It was a featured product, complete with healthy discounts to encourage user adoption.

Continue reading

Momentum is Building for ARM in HPC

2011 marked ARM’s first step into the world of HPC with the European Mont-Blanc project. The premise was simple: leverage the energy efficiency of ARM-based mobile designs for high performance computing applications.

Unfortunately, making the leap from the mobile market to HPC was not an easy feat. Long-time players in this space, such as Intel and IBM, hold a home-field advantage: legacy software. HPC-optimized libraries, compilers and applications were already present for these platforms. This was not, however, the case for ARM. Early adopters had to start, largely from scratch, porting and building an ecosystem with a
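At its most basic, that porting work starts with getting existing code to build for the ARM architecture at all. A hypothetical cross-compilation Makefile fragment might look like this (the toolchain and flags are standard GNU conventions, but the project layout is invented for illustration):

```makefile
# Hypothetical fragment: cross-compiling an HPC kernel for 64-bit ARM
# (aarch64) with the GNU toolchain.
CC      = aarch64-linux-gnu-gcc
CFLAGS  = -O3 -fopenmp -march=armv8-a

stencil: stencil.c
	$(CC) $(CFLAGS) -o $@ $<
```

The harder part, as the article notes, is everything above this layer: tuned math libraries, vendor compilers, and applications validated on the new platform.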

Momentum is Building for ARM in HPC was written by Nicole Hemsoth at The Next Platform.

Cisco’s DevNet extends the value of its intent-based networking

Earlier this month, Cisco held a media and press event to launch its intent-based networking solution. Not surprisingly, its user event, Cisco Live 2017, was all about the network, as Cisco looks to get customers to think more broadly about the role of the network in digital transformation. Brandon Butler did a great follow-up post to mine that talked about why intent-based networking is a big deal. He called out a number of benefits, including streamlined operations and better security.

Cray adds big data software to its supercomputers

Cray has announced a new suite of big data and artificial intelligence (AI) software called Urika-XC for its top-of-the-line XC Series of supercomputers. Urika-XC is a set of analytics software that will let XC customers use Apache Spark, Intel’s BigDL deep learning library, Cray’s Urika graph analytics engine, and an assortment of Python-based data science tools. With the Urika-XC software suite, analytics and AI workloads can run alongside scientific modeling and simulations on Cray XC supercomputers, eliminating the need to move data between systems. Cray XC customers will be able to run converged analytics and simulation workloads across a variety of scientific and commercial endeavors, such as real-time weather forecasting, predictive maintenance, precision medicine and comprehensive fraud detection.
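The convergence idea, running analytics directly on simulation output instead of staging data between separate systems, can be sketched in plain Python. This is a toy illustration under stated assumptions; the real Urika-XC stack would use Spark and BigDL on the XC itself, and the function names here are invented.

```python
# Illustrative sketch of converged simulation + analytics: the analysis step
# consumes the simulation's in-memory output directly, with no intermediate
# file export or transfer to a second system.

def simulate(steps):
    """Toy 'simulation': produce one reading per timestep."""
    return [0.1 * t for t in range(steps)]

def analyze(readings):
    """Toy 'analytics': summary statistics over the simulation output."""
    return {"count": len(readings), "max": max(readings)}

# Simulation output flows straight into analytics on the same machine:
result = analyze(simulate(5))
print(result)  # {'count': 5, 'max': 0.4}
```

The claimed benefit of the converged approach is exactly this: removing the copy-out/copy-in step between the simulation system and the analytics system.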
