The WannaCry ransomware might have a link to North Korea

As security researchers investigate last Friday’s massive attack from the WannaCry ransomware, they’ve noticed clues that may link it with a North Korean hacking group that has been blamed for attacking banks across the world.

The evidence is far from a smoking gun, and may prove inconclusive. But security researchers have noticed a similarity between an earlier version of WannaCry and a hacking tool used by the Lazarus Group.

When Will AI Replace Traditional Supercomputing Simulations?

The science fiction of a generation ago predicted a future in which humans were replaced by the reasoning might of a supercomputer. But in an unexpected twist of events, it appears that it is the supercomputer’s main output—scientific simulations—that could be replaced by an even higher order of intelligence.

While we will always need supercomputing hardware, the vast field of scientific computing, or high-performance computing, could also be in the crosshairs for disruptive change, altering the future prospects for scientific code developers but opening new doors to more energy-efficient, finer-grained scientific discovery. With code that can write itself based …

When Will AI Replace Traditional Supercomputing Simulations? was written by Nicole Hemsoth at The Next Platform.

I Will Be Presenting For the First Time at CLUS 2017!

Well, it looks like another major item will get struck from my bucket list this year. I've been accepted to present at Cisco Live in Las Vegas this summer! This session is designed to walk through an enterprise network and look at how EIGRP can be engineered with purpose to best suit the needs of the different areas of the network. I will focus heavily on stability and scaling EIGRP, and will show the audience how, where, and when to leverage common EIGRP features such as summarization, fast timers, BFD, and wide metrics.
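For readers unfamiliar with those features, here is a hypothetical IOS-style named-mode EIGRP snippet showing roughly where each one lives. The process name, AS number, interface, addresses, and timer values are all illustrative, not taken from the session:

```
! Illustrative named-mode EIGRP configuration sketch
! (wide metrics are enabled by default in named mode)
router eigrp CAMPUS
 address-family ipv4 unicast autonomous-system 100
  af-interface GigabitEthernet0/1
   summary-address 10.10.0.0 255.255.0.0   ! summarization toward the core
   hello-interval 1                         ! fast timers: 1-second hellos,
   hold-time 3                              !   3-second hold
   bfd                                      ! sub-second failure detection via BFD
  exit-af-interface
 exit-address-family
```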

Paying the WannaCry ransom will probably get you nothing. Here’s why.

Last Friday’s massive WannaCry ransomware attack means victims around the world are facing a tough question: Should they pay the ransom?

Those who do shouldn't expect a quick response, or any response at all. Even after payment, the ransomware doesn’t automatically release your computer and decrypt your files, according to security researchers. Instead, victims have to wait and hope WannaCry’s developers will remotely free the hostage computer over the internet. It's a process that’s entirely manual and contains a serious flaw: the hackers have no way to prove who paid the ransom.

"The odds of getting back their files decrypted is very small," said Vikram Thakur, technical director at security firm Symantec. "It's better for [the victims] to save their money and rebuild the affected computers."

Why dynamic mapping is changing network troubleshooting for the better

Effective network troubleshooting requires experience and a detailed understanding of a network’s design. And while many great network engineers possess both qualities, they still face the daunting challenge of manual data collection and analysis.

The storage and backup industries have long been automated, yet, for the most part, automation has eluded the network, forcing engineering teams to troubleshoot and map networks manually. Estimates from a NetBrain poll indicate that network engineers spend 80% of their troubleshooting time collecting data and only 20% analyzing it. With the cost of downtime only growing, an opportunity to significantly reduce the time spent collecting data is critical.

The Year Ahead for GPU Accelerated Supercomputing

GPU computing has deep roots in supercomputing, but Nvidia is using that springboard to dive head first into the future of deep learning.

This changes the outward-facing focus of the company’s Tesla business from high-end supers to machine learning systems, with the expectation that those two formerly distinct areas will find new ways to merge together given the similarity in machine, scalability, and performance requirements. This is not to say that Nvidia is failing the HPC set, but there is a shift in attention from what GPUs can do for Top 500 class machines to what graphics processors can do …

The Year Ahead for GPU Accelerated Supercomputing was written by Nicole Hemsoth at The Next Platform.

How to use blockchain: Following an asset through its lifecycle to learn more

This contributed piece has been edited and approved by Network World editors

Possession is nine-tenths of the law, right?  But thanks to blockchain, this old adage may no longer be a viable way to settle property disputes.

Artists and enterprises alike have long struggled to prove ownership of their work after it has been disseminated, especially when it is uploaded online. What if there was a way to use technology to reliably track asset provenance with absolute certainty, from creation to marketplace and beyond?  The reality is that this is already possible with the help of blockchain, and the benefits to the enterprise are many.
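The core idea, a tamper-evident record of an asset's history, can be sketched as a minimal hash-chained ledger. This is a toy illustration only; the class and field names are invented for this sketch and do not correspond to any particular blockchain platform's API:

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceChain:
    """A toy ledger where each event cryptographically commits to the one before it."""

    def __init__(self, asset_id: str):
        self.asset_id = asset_id
        self.events = []  # each event stores its body plus its own hash

    def record(self, action: str, actor: str) -> str:
        # Link this event to the previous one via its hash (genesis uses zeros).
        prev_hash = self.events[-1]["hash"] if self.events else "0" * 64
        body = {"asset": self.asset_id, "action": action,
                "actor": actor, "prev_hash": prev_hash}
        h = sha256(json.dumps(body, sort_keys=True).encode())
        self.events.append({**body, "hash": h})
        return h

    def verify(self) -> bool:
        # Recompute every hash; any edit to a past event breaks the chain.
        prev = "0" * 64
        for ev in self.events:
            body = {k: ev[k] for k in ("asset", "action", "actor", "prev_hash")}
            if ev["prev_hash"] != prev:
                return False
            if ev["hash"] != sha256(json.dumps(body, sort_keys=True).encode()):
                return False
            prev = ev["hash"]
        return True

chain = ProvenanceChain("artwork-42")
chain.record("created", "artist")
chain.record("sold", "gallery")
assert chain.verify()
chain.events[0]["actor"] = "impostor"   # tampering with history...
assert not chain.verify()               # ...is immediately detectable
```

A real blockchain adds distribution and consensus on top of this structure, which is what removes the need to trust any single record-keeper.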

12 ways to improve run-time container security

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

There still aren’t many enterprise run-time security tools for containers available, which has skewed the conversation toward establishing defensive barriers prior to run-time, during the build, integration, and deployment stages.

Of course, with rapidly evolving technology like containers, it can be all too easy to overlook the most basic security concerns, so any focus at all is welcome. Efforts pointing out the security advantages of digitally signing container images at build time, and scanning them before they are pushed to the registry, should indeed be heard. The OS should be hardened and attack surfaces trimmed where possible. Tools like Seccomp and AppArmor, which introduce security profiles between containers and the host kernel, ought to be implemented.
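As a concrete illustration of the Seccomp approach mentioned above, a minimal allow-list-style profile might look like the following. This is a sketch only: a real profile would need to allow far more syscalls for a container to function, and the file name in the usage line is illustrative:

```
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "exit", "exit_group", "futex"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Such a profile is applied at container start, for example with `docker run --security-opt seccomp=profile.json`, so that any syscall outside the allow list fails with an error rather than reaching the host kernel.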

6 Tips for Protecting Against Ransomware

The Internet Society has been closely monitoring the ransomware cyber-attacks that have been occurring over the last couple of days. The malware, which has gone by multiple names, including WannaCry, WannaDecryptor, and WannaCrypt, exploits a flaw in Microsoft Windows that was first reportedly discovered by the National Security Agency (NSA). A group of hackers leaked the code for exploiting this vulnerability earlier this year, and a fix or patch was available as far back as March 2017. Since Friday, 200,000 computers in 150 countries have been compromised using this exploit. The numbers are expected to grow exponentially as people settle back into their work routines and regular use of computer systems this week.

Niel Harper