BrandPost: Will disruptive healthcare technologies disrupt hospital networks?

Daniele Loffreda, Senior Advisor, Industry Marketing, Ciena

New innovations stemming from artificial intelligence, machine learning, and connected health are changing the way medical professionals treat their patients. Doctors are adapting to take advantage of these developments – but have their networks adapted to support them?

Two newborns are delivered on the same day by the same doctor in the same hospital. The truth is, the decisions their respective parents are about to make can have an enormous impact on their daughters’ health and quality of life for the next several decades.

Watch the Experts Session on Encryption from Canberra

Internet Australia and the Internet Society are pleased to invite you to watch the recording of an Experts Session on Encryption on 20 August 2018 at Parliament House, Canberra, Australia.

Encryption technologies enable Internet users to protect the integrity and the confidentiality of their data and communications. From limiting the impact of data breaches, to securing financial transactions, to keeping messages private, encryption is an essential tool for digital security. As a technical foundation for trust on the Internet, encryption promotes commerce, privacy, and user trust, and helps protect data and communications from bad actors.
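As a toy illustration of the integrity half of that claim, here is a minimal sketch using only Python’s standard library. This is not any specific protocol from the session – the key and messages are made up – but it shows how a shared secret lets a receiver detect tampering with a message in transit:

```python
# Minimal integrity check with an HMAC (hash-based message authentication code).
# The key and messages below are placeholders for illustration only.
import hashlib
import hmac

key = b"shared-secret-key"           # in practice: a randomly generated secret
message = b"transfer $100 to alice"

# Sender computes a tag over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time (default-deny on mismatch)."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))                      # True: message is untampered
print(verify(key, b"transfer $100 to mallory", tag))  # False: modified in transit
```

Confidentiality would additionally require encrypting the message itself (e.g., with an authenticated cipher), but the default-deny verification step is the same idea the experts describe: only data that checks out against the secret is trusted.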

During the session, international and local experts from across the field discussed the technical aspects of encryption and digital security. They explained how encryption is used to secure communications and data, and explored its role in the Australian digital economy. Experts also discussed the risks associated with attempting to provide exceptional access to encrypted systems.


Will Fujitsu be the first to make an ARM-powered supercomputer?

I’ve long felt Japan has been severely overlooked in recent years due to two “lost decades” and China overshadowing it – and supercomputing is no exception.

In 2011, Fujitsu launched the K computer at the RIKEN Advanced Institute for Computational Science campus in Kobe, Japan. Calling it a computer really is a misnomer, as is the case with any supercomputer these days. When I think “computer,” I think of the 3-foot-tall black tower a few feet from me making the room warm. In the case of K, it’s rows and rows of cabinets stuffed with rack-mounted servers in a space the size of a basketball court.

With its distributed-memory architecture, K had 88,128 eight-core SPARC64 VIIIfx processors in 864 cabinets. Fujitsu was a licensee of Sun Microsystems’ (later Oracle’s) SPARC processor and did some impressive work on the processor on its own. When it launched in 2011, K was ranked the world’s fastest supercomputer on the TOP500 list, with a computation speed of over 8 petaflops. It has since been surpassed by supercomputers from the U.S. and China.


Auth0 Architecture: Running In Multiple Cloud Providers And Regions


This article was written by Dirceu Pereira Tiegs, Site Reliability Engineer at Auth0, and was originally published on the Auth0 blog.

Auth0 provides authentication, authorization, and single sign-on services for apps of any type (mobile, web, native) on any stack. Authentication is critical for the vast majority of apps. We designed Auth0 from the beginning so that it could run anywhere: on our cloud, on your cloud, or even on your own private infrastructure.

In this post, we'll talk more about our public SaaS deployments and provide a brief introduction to the infrastructure behind auth0.com and the strategies we use to keep it up and running with high availability. 

A lot has changed at Auth0 since the original post. These are some of the highlights:

IDG Contributor Network: Analytics are the key to network uptime, but are they enough?

Imagine you are in a crowded ER, and doctors are running from room to room. In the waiting area, patients are checking in via an online portal, and hospital staff are quickly capturing their confidential medical and insurance information on mobile devices. You look down the hall, where admitted patients are receiving treatment through Wi-Fi-enabled biomedical devices, and some are even streaming their favorite show on Netflix while they wait. In a hospital, there are hundreds of individuals conducting critical tasks at any given moment, and they rely on thousands of connected devices to get the job done. This raises the question: what happens if that network fails?


IDG Contributor Network: Preparing your network for the hybrid world – 4 imminent IT shifts and the role of NPMD

To the surprise of many not living it every day, a robust, resilient, and reliable network is one of the most important drivers of success in today’s business world. Organizations must continuously improve their network infrastructure to better meet organizational requirements and offer the experiences their customers expect. Recent changes in the network market mean this continuous improvement needs to go beyond optimization and extend all the way to re-architecting the network.

The forces driving network re-architecture are twofold: new demands on the network, and innovations in network technology and solutions. The new demands stem from enterprise-wide digital transformation initiatives such as cloud, SD-WAN, machine learning and AI, IoT, and edge computing. While these new requirements offer a host of business benefits, they also introduce disruptive complexity to the network, driving the need to simplify and accelerate the way all IT services are delivered.


The Week in Internet News: U.S. DOJ Pressures Facebook to Break Messenger Encryption

Encryption wars, part 2,403: The U.S. Department of Justice is pressuring Facebook to break the encryption in its Messenger app so that investigators can access communications by suspected MS-13 gang members. The DOJ has asked a judge to force Facebook to allow the agency to tap into Messenger, with the outcome potentially affecting other tech companies, Fortune reports.

Hacking the Apple: An infamous North Korean hacking group has created its first macOS malware as a way to compromise a cryptocurrency exchange, Bleeping Computer reports. The hackers behind the so-called AppleJeus malware are going to great lengths to make it work – even creating a fake company and software product to deliver it.

AI loves TV: As researchers explore ways to give artificial intelligence systems curiosity, AIs will sometimes choose to watch TV all day, QZ.com says. AIs playing video games will sometimes die on purpose to see the game-over screen, or fixate on a fake TV and remote and flip through channels to find something new.

Certified secure? Trade group CTIA is offering a security certification for cellular-connected Internet of Things devices, TechRepublic reports. Security experts and test labs have participated in the program. With so many …

Does Establishing More IXPs Keep Data Local? Brazil and Mexico Might Offer Answers

Much like air travel, the internet has certain hubs that play important relay functions in the delivery of information. Just as Heathrow Airport serves as a hub for passengers traveling to or from Europe, AMS-IX (Amsterdam Internet Exchange) is a key hub for information getting in or out of Europe. Instead of airline companies gathering in one place to drop off or pick up passengers, it’s internet service providers coming together to swap data – lots and lots of data.

The world’s largest internet exchange points (IXPs) mostly reside where you would expect to find them: advanced economies with sophisticated internet infrastructure. As internet access has reached new populations around the world, however, growth in IXPs has lagged, and traffic has tended to make some roundabout, and seemingly irrational, trips to the more established IXPs.

For example, users connected to a server just a few miles away may be surprised to discover that their data will cross an entire ocean, turn 180 degrees, and cross that ocean again to arrive at its destination. This occurrence, known as “boomeranging” or “hair-pinning” (or the “trombone effect,” due to the path’s shape), is especially common in emerging markets, where local ISPs are less interconnected and …

Adaptive Micro-segmentation – Built-in Application Security from the Network to the Workload

A zero-trust, or least-privileged, security model has long been held as the best way to secure applications and data. At its core, a zero-trust security model is based on having a whitelist of known good behaviors for an environment and enforcing this whitelist. This model is preferable to one that depends on identifying attacks in progress, because attack methods are always changing, giving attackers the upper hand and leaving defenders a step behind.
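The whitelist-and-default-deny idea can be sketched in a few lines. The tiers, ports, and flows below are hypothetical examples, not taken from the post:

```python
# Minimal sketch of allowlist enforcement for network flows, in the spirit of a
# zero-trust model: anything not explicitly permitted is denied by default.
from typing import NamedTuple

class Flow(NamedTuple):
    src: str   # source workload or tier
    dst: str   # destination workload or tier
    port: int  # destination port

# Known good behaviors for a made-up three-tier application.
ALLOWLIST = {
    Flow("web", "app", 8080),
    Flow("app", "db", 5432),
}

def is_permitted(flow: Flow) -> bool:
    """Default-deny: a flow is allowed only if it is on the allowlist."""
    return flow in ALLOWLIST

print(is_permitted(Flow("web", "app", 8080)))  # True: whitelisted tier-to-tier traffic
print(is_permitted(Flow("web", "db", 5432)))   # False: web must not reach the db directly
```

Note that the defender never has to enumerate attacks – only the small set of legitimate flows – which is exactly why this model avoids the cat-and-mouse problem described above.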

The problem for IT and InfoSec teams has always been operationalizing a zero-trust model effectively. As applications become increasingly distributed across hybrid environments and new application frameworks allow for constant change, the lack of comprehensive application visibility and consistent security control points grows worse, making a zero-trust model even harder to achieve.

A modern application is not a piece of software running on a single machine – it’s a distributed system: different pieces of software running on different workloads, networked together. And we have thousands of them, all commingled on a common infrastructure or, lately, spanning multiple data centers and clouds. Our internal networks have evolved to be relatively flat – a decision designed to facilitate organic growth. But …

NSX Portfolio Realizing the Virtual Cloud Network for Customers

If you’re already in Las Vegas or heading there, we are excited to welcome you into the Virtual Cloud Network Experience at VMworld US 2018!

First, why is the networking and security business unit at VMware calling this a “Virtual Cloud Network Experience”? Announced May 1, the Virtual Cloud Network is the network model for the digital era. It is also VMware’s vision for the future of networking: to empower customers to connect and protect applications and data, regardless of where they sit – from edge to edge.

At VMworld this year we’re making some announcements that are helping turn the Virtual Cloud Network vision into reality, and showcasing customers that have embraced virtual cloud networking.

With that, here’s what’s new:

Public Cloud, Bare Metal, and Containers

NSX is only for VMs, right? Wrong! We’ve added support for native AWS and Azure workloads with NSX Cloud, support for applications running on bare-metal servers (no hypervisor!), and increased support for containers (including containers running on bare metal). There’s much to get up to speed on, so check out the can’t-miss 100-level sessions below, plus a bunch of 200- and 300-level sessions covering the …

Provisioning a headless Raspberry Pi

The typical way of installing a fresh Raspberry Pi is to attach power, keyboard, mouse, and an HDMI monitor. This is a pain, especially for the diminutive RPi Zero. This blogpost describes several options for doing headless setup instead: Ethernet, Ethernet gadget, WiFi, and a serial connection. These examples use a MacBook; maybe I’ll get around to a blogpost describing this from Windows.

Burning micro SD card

We are going to edit the SD card before booting, so for completeness, I thought I'd describe the process of burning an SD card.

We are going to download the latest "raspbian" operating system. I download the "lite" version because I'm not using the desktop features. It comes as a compressed .zip file which we need to extract into an .img file. Just double-click on the .zip on Windows or Mac.

The next step is to burn the image to an SD card. On Windows I use Win32DiskImager. On Mac I use the following command-line steps:

$ sudo -s                        # get a root shell
# mount                          # list mounted filesystems to identify the SD card
# diskutil unmount /dev/disk2s1  # unmount the card's partition so dd can write the raw device
# dd bs=1m if=~/Downloads/2018-06-27-raspbian-stretch-lite.img of=/dev/disk2 conv=sync

First, I need a root prompt. I then use the …
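The burning steps above get the OS onto the card; for a fully headless first boot over WiFi, a common approach on Raspbian of this era is to drop two files onto the card’s FAT boot partition before ejecting it. On a Mac that partition typically mounts at /Volumes/boot; the sketch below uses a temp directory as a stand-in, and the SSID and passphrase are placeholders:

```shell
# Headless WiFi + SSH setup for Raspbian (Stretch-era behavior).
# Point BOOT at the SD card's boot partition, e.g. /Volumes/boot on a Mac.
BOOT="$(mktemp -d)"   # temp-dir stand-in; replace with the real mount point

# An empty file named "ssh" enables the SSH server on first boot.
touch "$BOOT/ssh"

# Raspbian copies this file to /etc/wpa_supplicant/ on first boot,
# joining the Pi to the named network automatically.
cat > "$BOOT/wpa_supplicant.conf" <<'EOF'
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetwork"
    psk="YourPassphrase"
}
EOF
```

After ejecting the card and powering the Pi, it should join the network and accept SSH logins with no monitor or keyboard attached.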