Archive

Category Archives for "Networking"

Watch the Experts Session on Encryption from Canberra

Internet Australia and the Internet Society are pleased to invite you to watch the recording of an Experts Session on Encryption held on 20 August 2018 at Parliament House, Canberra, Australia.

Encryption technologies enable Internet users to protect the integrity and the confidentiality of their data and communications. From limiting the impact of data breaches, to securing financial transactions, to keeping messages private, encryption is an essential tool for digital security. As a technical foundation for trust on the Internet, encryption promotes commerce, privacy, and user trust, and helps protect data and communications from bad actors.
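As a concrete illustration of that confidentiality-plus-integrity property, the sketch below uses authenticated encryption (AES-256-GCM) via Node.js’s built-in crypto module, written in TypeScript. The key, nonce, and message are invented for the example and are not from the session:

```typescript
// A minimal sketch of authenticated encryption (AES-256-GCM) with Node's
// built-in crypto module. Key, nonce, and message are illustrative only.
import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

const key = randomBytes(32); // 256-bit secret key, shared out of band
const iv = randomBytes(12);  // unique nonce per message

// Encrypt: GCM provides confidentiality *and* an integrity tag.
const cipher = createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([
  cipher.update("transfer $100 to account 42", "utf8"),
  cipher.final(),
]);
const tag = cipher.getAuthTag();

// Decrypt: tampering with the ciphertext or tag makes final() throw,
// so integrity violations are detected rather than silently accepted.
const decipher = createDecipheriv("aes-256-gcm", key, iv);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([
  decipher.update(ciphertext),
  decipher.final(),
]).toString("utf8");

console.log(plaintext); // "transfer $100 to account 42"
```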

During the session, international and local experts from across the field discussed the technical aspects of encryption and digital security. They explained how encryption is used to secure communications and data, and explored its role in the Australian digital economy. Experts also discussed the risks associated with attempting to provide exceptional access to encrypted systems.

Will Fujitsu be the first to make an ARM-powered supercomputer?

I’ve long felt Japan has been severely overlooked in recent years due to two “lost decades” and China overshadowing it — and supercomputing is no exception.

In 2011, Fujitsu launched the K computer at the Riken Advanced Institute for Computational Science campus in Kobe, Japan. Calling it a computer really is a misnomer, though, as is the case with any supercomputer these days. When I think “computer,” I think of the 3-foot-tall black tower a few feet from me making the room warm. In the case of K, it’s rows and rows of cabinets stuffed with rack-mounted servers in a space the size of a basketball court.

With its distributed memory architecture, K had 88,128 eight-core SPARC64 VIIIfx processors in 864 cabinets. Fujitsu was a licensee of Sun Microsystems’ (later Oracle’s) SPARC processor and did some impressive work on the processor on its own. When it launched in 2011, K was ranked the world’s fastest supercomputer on the TOP500 list, with a computation speed of over 8 petaflops. It has since been surpassed by supercomputers from the U.S. and China.
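Those numbers invite a quick back-of-the-envelope check. The clock rate and FLOPs-per-cycle figures below are assumptions taken from public SPARC64 VIIIfx specifications, not from the article:

```typescript
// Back-of-the-envelope math on the K computer's published specs.
// The per-core figures (2.0 GHz, 8 FLOPs/cycle) are assumptions based
// on public SPARC64 VIIIfx documentation, not on the article above.
const processors = 88_128;
const coresPerChip = 8;
const clockHz = 2.0e9;       // assumed 2.0 GHz clock
const flopsPerCycle = 8;     // assumed 8 double-precision FLOPs/cycle/core

const totalCores = processors * coresPerChip;           // 705,024 cores
const peakFlops = totalCores * clockHz * flopsPerCycle; // ~11.3 PFLOPS peak

console.log(`cores: ${totalCores.toLocaleString()}`);
console.log(`theoretical peak: ${(peakFlops / 1e15).toFixed(1)} petaflops`);
// The "over 8 petaflops" TOP500 figure is a measured Linpack result,
// which always comes in below the theoretical peak.
```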

IDG Contributor Network: Analytics are the key to network uptime, but are they enough?

Imagine you are in a crowded ER, and doctors are running from room to room. In the waiting area, patients are checking in via an online portal, and hospital staff are quickly capturing their confidential medical and insurance information on a mobile device. You look down the hall, where admitted patients are receiving treatment through Wi-Fi enabled biomedical devices, and some are even streaming their favorite show on Netflix while they wait. In a hospital, there are hundreds of individuals conducting critical tasks at any given moment, and they rely on thousands of connected devices to get the job done. But this raises the question: What happens if that network fails?
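Analytics of the kind the article describes typically start with simple active probes. The sketch below, with a hypothetical endpoint and latency budget, measures request round-trip time and flags failures (it assumes a runtime with a global fetch, such as Node 18+):

```typescript
// A minimal active health probe: measure HTTP round-trip latency and
// flag outages. The URL and threshold are hypothetical placeholders.
const HEALTH_URL = "https://portal.example-hospital.org/health";
const LATENCY_BUDGET_MS = 500;

async function probe(): Promise<void> {
  const start = Date.now();
  try {
    const res = await fetch(HEALTH_URL);
    const ms = Date.now() - start;
    if (!res.ok) {
      console.warn(`DEGRADED: HTTP ${res.status} in ${ms} ms`);
    } else if (ms > LATENCY_BUDGET_MS) {
      console.warn(`SLOW: ${ms} ms exceeds ${LATENCY_BUDGET_MS} ms budget`);
    } else {
      console.log(`OK: ${ms} ms`);
    }
  } catch (err) {
    console.error(`DOWN: ${(err as Error).message}`);
  }
}

// Poll every 30 seconds; real network analytics platforms correlate
// thousands of such signals across devices and sites.
setInterval(probe, 30_000);
probe();
```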

IDG Contributor Network: Preparing your network for the hybrid world – 4 imminent IT shifts and the role of NPMD

To the surprise of many not living it every day, a robust, resilient, and reliable network is one of the most important drivers behind success in today’s business world. Organizations must continuously improve their network infrastructure to better meet organizational requirements and offer the experiences their customers expect. Recent changes in the network market mean this continuous improvement needs to go beyond optimizations and extend all the way to re-architecting the network.

The forces driving network re-architecture are twofold: new demands on the network, and innovations in network technology and solutions. These new demands on the network stem from enterprise-wide digital transformation initiatives such as cloud, SD-WAN, machine learning and AI, IoT, edge computing, and more. While these new requirements offer a host of business benefits, they’re also introducing disruptive complexity to the network, driving the need to simplify and accelerate the way all IT services are delivered today.

The Week in Internet News: U.S. DOJ Pressures Facebook to Break Messenger Encryption

Encryption wars, part 2,403: The U.S. Department of Justice is pressuring Facebook to break the encryption in its Messenger app so that investigators can access communications by suspected MS-13 gang members. The DOJ has asked a judge to force Facebook to allow the agency to tap into Messenger, with the outcome potentially affecting other tech companies, Fortune reports.

Hacking the Apple: An infamous North Korean hacking group has created its first macOS malware as a way to compromise a cryptocurrency exchange, Bleeping Computer reports. The hackers who created the so-called AppleJeus malware are going to great lengths to make it work – even creating a fake company and software product to deliver it.

AI loves TV: As researchers explore ways to give artificial intelligence systems curiosity, AIs will sometimes choose to watch TV all day, QZ.com says. AIs playing video games will sometimes die on purpose to see the game-over screen, or fixate on a fake TV and remote and flip through channels to find something new.
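This is the well-known “noisy TV problem” for prediction-error curiosity: if the intrinsic reward is how surprised the agent’s world model is, an unpredictable channel is endlessly rewarding. The toy simulation below is my own illustration, not code from the research:

```typescript
// Toy "noisy TV" demo: intrinsic reward = squared prediction error of a
// running-average world model. A random channel stays surprising forever,
// while a deterministic room quickly becomes boring. Purely illustrative.
function simulate(observe: () => number, steps: number): number {
  let prediction = 0;
  let totalCuriosity = 0;
  for (let i = 0; i < steps; i++) {
    const obs = observe();
    const error = obs - prediction;
    totalCuriosity += error * error; // intrinsic "curiosity" reward
    prediction += 0.1 * error;       // slowly update the world model
  }
  return totalCuriosity;
}

const boringRoom = () => 1.0;        // deterministic observation
const noisyTV = () => Math.random(); // unpredictable channel

console.log("room curiosity:", simulate(boringRoom, 1000).toFixed(2));
console.log("TV curiosity:  ", simulate(noisyTV, 1000).toFixed(2));
// The TV yields far more cumulative "surprise," so a curiosity-driven
// agent will happily watch it all day.
```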

Certified secure? Trade group CTIA is offering a security certification for cellular-connected Internet of Things devices, TechRepublic reports. Security experts and test labs have participated in the program. With so many …

Does Establishing More IXPs Keep Data Local? Brazil and Mexico Might Offer Answers

Much like air travel, the internet has certain hubs that play important relay functions in the delivery of information. Just as Heathrow Airport serves as a hub for passengers traveling to or from Europe, AMS-IX (Amsterdam Internet Exchange) is a key hub for information getting in or out of Europe. Instead of airline companies gathering in one place to drop off or pick up passengers, it’s internet service providers coming together to swap data – lots and lots of data.

The world’s largest internet exchange points (IXPs) mostly reside where you would expect to find them: in advanced economies with sophisticated internet infrastructure. As internet access reached new populations around the world, however, growth in IXPs lagged and traffic tended to make some roundabout, and seemingly irrational, trips to the more established IXPs.

For example, users connected to a server just a few miles away may be surprised to discover that their data crosses an entire ocean, turns 180 degrees, and crosses that ocean again to arrive at its destination. This phenomenon, known as “boomeranging” or “hair-pinning” (or the “trombone effect,” due to the path’s shape), is especially common in emerging markets, where local ISPs are less interconnected and …
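A rough way to spot a boomerang path is to compare the measured round-trip time against the physical minimum the distance allows (light in fiber covers roughly 200 km per millisecond). The sketch below uses made-up numbers:

```typescript
// Sanity check: is the measured RTT plausible for the geographic
// distance, or does the path likely boomerang through a distant IXP?
// The distances, RTTs, and margin below are illustrative.
const SPEED_IN_FIBER_KM_PER_MS = 200; // roughly 2/3 the speed of light

function minPossibleRttMs(distanceKm: number): number {
  return (2 * distanceKm) / SPEED_IN_FIBER_KM_PER_MS; // out and back
}

function looksLikeHairpin(distanceKm: number, measuredRttMs: number): boolean {
  // Allow a generous 10x margin for routing, queuing, and processing;
  // beyond that, the path is probably taking an ocean-crossing detour.
  return measuredRttMs > 10 * minPossibleRttMs(distanceKm);
}

// A server 50 km away has a physical floor of ~0.5 ms RTT, so a
// measured 180 ms strongly suggests the traffic left the region.
console.log(minPossibleRttMs(50));      // 0.5
console.log(looksLikeHairpin(50, 180)); // true
console.log(looksLikeHairpin(50, 4));   // false
```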

Adaptive Micro-segmentation – Built-in Application Security from the Network to the Workload

A zero trust, or least-privileged, security model has long been held as the best way to secure applications and data. At its core, a zero trust security model is based on having a whitelist of known good behaviors for an environment and enforcing this whitelist. This model is preferable to one that depends on identifying attacks in progress, because attack methods are always changing, giving attackers the upper hand and leaving defenders a step behind.
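In code, the heart of that model is nothing more exotic than a default-deny allowlist consulted on every flow. Here is a minimal sketch; the workload names, ports, and rules are hypothetical:

```typescript
// Minimal zero-trust flow enforcement: only explicitly whitelisted
// (source, destination, port) flows are permitted; everything else is
// denied by default. Workload names and rules are hypothetical.
interface Flow {
  src: string;  // workload identity, not an IP address
  dst: string;
  port: number;
}

const allowlist = new Set<string>([
  "web-frontend->orders-api:8443",
  "orders-api->orders-db:5432",
]);

const key = (f: Flow) => `${f.src}->${f.dst}:${f.port}`;

function permit(f: Flow): boolean {
  const allowed = allowlist.has(key(f));
  if (!allowed) console.warn(`DENY ${key(f)} (not in whitelist)`);
  return allowed; // default deny: unknown behavior is blocked
}

permit({ src: "web-frontend", dst: "orders-api", port: 8443 }); // true
permit({ src: "web-frontend", dst: "orders-db", port: 5432 });  // false
```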

The problem for IT and InfoSec teams has always been effectively operationalizing a zero trust model. As applications become increasingly distributed across hybrid environments and new application frameworks allow for constant change, the lack of comprehensive application visibility and consistent security control points becomes more acute, making a zero trust model even harder to achieve.

A modern application is not a piece of software running on a single machine — it’s a distributed system. Different pieces of software running on different workloads, networked together. And we have thousands of them, all commingled on a common infrastructure or, more lately, spanning multiple data centers and clouds. Our internal networks have evolved to be relatively flat — a decision designed to facilitate organic growth. But …

NSX Portfolio Realizing the Virtual Cloud Network for Customers

If you’re already in Las Vegas or heading there, we are excited to welcome you into the Virtual Cloud Network Experience at VMworld US 2018!

First, why is the networking and security business unit at VMware calling this a “Virtual Cloud Network Experience”? Announced May 1, the Virtual Cloud Network is the network model for the digital era. It is also VMware’s vision for the future of networking: empowering customers to connect and protect applications and data regardless of where they sit, from edge to edge.

At VMworld this year we’re making some announcements that are helping turn the Virtual Cloud Network vision into reality and showcasing customers that have embraced virtual cloud networking.

With that, here’s what’s new:

Public Cloud, Bare Metal, and Containers

NSX is only for VMs, right? Wrong! We’ve added support for native AWS and Azure workloads with NSX Cloud, support for applications running on bare metal servers (no hypervisor!), and increased support for containers (including containers running on bare metal). There’s much to get up to speed on, so check out the can’t-miss 100-level sessions below, plus there are a bunch of 200- and 300-level sessions covering the …

Using Workers To Make Static Sites Dynamic

The following is a guest post by Paddy Sherry, Lead Developer at Gambling.com Group. They build performance marketing websites and tools, using Cloudflare to serve their global audience. Paddy is a Web Performance enthusiast with an interest in Serverless Computing.

Choosing technology that is used on a large network of sites is a key architectural decision that must be correct. We build static websites but needed to find a way to make them dynamic for things like geo targeting, access restriction, and A/B testing. This post shares our experiences on what we learned when using Workers to tackle these challenges.
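To make “static but dynamic” concrete, the sketch below shows the shape of a Worker that does geo targeting and a cookie-pinned A/B split at the edge. It is a simplified illustration, not Gambling.com Group’s production code, and it assumes Cloudflare’s CF-IPCountry geolocation header and the Workers type definitions:

```typescript
// A simplified sketch of edge logic on top of a static site: geo
// targeting plus a cookie-based A/B split. Paths are hypothetical.
addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handle(event.request));
});

async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);

  // Geo targeting: Cloudflare attaches the visitor's country code
  // when IP geolocation is enabled on the zone.
  const country = request.headers.get("CF-IPCountry") ?? "XX";
  if (country === "GB") {
    url.pathname = `/uk${url.pathname}`; // serve UK-specific static pages
  }

  // A/B test: pin each visitor to a bucket with a cookie.
  const cookie = request.headers.get("Cookie") ?? "";
  const newVisitor = !cookie.includes("bucket=");
  let bucket = cookie.includes("bucket=b") ? "b" : "a";
  if (newVisitor) bucket = Math.random() < 0.5 ? "a" : "b";
  if (bucket === "b") url.pathname = `/variant-b${url.pathname}`;

  const response = await fetch(url.toString(), request);
  if (!newVisitor) return response;

  // Copy the response so we can attach the bucket cookie to it.
  const tagged = new Response(response.body, response);
  tagged.headers.append("Set-Cookie", `bucket=${bucket}; Path=/`);
  return tagged;
}
```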

Our Background

At Gambling.com Group, we use Cloudflare on all of our sites, so our curiosity level in Workers was higher than most. We are big fans of static websites because nothing is faster than flat HTML. We had been searching for a technology like this for some time and applied to be part of the beta program, so we were one of the first to gain access to the functionality.

The reason we were so keen to experiment with Workers is that for anyone running static sites, 99% of the time, the product requirements …

Deploying TLS 1.3

Last week saw the formal publication of the TLS 1.3 specification as RFC 8446. It’s been a long time coming – in fact it’s exactly 10 years since TLS 1.2 was published back in 2008 – but it represents a substantial step forward in making the Internet a more secure and trusted place.

What is TLS and why is it needed?

Transport Layer Security (TLS) is widely used to encrypt data transmitted between Internet hosts, with the most popular use being for secure web browser connections (adding the ‘S’ to HTTP). It is also commonly (although less visibly) used to encrypt data sent to and from mail servers (using STARTTLS with SMTP and IMAP/POP, etc.), but can be used in conjunction with many other Internet protocols (e.g. DNS-over-TLS, FTPS) where secure connections are required. For more information about how TLS works and why you should use it, please see our TLS Basics guide.
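On the client side, deploying TLS 1.3 can be as simple as refusing to negotiate anything older. Here is a minimal sketch using Node.js’s built-in tls module, written in TypeScript; the host name is illustrative:

```typescript
// A minimal sketch of requiring TLS 1.3 on a client connection using
// Node.js's built-in tls module. The host name is illustrative.
import * as tls from "tls";

const socket = tls.connect(
  {
    host: "example.com",
    port: 443,
    servername: "example.com", // SNI, also used for certificate checks
    minVersion: "TLSv1.3",     // refuse to negotiate anything older
  },
  () => {
    console.log("negotiated protocol:", socket.getProtocol()); // "TLSv1.3"
    console.log("cipher:", socket.getCipher().name); // e.g. TLS_AES_256_GCM_SHA384
    socket.end();
  }
);

socket.on("error", (err) => {
  // Servers without TLS 1.3 support will fail the handshake here.
  console.error("handshake failed:", err.message);
});
```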

TLS is often used interchangeably with SSL (Secure Sockets Layer), which was developed by Netscape and predates TLS as an IETF standard, but many Certification Authorities (CAs) still market the X.509 certificates used by TLS as ‘SSL certificates’ due to their familiarity with …