Federated learning improves how AI data is managed, thwarts data leakage

Privacy is one of the big holdups to a world of ubiquitous, seamless data-sharing for artificial-intelligence-driven learning. In an ideal world, massive quantities of data, such as medical imaging scans, could be shared openly across the globe so that machine learning algorithms could gain experience from a broad range of data sets; the more data shared, the better the outcomes. That generally doesn't happen now, especially in the medical world, where privacy is paramount. For the most part, medical image scans, such as brain MRIs, stay at the institution level for analysis. The result is then shared, but not the original patient scan data.
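The approach the article describes can be sketched in a few lines of federated averaging: each site trains on its own data, and only model parameters ever cross institutional boundaries. The toy linear model, update rule, and "hospital" data sets below are invented for illustration, not taken from any real system.

```python
# Minimal sketch of federated averaging (FedAvg): each site trains
# locally and shares only model weights, never the raw records.

def local_update(w, data, lr=0.1):
    """One gradient-descent step for a toy 1-D linear model y = w*x."""
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_average(site_weights, site_sizes):
    """Server aggregates site models, weighted by local data-set size."""
    total = sum(site_sizes)
    return sum(w * n for w, n in zip(site_weights, site_sizes)) / total

# Two hypothetical hospitals whose data fits y = 2x; only the trained
# weight leaves each site, the (x, y) records stay local.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0)]

w_global = 0.0
for _ in range(50):
    w_a = local_update(w_global, site_a)
    w_b = local_update(w_global, site_b)
    w_global = federated_average([w_a, w_b], [len(site_a), len(site_b)])

print(round(w_global, 2))  # prints 2.0 — the shared model converges
```

The raw scans (here, the `(x, y)` pairs) never leave the site; only the scalar weight is exchanged, which is the whole point of the technique.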

Everything You Need to Know about Network Time Security

This article was first published on NetNod’s Blog. It is reposted here with permission of the author.

A lot of the Internet’s most important security tools are dependent on accurate time. But until recently there was no way to ensure that the time you were getting came from a trusted source. The new Network Time Security (NTS) standard has been designed to fix that. In this post, we will summarise the most important NTS developments and link to a range of recent Netnod articles providing more information on the background, the NTS standard and the latest implementations.

What is NTS and why is it important?

NTS is an essential development of the Network Time Protocol (NTP). It has been developed within the Internet Engineering Task Force (IETF) and adds a much-needed layer of security to a protocol that is more than 30 years old and vulnerable to certain types of attack. Netnod has played an important role in the development of NTS, from the standardization effort in the IETF to the development of several implementations and the launch of one of the first NTS-enabled NTP services in the world.

NTS consists of two protocols, Continue reading
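For context, RFC 8915 splits NTS into a TLS-based key-establishment protocol (NTS-KE) and authenticated extension fields carried in NTPv4 packets themselves. On the client side, enabling NTS in an implementation such as chrony (version 4.0 or later) is essentially a one-line change; the server name below is a placeholder, not a specific Netnod endpoint:

```
# chrony.conf — enable NTS for a time source (requires chrony 4.0+).
# "nts.example.net" is a placeholder; substitute an NTS-enabled server.
server nts.example.net iburst nts

# Optional: cache NTS-KE cookies across restarts
ntsdumpdir /var/lib/chrony
```

With this in place, `chronyc -N authdata` can be used to confirm that the source is actually authenticated.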

BGP Convergence and ASN allocation design in Large Scale Networks

This post, together with the video at the end, covers BGP convergence and ASN allocation design in large-scale networks.

This content is explained in great detail in my BGP Zero to Hero course as well as CCIE Enterprise Training.


BGP is often described as a slowly converging protocol. In fact, this is a misconception. It may be true of BGP control-plane convergence, but it ignores BGP data-plane convergence, commonly known as BGP PIC (Prefix Independent Convergence).


In this post, I will explain the BGP Path Hunting process, which slows down convergence. Path Hunting is not unique to BGP; it is a convergence problem of distance-vector protocols in general.


The effect of Path Hunting becomes especially problematic in densely meshed topologies such as Clos or fat-tree fabrics.


Such networks may contain many leaf and spine switches, and when EBGP is used (as recommended in RFC 7938), Path Hunting should be avoided by allocating Autonomous System numbers to the networking devices wisely.


Otherwise, when a prefix is no longer advertised to the network (after a failure, for example), BGP-speaking routers try any Continue reading
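As a sketch of what "wise" ASN allocation looks like, RFC 7938 recommends giving every spine in a layer the same ASN while each leaf (or pod) gets its own. A route that would otherwise hunt along leaf–spine–leaf–spine detours is then rejected by standard AS_PATH loop detection at the second spine, which bounds path exploration. An FRR-style fragment for one leaf, with invented ASNs and addresses:

```
! Illustrative configuration for leaf1 (ASNs and IPs are examples only).
! Per RFC 7938: every spine shares ASN 65100; each leaf gets a unique ASN.
router bgp 64512
 neighbor 10.1.0.0 remote-as 65100   ! uplink to spine1
 neighbor 10.1.0.2 remote-as 65100   ! uplink to spine2
 address-family ipv4 unicast
  network 192.0.2.0/24               ! server subnet behind this leaf
```

Because both spines present the same ASN, a path re-learned via another leaf would carry 65100 twice and be discarded before it can prolong convergence.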

Many data-center workloads staying on premises, Uptime Institute finds

Another study finds that the data center is far from dying. That's not surprising to learn from the Uptime Institute's annual data center survey. However, one trend that did stand out in the research is that power efficiency has "flatlined" in recent years. Uptime says big improvements in energy efficiency were achieved between 2007 and 2013 using mostly inexpensive or easy methods, such as simple air containment. But moving beyond those gains involves more difficult or expensive changes. Since 2013, improvements in power usage effectiveness (PUE) have been marginal, according to the group.
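As a reminder, PUE is simply total facility energy divided by the energy drawn by the IT equipment alone, so 1.0 is the theoretical ideal and everything above it is cooling and power-distribution overhead. A quick illustration with made-up figures:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT power.
    1.0 is ideal; cooling and distribution losses push it higher."""
    return total_facility_kw / it_equipment_kw

# A hypothetical 1,000 kW facility whose IT gear draws 625 kW:
print(pue(1000, 625))  # prints 1.6
```

Shaving that hypothetical 1.6 down toward the ~1.2 of a modern hyperscale site is exactly the kind of expensive step-change the survey says operators have stopped making.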

On Cyber Governance

APAN (Asia Pacific Advanced Network) brings together national research and education networks in the Asia Pacific region. APAN holds meetings twice a year to talk about current activities in the regional NREN sector. I was invited to be on a panel at APAN 50 on the subject of Cyber Governance, and I’d like to share my perspective on this topic here.

How To Setup Your Local Node.js Development Environment Using Docker – Part 2

In part I of this series, we took a look at creating Docker images and running Containers for Node.js applications. We also took a look at setting up a database in a container and how volumes and network play a part in setting up your local development environment.

In this article we’ll take a look at creating and running a development image where we can compile, add modules and debug our application all inside of a container. This helps speed up the developer setup time when moving to a new application or project. 

We’ll also take a quick look at using Docker Compose to help streamline the processes of setting up and running a full microservices application locally on your development machine.
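A development image along these lines typically installs dependencies once, then relies on a bind mount plus a file-watcher so code edits take effect without rebuilding. The Dockerfile below is a generic sketch; the base image tag, port, entry file, and use of nodemon are assumptions, not taken from the series' repository:

```dockerfile
# Sketch of a Node.js development image (details are illustrative).
FROM node:14 AS dev

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install

# Copy the source; in development this is usually shadowed by a
# bind mount so edits show up in the container without a rebuild
COPY . .

EXPOSE 8080

# nodemon restarts the app whenever files change inside the container
CMD ["npx", "nodemon", "server.js"]
```

Running it with something like `docker run -v "$(pwd)":/app -p 8080:8080 <image>` gives the edit-and-reload loop described above.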

Fork the Code Repository

The first thing we want to do is download the code to our local development machine. Let’s do this using the following git command:

git clone git@github.com:pmckeetx/memphis.git

Now that we have the code local, let’s take a look at the project structure. Open the code in your favorite IDE and expand the root level directories. You’ll see the following file structure.

├── docker-compose.yml
├── notes-service
│   ├── config
│    Continue reading

Cisco-challenge winners use AI, IoT to tackle global problems

An IoT-enabled system for transporting dairy products earned its designers the top prize in a competition run by Cisco. The Global Problem Solver Challenge, one of Cisco's corporate social responsibility (CSR) initiatives, pays cash awards to entrepreneurial companies using technology to solve the world's biggest challenges. Now in its fourth year, Cisco's Global Problem Solver Challenge awards $100,000 to the first-place winner and $75,000 to the first runner-up. The program also gives out four $25,000 awards and seven $10,000 prizes. This year, I was honored to be invited to help judge the 2020 winners. In full disclosure, I agreed to be a judge but received no compensation, as I believe we all have to work together to make the world a better place. One important consideration for me, as I thought about whether to volunteer my time as a judge, was that this is not a marketing ploy by Cisco to sell more technology. There is no requirement for any of the entries to use Cisco products.

The Hedge Episode 47: Scott Burleigh and the Bundle Protocol

In this episode of the Hedge, Scott Burleigh joins Alvaro Retana and Russ White to discuss the Bundle Protocol, which is designed to support delay tolerant data delivery over intermittently available or “stressed” networks. Examples include interstellar communication, email transmission over networks where access points move around (carrying data with them), etc. You can learn more about delay tolerant networking here, and read the most recent draft specification here.
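The core idea behind delay-tolerant delivery is store-and-forward with patience: bundles wait in storage until a contact with the next hop opens, instead of being dropped while the link is unavailable. A toy sketch of that behavior (class and method names are invented, not taken from any Bundle Protocol implementation):

```python
from collections import deque

class BundleNode:
    """Toy delay-tolerant node: bundles wait in storage until a
    contact with the next hop becomes available."""

    def __init__(self):
        self.storage = deque()
        self.link_up = False

    def receive(self, bundle):
        self.storage.append(bundle)
        return self.flush()

    def contact_opened(self):
        self.link_up = True
        return self.flush()

    def contact_closed(self):
        self.link_up = False

    def flush(self):
        """Forward everything we can while the contact lasts."""
        sent = []
        while self.link_up and self.storage:
            sent.append(self.storage.popleft())
        return sent

node = BundleNode()
node.receive("telemetry-1")   # link down: bundle is stored, not lost
node.receive("telemetry-2")
print(len(node.storage))       # prints 2
print(node.contact_opened())   # prints ['telemetry-1', 'telemetry-2']
```

Contrast this with ordinary IP forwarding, where a packet arriving while the next hop is unreachable is simply discarded; the "mobile access point carrying data" examples from the episode are exactly these storage-and-contact windows.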

Day Two Cloud 060: Charting Global Internet Performance With ThousandEyes (Sponsored)

What's really going on in the cloud? ThousandEyes, our sponsor for this episode, has just released its inaugural Internet Performance Report, which tracks the performance and availability of ISPs, public clouds, CDNs, and DNS across multiple geographical regions. The report measures performance over time and also looks at the current impact of COVID-19 on Internet usage. Angelique Medina, Director, Product Marketing at ThousandEyes, is our guide.

The post Day Two Cloud 060: Charting Global Internet Performance With ThousandEyes (Sponsored) appeared first on Packet Pushers.

Network-layer DDoS attack trends for Q2 2020

In the first quarter of 2020, within a matter of weeks, our way of life shifted. We’ve become reliant on online services more than ever. Employees who can are working from home, students of all ages and grades are taking classes online, and we’ve redefined what it means to stay connected. The more the public depends on staying connected, the larger the potential reward for attackers to cause chaos and disrupt our way of life. It is therefore no surprise that in Q1 2020 (January 1, 2020 to March 31, 2020) we reported an increase in the number of attacks—especially after various government mandates to stay indoors (shelter-in-place) went into effect in the second half of March.

In Q2 2020 (April 1, 2020 to June 30, 2020), this trend of increasing DDoS attacks continued and even accelerated:

  • The number of L3/4 DDoS attacks observed over our network doubled compared to that in the first three months of the year.
  • The scale of the largest L3/4 DDoS attacks increased significantly. In fact, we observed some of the largest attacks ever recorded over our network.
  • We observed more attack vectors being deployed and attacks were more geographically distributed.

The number Continue reading

The NSX-T Gateway Firewall Secures Physical Servers

To date, our blog series on securing physical servers with NSX Data Center has covered the use of bare metal agents installed in a physical server. In this scenario, NSX bare metal agents provide management and enforcement of security policy for the physical server. For a quick recap of how NSX Data Center secures physical server traffic, please review our first and second blogs in this multi-part series. In this article, we will discuss the use of one of the NSX-T Gateway services of an NSX Edge Node: specifically, how the NSX-T Gateway Firewall secures physical servers.

What’s The NSX-T Edge?

The NSX-T Edge is a feature-rich L3-L7 gateway.  A brief review of some NSX-T Edge services:

  • Via Tier-0 Gateway, routing between the logical and the physical using dynamic routing protocols (eBGP and iBGP) as well as static routing
  • Via Tier-1 Gateway, routing between logical network segments, or uplink connectivity from logical network segments to the Tier-0 Gateway
  • Routing for IPv4 and IPv6 addresses
  • Load Balancing via NSX-T Edge, which offers high-availability service for applications and distribution of network traffic load
  • Network Address Translation (NAT), available on tier-0 and tier-1 gateways
  • To manage IP addresses, the configuration of DNS (Domain Continue reading