Building a diverse and strong Internet Society Board of Trustees

[Published on behalf of the Internet Society Board of Trustees.]

The Internet Society’s 2020 AGM (Annual General Meeting) will be held on the first weekend of August. While it had originally been planned as a face-to-face meeting, the Board decided to hold it online instead, given the current COVID-19 pandemic.

The AGM is the meeting where we say goodbye to the outgoing trustees. We want to thank them for all their efforts during their terms and wish them good luck in their future endeavours. We are confident they will continue supporting the Internet Society down the road.

The AGM is also the meeting where we welcome the incoming trustees. This year is special because we will be welcoming five new trustees. This represents significant turnover for a Board of twelve voting trustees. Therefore, we are currently running a comprehensive onboarding process to get our new trustees up to speed as efficiently as possible.

As you know, the Board is selected and elected by our community, with the IETF, Organizational Members, and Chapters each independently choosing a third of the Trustees. Next year, at the 2021 AGM, three trustees will be reaching Continue reading

In Africa, An Open Internet Standards Course for Universities

Seventy university students from the Democratic Republic of Congo (DRC), Ethiopia, Kenya, and Ghana gained insights into open Internet standards

Many of the Internet standards that make the Internet work today are developed using open processes. Early exposure to these processes could significantly help future engineers play a role in the evolution of the Internet.

Next Generation of Open Internet Standards Experts in Africa

To expose the next generation of African experts to open Internet standards, the Internet Society put together a short pilot course on Internet Protocol Security (IPSec). IPSec is a technology used to improve communication security between devices on the Internet.

To promote the teaching of open Internet standards in African universities, the one-month course brought together 70 students from four universities in the DRC, Ethiopia, Kenya, and Ghana. The pilot course was designed to provide university lecturers with additional training material to support their existing courses.

Facilitators

Technology experts Dr. Daniel Migault, Professor Nabil Benamar, and Loganaden Velvindron facilitated the learning experience. Between March and April 2020, they delivered online lectures for three weeks before opening up a week for student assignments.

The Internet Society’s Regional Vice President for Africa Dawit Bekele said the course Continue reading

Serverless Rendering with Cloudflare Workers

Cloudflare’s Workers platform is a powerful tool; a single compute platform for tasks as simple as manipulating requests or as complex as bringing application logic to the network edge. Today I want to show you how to do server-side rendering at the network edge using Workers Sites, Wrangler, HTMLRewriter, and tools from the broader Workers platform.

Each page returned to the user will be static HTML, with dynamic content being rendered on our serverless stack upon user request. Cloudflare’s ability to run this across the global network allows pages to be rendered in a distributed fashion, close to the user, with minuscule cold start times for the application logic. Because this is all built into Cloudflare’s edge, we can implement caching logic to significantly reduce load times, support link previews, and maximize SEO rankings, all while allowing the site to feel like a dynamic application.
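
To make the idea concrete, here is a minimal sketch (in TypeScript, not code from the post) of a Worker that serves a static HTML shell from Workers Sites and uses HTMLRewriter to inject per-request content at the edge. The #greeting placeholder element and the use of getAssetFromKV from @cloudflare/kv-asset-handler are assumptions about how such a project might be set up.

// Minimal sketch only: assumes a Workers Sites project whose index.html
// contains an element like <div id="greeting"></div>.
import { getAssetFromKV } from "@cloudflare/kv-asset-handler";

addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handleEvent(event));
});

async function handleEvent(event: FetchEvent): Promise<Response> {
  // Fetch the static HTML shell that Wrangler uploaded alongside the Worker.
  const page = await getAssetFromKV(event);

  // Render dynamic, per-request content into the shell before it leaves the edge.
  const city = (event.request as any).cf?.city ?? "somewhere on the Internet";
  return new HTMLRewriter()
    .on("#greeting", {
      element(el) {
        el.setInnerContent(`Hello from ${city}!`);
      },
    })
    .transform(page);
}

Because the transformation happens in the Worker, the HTML the browser receives is already fully rendered, which is what enables the caching, link-preview, and SEO benefits mentioned above.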

A Brief History of Web Pages

In the early days of the web, pages were almost entirely static - think raw HTML. As Internet connections, browsers, and hardware matured, so did the content on the web. The world went from static sites to more dynamic content, powered by technologies like CGI, PHP, Flash, CSS, JavaScript, and Continue reading

Social Networking Do’s and Don’ts

In this era of growing technology, social media plays a significant role in networking. Rather than relying on traditional practices, people now lean heavily on sites like Facebook and LinkedIn to build their networks. These sites are even used to find new talent, with recruiters using them as a tool to fill open positions at their workplaces. Social networking plays an important role in getting to know new people and building strong public relations.

In this article we will cover the Do’s and Don’ts of Social Networking and how you can make the most of it.

The Do’s:

Do Maintain a Proper Profile

People often fail to maintain a proper profile on their social media accounts because they think it is not that important. Wrong! In order to network socially, it is crucial that you have a small but complete, up-to-date profile so that people do not mistake you for a fake account or a scam.

Do Add Value

One of the best ways to maintain an online presence that is unique and different is by being visible and by adding value. It is important to get to know people by joining Continue reading

Next Platform TV for July 16, 2020

On today’s program we talk AI chip innovations with Graphcore co-founder and CEO Nigel Toon, tap into the Barcelona Supercomputing Center to talk Arm server performance, check in with Sandia for brain-inspired computing advances, talk HDD technologies that keep disk relevant, and discuss RISC-V with the foundation’s CEO.

Next Platform TV for July 16, 2020 was written by Nicole Hemsoth at The Next Platform.

Counterfeit Cisco switches raise network security alarms

In a disconcerting event for IT security professionals, counterfeit versions of Cisco Catalyst 2960-X Series switches were discovered on an unnamed business network, and the fake gear was found to be designed to circumvent typical authentication procedures, according to a report from F-Secure. F-Secure says its investigators found that while the counterfeit Cisco 2960-X units did not have any backdoor-like features, they did employ various measures to fool security controls. For example, one of the units exploited what F-Secure believes to be a previously undiscovered software vulnerability to undermine secure boot processes that provide protection against firmware tampering.

Visibility into dropped packets

Dropped packets have a profound impact on network performance and availability. Packet discards due to congestion can significantly impact application performance. Dropped packets due to black hole routes, expired TTLs, MTU mismatches, etc. can result in insidious connection failures that are time consuming and difficult to diagnose.

Devlink Trap describes recent changes to the Linux drop monitor service that provide visibility into packets dropped by switch ASIC hardware. When a packet is dropped by the ASIC, an event is generated that includes the header of the dropped packet and the reason why it was dropped. A hardware policer is used to limit the number of events generated by the ASIC to a rate that can be handled by the Linux kernel. The events are delivered to userspace applications using the Linux netlink service.

Running the dropwatch command line tool on an Ubuntu 20 system demonstrates the instrumentation:
pp@ubuntu20:~$ sudo dropwatch
Initializing null lookup method
dropwatch> set alertmode packet
Setting alert mode
Alert mode successfully set
dropwatch> start
Enabling monitoring...
Kernel monitoring activated.
Issue Ctrl-C to stop monitoring
drop at: __udp4_lib_rcv+0xae5/0xbb0 (0xffffffffb05ead95)
origin: software
input port ifindex: 2
timestamp: Wed Jul 15 23:57:36 2020 223253465 nsec
protocol: 0x800
length: 128
original Continue reading

DockerCon 2020: The AWS Sessions

Last week we announced that Docker and AWS have created an integrated and frictionless experience for developers to leverage Docker Compose, Docker Desktop, and Docker Hub to deploy their apps on Amazon Elastic Container Service (Amazon ECS) and Amazon ECS on AWS Fargate. On the heels of that announcement, we continue our series of blog articles curating developer content from DockerCon LIVE 2020, this time with a focus on AWS. If you are running your apps on AWS, bookmark this post to keep the relevant insights in one easy-to-access place.

As more developers adopt and learn Docker, and as more organizations jump head-first into containerizing their applications, AWS continues to be the cloud of choice for deployment. Earlier this year Docker and AWS collaborated on the Compose-spec.io open specification and, as mentioned on the Docker blog by my colleague Chad Metcalf, deploying straight from Docker to AWS has never been easier. It’s just another step in constantly putting ourselves in the shoes of you, our customer, the developer.

The replay of these three sessions on AWS is where you can learn more about container trends for developers, adopting microservices and building and deploying multi-container Continue reading

Manage Red Hat Enterprise Linux like a Boss with Red Hat Ansible Content Collection for Red Hat Insights

Running IT environments means facing many challenges at the same time: security, performance, availability and stability are critical for the successful operation of today’s data centers. IT managers and their teams of administrators, operators and architects are well advised to move from a reactive, “fire-fighting” mode to a proactive approach where systems are continuously scanned and improvements are applied before critical situations come up. Red Hat Insights routinely analyzes Red Hat Enterprise Linux systems for security/vulnerability, compliance, performance, availability and stability threats and, based on the results, can provide guidance on how to improve daily operations. Insights is included with your Red Hat Enterprise Linux subscription and located at cloud.redhat.com.

We recently announced a new Red Hat Ansible Content Collection for Insights, an integration designed to make it easier for Insights users to manage Red Hat Enterprise Linux and to automate tasks on those systems using Ansible. The Ansible Content Collection for Insights is ideal for customers that have large Red Hat Enterprise Linux estates that require initial deployment and ongoing management of the Insights client. 

In this blog, we will look at how this integration with Ansible takes care of key tasks via included Ansible Continue reading

Cloudflare’s first year in Lisbon

A year ago I wrote about the opening of Cloudflare’s office in Lisbon; it’s hard to believe that a year has flown by. At the time I wrote:

Lisbon’s combination of a large and growing existing tech ecosystem, attractive immigration policy, political stability, high standard of living, as well as logistical factors like time zone (the same as the UK) and direct flights to San Francisco made it the clear winner.

We landed in Lisbon with a small team of transplants from other Cloudflare offices. Twelve of us moved from the UK, US and Singapore to bootstrap here. Today we are 35 people with another 10 having accepted offers; we’ve almost quadrupled in a year and we intend to keep growing to around 80 by the end of 2020.

If you read back to my description of why we chose Lisbon, only one item hasn’t turned out quite as we expected. Sure enough, TAP Portugal does have direct flights to San Francisco, but the pandemic put an end to all business flying worldwide for Cloudflare. We all look forward to getting back to being able to visit our colleagues in other locations.

The pandemic also put us in the Continue reading

Happy 10-year Anniversary Lostintransit

Wow! I can’t believe it. I’ve been blogging for 10 years! Where did the time go? July 16th, 2010 is when I posted to this blog for the first time. It was a post saying “I’m game” and I included Radia Perlman’s Algorhyme.

On August 27th, 2010, I wrote that I wanted to pass the CCIE lab within two years. Turns out I wasn’t too far off. I passed in late October 2012. Greg Ferro himself popped in to wish me good luck:

In January 2011, I passed the written. I took a slightly different approach than many: I spent a considerable amount of time, around 200 hours if I remember correctly, building a strong foundation before moving on to labbing. Today you would take the ENCOR exam, of course, but I still think this is a valid strategy.

It took me a little more than six months to get my first 5,000 views. That’s good to remember, especially for those of you just starting out. This site has now had more than a million views, but it took some time to get there. It doesn’t get as many views as you probably think, either.

I took my first stab at Continue reading

IPv6 and the DNS

These days it seems that whenever we start to talk about the DNS, the conversation immediately swings around to the subject of DNS over HTTPS (DoH) and the various implications of this technology. But that’s not my intention here. I’d like to look at a different, but still very familiar and somewhat related, topic: how IPv6 is being used as a transport protocol for DNS queries.

A Look at the New Calico eBPF Dataplane

Calico was designed from the ground up with a pluggable dataplane architecture. The Calico 3.13 release introduced an exciting new eBPF (extended Berkeley Packet Filter) dataplane targeted at those ready to adopt newer kernel versions and wanting to push the Linux kernel’s latest networking capabilities to the limit. In addition to improved throughput and latency performance compared to the standard Linux networking dataplane, Calico’s eBPF dataplane also includes native support for Kubernetes services without the need to run kube-proxy. One of the ways Calico’s eBPF dataplane realizes these improvements is through source IP preservation and Direct Server Return (DSR).

Kube-proxy and Source IP

The application of Network Address Translation (NAT) by kube-proxy to incoming network connections to Kubernetes services (e.g. via a service node port) is a frequently encountered friction point with Kubernetes networking. NAT has the unfortunate side effect of removing the original client source IP address from incoming traffic. When this occurs, Kubernetes network policies can’t restrict incoming traffic from specific external clients. By the time the traffic reaches the pod it no longer has the original client IP address. For some applications, knowing the source IP address is desirable or required. For example, Continue reading
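
To make the friction point concrete, here is a hypothetical pod workload (a TypeScript/Node sketch, not code from the Calico post) that simply reports the peer address it observes. Behind a kube-proxy NATed service or node port, the address it sees is typically a node or SNAT address rather than the original external client, which is exactly the information loss described above; with source IP preservation the original client address survives all the way to the pod.

// source-ip-echo.ts – illustrative only; shows which address the pod actually sees.
import { createServer } from "http";

const server = createServer((req, res) => {
  // With kube-proxy NAT in the path this is usually a node or SNAT address,
  // not the original external client address.
  const peer = req.socket.remoteAddress ?? "unknown";
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(`peer address seen by the pod: ${peer}\n`);
});

server.listen(8080, () => console.log("listening on :8080"));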

Automating Mitigation of the F5 BIG-IP TMUI RCE Security Vulnerability Using Ansible Tower (CVE-2020-5902)

On June 30, 2020, a security vulnerability affecting multiple BIG-IP platforms from F5 Networks was made public with a CVSS score of 10 (Critical). Due to the significance of the vulnerability, network administrators are advised to mitigate this issue in a timely manner. Doing so manually is tricky, especially if many devices are involved. Because F5 BIG-IP and BIG-IQ are certified with the Red Hat Ansible Automation Platform, we can use it to tackle the issue.

This post provides one way of temporarily mitigating CVE-2020-5902 via Ansible Tower without upgrading the BIG-IP platform. Upgrading to a fixed software version is the recommended remediation, but larger customers like service providers might struggle to upgrade on short notice, as they may have to go through a lengthy internal validation process. For those situations, an automated mitigation may be a reasonable workaround until an upgrade can be performed.

Background of the vulnerability

The vulnerability is described in K52145254 of the F5 Networks support knowledge base:

The Traffic Management User Interface (TMUI), also referred to as the Configuration utility, has a Remote Code Execution (RCE) vulnerability in undisclosed pages.

And it describes the impact as serious:

This vulnerability allows for unauthenticated attackers, or authenticated users, with network access to the Configuration Continue reading

Options grow for migrating mainframe apps to the cloud

Mainframe users looking to bring legacy applications into the public or private cloud world have a new option: LzLabs, a mainframe software migration vendor. Founded in 2011 and based in Switzerland, LzLabs said this week that it is setting up shop in North America to help mainframe users move legacy applications – think COBOL – into the more modern and flexible cloud application environment. At the heart of LzLabs’ service is its Software Defined Mainframe (SDM), an open-source, Eclipse-based system designed to let legacy applications, particularly those for which source code is not readily available, such as COBOL programs, run in the cloud without recompilation.

Are Your Virtual Meetings Accessible for People with Disabilities? Start with This Checklist

The COVID-19 pandemic has changed the way humans interact with one another. With an emphasis on less physical interaction and more social distancing, institutions and organizations are moving their work and meetings online.

People with disabilities form about 15 percent of the world’s population, so it is all the more important that these online meetings are made accessible.

The Internet Society Accessibility Special Interest Group (Accessibility SIG) aims to make the Internet and its attendant technologies accessible to the largest audience possible, regardless of disability. The digital divide is not just about having access to digital technology; it can also be about having access to technology and not being able to use it. Our digital products must be usable by all. Many laws and the Internet Society’s vision – the Internet is for everyone – demand that we provide everyone with an equal experience.

The Accessibility SIG is planning a series of seven webinars discussing this very topic. Our first one was titled When Rhetoric Meets Reality: Digital Accessibility, Persons With Disabilities and COVID-19 and was held on May 28.

The way we design and build can make it hard – and sometimes impossible – for people with disabilities to access Continue reading

Cumulus content roundup: June 2020

June seems like a lifetime ago, but there was so much content we wanted to make sure was on your radar. We know you may be thinking: but wait, didn’t something big happen to Cumulus Networks in June? You would be right! We’re excited to share that we are now officially NVIDIA®. Along with the news, we kept very busy with fresh podcast episodes, informative blog posts and much more, so take a minute to dive on in and catch up on it all here.

From Cumulus Networks, now NVIDIA

Cumulus Networks’ President and Chief Product Officer, Partho Mishra, on the NVIDIA-Cumulus acquisition: Partho Mishra answers your questions regarding the strategic focus of the new networking business unit at NVIDIA & the future of open networking.

Open source — the great equalizer: Technology is a great equalizer and the open source movement has played a huge role in making this true and accelerating the process.

Remote work makes network visibility more critical than ever: We’re living through a major shift in the way employees work, extending the boundaries of what was once a tightly controlled environment.

Kernel of Truth season 3 episode 8: Cumulus Linux in action Continue reading

Containerized Python Development – Part 1

Developing Python projects in local environments can get pretty challenging if more than one project is being developed at the same time. Bootstrapping a project may take time, as we need to manage versions and set up dependencies and configurations for it. We used to install all project requirements directly in our local environment and then focus on writing the code, but having several projects in progress in the same environment quickly becomes a problem, as we may run into configuration or dependency conflicts. Moreover, when sharing a project with teammates, we would also need to coordinate our environments. For this, we have to define our project environment in a way that makes it easily shareable.

A good way to do this is to create isolated development environments for each project. This can easily be done by using containers and Docker Compose to manage them. We cover this in a series of blog posts, each one with a specific focus.

This first part covers how to containerize a Python service/tool and the best practices for it.

Requirements

To easily exercise what we discuss in this blog post series, we need to install a minimal set Continue reading