Archive

Category Archives for "Networking"

HPKE: Standardizing public-key encryption (finally!)

For the last three years, the Crypto Forum Research Group of the Internet Research Task Force (IRTF) has been working on specifying the next generation of (hybrid) public-key encryption (PKE) for Internet protocols and applications. The result is Hybrid Public Key Encryption (HPKE), published today as RFC 9180.

HPKE was made to be simple, reusable, and future-proof by building upon knowledge from prior PKE schemes and software implementations. It is already in use in a number of emerging Internet standards, including TLS Encrypted Client Hello and Oblivious DNS-over-HTTPS, and has a large assortment of interoperable implementations, including one in CIRCL. This article provides an overview of this new standard, going back to discuss its motivation, design goals, and development process.

A primer on public-key encryption

Public-key cryptography is decades old, with its roots going back to the seminal work of Diffie and Hellman in 1976, entitled “New Directions in Cryptography.” Their proposal – today called Diffie-Hellman key exchange – was a breakthrough. It allowed one to transform small secrets into big secrets for cryptographic applications and protocols. For example, one can bootstrap a secure channel for exchanging messages with confidentiality and integrity using a key exchange Continue reading
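
The excerpt stops at key exchange, but the core idea of turning two small private secrets into one shared big secret is easy to sketch. Below is a toy finite-field Diffie-Hellman exchange in Python; the parameters are deliberately tiny and purely illustrative, not the X25519 or P-256 groups an HPKE KEM would actually use.

```python
# Toy finite-field Diffie-Hellman: two small secrets (a and b) become one
# shared big secret. The parameters below are illustrative only and NOT
# secure; real deployments use groups such as X25519, as HPKE's KEM does.
import secrets

# Public parameters: a prime modulus p and a generator g (toy values).
p = 0xFFFFFFFB  # 2^32 - 5, a small prime, for illustration only
g = 5

# Each party picks a private secret...
a = secrets.randbelow(p - 2) + 1  # Alice's secret
b = secrets.randbelow(p - 2) + 1  # Bob's secret

# ...and publishes g^secret mod p.
A = pow(g, a, p)  # Alice -> Bob
B = pow(g, b, p)  # Bob -> Alice

# Both sides derive the same shared secret without ever transmitting it.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
print(f"shared secret: {shared_alice:#x}")
```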

What a more holistic approach to cloud-native security and observability looks like

The rise of cloud native and containerization, along with the automation of the CI/CD pipeline, introduced fundamental changes to existing application development, deployment, and security paradigms. Because cloud native is so different from traditional architectures, both in how workloads are developed and how they need to be secured, there is a need to rethink our approach to security in these environments.

As stated in this article, security for cloud-native applications should take a holistic approach where security is not an isolated concern, but rather a shared responsibility. Collaboration is the name of the game here. In order to secure cloud-native deployments, the application, DevOps, and security teams need to work together to make sure security happens earlier in the development cycle and is more closely associated with the development process.

Since Kubernetes is the most popular container orchestrator and many in the industry tend to associate it with cloud native, let’s look at this holistic approach by breaking it down into a framework for securing Kubernetes-native environments.

Framework

At a high level, the framework for securing cloud-native environments consists of three stages: build, deploy, and runtime.

Build

In the build stage, developers write code and the code gets compiled, Continue reading

Mobile Edge Computing (MEC) Puts Compute, Networking Services Closer To Applications

The following post originally appeared on the Packet Pushers’ Ignition site on November 13, 2020. 5G has long been declared the future of mobile networks by both tech analysts and the popular press, but scratch the surface and IT pros will find that behind all the hype and headlines lies a massive redesign of network […]

The post Mobile Edge Computing (MEC) Puts Compute, Networking Services Closer To Applications appeared first on Packet Pushers.

Hedge 119: Product Marketing with Cathy Gadecki

Marketing is an underappreciated (and even demonized) part of the process in creating and managing networking products. Cathy Gadecki of Juniper joins Russ White and Tom Ammon on this episode of the Hedge to fill in the background and discuss the importance of marketing, and some of the odd corners where marketing impacts product development.

Cloudflare re-enforces commitment to security in Germany via BSIG audit

As a large data processing country, Germany is at the forefront of security and privacy regulation in Europe and sets the tone for other countries to follow. Analyzing and meeting the requirements to participate in Germany’s cloud security industry requires adherence to international, regional, and country-specific standards. Cloudflare is pleased to announce that we have taken appropriate organizational and technical precautions to prevent disruptions to the availability, integrity, authenticity, and confidentiality of Cloudflare’s production systems in accordance with BSI-KritisV. TÜViT, the auditing body, is tasked with auditing Cloudflare and providing the evidence to BSI every two years. Completion of this audit allows us to comply with the NIS Directive within Germany.

Why do cloud companies operating in Germany need to go through a BSI audit?

In 2019, Cloudflare registered as an ‘Operator of Essential Services’ under the EU Directive on Security of Network and Information Systems (NIS Directive). The NIS Directive is cybersecurity legislation with the goal of enhancing cybersecurity across the EU. Every member state has started to adopt national legislation for the NIS Directive, and the criteria for compliance are set individually by each country. As an ‘Operator of Essential Services’ in Germany, Cloudflare is regulated by the Federal Continue reading

Building Confidence in Cryptographic Protocols

An introduction to formal analysis and our proof of the security of KEMTLS

Good morning everyone, and welcome to another Post-Quantum–themed blog post! Today we’re going to look at something a little different. Rather than look into the quantum past or future, we’re going to look as far back as the ’80s and ’90s, to try and get some perspective on how we can determine whether a protocol is or is not secure. Unsurprisingly, this question comes up all the time. Cryptographers like to build fancy new cryptosystems, but just because we, the authors, can’t break our own designs, it doesn’t mean they are secure: it just means we are not smart enough to break them.

One might at this point wonder why in a post-quantum themed blog post we are talking about security proofs. The reason is simple: the new algorithms that claim to be safe against quantum threats need proofs showing that they actually are safe. In this blog post, not only are we going to introduce how we go about proving a protocol is secure, we’re going to introduce the security proofs of KEMTLS, a version of TLS designed to be more secure against quantum computers, and Continue reading

Using EasyCrypt and Jasmin for post-quantum verification

Cryptographic code is everywhere: it gets run when we connect to the bank, when we send messages to our friends, or when we watch cat videos. But it is not at all easy to take a cryptographic specification written in a natural language and produce running code from it, and it is even harder to validate both the theoretical assumptions and the correctness of the implementation itself. Mathematical proofs, as we talked about in our previous blog post, and code inspection are simply not enough. Testing and fuzzing can catch common or well-known bugs or mistakes, but might miss rare ones that can, nevertheless, be triggered by an attacker. Static analysis can detect mistakes in the code, but cannot check whether the code behaves as described by the natural-language specification (functional correctness). This gap between implementation and validation can have grave consequences for real-world security, and we need to bridge this chasm.

In this blog post, we will be talking about ways to make this gap smaller by making the code we deploy better through analyzing its security properties and its implementation. This blog post continues our work on high assurance Continue reading

Best Tips on Networking for Artists

Networking is a vital part of success in any career, but for artists it can be especially tough. They don’t have the same opportunities to meet new people and build up networks as someone who works at an office. So in this article we will discuss how artists can improve their networking game and reach their full potential for success.

Art gallery events are a great opportunity to meet people and learn about new artists, especially the upcoming ones. Follow your favorite galleries on Instagram or Twitter to find out when they hold their openings and make sure you attend as many of them as possible.

Learn About Your Fellow Artists 

Networking is more than just meeting people; it’s learning how to work with other creatives, too. Find out which local art communities exist in your area so that you can be a part of those groups and become friends with fellow artists who share similar interests.

Show Encouragement for the Work of Other Artists 

Artists tend to be very critical of their own work, but you should try not to be. When someone asks for your opinion on their artwork, always remember that Continue reading

IPng Networks – Colocation

Introduction

As with most companies, it started with an opportunity. I got my hands on a location which has a 60m2 raised floor, a significant 3x200A power connection, and a 10Gbps metro fiber connection. I asked my buddy Luuk ‘what would it take to turn this into a colo?’ and the rest is history. Thanks to Daedalean AG, who benefit from this infrastructure as well, making this first small colocation site was not only interesting, but also very rewarding.

The colocation business is murder in Zurich - there are several very large datacenters (Equinix, NTT, Colozüri, Interxion) all directly in or around the city, and I’m known to dwell in most of these. The networking and service provider industry is quite small and well organized into Network Operator Groups, so I work under the assumption that everybody knows everybody. I definitely like to pitch in and share what I have built, both the physical bits but also the narrative.

This article describes the small serverroom I built at a partner’s premises in Zurich Albisrieden. The colo is open for business, that is to say: Please feel free to reach out if you’re interested.

Continue reading

Database backup: You need to get familiar with the database type being used

In order to back up a database, you need to know how it’s delivered, but you also need to know which of the more than 13 types of database designs it employs. Here we’ll cover four of them—relational, key-value, document, and wide column—that generate a lot of backup questions. Understanding these models will help the backup team create a relationship and trust level with the database admins, and that will help both parties.

Four database types

Relational

A relational-database management system (RDBMS) is a series of tables with a defined schema, or layout, with records in rows of one or more attributes, or values. There are relationships between the tables, which is why it is called a relational database, and why backups generally have to back up and restore everything. Examples of RDBMSs include Oracle, SQL Server, DB2, MySQL, and PostgreSQL. To read this article in full, please click here
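
To make the relational point concrete, here is a minimal sketch in Python using the standard sqlite3 module, with a hypothetical customers/orders schema (the table and column names are invented for illustration). Because orders references customers, restoring one table without the other leaves the data inconsistent, which is why relational backups generally capture all related tables together.

```python
# Minimal sketch (hypothetical schema) of why relational backups are
# all-or-nothing: the orders table only makes sense together with the
# customers table it references.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""CREATE TABLE customers (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL)""")
conn.execute("""CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    total       REAL NOT NULL)""")
conn.execute("INSERT INTO customers VALUES (1, 'Ada')")
conn.execute("INSERT INTO orders VALUES (100, 1, 42.50)")

# Restoring orders without customers would leave dangling customer_id
# values, so a backup has to capture both tables (and the schema) at a
# consistent point in time.
for row in conn.execute("""SELECT o.id, c.name, o.total
                           FROM orders o
                           JOIN customers c ON o.customer_id = c.id"""):
    print(row)
```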

Intel announces new roadmaps for Xeon CPUs, Xe GPUs

At this year's Intel investor day meeting with Wall Street analysts, CEO Pat Gelsinger revealed new road maps for Xeon CPUs and Xe GPUs stretching through 2024, the kind of reveals you would typically expect to see at an IDF show. Most notable about the Xeon news is that, for the first time, Intel is bifurcating the processor line into two microarchitecture types: a continuation of the current design, and a whole new architecture based on the Alder Lake hybrid architecture currently used in client CPUs. Alder Lake uses a different core design than traditional Intel CPUs have used. Up to now, Intel cores have all been identical. But Alder Lake uses two types of cores: the performance core, used to do the computing, and the efficient core, used to do small background tasks that don’t require a high-performance core. This is hardly an original design; Arm has been doing it for years. To read this article in full, please click here

Using pipes on Linux to get a lot more done

One of the things that I have always loved about Unix and then Linux is how it allows me to connect a series of commands together with pipes and get a lot of work done without a lot of effort. I can generate the output that I need in the form that I need it. It's not just the existence of the pipes themselves, but the flexibility of the Linux commands. You can run commands, select portions of the output, sort the results or match on specific strings, and pare the results down to just what you want to see. In this post, we're going to look at a couple of commands that demonstrate the power of the pipe and how easily you can get commands to work together. To read this article in full, please click here
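
As a rough illustration of what the shell does when it connects commands, here is a sketch in Python using the standard subprocess module to wire ls, sort, and head together with pipes, roughly equivalent to `ls -l | sort -k5 -rn | head -5` (assuming a Linux system with those commands on the PATH); it is a sketch of the mechanism, not code from the article.

```python
# Connect three commands with pipes: each command's stdout feeds the next
# command's stdin, just as the shell does with `|`.
import subprocess

ls = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
sort = subprocess.Popen(["sort", "-k5", "-rn"],
                        stdin=ls.stdout, stdout=subprocess.PIPE)
ls.stdout.close()  # let ls receive SIGPIPE if sort exits early

head = subprocess.Popen(["head", "-5"],
                        stdin=sort.stdout, stdout=subprocess.PIPE)
sort.stdout.close()

# Collect the final output: the five largest entries by file size.
output, _ = head.communicate()
print(output.decode())
```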