Archive

Category Archives for "Networking"

Scientific network tags (scitags)

The data shown in the chart was gathered from The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC22) being held this week in Dallas. The conference network, SCinet, is described as the fastest and most powerful network on Earth, connecting the SC community to the world. The chart shows data generated as part of the Packet Marking for Networked Scientific Workflows demonstration using SCinet - Booth 2847 (StarLight).

Scientific network tags (scitags) is an initiative promoting identification of the science domains and their high-level activities at the network level. Participants include dCache, ESnet, GÉANT, Internet2, Jisc, NORDUnet, OFTS, OSG, RNP, RUCIO, StarLight, and XRootD.

This article will demonstrate how industry standard sFlow telemetry streaming from switches and routers can be used to report on science domain activity in real-time using the sFlow-RT analytics engine.

The scitags initiative makes use of the IPv6 packet header to mark traffic. Experiment and activity identifiers are encoded in the IPv6 Flow label field. Identifiers are published in an online registry in the form of a JSON document, https://www.scitags.org/api.json.
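
As a rough sketch of how a flow collector might map marked traffic back to the registry, the TypeScript below fetches the registry document and resolves a flow label into experiment and activity names. The bit-layout constants and the registry field names (experiments, expId, expName, activities, activityId, activityName) are illustrative assumptions, not taken from the scitags specification, which defines the authoritative encoding.

    // Illustrative sketch only: the flow-label bit layout and registry field
    // names below are assumptions; consult the scitags specification for the
    // authoritative encoding.
    interface Activity { activityId: number; activityName: string }
    interface Experiment { expId: number; expName: string; activities: Activity[] }

    const ACT_BITS = 6;                    // assumed width of the activity ID
    const ACT_MASK = (1 << ACT_BITS) - 1;
    const EXP_MASK = (1 << 9) - 1;         // assumed width of the experiment ID

    function decodeFlowLabel(flowLabel: number): { expId: number; actId: number } {
      return {
        expId: (flowLabel >> ACT_BITS) & EXP_MASK,
        actId: flowLabel & ACT_MASK,
      };
    }

    async function lookup(flowLabel: number): Promise<string> {
      const registry: { experiments: Experiment[] } =
        await (await fetch('https://www.scitags.org/api.json')).json();
      const { expId, actId } = decodeFlowLabel(flowLabel);
      const exp = registry.experiments.find((e) => e.expId === expId);
      const act = exp?.activities.find((a) => a.activityId === actId);
      return `${exp?.expName ?? 'unknown'} / ${act?.activityName ?? 'unknown'}`;
    }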

One might expect IPFIX / NetFlow to be a Continue reading

Full Stack Journey 072: A Peek Inside The Comp Sci Ivory Tower

On today's Full Stack Journey podcast we climb the ivory tower to get a glimpse of academic life in the field of networking and computer science with guest Dave Levin. Dr. Levin is Assistant Professor of Computer Science at the University of Maryland. His research focuses on networking and security, including measurement, cryptography, artificial intelligence, and economics.

The post Full Stack Journey 072: A Peek Inside The Comp Sci Ivory Tower appeared first on Packet Pushers.

Cisco study: Network teams look to SDN, automation to manage multicloud operations

Networking teams have been challenged to provide hybrid workers with secure access to cloud-based applications, and they’re looking for greater automation and additional visibility tools to help them manage today’s diverse enterprise workloads. That’s among the key findings of Cisco’s new Global Hybrid Cloud Trends report, which surveyed 2,500 IT decision makers in 13 countries to discern the trends and priorities of organizations that are managing workloads across multiple private, public cloud and edge computing environments. Cisco said its report is aimed at looking at how these multicloud environments impact network operations. To read this article in full, please click here

AMD gives new Epyc processors a big launch with help from partners

AMD has officially launched the fourth generation of its Epyc server processors for high performance computing (HPC) in the data center, and all the top OEMs showed up for the party. Officially named the Epyc 9004 but commonly referred to by its codename “Genoa,” the new chip is based on the fourth generation of AMD’s Zen microarchitecture and built on a 5nm manufacturing process by TSMC. Thanks to its chiplet design, which breaks the monolithic CPU into smaller “chiplets” that are tied together with a high speed interconnect, Genoa has up to 96 cores (double Intel’s best at the moment). The chiplets, with 16 cores each, are easier to manufacture than a single 96-core CPU. Genoa includes the latest I/O technology, such as PCI Express 5.0, 12 channels of DDR5 memory, and CXL 1.1. To read this article in full, please click here

Migrate from S3 easily with the R2 Super Slurper

R2 is an S3-compatible, globally distributed object store, allowing developers to store large amounts of unstructured data without the costly egress bandwidth fees you commonly find with other providers.
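
Because R2 exposes the S3 API, existing S3 tooling can usually be pointed at it by changing only the endpoint and credentials. A minimal sketch using the AWS SDK (the account ID, bucket name, and environment variables below are placeholders):

    import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

    // Placeholders: substitute your own account ID, bucket, and R2 API token.
    const r2 = new S3Client({
      region: 'auto',
      endpoint: 'https://<account_id>.r2.cloudflarestorage.com',
      credentials: {
        accessKeyId: process.env.R2_ACCESS_KEY_ID!,
        secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
      },
    });

    await r2.send(new PutObjectCommand({
      Bucket: 'my-bucket',
      Key: 'hello.txt',
      Body: 'Hello from R2',
    }));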

To enjoy this egress freedom, you’ll have to start planning how to move the data you currently keep somewhere else into R2. You might want to do it all at once, moving as much data as quickly as possible while ensuring data consistency. Or would you prefer to move the data to R2 slowly, gradually shifting your reads from your old provider to R2, and only then decide whether to cut off your old storage or keep it as a backup for new objects in R2?

There are multiple architecture and implementation options for this kind of migration, but moving terabytes of data from one cloud storage provider to another is always problematic, always involves planning, and likely requires staffing.

And that was hard. But not anymore.

Today we're announcing the R2 Super Slurper, the feature that will enable you to move all your data to R2 in one giant slurp or sip by sip — all in a friendly, intuitive UI and API.

The first step: R2 Super Slurper Private Beta

One Continue reading

Get started with Cloudflare Workers with ready-made templates

One of the things we prioritize at Cloudflare is enabling developers to build their applications on our developer platform with ease. We’re excited to share a collection of ready-made templates that’ll help you start building your next application on Workers. We want developers to get started as quickly as possible, so that they can focus on building and innovating and avoid spending so much time configuring and setting up their projects.

Introducing Cloudflare Workers Templates

Cloudflare Workers enables you to build applications with exceptional performance, reliability, and scale. We are excited to share a collection of templates that helps you get started quickly and give you an idea of what is possible to build on our developer platform.

We have made available a set of starter templates highlighting different use cases of Workers. We understand that you have different ideas you would love to build on top of Workers, and you may have questions or wonder whether they are possible. These templates go beyond the conventional ‘Hello, World’ starter. They’ll help shape your idea of what kinds of applications you can build with Workers as well as with other products in the Cloudflare Developer Ecosystem.

We are excited to Continue reading

Store and process your Cloudflare Logs… with Cloudflare

Millions of customers trust Cloudflare to accelerate their website, protect their network, or as a platform to build their own applications. But, once you’re running in production, how do you know what’s going on with your application? You need logs from Cloudflare – a record of what happened on our network when your customers interacted with your product that uses Cloudflare.

Cloudflare Logs are an indispensable tool for debugging applications, identifying security vulnerabilities, or just understanding how users are interacting with your product. However, our customers generate petabytes of logs, and store them for months or years at a time. Log data is tantalizing: all those answers, just waiting to be revealed with the right query! But until now, it’s been too hard for customers to actually store, search, and understand their logs without expensive and cumbersome third party tools.

Today we’re announcing Cloudflare Logs Engine: a new product to enable any kind of investigation with Cloudflare Logs — all within Cloudflare.

Starting today, Cloudflare customers who push their logs to R2 can retrieve them by time range and unique identifier. Over the coming months we want to enable customers to:

Reduce origin load, save on cloud egress fees, and maximize cache hits with Cache Reserve

Earlier this year, we introduced Cache Reserve. Cache Reserve helps users serve content from Cloudflare’s cache for longer by using R2’s persistent data storage. Serving content from Cloudflare’s cache benefits website operators by reducing their bills for egress fees from origins, while also benefiting website visitors by having content load faster.

Cache Reserve has been in closed beta for a few months while we’ve collected feedback from our initial users and continued to develop the product. After several rounds of iterating on this feedback, today we’re extremely excited to announce that Cache Reserve is graduating to open beta – users will now be able to test it and integrate it into their content delivery strategy without any additional waiting.

If you want to see the benefits of Cache Reserve for yourself and give us some feedback, you can go to the Cloudflare dashboard, navigate to the Caching section and enable Cache Reserve by pushing one button.

How does Cache Reserve fit into the larger picture?

Content served from Cloudflare’s cache begins its journey at an origin server, where the content is hosted. When a request reaches the origin, the origin compiles the content needed for the response and sends Continue reading

Indexing millions of HTTP requests using Durable Objects

Our customers rely on their Cloudflare logs to troubleshoot problems and debug issues. One of the biggest challenges with logs is the cost of managing them, so earlier this year, we launched the ability to store and retrieve Cloudflare logs using R2.

In this post, I’ll explain how we built the R2 Log Retrieval API using Cloudflare Workers with a focus on Durable Objects and the Streams API. Using these allows a customer to index and query millions of their Cloudflare logs stored in batches on R2.

Before we dive into the internals, you might be wondering why one doesn't just use a traditional database to index these logs. After all, databases are a well-proven technology. Well, the reason is that individual developers or companies, both large and small, often don't have the resources necessary to maintain such a database and the surrounding infrastructure needed for this kind of setup.

Our approach instead relies on Durable Objects to maintain indexes of the data stored in R2, removing the complexity of managing and maintaining your own database. It was also super easy to add Durable Objects to our existing Workers code with just a few lines of config and some Continue reading
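
To give a feel for the shape of such a setup (a hypothetical sketch, not the implementation described in the post), a Durable Object can remember which R2 batch object holds a given request ID, so a later query reads only that batch; the class and key names here are made up:

    // Hypothetical sketch: map a request ID to the R2 batch object containing it.
    export class LogIndex {
      constructor(private state: DurableObjectState) {}

      async fetch(request: Request): Promise<Response> {
        const rayId = new URL(request.url).searchParams.get('rayId');
        if (!rayId) return new Response('missing rayId', { status: 400 });

        if (request.method === 'PUT') {
          // Record the R2 key of the batch that contains this request.
          await this.state.storage.put(rayId, await request.text());
          return new Response('ok');
        }

        // GET: return the batch key so the caller can fetch it from R2.
        const batchKey = await this.state.storage.get<string>(rayId);
        return batchKey
          ? new Response(batchKey)
          : new Response('not found', { status: 404 });
      }
    }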

Easy Postgres integration on Cloudflare Workers with Neon.tech

It’s no wonder that Postgres is one of the world’s favorite databases. It’s easy to learn, a pleasure to use, and can scale all the way up from your first database in an early-stage startup to the system of record for giant organizations. Postgres has been an integral part of Cloudflare’s journey, so we know this fact well. But when it comes to connecting to Postgres from environments like Cloudflare Workers, there are unfortunately a bunch of challenges, as we mentioned in our Relational Database Connector post.

Neon.tech not only solves these problems; it also has other cool features such as branching databases — being able to branch your database in exactly the same way you branch your code: instant, cheap and completely isolated.

How to use it

It’s easy to get started. Neon’s client library @neondatabase/serverless is a drop-in replacement for node-postgres, the npm pg package with which you may already be familiar. After going through the getting started process to set up your Neon database, you can easily create a Worker to ask Postgres for the current time like so:

  1. Create a new Worker — Run npx wrangler init neon-cf-demo and accept all the defaults. Enter Continue reading
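
The excerpt is cut off above; purely as an illustrative sketch of what such a Worker might look like (assuming a DATABASE_URL secret holding the Neon connection string, not the post's exact code):

    import { Client } from '@neondatabase/serverless';

    export interface Env { DATABASE_URL: string }  // assumed secret binding

    export default {
      async fetch(request: Request, env: Env): Promise<Response> {
        const client = new Client(env.DATABASE_URL);
        await client.connect();
        const { rows } = await client.query('SELECT now()');
        await client.end();
        return new Response(JSON.stringify(rows[0]), {
          headers: { 'content-type': 'application/json' },
        });
      },
    };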

FlameGraph Htop — Benchmarking CPU — Linux

< MEDIUM: https://raaki-88.medium.com/flamegraph-htop-benchmarking-cpu-linux-e0b8a8bb6a94 >

I have written a small post on what happens at the process level; now let’s throw some flame into it with flame graphs.

I am a fan of Brendan Gregg’s work; his writings and the flame graph tool are his contributions to the open-source community:

https://www.brendangregg.com/flamegraphs.html

Before moving into Flamegraph, let’s understand some Benchmarking concepts.

Benchmarking in general is a methodology to test resource limits and regressions in a controlled environment. There are two types of benchmarking:

  • Micro-Benchmarking — Uses small and artificial workloads
  • Macro-Benchmarking — Simulates client workloads, in part or in full

Most benchmarking results boil down to the price/performance ratio. A benchmarking effort can start as proof-of-concept testing, move on to applying application or system load to identify bottlenecks for troubleshooting or tuning, and extend to finding the maximum stress the system is capable of taking.
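
To make the micro-benchmarking idea above concrete, here is a minimal sketch (not from the original post) that times a small, artificial CPU-bound workload on Node.js; a macro-benchmark would instead drive the real system with a realistic client workload:

    // Minimal micro-benchmark sketch: repeatedly time an artificial CPU-bound loop.
    function workload(): number {
      let sum = 0;
      for (let i = 0; i < 10_000_000; i++) sum += Math.sqrt(i);
      return sum;
    }

    for (let run = 1; run <= 5; run++) {
      const start = process.hrtime.bigint();
      workload();
      const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
      console.log(`run ${run}: ${elapsedMs.toFixed(1)} ms`);
    }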

Enterprise / On-premises Benchmarking: let’s take a simple scenario of building out a data centre with huge racks of networking and computing equipment. As data-centre builds are mostly identical and mirrored, benchmarking before raising the purchase order is critical.

Cloud-based Benchmarking: This is a really inexpensive setup. While Continue reading

BGP Unnumbered Duct Tape

Every time I mention unnumbered BGP sessions in a webinar, someone inevitably asks “and how exactly does that work?” I always replied “gee, that’s a blog post I should write one of these days,” and although some readers might find it long overdue, here it is ;)

We’ll work with a simple two-router lab with two parallel unnumbered links between them. Both devices will be running Cumulus VX 4.4.0 (FRR 8.4.0 container generates almost identical printouts).
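
For context, a minimal FRR configuration sketch (not the lab's actual configuration; the AS number and interface names are placeholders): an unnumbered EBGP session names the interface instead of a neighbor IP address, and the session is established over the interface's IPv6 link-local address:

    router bgp 65001
     neighbor swp1 interface remote-as external
     neighbor swp2 interface remote-as external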

Terminator 1 is the best Terminator movie

And now for something completely different.

I’ve off and on thought about this for years, so it needed to be written down.

Terminator 1 is the best Terminator movie

Obviously SPOILERS, for basically all Terminator movies.

Summary of reasons

  • The robot is really not human.
  • It’s a proper time loop, with a bonus that none of the players in the movie know it.

I’m aware of The Terminator Wiki, but I don’t care about it. My opinions are on the movies as movies.

The behavior of the terminator

In Terminator 1 (T1) Arnold is clearly a robot in human skin. At no point do you believe it’s a human. The only reason people don’t stop and scream and point is that “I’m being silly, that’s clearly impossible”. But Arnold spends the whole movie in the uncanny valley, the kind in 2022 reserved for realistically generated CGI characters.

It’s very nearly a perfect movie. Just take his first dialog. “Nice night for a walk”, the punks say. They are saying this to a machine that has never talked to a human before, so its response is complete nonsense. It just repeats the words back to them.

It’s a Continue reading

HPE launches supercomputers for the enterprise

Supercomputers are super expensive, but Hewlett Packard Enterprise has announced plans to make supercomputing accessible for more enterprises by offering scaled down, more affordable versions of its Cray supercomputers. The new portfolio includes HPE Cray EX and HPE Cray XD supercomputers, which are based on the Frontier exascale supercomputer at Oak Ridge National Labs. These servers come with the full array of hardware, including compute, accelerated compute, interconnect, storage, software, and flexible power and cooling options. To read this article in full, please click here

World’s fastest supercomputer is still Frontier, 2.5X faster than #2

Frontier, which became the first exascale supercomputer in June and ranked number one among the fastest in the world, retained that title in the new TOP500 semiannual list of the world’s fastest. Without any increase in its speed—1.102 EFLOP/s—Frontier still came in 2.5 times faster than the number two finisher, Fugaku, which also came in second in the June rankings. An exascale computer is one that can perform 10^18 (one quintillion) floating point operations per second (1 exaFLOP/s). Despite doubling its maximum speed since it was ranked number three in June, the Lumi supercomputer remained in third place. There was just one new member of the top-ten list, and that was Leonardo, which came in fourth after finishing a distant 150th in the TOP500 rankings in June. To read this article in full, please click here

Network Break 407: VMware Buys Startup For SD-WAN Client; Zoom Meetings At The Movies?

This week's Network Break covers several announcements from VMware Explore, including a new SD-WAN client. ASIC-maker Marvell goes after industrial networks with new silicon, Cisco announces the curtain falling on several ISR router models, and SolarWinds settles with the SEC. Zoom and the AMC movie theater chain partner on an offering to hold big meetings at the movies, and Starlink announces it will slow customer speeds if they cross a 1TB cap.

The post Network Break 407: VMware Buys Startup For SD-WAN Client; Zoom Meetings At The Movies? appeared first on Packet Pushers.