Today's Heavy Networking podcast dives into academic research on DDoS attack techniques. Our guests have published a paper on how TCP and middleboxes such as firewalls can be weaponized by bad actors for reflective amplification attacks. We discuss the technical details, how they performed this research, potential countermeasures, and more.
Enterprises large and small that depend on Hewlett Packard Enterprise to build their systems and certify them for an absolutely enormous amount of software and support them during a long life in the field should send Thank You notes to the venerable systems maker. …
The name of a resource indicates what we seek, an address indicates where it is, and a route tells us how to get there.
You might wonder when that document was written… it’s from January 1978. They got it absolutely right 42 years ago, and we completely messed it up in the meantime with the crazy idea of turning IP addresses into resource identifiers.
Ever since Nutanix, the first virtualized server-storage smashup, dropped out of stealth in 2011, we have been watching with great interest to see if this hyperconverged infrastructure would take the world by storm. …
When it comes to allocating budget for cybersecurity, there are many approaches to breaking it down into line items. We discuss various ideas and possibilities that might offer some insight for your own situation.
My name is Rishabh Bector, and this summer I worked as a software engineering intern on the Cloudflare Tunnel team. One of the things I built was Quick Tunnels, and before departing for the summer, I wanted to write a blog post on how I developed this feature.
Over the years, our engineering team has worked hard to continually improve the underlying architecture through which we serve our Tunnels. However, the core use case has stayed largely the same: users run Tunnel to establish an encrypted connection between their origin server and Cloudflare’s edge.
This connection is initiated by installing a lightweight daemon on your origin, to serve your traffic to the Internet without the need to poke holes in your firewall or create intricate access control lists. Though we’ve always centered around the idea of being a connector to Cloudflare, we’ve also made many enhancements behind the scenes to the way in which our connector operates.
Typically, users run into a few speed bumps before being able to use Cloudflare Tunnel. Before they can create or route a tunnel, users need to authenticate their unique token against a zone on their account. This means in order to simply …
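For context, the whole point of a Quick Tunnel is to skip that authentication step. A minimal sketch, assuming `cloudflared` is installed and something is serving on localhost port 8080 (the port is just an example):

```shell
# Hedged sketch: starting a Quick Tunnel with a single command.
# cloudflared streams logs and prints a randomly generated
# trycloudflare.com URL for the local service.
cmd='cloudflared tunnel --url http://localhost:8080'
if command -v cloudflared >/dev/null 2>&1; then
  $cmd                      # runs until interrupted
else
  echo "would run: $cmd"    # cloudflared is not installed here
fi
```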
Western Digital has announced a new disk-drive architecture that combines flash memory with high-density hard-disk drives, plus a small CPU to manage everything. If this sounds familiar, it is. Several years ago there was an effort by WD and other hard-disk drive (HDD) makers to build hybrid hard drives, with small flash drives acting as a cache for the hard disk, but those efforts failed, said Ravi Pendekanti, senior vice president of HDD product management and marketing at WD.
“There was a huge pitfall in those [drives],” he told me. The drives didn’t know what kind of data they had, so they didn’t know that hot, frequently accessed data should be written to the flash drive, while warm or cold data that wasn’t accessed as much should be written to the disk.
Self-driving cars must possess the ability to recognize road conditions, make decisions and take appropriate action, all in real time. This requires on-board artificial intelligence (AI) that ensures vehicles are able to “learn,” along with super-fast processing power. Tesla unveiled a custom AI chip back in 2019 and soon began manufacturing cars with it. Now Tesla has unveiled a second internally designed semiconductor to power the company’s Dojo supercomputer.
The D1, according to Tesla, delivers 362 teraFLOPS of processing power, meaning it can perform 362 trillion floating-point operations per second.
The Linux set command allows you to change the value of shell options or to display the names and values of shell variables. Rarely used, it is a bash builtin, but it is quite a bit more complicated than most builtins. If you use the command without any arguments, you get a list of all the settings—the names and values of all shell variables and functions. Watch out, though! You’ll end up with a torrent of output flowing down your screen. There are just short of 3,000 lines of output on my Fedora system:
$ set | wc -l
2954
The top of the list looks like what you see below, but the output gets considerably more complicated as you move through it.
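Beyond listing variables, set is mostly used to toggle shell options. A quick sketch of a few common ones in bash:

```shell
#!/usr/bin/env bash
# Common `set` options. A `-` enables an option, a `+` disables it.
set -u            # treat expansion of unset variables as an error
set -o pipefail   # a pipeline fails if any command in it fails
set -x            # trace each command before executing it (noisy)
set +x            # ...and turn tracing back off
# The special parameter "$-" lists the single-letter options in effect:
echo "$-"
```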
The traditional data center is built on a three-tier infrastructure with discrete blocks of compute, storage and network resources allocated to support specific applications. In a hyperconverged infrastructure (HCI), the three tiers are combined into a single building block called a node. Multiple nodes can be clustered together to form a pool of resources that can be managed through a software layer.
Instead of a server with 50 cores, 128GB RAM and 1TB of storage, you can have 500 cores with 1.2TB RAM and 10TB of storage across 10 nodes, presented as a pool of resources to mix and match into services that deliver the specific performance characteristics and back-end resources needed for the job at hand. Configuration can be done on the fly, through an easy-to-access interface that lets you build or scale your solution.
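The pooling arithmetic above is easy to sanity-check with the per-node figures from the example:

```shell
# Back-of-envelope math for the 10-node example above.
nodes=10
echo "cores:   $(( 50  * nodes ))"   # 500 cores
echo "ram_gb:  $(( 128 * nodes ))"   # 1280 GB, i.e. roughly the 1.2TB quoted
echo "disk_tb: $(( 1   * nodes ))"   # 10 TB
```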
What makes his solution even more interesting is the choice of automation tool: instead of using the universal automation hammer (aka Ansible) he used Terraform, a much better choice if you want to automate service provisioning and happen to be using vendors that invested time into writing Terraform providers.
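As a rough sketch of what that workflow looks like in practice (assuming a directory of .tf files referencing a vendor's provider already exists; nothing here is specific to his setup):

```shell
# Hedged sketch of a typical Terraform provisioning run.
if command -v terraform >/dev/null 2>&1; then
  terraform init               # download the providers the config references
  terraform plan -out=tfplan   # compute the change set without touching anything
  terraform apply tfplan       # apply exactly the plan that was reviewed
else
  echo "terraform not installed"
fi
```

The plan/apply split is the main draw for service provisioning: you review the exact change set before any device or API is touched.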