The strange webserver hot potato — sending file descriptors
I’ve previously mentioned my io-uring webserver tarweb. I’ve now added another interesting aspect to it.
As you may or may not be aware, on Linux it’s possible to send a file descriptor from one process to another over a unix domain socket. That’s actually pretty magic if you think about it.
You can also send unix credentials and SELinux security contexts, but that’s a story for another day.
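In case you've never seen it in action: the mechanism is SCM_RIGHTS ancillary data passed with sendmsg() and recvmsg(). Here's a minimal sketch in C, with error handling trimmed for brevity:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send the open file descriptor `fd` across the already-connected
 * unix domain socket `sock`. The receiver ends up with a new
 * descriptor referring to the same open connection. */
int send_fd(int sock, int fd) {
    char dummy = 'x';  /* must send at least one byte of real data */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union {
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    memset(&u, 0, sizeof(u));

    struct msghdr msg = {
        .msg_iov = &iov,
        .msg_iovlen = 1,
        .msg_control = u.buf,
        .msg_controllen = sizeof(u.buf),
    };

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;  /* "the payload is file descriptors" */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

/* Receive a file descriptor from `sock`. Returns the new fd, or -1. */
int recv_fd(int sock) {
    char dummy;
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union {
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    memset(&u, 0, sizeof(u));

    struct msghdr msg = {
        .msg_iov = &iov,
        .msg_iovlen = 1,
        .msg_control = u.buf,
        .msg_controllen = sizeof(u.buf),
    };

    if (recvmsg(sock, &msg, 0) <= 0) return -1;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if (!cmsg || cmsg->cmsg_level != SOL_SOCKET || cmsg->cmsg_type != SCM_RIGHTS)
        return -1;

    int fd;
    memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;
}
```

The kernel duplicates the descriptor into the receiving process, so both ends refer to the same open TCP connection until one of them closes its copy.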
My goal
I want to run some domains using my webserver “tarweb”. But not all. And I want to host them on a single IP address, on the normal HTTPS port 443.
Simple, right? Just use nginx’s proxy_pass?
Ah, but I don’t want nginx to stay in the path. After the SNI (read: “the browser saying which domain it wants”) has been identified, I want the TCP connection to go directly from the browser to the correct backend.
I’m sure somewhere on the internet there’s already an SNI router that does this, but all the ones I found stay inline in the data path, adding a hop.
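Roughly, the hand-off could look like the sketch below. It's only an illustration: parse_sni() and backend_socket_for() are hypothetical helpers, and peeking at the ClientHello with MSG_PEEK (so the backend still sees the complete TLS stream) is just one way to arrange it.

```c
#include <unistd.h>
#include <sys/socket.h>

int send_fd(int sock, int fd);                          /* from the sketch above */
const char *parse_sni(const char *buf, size_t len);     /* hypothetical helper */
int backend_socket_for(const char *server_name);        /* hypothetical: connected
                                                           unix socket to backend */

void route(int listen_fd) {
    for (;;) {
        int client = accept(listen_fd, NULL, NULL);
        if (client < 0) continue;

        /* Look at the ClientHello without consuming it. A real router
         * would loop until the whole ClientHello has arrived. */
        char buf[4096];
        ssize_t n = recv(client, buf, sizeof(buf), MSG_PEEK);
        const char *name = (n > 0) ? parse_sni(buf, (size_t)n) : NULL;

        int backend = name ? backend_socket_for(name) : -1;
        if (backend >= 0) {
            send_fd(backend, client);  /* hand the hot potato to the backend */
        }
        /* Drop the router's copy; the backend's duplicate (if any)
         * keeps the TCP connection to the browser alive. */
        close(client);
    }
}
```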
Why?
A few reasons:
- Having all bytes bounce via the SNI router triples the total number of file descriptors for the connection (one on the backend, and two on the SNI router).
