Say (an encrypted) hello to a more private internet.
https://blog.mozilla.org/en/products/firefox/encrypted-hello/
Nothing big, but kinda interesting. I’m excited to see how this will go 👀
It is kinda big. Previously you had to send the host name unencrypted to support SNI, which in turn was needed to serve HTTPS for multiple sites on one IP address, which in turn was needed because we’re running out of IP addresses. So there were basically two options: compromise privacy a tiny bit (by sending the host name unencrypted), or make it impossible for most websites to have any privacy at all (by making it impossible for them to have an HTTPS certificate).
Now you can have the best of both worlds. Granted, you need to have DoH (which still isn’t the default on most systems AFAIK), but it’s still a step in the right direction.
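For anyone curious where exactly the host name leaks without ECH, here’s a minimal sketch using Python’s standard ssl module (example.com and port 443 are just placeholders): the server_hostname you pass ends up as the plaintext SNI extension of the ClientHello, sent before any encryption is negotiated, and ECH is what finally wraps that part.

```python
# Minimal sketch of where the host name leaks in a classic (non-ECH) TLS
# handshake. Python's ssl module copies server_hostname into the SNI
# extension of the ClientHello, which goes out in plaintext before any
# encryption is negotiated. example.com / 443 are just placeholders.
import socket
import ssl

HOST = "example.com"
PORT = 443

context = ssl.create_default_context()

with socket.create_connection((HOST, PORT)) as raw_sock:
    # server_hostname does double duty: certificate validation and SNI.
    # Without ECH, this exact string is visible to anyone on the path.
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated", tls_sock.version())
        print("certificate subject:", tls_sock.getpeercert()["subject"])
```

Capture that with Wireshark and the name shows up in the very first TLS packet (the display filter is something like tls.handshake.extensions_server_name).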
@rikudou @voxel
AFAIR it used to be even worse than that, because if you didn’t want SNI (for compatibility reasons or whatever) but still wanted a certificate, you had to have one server per hostname (because each needed its own IP), assuming you could afford the additional IP space. Granted, you didn’t need a physical server, but it was still a bigger cost.
Some servers are more flexible on that front, but early SNI-era setups didn’t have that.
Yeah, I thought I implied that, but that was the reason SNI started: IPv4 is a scarce resource and thus expensive, and the only way to host multiple HTTPS websites was having multiple IPs (not necessarily multiple servers; you can easily have multiple IPs on one server, you just had to bind one IP per host), which added to the costs quite a bit. Hobby projects couldn’t really afford it (well, they could, but not many people are spending hundreds of dollars on a hobby website).
@rikudou
Yes, but binding one IP per host is what some web servers can’t do; they bind globally and forward accordingly. Unless I missed it, that’s always a possibility.
E.g. httpd can’t do it (at least back then) while nginx can.
Which translates to reconfiguration of the entire infra to replace one server with another, and that’s also a cost
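To make the “one IP, many certificates” part concrete, here’s a rough sketch of the server side of SNI using Python’s standard ssl module. The host names and certificate paths are made up, and real setups would of course just use nginx/Apache virtual host configs, but the sni_callback below is the same mechanism underneath.

```python
# Rough sketch of SNI-based virtual hosting: one IP, one listening socket,
# and the certificate is picked per requested host name during the
# handshake. Host names and certificate paths below are placeholders.
import socket
import ssl

def make_context(certfile, keyfile):
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)
    return ctx

# One certificate/context per hosted site, all behind the same IP address.
contexts = {
    "alpha.example": make_context("alpha.crt", "alpha.key"),
    "beta.example": make_context("beta.crt", "beta.key"),
}
default_ctx = contexts["alpha.example"]

def choose_certificate(ssl_socket, server_name, original_context):
    # server_name is the SNI value from the ClientHello (plaintext pre-ECH,
    # or None if the client sent no SNI at all).
    ssl_socket.context = contexts.get(server_name, default_ctx)

default_ctx.sni_callback = choose_certificate

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with default_ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()  # handshake runs here; callback fires
        print("TLS connection from", addr)
        conn.close()
```

The certificate choice happens during the handshake based on the requested name, which is exactly why the name had to travel in the clear before ECH: the server needs it before it can finish setting up encryption.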
It’s happening as part of the handshake. Probably not completely what it’s about, but it was the first thing that came to my mind.
Edit: It has to happen before the encryption is established, because otherwise the server doesn’t know which certificate to use, since it doesn’t know which host the client is requesting. There’s also ESNI (encrypted SNI) to solve this, but I’m not sure how many servers actually deploy it.
This is actually pretty cool. More encrypted things are always the answer.
I don’t get it… How does this protect anything? If we want our packets to reach a web server, we need to write the server’s IP address on them. If a snooper has the IP, can’t they just look up the domain name from a DNS server? Or is that not a service DNS provides?
If the IP address is encrypted, how will the routers know where to send the packets? Only solution I can think of would be onion routing… Am I wrong??
An IP address is no longer associated with just one website/domain name. There could be thousands of websites running on a single IP address.
As is, anyone can currently look at your encrypted traffic and see in plain text which site you’re surfing to. So this proposal is long overdue.
A government will still subpoena the destination IP for the information if they want it.
ECH protects against warrantless monitoring and other non-government bad actors and I’m happy to see it implemented. If there hasn’t been a strong enough privacy argument to use Firefox for someone to date, this is a big one.
A government will still subpoena the destination IP for the information if they want it.
Sure, but they probably won’t do that every day, so for the general public this is an improvement.
And also they can’t get the info if the logs are deleted
They can’t get info that has been deleted yes, but I think it might be possible to coerce the company into starting to collect logs, legally or not.
Absolutely!
somebody wiresharking your traffic can see the domain name you’re contacting even if you use https; this solves that.
reverse DNS lookup does exist, but it’s not always accurate, especially when multiple websites are hosted on the same server (which is more common than you think)
Is it because of the “Host” HTTP header? I always thought it was optional, since the IP address and port were handled by the network and transport layers respectively. Turns out it’s required to distinguish between different virtual hosts on the same server (see the sketch below). Today I Remembered (TIR?) that virtual hosts are a thing…
Is there anything else that might indicate the domain name in the handshake connection?
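On the Host header point above, a quick sketch (standard-library Python, placeholder IP and host names) of name-based virtual hosting at the HTTP layer: two requests to the same address that differ only in the Host header. Over HTTPS, SNI plays the analogous role one layer lower, during the handshake, which is what the reply below gets into.

```python
# Two plain-HTTP requests to the same address, differing only in the Host
# header: that header is what name-based virtual hosting dispatches on.
# Over HTTPS, SNI plays the analogous role one layer lower, during the
# handshake. The IP and host names here are placeholders.
import http.client

SERVER_IP = "203.0.113.10"  # one IP, several sites behind it

for vhost in ("alpha.example", "beta.example"):
    conn = http.client.HTTPConnection(SERVER_IP, 80, timeout=5)
    conn.putrequest("GET", "/", skip_host=True)  # set Host ourselves below
    conn.putheader("Host", vhost)
    conn.endheaders()
    response = conn.getresponse()
    print(vhost, "->", response.status, response.reason)
    conn.close()
```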
Is there anything else that might indicate the domain name in the handshake connection?
The SNI (Server Name Indication) happens before any HTTP communication and is done in plain text. It is needed because a single web server might host multiple websites; since each of them has its own certificate, the server needs to know which one to serve you.
With the new proposal that SNI is now encrypted. It makes the difference between anyone listening in being able to tell “you visited lemmy.world” and “you visited something behind Cloudflare”.
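To make “anyone listening in” concrete: the SNI name can be read straight out of the first TLS packet with no decryption at all. A rough sketch of such a parser in Python (it assumes the whole ClientHello arrived in a single record, which is the common case, and skips error handling):

```python
# Sketch of what a passive observer can do today: read the SNI host name
# straight out of the first packet of a TLS connection, no decryption
# needed. Assumes the whole ClientHello arrived in one record and skips
# error handling. ECH encrypts exactly this field.
import struct

def extract_sni(client_hello: bytes):
    """Return the server_name from a raw TLS ClientHello record, or None."""
    if len(client_hello) < 5 or client_hello[0] != 0x16:  # 0x16 = handshake record
        return None
    pos = 5                                   # skip record header (type, version, length)
    if client_hello[pos] != 0x01:             # 0x01 = ClientHello
        return None
    pos += 4                                  # handshake type + 3-byte length
    pos += 2 + 32                             # client version + random
    pos += 1 + client_hello[pos]              # session id
    (cipher_len,) = struct.unpack_from("!H", client_hello, pos)
    pos += 2 + cipher_len                     # cipher suites
    pos += 1 + client_hello[pos]              # compression methods
    (ext_total,) = struct.unpack_from("!H", client_hello, pos)
    pos += 2
    end = pos + ext_total
    while pos + 4 <= end:
        ext_type, ext_len = struct.unpack_from("!HH", client_hello, pos)
        pos += 4
        if ext_type == 0x0000:                # 0 = server_name extension
            # extension data: list length (2) + name type (1) + name length (2) + name
            (name_len,) = struct.unpack_from("!H", client_hello, pos + 3)
            return client_hello[pos + 5 : pos + 5 + name_len].decode("ascii")
        pos += ext_len
    return None
```

Feed it the first bytes captured from any port-443 connection (tcpdump, a mirror port, a transparent proxy) and the host name falls out; with ECH, the outer ClientHello only carries the name of the fronting provider.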
Technically DNS will let you look up a host name from an IP address, but the catch is that it might not work: it’s not automatically configured. And even if it is configured you might not get all of the host names pointing at that address.
Very many webserver operators don’t bother adding the server’s host name to reverse DNS. For example, lemmy.world’s IP address does not map to any host name in reverse DNS, and google.com’s IP address maps to some completely different name for me, with no mention of Google in the returned name.
Also, many websites can be served from the same IP address, especially if they are hosted in the cloud. You are correct that someone snooping on the connection would still see the IP address, but if that points them at something like a webhosting company or a CDN (or some other server hosting many different sites) it still doesn’t really tell them which specific site is being accessed.
But yes, if the site you’re accessing is the only one hosted on that server then the snoop could potentially guess the host name. But even then: how would they know that’s the only site hosted there? If some site they’ve never even heard of uses the same IP address they would never know.
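If you want to see that for yourself, a few lines of Python show how little reverse DNS gives back (using the same two names mentioned above):

```python
# Quick check of how little reverse DNS gives back: forward-resolve a name,
# then ask what its IP maps back to. Often there is no PTR record at all,
# or it points at some generic hosting/CDN name.
import socket

for site in ("lemmy.world", "google.com"):
    ip = socket.gethostbyname(site)
    try:
        reverse_name, _aliases, _addrs = socket.gethostbyaddr(ip)
    except socket.herror:
        reverse_name = "(no PTR record)"
    print(f"{site:>12} -> {ip:<16} -> {reverse_name}")
```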
You need to read how SNI works, then it will make sense
So if I understand it correctly, with this scheme the traffic between my computer and the DNS server is encrypted and so my internet provider won’t know which websites I’m visiting?
DNS over HTTPS or TLS has been available for years, but it hasn’t been adopted widely because the hello at the beginning of the TLS handshake when connecting to a website ratted you out to your ISP anyway.
While this is good for circumventing surveillance… it is looking like the beginning of the end of DNS filtering and the popularization of encrypted telemetry.
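For reference, a DoH lookup is roughly this simple; the sketch below uses Google’s public JSON endpoint (https://dns.google/resolve) with only the standard library, so the DNS question rides inside an ordinary HTTPS request and an on-path observer only sees a connection to the resolver:

```python
# Minimal DNS-over-HTTPS lookup using Google's public JSON endpoint
# (https://dns.google/resolve), standard library only. The DNS question
# rides inside an ordinary HTTPS request, so an on-path observer only sees
# a connection to the resolver, not which name was asked for.
import json
import urllib.parse
import urllib.request

def doh_lookup(name, record_type="A"):
    query = urllib.parse.urlencode({"name": name, "type": record_type})
    with urllib.request.urlopen(f"https://dns.google/resolve?{query}", timeout=5) as resp:
        answer = json.load(resp)
    return [record["data"] for record in answer.get("Answer", [])]

print(doh_lookup("lemmy.world"))
```

Which is exactly why it only solved half the problem until now: the SNI in the TLS connection you open right afterwards still named the site.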
You can always set up a MITM on your network. But yeah, DNS filtering is doomed in the not so far future.
Does this preclude on-device DNS filters?
I think it doesn’t, though I’m not really a network guy.
I read through it; to me, it seems like an on-device Pi-hole etc. would still be fine. But I am not a network guy either
Pi-hole might be a different story than your local device; I think that one might be affected.
That’s an option, but it’s a lot of work, and all you get in return is broken apps/websites and not being able to tell if someone is MITM-ing you.
I’m sure some engineer out there is going to find a workaround, hopefully without breaking encryption.
Never had a broken website due to my own MITM.
You can do filtering and monitoring in the DNS server itself in a corpo environment, like Umbrella or AD DNS.
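For what it’s worth, the filtering decision itself is simple enough to sketch; the blocklist entries below are placeholders, and a real resolver does roughly this per incoming query before forwarding upstream. The catch from the thread above is that it only works as long as clients actually send their DNS queries to that resolver instead of to their own DoH endpoint.

```python
# Toy sketch of the decision a filtering resolver (Pi-hole, Umbrella, an AD
# DNS policy, ...) makes per query: if the name or a parent domain is on
# the blocklist, refuse to resolve it; otherwise pass it upstream.
# Blocklist entries are placeholders.
import socket

BLOCKLIST = {"ads.example", "tracker.example"}

def resolve_filtered(name):
    labels = name.split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return None  # a real filter answers 0.0.0.0 or NXDOMAIN instead
    return socket.gethostbyname(name)  # pass through to the normal resolver

print(resolve_filtered("sub.ads.example"))  # blocked -> None
print(resolve_filtered("example.com"))      # allowed -> an IP address
```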
Does anyone know how to set this up on an nginx proxy?
Do web servers support it tho?