It's kinda sad how much of the Internet design comes down to "this is why we can't have nice things".
Pretty much everything that can be abused, is. Third party cookies, DNS interception by ISPs, monitoring (and selling) your Web activity and so on. You can really see the Internet was designed for simpler, more innocent times.
Take the trash fire that is IPv6 with its stupidly large address space (ie 128 bits). Not 48 or even 64 bits. Why? To give every network a /64 of host space. Why? Because MAC addresses were 48 bits and the IPv6 designers thought they'd "solve" internal network addressing by just embedding the MAC address in the 64-bit host part.
Obviously we don't do that because it's a privacy issue.
So it's inevitable that DNS goes the SSL route. It's sad that it comes to that but here we are.
I really wish IP had a guaranteed connectionless delivery option (ie TCP without the connection handshake). Yeah I know there are hacks to kind of achieve this.
Likewise, a lot of CDNs and big sites use extremely short TTLs for load balancing and other reasons. DNS lookup can degrade application performance significantly, to the point that people have done research on the speed-up from keeping a warm DNS cache. Adding SSL to the mix isn't going to improve things.
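To make the warm-cache point concrete, here's a minimal sketch of a TTL-respecting cache (illustrative only, not any real resolver's implementation): with a 5-second CDN TTL, every entry goes stale almost immediately and you pay a full lookup again.

```python
import time

class DnsCache:
    """Minimal TTL-respecting cache sketch; names and structure are
    illustrative, not taken from any particular resolver."""

    def __init__(self):
        self._entries = {}  # name -> (address, expiry timestamp)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None  # cache miss: caller must pay a full lookup
        address, expiry = entry
        if time.monotonic() >= expiry:
            del self._entries[name]  # TTL expired, treat as a miss
            return None
        return address

    def put(self, name, address, ttl):
        self._entries[name] = (address, time.monotonic() + ttl)
```

With a short TTL, the second lookup after expiry is a miss again, which is exactly why short-TTL load balancing fights against caching.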
Beneath IP are things that are hardware, or very close to it.
Hardware physically pushes bits on a medium and gets them off. Hardware is the only layer that can even try to provide any guarantees about those bits, since it's the one physically handling them. Anything above isn't "real" until that point.
But it can't--after all, power may go out, your cat may chew on the Ethernet cable, your ISP cuts you off because your bill is unpaid, etc.
Kinda like the same way your filesystem really wants to treat the underlying storage devices as reliable, but does so at its peril because when a drive wants to die, it's going to die and you're not going to stop it. So you put layers between the high-level filesystem and low-level devices to abstract it.
IP is an internetworking protocol that enables routing, and routing is where "RAID for the Internet" is supposed to live. Ideally there is one global Internet, not millions of balkanized mini-Internets behind NATs. A robust network would have multiple working routes out, known to all hosts on that network and easily communicated to adjacent networks, so your path to a given IP could take any one of many. That is what actually happens between service-provider networks.
DNS does suck, we could go back to passing hosts files around. Essentially that's what's happening in reverse with ad-blocklists.
Separating IP and the transport layer (initially TCP but now also UDP, SCTP, QUIC etc) was a really smart move that’s enabled lots of innovation.
Any version of IP is better without a “guaranteed delivery option”.
Your point about IPv6 feels rather random. A smaller address space may have sufficed, but its flaws are elsewhere (many changes in how it operates versus IPv4, rather than simply a larger address field).
The use of TLS should have no effect on how “hot” resolver caches are.
IPv6 has 64-bit host addresses, so you can do whatever the hell you like. SLAAC EUI-64 (the MAC-based scheme you describe) is one option. So are temporary addresses (RFC 4941), where you just assign yourself a new random address every ten minutes. A quite reasonable use case, I think.
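For anyone curious what EUI-64 actually does, here's a small sketch of the derivation (per RFC 4291 appendix A): split the 48-bit MAC in half, insert ff:fe in the middle, and flip the universal/local bit.

```python
def mac_to_eui64_suffix(mac: str) -> str:
    """Derive the 64-bit SLAAC EUI-64 interface identifier from a
    48-bit MAC: insert ff:fe in the middle, flip the U/L bit.
    e.g. 00:25:96:12:34:56 -> 0225:96ff:fe12:3456"""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                          # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Group into four 16-bit hextets, as they appear in an IPv6 address
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:04x}" for i in range(0, 8, 2))
```

The privacy problem is obvious from the code: the MAC is recoverable from the address, so the same suffix follows the device across every network it joins.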
I don't like the "simpler, more innocent" phrasing, at least if we read "innocent" as "naive". It obscures that not spending resources on attacks, and on the perpetual defenses that would render those attacks useless, makes the system multi-agent optimal, as long as the actors don't have extreme future discounting, i.e. as long as they don't try to maximize short-term gains at the expense of making the system less valuable for everyone.
And the protocols started out in such an environment where it wasn't really "innocent" to not waste huge amounts of engineering on defenses, it was the sensible thing to do.
It all comes back to data collection. All of it. No market, means no motive.
If absolutely all collection of PII were made illegal, or perhaps even all sale of any data based upon it, be it anonymized or not, things would rapidly change.
Even better to make illegal all targeted marketing, based upon stored data or info about a user.
Adtech would still exist. If you visit a reddit about showers, show shower ads. You don't need to know a single thing, except the page currently visited is about showers... so show shower ads.
All of it, massive issues with our democracies, with spying, with tracking, with price fixing, all of it comes back to, what is now, a purely evil industry. User tracking.
Take the profit away from all collectable data about a user, and watch things improve fast.
The GDPR makes processing of PII illegal by default and then adds clearly defined exceptions. The problem IMHO is not the exceptions but the enforcement gap that persists.
I see a huge risk in centralising DNS resolvers. Being able to set a default DoH resolver for all your devices adds another layer of tracking, after centralised WiFi location resolvers. Like browser choice, resolver choice needs to be presented to the user at setup. We need competition on trust rather than on convenience.
Why is it bad to give users 64 bits of address space?
Today's Ethernet/WiFi networks don't use the space very effectively, but IP will be with us for a very long time, and those bits are reserved for future innovation at the network edge.
IPv6 has a lot of problems. Primarily it solves non-problems and doesn't solve actual problems (other than address space). The actual problems with IPv4 are:
1. Lack of sufficient addresses; and
2. Roaming.
It's actually a step backwards for IP portability too. This too is another example of solving a non-problem (ie routing of random address blocks).
The user /64 space was sized specifically to be large enough to fit a MAC address, which we never ended up using. Ports are somehow deemed bad too.
So with IPv6 we haven't gotten away from the need for NATs; we have a ridiculously large address space and then waste half of those bits.
That's not my take at all. I have a /60 from my ISP, and have 5 ports on my router, each getting its own /64.
Hosts get a stateless IPv6 address, no running out of leases, no DHCP service that's painful to make redundant, no expired leases when the DHCP server is down, etc. I consider that a win.
Also with so many IPs, each client can grab a few, and rotate them, so facebook doesn't see the same IP as twitter, etc. Nor can you assume that connecting to facebook today will be from the same IP as facebook tomorrow.
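Rotating through random addresses inside a /64 is trivial to sketch; this is roughly what RFC 4941 temporary addresses do (real implementations also track preferred/valid lifetimes, which this toy version skips):

```python
import ipaddress
import secrets

def random_address_in(prefix: str) -> ipaddress.IPv6Address:
    """Pick a uniformly random host address inside a prefix, roughly
    what RFC 4941 temporary addresses do. Toy sketch: no lifetime
    tracking or duplicate-address detection."""
    net = ipaddress.IPv6Network(prefix)
    suffix = secrets.randbits(128 - net.prefixlen)  # 64 random bits for a /64
    return net.network_address + suffix

# Example: grab a fresh address from a documentation prefix
addr = random_address_in("2001:db8:1234:5678::/64")
```

With 2^64 candidates per /64, collisions are a non-issue, and each new connection can come from a different address.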
So your MAC-derived IP never needs to be shared with the internet, but it can be used to accept incoming connections, for those that know the IP or can look it up in DNS. For those that really want DHCP, you can have that as well with DHCPv6.
IPv6 also removes the need for a NAT, all my /60 of IP addresses are visible to the internet, nothing gross like the various ugly hacks for connection forwarding, tunneling, and various weird unreliable technologies trying to get past consumer NATs.
IPv6 seems so much saner than IPv4. You can autoconfigure IPs and set up a firewall for any restrictions you want to make. I can share files, gasp, without Dropbox, share photos without a photo hosting site, remotely open/close/check my garage door without depending on some few-$-a-month cloud service, remotely view my home security cams, etc.
Generally I think of the normal IPv4 provided single IP+NAT as a crippled consumer only connection that forces the use of someone else's service/cloud for even the most simple of things.
Right, and now your isp knows how many devices you have, what endpoints each of them connect to and at what time. This is what kills me with v6. Privacy is already a large enough issue, I have no interest in letting my ISP "see" my internal network and egress traffic flows from each device.
> Right, and now your isp knows how many devices you have
It sounds crazy today, but this is how it used to be with IPv4 back in the day.
For instance at my first job, every employee's computer (Sun SPARCstations) and every server in the building was directly on the internet with a routable IP address. No firewalls in sight, no NAT.
sendmail ran on every box, so incoming email was handled directly by your own workstation, from anywhere in the internet. So my email address was name@myhostname.company.com (we did have aliases and forwarding so name@company.com also worked).
I ran FTP server and mailing lists and (slightly later) web sites from my workstation. This was peak decentralized internet and a wonderful time.
No, they don't. Sure, in a given period of time they know how many unique IPv6 addresses are in use, but they can't tell how many devices that is.
I'd much rather have 2^68 IPs that I can sprinkle over as many devices as I want than a single IPv4 that makes p2p hard, makes services hard, and requires the use of either NAT or tons of port forwarding.
Some of the IPv6 changes are good, many of them are not so amazing.
I think one can say in general that if we had stuck to only increasing the address field size, and not changed a bunch of other things between IPv4 and IPv6, we could have gotten there a lot quicker and would be mostly through the transition by now.
It's good because it enables future innovation at the network edge. I don't know what that will look like, but it's better to have too much freedom than too little.
LISP looks similar to what can be achieved with zero tier or tailscale today: with an overlay network you effectively separate the identity from the location and you can move your physical machine around while maintaining all the connections with the "internal" addresses intact.
Does LISP allow you to achieve the same goal but in the open internet?
That's my understanding, yes. LISP would allow any ISP to perform tunnel encap/decap on their own routers, so traffic doesn't need to traverse a third-party network.
MAC addresses are an Ethernet thing, but Ethernet is not the only LAN protocol in existence. The way I heard it, IPv6 reserves 64 bits for local addresses because some non-Ethernet LAN protocols use 64-bit addresses instead of 48.
> John D. Day (from Kinmundy, Illinois, born 1947) is an electrical engineer, an Internet pioneer, and a historian. He has been involved in the development of the communication protocols of Internet and its predecessor ARPANET since the 1970s, and he was also active in the design of the OSI reference model. He has contributed in the research and development of network management systems, distributed databases, supercomputing, and operating systems.
Also, I suppose that in 10 years, there'll be virtually no unencrypted traffic above the IP level, and the outer IP level will only be used to help run various encrypted channels, overlay networks, VPNs, etc, which usually have their own, unrelated IP routing inside.
DNSResolver was made a modular system component in Android 10 technically, which is why DNS-over-HTTP/3 will also be supported on "some Android 10 devices which adopted Google Play system updates early." Although DNS Resolver was one of the original 13 Project Mainline modules introduced in Android 10, it was optional to implement. It was made mandatory for devices upgrading to or launching with Android 11, however.
Go to "Private DNS" settings under Network & Internet (on Pixel, could be called something else on other OEM devices), select "Private DNS provider hostname", and enter 'dns.google' or 'cloudflare-dns.com' for Google DNS and Cloudflare DNS respectively. Android will add the https:// and /dns-query parts of the URL for you. And yes those two providers are hardcoded right now: https://cs.android.com/android/platform/superproject/+/maste...
If it doesn't work for you, then DoH support may not have rolled out or be enabled. To check, run this shell command:
cmd device_config get netd_native doh
If it returns '1', then DoH support is enabled. It's enabled by default for Android 13 devices, but for Android 11-12, DNSResolver checks this flag. If it's '0', then you can try running:
They already could. DoT and DoH have been part of Android for years now. If you block too many domains, some Android devices might classify your DNS resolver as broken and fall back to Google's (secure) DNS or anything else the manufacturer has specified in their fork.
In fact, I've analyzed several apps that used some kind of binary/text monstrosity over HTTP (not even HTTPS) to resolve IP addresses, usually Chinese manufacturer bloatware. They've been doing this crap for years! When I looked at the apps, the resolver IP had seemingly even been hard-coded into the Java code.
If you think your network is leak-free just because you've blocked port 53, you've either been missing leaks or hadn't had any kind of software actually try to evade your blocks. DoH doesn't add anything that wasn't already possible.
If you want control over your network, block all outgoing traffic and force every device to go through an intercepting HTTPS proxy and apply filtering heuristics like "this looks double encrypted" or "this looks like an IP address". It's practically impossible to do these days because we've lost control over the devices we've bought, though.
It should be noted that there's no sign of this feature being enabled across all Android devices automatically. For most devices, it's an opt-in feature you can toggle in the settings and broken DNS servers won't trigger a fallback. In other words: it doesn't change a thing about your situation, though your situation may not be what you expect it to be.
I doubt that just using NextDNS helps with that. Couldn't they just hardcode requests to, say, 8.8.8.8 or an unknown address to resolve their DoH domains?
Yes they can. You need to additionally run a firewall on your mobile device (and/or on your home network) and block all of the common DNS IPs. Then only NextDNS or your DNS of choice is available (and encrypted).
Google just wants you to use them for DNS so they can still see where you are going :-)
The current implementation uses HTTPS (or DoT in some cases, though that's easily blockable just based on port number if you want to). Blocking QUIC will just make it fall back to a slower method that's just as effective.
DNS-over-HTTP/2 was a step back in efficiency, but with HTTP/3 (QUIC) things finally fall into place. We have UDP back, and it's with proper encryption built-in.
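Worth noting that whichever HTTP version carries it, DoH moves the same binary DNS message (content type application/dns-message, per RFC 8484). A sketch of building that payload by hand, using only the RFC 1035 wire format:

```python
import struct

def build_dns_query(name: str, qtype: int = 1, txid: int = 0) -> bytes:
    """Build a minimal DNS query in RFC 1035 wire format. This is the
    exact payload a DoH client POSTs as application/dns-message.
    qtype 1 = A record; txid is the transaction ID."""
    # Header: ID, flags (RD=1), QDCOUNT=1, AN/NS/AR counts = 0
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii") for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QCLASS 1 = IN
```

The transport changed from raw UDP to HTTP/2 streams to QUIC, but this 29-byte message (for example.com) is the same throughout.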
DNS has known privacy issues, but this solution just (conveniently for them) shifts trust and data from the network operator to Google DNS or another central provider.
Again, why? What bad thing could Comcast do to you? I know they're like a regional semi-monopoly, but they don't control anything else in your internet life.
Who knows if some 'suspicious' DNS queries sent to Google will accidentally set off some tripwire that causes them to lock you out of half the internet: https://news.ycombinator.com/item?id=30771057
Shudder, sure. But why not run your own DNS? Pick a desktop, Pi, server or similar and run a full resolver like Unbound. You'll never have issues because your ISP's DNS is slow or dead, and you'll be talking to the root servers directly.
That's the great thing about this feature, you can! Set up a DoH server, configure it on your phone, and you'll have your personal, encrypted-in-transit DNS server that you can use from anywhere!
By leveraging Oblivious DoH you can even encrypt your DNS traffic securely to your upstream DNS provider without having to set up your own recursive resolver (which would only lead to privacy issues).
Depends on your setup. You can tunnel your DNS through a Wireguard link to some cloud server if you want or you can use something like ngrok to expose the port publicly.
Every hop adds latency, though, so I'd recommend using as direct a connection as you can get. DNS latency can make your internet experience a real pain!
I've worked it out myself. There's a Rust crate called doh-proxy that ships a binary which is essentially a DoH front-end for a DNS server you specify: set it up right and it opens a DoH server on a port of your choosing. Kind of a pain to debug, but once it works, it works pretty well. It can take a TLS cert, or work without TLS behind a reverse proxy.
DNSSEC protects against that for well-configured domains, though you can't assume people put in the effort.
You can use ODoH (https://blog.cloudflare.com/oblivious-dns/) to double-encrypt your DNS requests and forward them through an external server, disconnecting your query from your response, and encrypting your upstream DNS requests. You can pick any relay from this list: https://download.dnscrypt.info/dnscrypt-resolvers/v3/odoh-re... (need to de-base64 them to get the actual domain) and any upstream DOH server you prefer.
> DNSSEC protects against that for well-configured domains
This isn't effective against DNS-level censorship, though. A DNSSEC validation error is just as effective as a fake NXDOMAIN or bogus IP at keeping me from visiting the correct site.
It works in the sense that at least you can know your ISP is messing with your DNS. If they mess with DNS, they might as well just block an IP (range), so a DNS alternative probably won't bypass most censorship. You're better off with a decent VPN at that point.
I temporarily agree with this, but once TLS ECH gets widely deployed then I won't. I can see an ISP blocking a single domain, but not all of Cloudflare just because it's hosted there.
I'm still trying to add http3 to my DNS server and can't test DoH. But based on the other comments, it looks like a custom DNS server will use DoH if supported?
I haven't looked into DNS-over-HTTP/3 (DoH), but there's a secure-ish DNS mechanism already: DNS over Distributed TLS (DoT) [1].
The main criticism of DoT is that it's hard to analyze for cybersecurity purposes. Except DoT is not encrypted end-to-end but only hop-to-hop, which means you can put a proxy checking for cybersecurity threats in front of your DoT traffic into or out of your network if you need to. DoT can also be peer-to-peer, which has a lot of benefits. DoH is client-server.
DoH has all the problems that come with HTTP(S) and seems a good way to break one of the most reliable systems of the Internet, in the name of convenience.
No, the main criticism of DoT is that it's trivial for network operators to block, because it runs on its own port. The entire reason DoH exists is to get past middleboxes.
DoT is simply DNS-over-TLS (never heard of Distributed TLS).
The article linked by this post explains that Android supported DoT since Android 9, but it was problematic because it required frequent TCP handshakes and TLS handshakes. Plus, it could be easily blocked by middle boxes.
Ah, thank you. Then DoT is not really what I meant. I was thinking of Datagram TLS, aka DTLS, which is the TLS security mechanisms adapted for use over UDP [1].
I like DoH a lot. Previously, if you were on a cellular network you could not change your DNS resolver. Now I have the option to change it both from Google Chrome and from Android settings. My ISP is no longer able to log my DNS queries. And my ISP is no longer able to block "illegal" web sites like Wikipedia and YouTube, at least at the DNS level.
The ISP can indeed block arbitrary IP addresses. But besides the risk of overblocking (and the consequent negative customer experience: oops, we forgot WikiMedia's controversial project X sits on the same servers as their famous encyclopedia, so when we blocked the ~1 in a million customers looking at project X we also broke Wikipedia for all our customers), this is a serious administrative hassle. ISPs are for-profit entities, so "we can spend $$$ doing this" is only justified if you can show it increases revenue by more than $$$.
You can't see "the presented certificate" on a modern web browser visiting most web sites. In TLS 1.3 (offered by > 55% of 135,000 surveyed web sites) every step after Client Hello is encrypted, so the certificate isn't available to snoops.
For now, you can see the SNI in that client Hello, telling the server who they wanted to talk to. However ECH (Encrypted Client Hello) is intended to get rid of that too.
However, the SNI (prior to ECH) just tells us who the Client said they wanted to talk to when connecting.
Suppose the client calls 10.20.30.40, and they announce they want to talk to legit.example which is fine. The server sends a certificate (which the ISP doesn't see) and that certificate says it's valid for legit.example and for naughty.example. Now the client is allowed, in HTTP/2 and HTTP/3 to say "Actually I want https://naughty.example/stuff" and although the server isn't obligated to have that answer because the client said it originally wanted to talk to legit.example not naughty.example it often can answer and will. The client has a certificate showing this server is entitled to answer this question, and now it has an answer, so it's done. [If the HTTPS server can't answer or doesn't want to for any reason, the HTTP error code for this scenario, where somebody asked you about a name for which you have a certificate but aren't actually able to answer questions, is 421 Misdirected].
I remember one case where Russia was blocking a web site's IP address automatically. Whenever the owner of the site changed his site's IP address, it was blocked within minutes. Then he changed it to, if I remember correctly, a bank's IP address. And as you've guessed, the bank got blocked :D
Yes, but that is computationally expensive at the ISP level. Turning on deep packet inspection for one user is way overkill. Doing it for everyone? Congrats, you've just completely trashed your core. Performance tanks. You lose customers.
It is operationally expensive as well. Tracking down users on a case-by-case basis costs a lot of real time, as well as human work hours.
It is almost never worth it for an ISP to do this type of inspection -- ie, 'just figure it out' -- unless they are being compelled to.
Apparently it is not possible at the moment to specify a custom suffix to complete the URL of a DoH server (for example, with an AdGuard server, I cannot append a client identifier such as dns.contoso.com/dns-query/my-client, while dns.contoso.com works fine).
Will stick to DoT for the moment :)
EDIT : ok so only 'dns.google' or 'cloudflare-dns.com' are supported right now, other domains are still using DoT. Pretty useless feature then :(
The problem is that the settings UI only allows typing a hostname, not a full URL.
The Settings app is heavily customized by Android vendors and can't be updated on all devices in lockstep with the DnsResolver module. Hopefully it will be fixed starting from Android 13.
They said they implemented the query engine in tokio, a popular async runtime in Rust, but quiche is blocking AFAIK. I wonder how they integrated the two.
No, quiche is agnostic to the event framework used by the application: you have to handle socket I/O externally and hand the data you receive to the quiche connection objects which keep track of protocol state and tell you what to send back.
I don't think we should shun secure protocols because they make it harder for us to "hack" devices that we don't control.
Instead I think we should embrace the secure option and use it as a wake up call that it isn't worth saving $50 for a Smart TV that shows ads and we should choose devices that respect us so that we can be secure and not be abused.
You make it sound like the only resolver available is Google's, in which case there might be some merit to your comment. In reality you can pick from many resolvers to fit one's preferences, making your point moot.
Even if the default resolver is Google's: they could've easily set 8.8.8.8 as the default for all Android devices too, so that's an orthogonal concern.
Ew, you give your TV an IP? DNS isn't magic; I've seen numerous devices try DNS, then fall back to IPs. I found out my TV fingerprints what I watch and uploads that to a central server for analysis, even when I'm watching local content. That was the end of my TV's access to the internet. I bought a Roku, and the Netflix, YouTube, and Amazon Prime clients run WAY better than they did on the TV. Nicer remote as well.
Android's support for custom DNS servers in combination with an ad blocking DNS server has been a very effective system level ad blocker for me. No additional software or hacks like on device VPNs needed.
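For anyone who wants to script this rather than tap through Settings, the Private DNS option is also reachable via adb on stock Android (the global settings keys below are what the Settings UI writes; the AdGuard hostname is just an example ad-blocking resolver, substitute your own):

```shell
# Point Android's Private DNS at an ad-blocking DoT/DoH resolver.
# 'dns.adguard.com' is an example hostname; use whichever you trust.
adb shell settings put global private_dns_mode hostname
adb shell settings put global private_dns_specifier dns.adguard.com

# To revert to the default automatic (opportunistic) mode:
adb shell settings put global private_dns_mode opportunistic
```

This is device configuration rather than runnable code, so treat it as a sketch; some OEM forks rename or hide the corresponding Settings screen but still honor these keys.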
Oh no, now you can optionally use Google's DNS servers instead of your ISP's, which redirect every unresolved domain to an ad-laden hellsite! Or you could even set up your personal Pi-hole as a DNS server reachable from anywhere to prevent others from reading your DNS requests! The absolute horror! How will our privacy ever recover!
Nobody is forcing you to use Google's servers. They're off by default. They've bumped the HTTP version their existing feature uses up a level. If you don't want your Android system DNS client to use DoH, don't enable it. If you want to prevent apps from connecting through DoH to resolve IPs, don't install any closed-source apps and vigorously read through the source code of open-source ones, because DNS evasion has been a thing for years now.
If you rely on DNS for ad blocking or privacy, you've already lost. You lost several years ago, actually. You can't block YouTube ads with just DNS unless you break YouTube altogether.
Chromecast doesn't run Android, of course, so I'm not sure how it's relevant to this article.
Personally, I've statically routed my Chromecast to forward UDP/53 to my PiHole and it's been very effective so far. Too bad blocking trackers also breaks the applications on Chromecast, making the block effectively useless, but that's the choice I made when I bought one of those things.
It puts the device owner in charge rather than the network owner. I've checked, my android gives me the option to reconfigure it including turning it off.
It makes DNS-based ad injection impossible. Hopefully this is the end of captive portals.
Isn't DNS based filtering usually done by configuring the device to use a DNS server that filters? (vs intercepting traffic to another server and modifying that)
Yeah this is likely the #1 reason why it's been implemented by Google. It's sad that this is one of the first Rust features. Proprietary and user hostile.
It doesn't make DNS based ad filtering impossible unless you're doing that at the router level. You can still do it locally (like via hosts file) or via the DNS resolver itself.
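The hosts-file approach mentioned above is simple enough to sketch; this is an illustrative toy (the helper names are made up), showing how a local resolver or hosts file turns a blocklist into NXDOMAIN-style answers, including for subdomains:

```python
def parse_hosts_blocklist(text):
    """Parse hosts-file-style lines ('0.0.0.0 ads.example ...') into a
    set of blocked names; comments and blank lines are skipped."""
    blocked = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments
        if not line:
            continue
        parts = line.split()
        if parts[0] in ("0.0.0.0", "127.0.0.1") and len(parts) > 1:
            blocked.update(parts[1:])
    return blocked

def resolve_or_block(name, blocked):
    """Return None (block) if the name or any parent domain is listed;
    otherwise fall through to a real upstream lookup (stubbed here)."""
    labels = name.split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in blocked:
            return None
    return "resolve upstream"  # placeholder for an actual DNS query
```

A hosts file only matches exact names, so real DNS-level blockers add the parent-domain walk shown in resolve_or_block.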
As I painfully experienced recently, desktop browsers ignore the hosts file with DoH enabled. I had to create a separate browser profile to get this to work alongside DoH and other privacy/security settings.
Yes, based on your other comment it sounds like you're concerned about "smart" devices like TVs that are on the internet. It's unfortunate that those devices lock you out but that's kinda on those devices.
> Not to mention that DNS over HTTP AdBlock is basically just as easy to set up nowadays.
Only if the device in question uses the ad-blocking DNS servers.
Firefox (IIRC) by default does not use the operating system's resolv.conf. Smart TVs (and Chromecast) have also been known to ignore DNS settings from DHCP.
And since the DNS traffic now looks like HTTP(S) traffic, your only recourse is to block all HTTP access and tunnel it through a proxy.
As an IT guy, and the person who runs a home network, this reduces the visibility of what is happening on my network(s). Reduced visibility is bad IMHO.
Yeah and if you don't use SNI, but the website sits on its own IP, then the website can be found out via the ip, which is transmitted in the clear (unless VPNs/tunneling etc are used).
There have been devices that used 8.8.8.8 no matter what your network said. Now, you can obviously impersonate 8.8.8.8 on your own network, but good luck also supplying proper TLS certificates for the encrypted DNS traffic.
Yeah, you _still_ can tweak this and you _still_ can configure that, but it’s getting more finicky ever so slightly every time. “Still” is the key, to hint at how volatile it all is.
The point of DoH is preventing 'others' - such as ISPs - from snooping and interfering with plain-text DNS queries. Encrypting network traffic is generally good. Anything that hijacks DNS requests is going to no longer work, as designed.
Devices should still allow setting a custom DoH server, and they should use it. You should still be able to run your own DoH server and use that.
Any device/software that ignores your network settings (such as classic DNS or DoH) is bad, just like an ISP intercepting your DNS requests is bad.
Hidden behind "privacy" marketing, DoH looks to be a way to centralize DNS queries at the app level to protect ad revenue.
Now apps you download can essentially have their own DNS resolvers built in, and you no longer have control over DNS data. Especially IoT devices and smart TVs will just bypass all user settings and resolve DNS directly with resolvers of their choosing.
Now apps can do that? Couldn't apps previously/currently hard code their own plaintext DNS resolver? The only change here is that now it's encrypted, and network operators (including yourself) cannot mitm DNS?
I noticed that if I block 8.8.8.8 from my Google Assistant devices and advertise a local DNS server (via IPv4 and IPv6), they wait 5 seconds for 8.8.8.8 to fail, then retry on the local DNS server. So when you say "ok google, what's the temperature?" you get an annoying 5 second delay.
Frustrating that google assistants ignore the DNS server presented to them.
If they don't use DoT/DoH, you can try redirecting traffic for 8.8.8.8:53 to your own DNS server with one or two firewall rules. If they're actually securing the connection that won't work, though.
Indeed. I block port 853 and various popular DoH servers, and then rewrite any port-53 traffic to use my servers. So far it's working well. Frustrating that so many devices ignore the DHCP (IPv4) and radvd (IPv6) recommended name servers.
You just NAT UDP/53 traffic and resolve it yourself. Then you have to identify the specific DoH server they're using and block it, and hope they fall back to something you can control.
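The redirect-and-block setup people describe above boils down to a few firewall rules; this is an illustrative iptables sketch (addresses are examples: 192.168.1.2 stands in for your local resolver, and only 8.8.8.8 is shown, so repeat for other public resolvers):

```shell
# Redirect plain DNS aimed at 8.8.8.8 to a local resolver instead.
# 192.168.1.2 is an example LAN resolver address; adjust to your network.
iptables -t nat -A PREROUTING -p udp -d 8.8.8.8 --dport 53 \
  -j DNAT --to-destination 192.168.1.2:53
iptables -t nat -A PREROUTING -p tcp -d 8.8.8.8 --dport 53 \
  -j DNAT --to-destination 192.168.1.2:53

# DoT is easy to drop wholesale, since it has a dedicated port (853).
iptables -A FORWARD -p tcp --dport 853 -j REJECT
```

DoH has no such rule: it shares port 443 with everything else, so blocking it means maintaining an IP list of known DoH providers, which is exactly the cat-and-mouse game the parent comment describes.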