Literally the one job it had: to prevent address depletion on the Internet. Just to be clear, we're talking about the Internet here, and not some IPv6 island.
I don't get you. We are running dual stack for the time being, and indeed we cannot solve address depletion this way, but that doesn't mean we'll be doing it forever.
At some point -- maybe when 60%, 70%, 80%, or 90% of the Internet is running on IPv6 -- Internet services at a large scale will begin to deem IPv4 a liability, and drop IPv4 support along with the addresses they are holding.
I am not talking about a distant future either. We already have IPv6-only servers up and running, mine included, and we haven't even reached the 50% milestone. In a way, the existence of IPv6-only servers means that IPv6 is _already_ preventing IPv4 address depletion, because those servers would otherwise have to compete for IPv4 addresses with everyone else.
Furthermore, I find "I hate IPv6 because it hasn't eliminated IPv4" a really weird opinion (it's not exactly what you said, but it is how I interpret your first few comments combined) because it's sort of circular: hate IPv6 -> continue to use IPv4 -> IPv6 is unable to eliminate IPv4 -> hate IPv6??? Perhaps you can elaborate on it further, but I doubt we'll agree either way.
IPng was meant to avert the IPv4 address exhaustion crisis. IPv6 was chosen as the way forward. A transition group (ngtrans) was formed, and instead of taking the time to consider how to integrate IPv6 into the existing landscape, it decided to create a separate island. We now have a hodgepodge of transition plans and umpteen different transition mechanisms, none of which make IPv6 first class.
What do I mean by making IPv6 first class? Basically a transition plan/mechanism where IPv6 is interoperable with IPv4. This was one of the primary considerations back when the IPng candidates were still being decided, and other contenders at the time had a better transition story. Instead the IPng group chose the technology that didn't have any transition plan, because they liked the shiny new features that were irrelevant to averting the address exhaustion situation. Their failure to address the transition back then is why we're in this IPv6 adoption bottleneck.
Can you tell us what the better transition story was? As far as I can tell, v6 is already approximately as interoperable with v4 as it's possible to be.
I don't get your claim that it's a separate island at all. It's not. This machine is v6-only and isn't on an island, it has full access to the entire Internet.
TUBA had a sane transition story [1], although I do not know which mailing lists it was discussed on (probably in in-person IETF meetings).
For an IPv6 transition mechanism, you can look into the ngtrans mailing list discussion of AutoIPv6, which is basically an extension of 6to4: instead of only connecting IPv6 networks over IPv4, it would extend to IPv4 hosts as well. There's also an archive link in my post history which you can look up (I'm on my phone).
As for current v6 being interoperable with v4, that is patently false.
I think you have a very romanticized view of 6to4 and its derivatives.
By the late 2000s it was clear as day that 6to4 wasn't working out (the design rationales of contemporary IPv4<->IPv6 transition technologies will tell you why [0]). By extension, AutoIPv6, which was building on 6to4, was also unlikely to work out. Even worse, AutoIPv6 relies at least partially on anycast 6to4, which was later deprecated due to operational problems [1].
The only surviving 6to4 derivative is 6rd and even that is mostly phased out.
>As for current v6 being interoperable with v4, that is patently false.
You need to define your scope. Are you talking about programming? Or the ability of IPv6 clients to talk to IPv4 servers? Or...?
IPv6 is interoperable with IPv4 from a programmer's perspective: once you upgrade your sockets from accepting sockaddr_in to accepting sockaddr_in6, your program automatically receives both IPv4 and IPv6 packets, with IPv4 addresses represented as ::ffff:x.y.z.w.
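To make that concrete, here's a rough sketch in C of the dual-stack listener behaviour I'm describing (the port is arbitrary, error handling is omitted, and since some systems default IPV6_V6ONLY to on, I clear it explicitly):

    /* One AF_INET6 listening socket that accepts both IPv6 and IPv4 clients,
     * with IPv4 peers showing up as ::ffff:x.y.z.w mapped addresses. */
    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void) {
        int s = socket(AF_INET6, SOCK_STREAM, 0);

        /* Allow IPv4-mapped addresses; the default varies by OS. */
        int off = 0;
        setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof off);

        struct sockaddr_in6 addr = {0};
        addr.sin6_family = AF_INET6;
        addr.sin6_addr   = in6addr_any;   /* listens for both protocols */
        addr.sin6_port   = htons(8080);
        bind(s, (struct sockaddr *)&addr, sizeof addr);
        listen(s, 16);

        struct sockaddr_in6 peer;
        socklen_t len = sizeof peer;
        int c = accept(s, (struct sockaddr *)&peer, &len);

        char buf[INET6_ADDRSTRLEN];
        inet_ntop(AF_INET6, &peer.sin6_addr, buf, sizeof buf);
        printf("peer: %s\n", buf);  /* e.g. "::ffff:192.0.2.10" for a v4 client */

        close(c);
        close(s);
        return 0;
    }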
And from a client's perspective, IPv6 clients absolutely can connect to IPv4 servers through a border relay of some sort, typically NAT64. But you can do SIIT as well if you are feeling fancy. Note that this is not much different from 6to4 and its derivatives.
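For the curious, the NAT64/DNS64 side is mostly prefix arithmetic: the IPv4 address of the remote host gets embedded in the low 32 bits of an IPv6 address that the v6-only client actually connects to. A toy sketch of my own, assuming the well-known 64:ff9b::/96 prefix (real deployments may use a network-specific prefix instead, and the addresses here are documentation placeholders):

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>

    int main(void) {
        const char *v4 = "192.0.2.10";             /* hypothetical IPv4-only server */

        struct in_addr a4;
        inet_pton(AF_INET, v4, &a4);

        struct in6_addr a6;
        inet_pton(AF_INET6, "64:ff9b::", &a6);      /* well-known NAT64 prefix */
        memcpy(&a6.s6_addr[12], &a4, 4);            /* embed the v4 address in bits 96..127 */

        char buf[INET6_ADDRSTRLEN];
        inet_ntop(AF_INET6, &a6, buf, sizeof buf);
        printf("%s -> %s\n", v4, buf);              /* 192.0.2.10 -> 64:ff9b::c000:20a */
        return 0;
    }

The NAT64 box does the reverse mapping on the way back, which is why the v6-only client never needs an IPv4 address of its own.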
As for the server's perspective, this is indeed unsolved, but again AutoIPv6 doesn't solve the issue either, since the server still needs a public IPv4 address.
Hm... according to that draft, TUBA's transition story is dual stack, which is the same as the main transition approach for v6. How come it's sane for TUBA but not sane when v6 does it?
> As for current v6 being interoperable with v4, that is patently false.
This machine is v6-only and can reach the entire Internet, including v4-only hosts. It clearly has more interoperability than you're claiming or this wouldn't be possible.
Interoperability would be the ability to use either IPv4 or IPv6 and have the two talk to each other seamlessly. This is clearly not the case for IPv4 clients talking to IPv6 servers. This is clearly not the case for IPv6 servers accepting IPv4 connections.
As for TUBA, the transition plan incorporated tunneling; with a bit more thought put into it, we could've had something similar to AutoIPv6. That document was just a starting point and outlined some important criteria, such as low administrative overhead for transitioning IPv4 networks to IPng, something which IPv6 falls completely flat on.
I've talked from v4 clients to v6 servers before, and I've accepted v4 connections to v6 servers. How can it be so clear that it's not possible when I've done it before?
> As for TUBA, the transition plan incorporated tunneling
The transition plan for v6 also incorporates tunneling (6in4 is the same thing as the EON tunneling described in the draft you linked), so again I ask why TUBA counts as having a sane transition plan when doing the same things in v6 doesn't.
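To spell out what I mean by "the same thing": a 6in4 packet is just an ordinary IPv4 packet whose protocol field is 41 and whose payload is the entire IPv6 packet. A rough illustration in C of that on-the-wire layout (my own sketch using glibc headers, documentation-prefix addresses, no checksums, nothing actually sent):

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>     /* struct iphdr (glibc) */
    #include <netinet/ip6.h>    /* struct ip6_hdr */

    int main(void) {
        unsigned char frame[sizeof(struct iphdr) + sizeof(struct ip6_hdr)];

        /* Inner packet: a bare IPv6 header (no payload, for brevity). */
        struct ip6_hdr inner = {0};
        inner.ip6_vfc  = 0x60;                        /* version 6 */
        inner.ip6_plen = htons(0);
        inner.ip6_nxt  = IPPROTO_NONE;
        inner.ip6_hlim = 64;
        inet_pton(AF_INET6, "2001:db8::1", &inner.ip6_src);
        inet_pton(AF_INET6, "2001:db8::2", &inner.ip6_dst);

        /* Outer packet: IPv4 header between the two tunnel endpoints. */
        struct iphdr outer = {0};
        outer.version  = 4;
        outer.ihl      = 5;
        outer.ttl      = 64;
        outer.protocol = IPPROTO_IPV6;                /* 41 == "IPv6 in IPv4" */
        outer.tot_len  = htons(sizeof frame);
        inet_pton(AF_INET, "192.0.2.1", &outer.saddr);
        inet_pton(AF_INET, "198.51.100.1", &outer.daddr);

        memcpy(frame, &outer, sizeof outer);
        memcpy(frame + sizeof outer, &inner, sizeof inner);

        printf("encapsulated frame: %zu bytes (outer proto %d)\n",
               sizeof frame, outer.protocol);
        return 0;
    }

Strip the outer header at the far end and you have the original IPv6 packet back, which is exactly the tunneling model the TUBA draft describes.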
There's not much administrative overhead involved in transitioning a network to v6. For most people it involves doing exactly no extra work beyond what they already do to deploy v4. Of course, you do need to reconfigure anything that's been configured such that it can only work with v4 (and this might be quite a lot of stuff), but that was always going to be necessary no matter what you're transitioning to.