As soon as I saw that whole section on database-related flakiness, my mind went from "flaky unit tests" to "tests called unit tests that are actually integration tests". I worked on a team where we labored under that misconception for a long, long time. By the time we finally realized that many of the tests in our suite were integration tests rather than unit tests, it was too late to change course (due to budget and timeline pressure).
I really like the different approaches to dealing with these flaky tests, that is a good list.
I think it's important that engineers can distinguish between testing code in isolation and "integration" or "system" testing, but I've seen a sophomoric stigma around integration tests that leads to mocking hell and a hatred of testing in general.
Unit tests are great. You want them. Craft your interfaces to enable them.
Integration and system tests are important too. Again, crafting higher-level interfaces that allow for testing will, in general, lead to a more ergonomic API.
Analogously: unit tests ensure each of your LEGO blocks is individually well-formed. Integration tests ensure that the build instructions actually result in something reasonable.
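To make "craft your interfaces to enable them" concrete, here's a minimal Python sketch of my own (the names UserRepository, WelcomeMailer, and FakeRepo are hypothetical, not from the article). The service depends on a small protocol, so a unit test can hand it an in-memory fake while an integration test wires in the real DB-backed implementation:

    from typing import Protocol


    class UserRepository(Protocol):
        # The narrow seam the service depends on; real and fake repos both fit it.
        def find_email(self, user_id: int) -> str | None: ...


    class WelcomeMailer:
        def __init__(self, repo: UserRepository) -> None:
            self.repo = repo

        def greeting_for(self, user_id: int) -> str:
            email = self.repo.find_email(user_id)
            if email is None:
                raise KeyError(f"unknown user {user_id}")
            return f"Welcome, {email}!"


    class FakeRepo:
        # In-memory stand-in used by unit tests; no database involved.
        def __init__(self, rows: dict[int, str]) -> None:
            self.rows = rows

        def find_email(self, user_id: int) -> str | None:
            return self.rows.get(user_id)


    def test_greeting_for_known_user():
        mailer = WelcomeMailer(FakeRepo({1: "a@example.com"}))
        assert mailer.greeting_for(1) == "Welcome, a@example.com!"

The unit test exercises only WelcomeMailer's logic; an integration test would construct the same class with a repository backed by the real database.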
I think the definition has evolved. Since the DB is usually reset between tests (and the reset is snappy), it's transparent enough to feel like unit testing. The fact that we sometimes do the reset in a buggy way, or don't understand how to reset the DB properly, doesn't negate that in my opinion. You could mock the DB or inject it, and thereby do unit testing in the traditional sense, but then you risk introducing even nastier bugs plus a whole lot of indirection overhead.
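For what it's worth, here's a hedged sketch of what "reset the DB between tests" can look like, using pytest and an in-memory sqlite3 database as stand-ins; the users table and queries are made up for illustration. Each test gets a fresh database, so state can't leak between tests and nothing in the storage layer is mocked:

    import sqlite3

    import pytest


    @pytest.fixture
    def db():
        # Fresh in-memory database per test: the "reset" is simply starting
        # from scratch, which keeps tests independent of each other.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
        yield conn
        conn.close()


    def test_insert_and_read_back(db):
        db.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))
        row = db.execute("SELECT email FROM users").fetchone()
        assert row == ("a@example.com",)


    def test_starts_empty(db):
        # Demonstrates the reset: rows written by other tests are gone.
        assert db.execute("SELECT COUNT(*) FROM users").fetchone() == (0,)

Swapping the in-memory connection for a real database plus a per-test truncate or transaction rollback gives the flavor being discussed here, with the reset cost being the thing you pay for realism.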