Hacker News | naranha's comments

It's still common in IT departments to enter the domain administrator password to join a computer to a domain or to install software on a client machine. This seems insane to me: you could just fake the Windows GUI in a fullscreen application and keylog the password, even from a web browser. I think AD is a relic of the '90s that should be retired.


Now do one with ECC!


... and one with 32GB/64GB/128GB ECC RAM ... and one with USB4/USB4 2.0 ... and one with 2.5/5/10Gbps Ethernet ... and one with an M.2 SSD port ... and one with 2nm lithography


If the browser has 1080 vertical pixels, the scrollbar has at most, say, 1000 possible positions. According to my napkin math*, if you scroll past 100 UUIDs per second, it would take up to ~1.7 septillion (~1,700,000,000,000,000,000,000,000) years to scroll to a UUID whose position you know, assuming you hit the exact spot on the scrollbar.

* https://www.wolframalpha.com/input?i=ROUND%5B2%5E122%2F1000%...

Edit: used 122 bits instead of 128, since UUIDv4 only has 122 random bits.
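For anyone who wants to check the arithmetic without Wolfram Alpha, here's the same napkin math in Node (the 1000 scrollbar positions and 100 UUIDs/second are the assumptions from above, not measurements):

```javascript
const uuidSpace = 2n ** 122n;      // UUIDv4 has 122 random bits
const positions = 1000n;           // assumed scrollbar resolution at 1080p
const perSecond = 100n;            // UUIDs scrolled past per second
const secondsPerYear = 31557600n;  // Julian year

// Knowing the scrollbar position narrows the search to 1/1000 of the space.
const years = uuidSpace / positions / perSecond / secondsPerYear;
console.log(years); // ≈ 1.68e24, i.e. ~1.7 septillion years
```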


Not OP, but I have converted a couple of projects from knex to kysely recently. With TypeScript, kysely is much better, and with kysely-codegen you can generate TypeScript types for a pre-existing schema too.


That was my impression too. Glad to have it confirmed. Thank you.


Stuff becomes legacy not because of the language, but because of outdated libraries or frameworks, or unavailable/uninterested developers. For me PHP is legacy, because I refuse to work on PHP codebases unless I find someone who's willing to do the job for me.


At least CouchDB is also append-only with vacuum, so maybe it's not completely outdated.


High performance has never been a reason to use couchdb.


I built an internal project using CouchDB 2 as a backend in 2017 and it's still used today. CouchDB definitely surpassed my expectations in what has been possible to do with it. Its biggest advantages are that data sync is effortless and that it uses HTTP as its protocol, so you can communicate with it directly from a web application.
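A rough sketch of what that direct HTTP access looks like — the base URL and database name are placeholders for your own instance, and auth is omitted:

```javascript
// Placeholder CouchDB instance and database name
const base = 'http://localhost:5984/mydb';

// Fetch a document by id; CouchDB speaks plain JSON over HTTP
async function getDoc(id) {
  const res = await fetch(`${base}/${encodeURIComponent(id)}`);
  if (!res.ok) throw new Error(`CouchDB returned ${res.status}`);
  return res.json();
}

// Create or update a document; responds with { ok, id, rev } on success
async function putDoc(doc) {
  const res = await fetch(`${base}/${encodeURIComponent(doc._id)}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(doc),
  });
  return res.json();
}
```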


UUIDv7 will be supported in PostgreSQL 17, at which point generation should be as fast as UUIDv4; or, if you implement it in PL/pgSQL now, it will be as fast as the proposed ULID algorithm.

Insert performance could be even better: IIRC, for B-tree indexes, monotonically increasing values are better than random ones, but feel free to correct me on that ;)


Commenting on this a bit late, but in case anyone reads this later too:

UUIDv7 support unfortunately didn't make it into Postgres 17, since the RFC wasn't finalized by the time of the feature freeze (April 8); see the discussion on pgsql-hackers:

https://www.postgresql.org/message-id/flat/ZhzFQxU0t0xk9mA_%...

So I guess we'll unfortunately have to rely on extensions or client-side generation for the time being, until Postgres 18.


In that case I don't understand why the author didn't go for UUIDv7. Existing tooling (both inside and outside the database) seems to handle it better, and there seem to be no downsides unless you expect your identifiers to be generated after the year 4147 but don't care whether they're generated after 10889 (I'd love to hear that use case; it must be interesting).


As always with databases, it depends: for maximum insert performance you'd actually often go with random UUIDs so you don't get a hot page.


Hot page?

Using a monotonically increasing PK would cause pages in the index to be allocated and filled sequentially, increasing throughput.

Using random UUIDs would lead to page-splitting and partially-filled pages everywhere, negatively impacting performance and size-on-disk.
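A toy model of that effect — fixed-capacity "pages" with a rightmost-split shortcut for ascending inserts (real B-trees do something similar), mid-splits otherwise. The capacity and key counts here are arbitrary:

```javascript
const CAPACITY = 8; // keys per leaf page (made-up number)

function insertAll(keys) {
  const pages = [[]]; // each page is a sorted run; pages are kept ordered
  for (const k of keys) {
    // find the last page whose first key is <= k
    let i = pages.length - 1;
    while (i > 0 && pages[i][0] > k) i--;
    const page = pages[i];
    const pos = page.findIndex((x) => x > k);
    pos === -1 ? page.push(k) : page.splice(pos, 0, k);
    if (page.length > CAPACITY) {
      if (i === pages.length - 1 && page[page.length - 1] === k) {
        // rightmost split: leave the old page full, start a fresh one
        pages.push([page.pop()]);
      } else {
        // mid split: both halves end up about half full
        const mid = page.length >> 1;
        pages.splice(i, 1, page.slice(0, mid), page.slice(mid));
      }
    }
  }
  return pages;
}

const seq = [...Array(1000).keys()];
const rnd = [...seq].sort(() => Math.random() - 0.5); // crude shuffle
// sequential inserts pack pages full; random inserts need noticeably more pages
console.log(insertAll(seq).length, insertAll(rnd).length);
```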


Not always, this article describes the problem: https://learn.microsoft.com/en-us/troubleshoot/sql/database-...


Exactly: with one big insert it's better to have sequential values; for many small concurrent ones it's often better not to.

As with all databases, measure before you cut.


The way to take advantage of the bandwidth of multiple storage devices is to distribute concurrent writes across them, rather than forcing everything to commit sequentially using contended locks or rollbacks.


The DB (or any application) should not have any need to know what devices are underneath its mount point. If you’re striping across disks, that’s a device (or filesystem, for ZFS) level implementation.


idk, when I want productivity I go with nodejs these days. lodash for some quick data crunching and pg or mariadb for db access using promises simply beats native PHP functions. With express you can spawn an HTTP server in under 10 lines, while with PHP you need to set up apache/nginx or docker.

at some point in the past PHP was the most productive tool for some quick & dirty coding, but not anymore for me.


I recommend checking out FrankenPHP, where you can spin up a production PHP server with a single CLI command, or compile your PHP app into a self-contained executable binary.

I’m a contributor over there.


In the graphics benchmarks it performs mostly worse than the 5600G/5700G. That's pretty disappointing 3 years later.


It's the lowest end of the consumer APUs. The 8600G and 8700G are those models' replacements.


The 5600G had 7 graphics compute units, the 8500G only has 4; if it performs within 10% of the 5600G, the compute units are almost 60% faster than 3 years ago.
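The back-of-envelope check, with the "within 10%" figure taken as the assumption:

```javascript
const cus5600g = 7;   // compute units in the 5600G
const cus8500g = 4;   // compute units in the 8500G
const relPerf = 0.9;  // "performs within 10% of the 5600G"

// per-CU speedup needed for 4 new CUs to nearly match 7 old ones
const perCuSpeedup = (relPerf * cus5600g) / cus8500g;
console.log(perCuSpeedup); // ≈ 1.575, i.e. almost 60% faster per CU
```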


They had better be; 5000G chips are based on old GCN Vega stuff, while these are based on the latest RDNA3 architecture. They're also clocked MUCH higher, as AMD focused a lot on clock speeds when developing RDNA2.

In fact, pretty much all of the performance gain going from a 5700 XT to a 6700 XT is based on the latter's higher clock speed, as their specs are very similar otherwise (although a 6700 XT has 12 GB 192bit memory and a heap of cache, whereas the 5700 XT has 8GB 256bit memory that runs a small bit slower, and crucially, way less cache).


Yeah it's a shame, you really have to step up to the 8700G (or at least the 8600G) to get the true APU experience with modern graphics performance. That said even the bottom of the range significantly outperforms Intel's integrated solutions, and I support anything that gives them a kick in the pants to ship better iGPUs - one step closer to my work laptop being able to run the Windows desktop at 4k without turning into a laggy mess.


At least you'd hope it would be more power efficient thanks to the more advanced manufacturing process. Instead, a total disappointment: the new CPU consumes significantly more power both under load and at idle. The AI extensions are also missing, and ECC is missing in all of the 8300G/8500G/8600G/8700G.


Got a link for the ECC claim? Last I read, the ECC issues with the AM5 platform were mostly fixed (at least many vendors are claiming ECC compatibility again, which they didn't a year ago).


I've just searched for official ECC support on:

* 8700G: https://www.amd.com/en/product/14066
* 8600G: https://www.amd.com/en/product/14071
* 8500G: https://www.amd.com/en/product/14086
* 8300G: https://www.amd.com/en/product/14091

5xxxG parts are also known for missing official support; only the PRO flavors have it.

Since ECC is about robustness, I'd really prefer it if both AMD and ASUS/ASRock/Gigabyte/MSI/... mentioned ECC support in their specifications. Last time I checked, only ASUS mentioned it for consumer AM5 mainboards. (Gigabyte did for at least one simple but expensive server mainboard.) With AM4 this wasn't such an issue for the other manufacturers either.


Historically, only the PRO variants of AMD's APUs have supported ECC.


If comparing to the 5600G/5700G, there's no difference in AI extensions or ECC: AI is new with the 8600G/8700G, and ECC is only in the PRO APUs.


