Dwedit's comments | Hacker News

"Don’t ask an LLM if a URL is valid. It will hallucinate a 200 OK. Run requests.get()."

Except for sites that block any user agent associated with an AI company.
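
A quick sketch of that, assuming you can get away with sending a browser-like User-Agent (the header string below is an illustrative assumption; plenty of sites block by IP or behavior instead):

    import requests

    def url_is_reachable(url, timeout=10.0):
        # Hypothetical browser-style UA to dodge "AI company" UA blocklists.
        headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/120.0"}
        try:
            resp = requests.get(url, headers=headers, timeout=timeout, allow_redirects=True)
            return resp.status_code < 400
        except requests.RequestException:
            return False

    print(url_is_reachable("https://example.com"))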


640 * 480 / 2 = 150KB for a classic 16-color VGA screen.
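
Spelled out (16 colors = 4 bits per pixel = 2 pixels per byte):

    pixels = 640 * 480         # 307,200 pixels
    size = pixels // 2         # 153,600 bytes
    print(size / 1024)         # 150.0 KB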

One use for unofficial instructions is getting read-modify-write operations in addressing modes that the official instruction doesn't support.

To understand, it helps if you write out the instruction table in columns, so here are the CMP and DEC instructions:

Byte C1: (add 4 to get to the next instruction in this table)

CMP X,ind [indexed indirect: take the instruction's operand byte, add X, then read the 16-bit pointer from that zeropage address; written like CMP ($nn,X)]

CMP zpg [zeropage, written like CMP $nn]

CMP # [immediate value, written like CMP #$nn]

CMP abs [absolute address, written like CMP $nnnn]

CMP ind,Y [indirect indexed: read the 16-bit pointer from the zeropage address, then add Y; written like CMP ($nn),Y]

CMP zpg,X [zeropage plus X, add X to the zeropage address, written like CMP $nn,X]

CMP abs,Y [absolute address plus Y, add Y to the address, written like CMP $nnnn,Y]

CMP abs,X [absolute address plus X, add X to the address, written like CMP $nnnn,X]

So that's 8 possible addressing modes for this instruction.

Immediately afterwards:

Byte C2: (add 4 to get to the next instruction in this table)

???

DEC zpg

DEX

DEC abs

???

DEC zpg,X

???

DEC abs,X

That's only five defined opcodes in this column, and one of them (DEX) isn't even an addressing mode of DEC. So where are "DEC X,ind", "DEC ind,Y", and "DEC abs,Y"? They don't exist.

The Byte C3 column is eight undocumented opcodes that aren't supposed to be used. People determined what these do: most of the column turns out to be a combination of CMP and DEC, so the instruction was named "DCP".

Byte C3:

DCP X,ind

DCP zpg

???

DCP abs

DCP ind,Y

DCP zpg,X

DCP abs,Y

DCP abs,X

Unlike the "DEC" instruction, you have the "X,ind", "ind,Y", and "abs,Y" addressing modes available. So if you want to decrement memory, and don't care about your flags being correct (because it's also doing a CMP operation), you can use this DCP instruction.

Same idea with INC and SBC: combine them and you get the ISC instruction, for when you want to increment memory and don't care what happens to register A or the flags afterwards.
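
If it helps, those columns fall out of how the opcode byte is packed: the top three bits pick the operation, the middle three pick the addressing-mode slot, and the low two pick the column group, which is why each column steps by 4. A small Python sketch of just the opcode numbering (slot names follow the cc=01 layout; a few slots mean something different in the other columns):

    SLOTS = ["X,ind", "zpg", "#", "abs", "ind,Y", "zpg,X", "abs,Y", "abs,X"]

    def column(aaa, cc):
        # opcode = aaa bbb cc  (bits 7-5, 4-2, 1-0)
        return [(aaa << 5) | (bbb << 2) | cc for bbb in range(8)]

    for name, cc in [("CMP", 0b01), ("DEC", 0b10), ("DCP", 0b11)]:
        ops = column(0b110, cc)   # aaa = 110 selects CMP/DEC/DCP in their groups
        print(name, [f"{op:02X} {slot}" for op, slot in zip(ops, SLOTS)])
    # Prints the C1, C2, and C3 columns: C1 C5 C9 CD D1 D5 D9 DD, and so on.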


What exactly does sponsoring CachyOS mean? Bandwidth and hosting? Money going upstream to the actual developers who make the packages?

It's missing Santa Claus Conquers the Martians.


Because pirates are unaffected by the patent situation with H.265.

But isn’t AV1 just better than h.265 now regardless of the patents? The only downside is limited compatibility.

Encoding my 40TB library to AV1 with software encoding without losing quality would take more than a year, if not multiple years, and consume lots of power while doing this, to save a little bit of storage. Granted, after a year of non-stop encoding I would save a few TB of space. But I think it is cheaper to buy a new 20TB hard drive than the electricity used for the encoding.
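
Rough numbers on that comparison (every figure here is an assumption: sustained power draw, electricity price, drive price):

    watts = 250                     # assumed sustained draw while encoding
    kwh = watts / 1000 * 365 * 24   # one year of non-stop encoding, about 2190 kWh
    electricity = kwh * 0.30        # assumed $0.30/kWh, about $657
    drive = 300                     # assumed price of a ~20TB drive
    print(round(electricity), drive)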

HW support for AV1 is still behind H.265. There's a lot of 5-10 year old hardware that can play H.265 but not AV1. Second, there is also a split between Dolby Vision and HDR(+). Is AV1 + Dolby Vision a thing? Blu-rays are obviously H.265. Overall, H.265 is the common denominator for all UHD content.

> Blu-rays are obviously H.265

Most new UHD discs, yes, but otherwise Blu-rays primarily use H.264/AVC.


I avoid av1 downloads when possible because I don’t want to have to figure out how to disable film grain synthesis and then deal with whatever damage that causes to apparent quality on a video that was encoded with it in mind. Like I just don’t want any encoding that supports that, if I can stay away from it.

In MPV it's just "F1 vf toggle format:film-grain=no" in the input config. And I prefer AV1 because of this, almost everything looks better without that noise.

You can also include "vf=format:film-grain=no" in the config itself to start with no film grain by default.


I watch almost everything in Infuse on Apple TV or in my browser, though.

What's wrong with film grain synthesis? Most film grain in modern films is "fake" anyway (the modern VFX pipeline first removes grain, then adds effects, and lastly re-adds fake grain), so instead of forcing the codec to try to compress lots of noise (and end up blurring lots of it away), we can just have the codec encode the noiseless version and put the noise on afterward.
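
For intuition only, the decode-side idea looks roughly like this toy NumPy sketch (not the actual AV1 film grain synthesis model, which transmits grain parameters and scales the grain by local brightness; it just shows "encode clean, add noise back after"):

    import numpy as np

    rng = np.random.default_rng(0)
    decoded = rng.uniform(0, 255, (1080, 1920)).astype(np.float32)  # stand-in for the denoised, decoded frame

    grain_strength = 6.0  # assumed parameter carried in the bitstream instead of the noise itself
    grain = rng.normal(0.0, grain_strength, decoded.shape)
    displayed = np.clip(decoded + grain, 0, 255)   # grain re-applied only at display time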

I watch a lot of stuff from the first 110ish years of cinema. For the most recent 25, and especially 15… yeah I dunno, maybe, but easier to just avoid it.

I do sometimes end up with av1 for streaming-only stuff, but most of that looks like shit anyway, so some (more) digital smudging isn’t going to make it much worse.


Even for pre-digital era movies, you want film grain. You just want it done right (which not many places do to be fair).

The problem you see with AV1 streaming isn't the film grain synthesis; it's the bitrate. Netflix is using film grain synthesis to save bandwidth (e.g. 2-5 Mbps for 1080p, ~20 Mbps for 4K), while 4K Blu-ray is closer to 100 Mbps.

If the AV1+FGS is given anywhere close to comparable bitrate to other codecs (especially if it's encoding from a non-compressed source like a high res film scan), it will absolutely demolish a codec that doesn't have FGS on both bitrate and detail. The tech is just getting a bad rap because Netflix is aiming for minimal cost to deliver good enough rather than maximal quality.
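
Put in file-size terms for a 2-hour film, using roughly the bitrates mentioned above (rounded):

    def gigabytes(mbps, hours=2):
        return mbps * 1e6 / 8 * hours * 3600 / 1e9

    for label, mbps in [("1080p stream", 4), ("4K stream", 20), ("4K Blu-ray", 100)]:
        print(f"{label}: ~{gigabytes(mbps):.0f} GB")   # ~4, ~18, ~90 GB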


With HEVC you just don't have the option to disable film grain because it's burned into the video stream.

I’m not looking to disable film grain, if it’s part of the source.

Does AV1 add it if it's not part of the source?

I dunno, but if there is grain in the source it may erase it (discarding information) then invent new grain (noise) later.

I'm skeptical of this (I think they avoid adding grain to the AV1 stream which they add to the other streams--of course all grain is artificial in modern times), but even if true--like, all grain is noise! It's random noise from the sensor. There's nothing magical about it.

The grain’s got randomness because the distribution and size of the grains are random, but it’s not noise; it’s the “resolution limit” (if you will) of the picture itself. The whole picture is grain. The film is grain. Displaying that is accurately displaying the picture. Erasing it for compression’s sake is tossing out information, and adding it back later is just an effect to add noise.

I’m ok with that for things where I don’t care that much about how it looks (do I give a shit if I lose just a little detail on Happy Gilmore? Probably not), and I agree that faking the grain probably gets you a closer look to the original if you’re gonna erase the grain for better compression. But if I want actual high quality for a film source, then faked grain is no good: if you’re having to fake it, you’ve definitely already sacrificed a lot of picture quality (because, again, the grain is the picture; you only get rid of it by discarding information from the picture).


If you’re watching something from the 70s, sure. I would hope synthesized grain isn’t being used in this case.

But for anything modern, the film grain was likely added during post-production. So it really is just random noise, and there’s no reason it can’t be recreated (much more efficiently) on the client-side.


Everyone is affected by that mess; did you miss the recent news about Dell and HP dropping HEVC support in hardware they had already shipped? Encoders might not care about the legal purity of the encoding process, but they do have to care about how it's going to be decoded. I like using proper software to view my videos, but that's a rarity as far as I know.

RIP John Conway, a victim of Covid.

You need to convince all developers that all 117,881 Steam games need to be recompiled for ARM. Hopefully they have a working build environment, have appropriate libraries built for ARM, still have the source code, and are able to do the testing to see whether the same code works correctly on ARM.

Async always confused me as to when a function would actually create a new thread or not.

Zig doesn't make it simpler! Now in a single function, using async won't spawn threads, while using sync might.

But I'm digging this abstraction.


Why? Asynchrony has nothing to do with multiple threads. In fact you can have async with only a single thread!
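
As a concrete illustration (Python rather than Zig, just to show the concept): two coroutines interleave on one thread, and no thread is ever spawned:

    import asyncio, threading

    async def worker(name):
        for i in range(2):
            print(name, i, "on", threading.current_thread().name)
            await asyncio.sleep(0.1)   # yields to the event loop; no new thread

    async def main():
        await asyncio.gather(worker("a"), worker("b"))

    asyncio.run(main())   # everything prints "MainThread"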

The "every second version" rule may be a meme, but it does not reflect the actual release order of Windows, nor properly count the NT series. It only really applies to sentiment surrounding Windows 98, ME, XP, Vista, 7, 8, 10, and 11. But that leaves out Windows 95, most versions of Windows NT, and Windows 2000.

It works with 95/2000.

95 - good,

98 - bad,

2000 - good,

ME - bad,

XP - good,

Vista - bad,

7 - good,

8 - bad,

10 - good,

11 - bad.


That assumes 10 was good and misses 8.1.

Also 95/98/ME were a different line from NT/2000.

It sounds like a good theory but there isn’t much substance to it.


> That assumes 10 was good and misses 8.1.

I find this to be a mostly valid assumption, and 8.1 shouldn't be counted separately from 8, just as Vista SP2 shouldn't be counted any differently from Vista (Vista was mostly fine after companies fixed their drivers and Microsoft toned things down a bit; 7 just drove that home and put some necessary distance between itself and Vista).

> Also 95/98/me were a different line from NT/2000.

I fail to see why this matters.

