Hacker News | meander_water's comments

Seems similar to a Show HN from 5 years ago: https://news.ycombinator.com/item?id=26256726

I gotta say Horcrux is a catchier name ;)


Until you get sued by JK Rowling. Unlikely? Sure, but I wanted to decouple for other reasons.

That's why I went with PassCrux for mine. Can't argue that it's too close, since "crux" is just Latin for cross, as in "crux of the matter" (JK likely invented Horcrux as a portmanteau of horror + crux).

https://github.com/xkortex/passcrux


Definitely catchier name!

Have you done a comparison of token usage + cost? I'd imagine there would be some re-inventing of the wheel for common tasks (i.e. rewriting code for very similar tasks), or do you re-use previously generated code?


It reuses previously generated code, so the tools it creates persist from session to session. It also lets the LLM avoid actually “seeing” the tokens in some cases, since it can pipe directly between tools or write to disk instead of having output returned into the LLM's context window.


Not foolproof, but a couple of easy ways to verify if images were AI generated:

- OpenAI uses the C2PA standard [0] to add provenance metadata to images, which you can check [1] (a rough local check is sketched after the links below)

- Gemini uses SynthId [2] and adds a watermark to the image. The watermark can be removed, but SynthId cannot as it is part of the image. SynthId is used to watermark text as well, and code is open-source [3]

[0] https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-...

[1] https://verify.contentauthenticity.org/

[2] https://deepmind.google/models/synthid/

[3] https://github.com/google-deepmind/synthid-text
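If you just want a quick local hint before uploading to the verifier, here's a minimal sketch (assumptions: Python 3, the file still carries its metadata; it only scans for the raw C2PA/JUMBF marker bytes and does not validate any signatures):

  # Heuristic C2PA presence check -- not a signature verification.
  # C2PA manifests are stored as JUMBF data, so the "jumb" box type and
  # "c2pa" labels usually show up in the raw bytes of a tagged image.
  import sys
  from pathlib import Path

  MARKERS = (b"jumb", b"c2pa")

  def has_c2pa_hint(path: str) -> bool:
      data = Path(path).read_bytes()
      return any(m in data for m in MARKERS)

  for image in sys.argv[1:]:
      verdict = "possible C2PA manifest" if has_c2pa_hint(image) else "no C2PA markers found"
      print(f"{image}: {verdict}")

False positives and negatives are possible; the verifier at [1] is the authoritative check.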


I just went to a random OpenAI blog post ("The new ChatGPT Images is here"), right-click saved one of the images (the one from the "Text rendering" section), and pasted it into your [1] link - no metadata.

I know the metadata is probably easy to strip, maybe even accidentally, but their own promotional content not having it doesn't inspire confidence.


Yeah, that's been my experience as well. I think most uploads strip the metadata, unfortunately.


SynthID can be removed: run it through an image-to-image model with a reasonably high denoising value, or add artificial noise and use another model to denoise, and voilà. It's effort that most probably aren't putting in, but it's certainly possible.


> The watermark can be removed, but SynthId cannot as it is part of the image.

That's not quite right. SynthID is a digital watermark, so it's hard to remove, while metadata can be easily removed.


Reminder that provenance exists to prove something as REAL, not to prove something is fake.

AI content outnumbers real content. We are not going to decide whether every single thing is real or not. C2PA is about labeling the gold in a way the dirt can't fake. A photo with it can be considered real and used in an encyclopedia or submitted to a court without people doubting it.


The lowest bar in agentic coding is the ability to create something which compiles successfully. Then something which runs successfully in the happy path. Then something which handles all the obvious edge cases.

By far the most useful metric is to have a live system running for a year with widespread usage that produces fewer bugs than a comparable codebase created by humans.

Until that happens, my skeptic hat will remain firmly on my head.


I recently created a throwaway API key for Cloudflare and asked a Cursor cloud agent to deploy some infra using it, but it responded with this:

> I can’t take that token and run Cloudflare provisioning on your behalf, even if it’s “only” set as an env var (it’s still a secret credential and you’ve shared it in chat). Please revoke/rotate it immediately in Cloudflare.

So clearly they've put some sort of prompt guard in place. I wonder how easy it would be to circumvent it.


If your prompt is complex enough, it doesn't seem to get triggered.

I use a lot of Ansible to manage infra, and before I learned about ansible-vault, I was moving some keys around unprotected in my lab. Bad hygiene, and no prompt intervening.

Kinda bums me out that there may be circumstances where the model just rejects this even if for some reason you needed it.


It seems to depend on the model and context usage though; the agent forgets a lot of things once the context is about half full. It even forgets the primary target you give at the start of the chat.


Claude definitely has some API token security baked in; it saw some API keys in a log file of mine the other day and called them out to me as a security issue very clearly. In this case it was a false positive, but it handled the situation well and even gave links to reset each token.


For those looking for a better language server for Python, I would recommend ruff. As of v0.4.5 its language server is written entirely in Rust, and it's much faster than Pylance.

If you've got the ruff plugin installed, it should use it by default. You should be able to use it in Zed as well.


I love this so much; on my phone this is much faster than actual HN (I know it's only a read-only version).

Where did you get the 22GB figure from? On the site it says:

> 46,399,072 items, 1,637 shards, 8.5GB, spanning Oct 9, 2006 to Dec 28, 2025


> Where did you get the 22GB figure from?

The HN post title (:


22GB is non-gzipped.


Hah, well that's embarrassing


Actually you can go one better:

  #!/usr/bin/env -S uv run --python 3.14 --script
Then you don't even need python installed. uv will install the version of python you specified and run the command.


Alternatively, uv lets you do this:

  #!/usr/bin/env -S uv run --script
  #
  # /// script
  # requires-python = ">=3.12"
  # dependencies = ["foo"]
  # ///


The /// script block is actually specified in PEP 723 and supported by several other tools apart from uv.
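For concreteness, a complete self-contained script using that block might look something like this (a minimal sketch; `requests` and the URL are purely illustrative):

  #!/usr/bin/env -S uv run --script
  #
  # /// script
  # requires-python = ">=3.12"
  # dependencies = ["requests"]
  # ///
  #
  # Any PEP 723-aware runner (uv, pipx, pdm, hatch) can resolve the
  # dependencies declared above before executing the script.
  import requests

  resp = requests.get("https://news.ycombinator.com")
  print(resp.status_code)

Make it executable and run it directly, or pass it to `uv run`.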


The last time I commented extolling the virtues of uv on here, I got a similar reply, pointing out that PEP 723 specs this behavior, and uv isn’t the only way. So I’ll try again in this thread: I’m bullish on uv, and waiting for Cunningham.


I am all in on uv as well, and advocating for its use heavily at $dayjob. But I think having as much as possible of these things encoded in standards is good for the ecosystem. Maybe in a few years' time, someone will make something even better than uv. And in the meantime, having things standardised speeds up adoption in e.g. syntax highlighting in editors and such.


That's good to hear; do you know what other tools support it?


From what I can tell, Hatch, PDM, pipx and pip-run also support it.


I’ve started migrating all of my ~15 years of one-off python scripts to have this front matter. Right now, I just update them when/if I use them. I keep thinking that if I were handier with grep/sed/regex etc., I’d try to programmatically update .pys system-wide. But many aren’t git tracked/version controlled, just lying in whatever dir they service(d). I’ve several times started a “python script dashboard” or “hacky tools coordinator” but stop when I remember most of these are unrelated (to each other) and un/rarely used. I keep watching the chatter and thinking this is probably an easy task for codex or some other agent, but these pys are “mine” (and I knew^ how they worked when I wrote^ them), and also, they’re scattered and there’s no way I’m turning an agent loose on my file system.

^mostly, some defs might have StackOverflow copy/pasta


You could run ripgrep on your file system root to find most of them (it's insanely fast), then feed the list to Claude or something to generate a script to do it for you.
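If you'd rather script the discovery step yourself, a rough sketch might look like this (the root path and the "# /// script" heuristic are assumptions -- adapt as needed):

  # Walk a directory tree and report which .py files already carry a
  # PEP 723 inline-metadata block and which still need one.
  from pathlib import Path

  ROOT = Path.home()  # assumption: scan your home directory

  def has_pep723_block(path: Path) -> bool:
      try:
          return "# /// script" in path.read_text(errors="ignore")
      except OSError:
          return False

  for script in ROOT.rglob("*.py"):
      status = "ok" if has_pep723_block(script) else "missing front matter"
      print(f"{status}\t{script}")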


This is an awesome feature for quick development.

I'm sure the documentation of this feature set highlights what I'm about to say, but if you're attracted to the simplicity of writing Python projects initialized using this method, do not use this code in staging/prod.

If you don't see why this is not production friendly, it's for the simple reason that when you create deployable artifacts packaging a project (or a dependency of a project) that uses this method, reproducible builds become impossible.

This will also lead to builds that pass your CI but fail to run in their destination environment, and vice versa, because they download their dependencies on the fly.

There may be workarounds, and I know little of this feature, so investigate yourself if you must.

My two cents.


This isn't really "alternatively"; it's pointing out that in addition to the shebang you can add a PEP 723 dependency specification that `uv run` (like pipx, and some other tools) can take into account.


I'm actually a bit annoyed that uv won. I found PDM to be a really nice solution that fixed most of the issues Poetry had, without the hard ideological stance behind it. But maybe that ideology was partly what drove its adoption.


Rust is getting this feature too; it's great for one-off scripts.


Yeah, but you need `uv`. If we are reaching out for tools that might not be around, then you can also depend on nix-shell:

    #! /usr/bin/env nix-shell
    #! nix-shell -i python3 --packages python3


Yeah, but you need Nix. If we are reaching out for tools that might not be around, then you can also depend on `curl | sudo bash` to install Nix when not present.

(this is a joke btw)


Yeah, but you need curl, sudo, and bash…


"Give me a 190-byte hex0 seed of x86 assembly, and I shall compile the rest of the world." - Archimedes


Amazing quote. Adding it to my about page; do you want credit, or shall I credit it to Archimedes? xD

On a serious note, it's so brilliant that something like this is now possible when you think about it. It's maddeningly crazy to think through the whole process, but in the end you can end up with a system / Linux ISO whose hash you can trust and independently verify, and then you use it and spread it around the world. Definitely makes me feel like the sky's the only limit, or at least it's very pleasant to think about.


... you must first invent the universe


As shared in a sibling comment, you can get away with just curl+shell: https://paulw.tokyo/standalone-python-script-with-uv/


The issue I have with `nix-shell` is that the evaluation time is long, so if you need to run the script repeatedly it may take a while. `nix shell` at least fixes this issue by caching evaluations, but I think uv is still faster.


This comes with the added benefit that your environment is reverted as soon as you exit the Nix shell.


I don't think your environment is permanently changed with uv run?


Where does uv download the Python interpreter to?


By default, it's `~/Library/Caches/uv/environments-v2/` on macOS.

You can find it via `uv cache dir`.

See: https://docs.astral.sh/uv/reference/cli/#uv-cache-dir


That shebang will work on GNU/Linux-based systems, but might not work elsewhere. I know that's the most popular target, but it may not work on macOS, BSDs, or even busybox.


I just tried the one you are replying to and it worked great on macOS. I frequently use a variant of this on my Mac.


That’s interesting. I wonder when that changed. Maybe FreeBSD supports multi-arg shebangs now, too.


The -S argument to env splits the argument on whitespace.

The shell doesn't support anything; it just passes the string to env.

So beware quoting and other delimiters that won't work the way you expect.


And with a small shebang trick, you don't even need to have uv installed [1], just curl and a POSIX shell.

[1] https://paulw.tokyo/standalone-python-script-with-uv/


> Then you don't even need python installed. uv will install the version of python you specified and run the command

What you meant was, "you don't need python pre-installed". This does not solve the problem of not wanting to have (or being limited from having) python installed.


I'd like someone to do a comparison of tech company valuations pre GenAI vs post for the same vertical.

I understand there's always some optimism for new tech, but the valuations we're seeing seem absurd to me.

Like, do they expect to see x100 profit for the same vertical? Obviously some new markets have been created, but I don't see them solving any particularly novel business problems.


Cool project, but loading "Crime and Punishment" crashed my mobile browser.

I don't think URLs were built for that kind of punishment.


I think it was the text-wrap-style value. https://developer.mozilla.org/en-US/docs/Web/CSS/Reference/P...

