Hacker News | maximilianroos's comments

I've been working on a worktree manager for a couple months, excited to share it.

Since agents have become good enough to run in parallel, I've found git worktrees to be, in the words of Juliet "my only love sprung from my only hate" — an awesome productivity multiplier, but with a terrible UX...

Worktrunk is designed to fix that: 1) it's a wonderful layer on top of git worktrees and 2) it adds a lot of optional QoL improvements focused on parallel agents.

Those QoL improvements include a command to show the status of all worktrees/branches (including CI status & links to PRs), a great Claude Code statusline, a command to have an LLM write a commit message, etc.

Like my other projects (PRQL, xarray, insta, numbagg), it's open source, with no commercial intent. It's written in Rust, extensively tested, and crafted with love (no slop!)

Check it out, and please let me know any feedback, either here or on GitHub. Thanks in advance, Max

- https://github.com/max-sixty/worktrunk

- https://worktrunk.dev/


Have your AI talk to their AI

Then, if the AIs are positive, the human principals can talk

Seems quite reasonable!


> One of the challenges faced by our Java service was its inability to quickly provision and decommission instances due to the overhead of the JVM. ... To efficiently manage this, we aim to scale down when demand is low and scale up as demand peaks in different regions.

but this seems to be a totally asynchronous service with extremely liberal latency requirements:

> On a regular interval, Password Monitoring checks a user’s passwords against a continuously updated and curated list of passwords that are known to have been exposed in a leak.

why not just run the checks at the backend's discretion?


> "why not just run the checks at the backend's discretion?"

Because the other side may not be listening when the compute is done, and you don't want to cache the result of the computation because of privacy.

The sequence of events is:

1. Phone fires off a request to the backend.

2. Phone waits for a response from the backend.

The gap between 1 and 2 cannot be long, because the phone is burning battery the entire time it's waiting, so there are limits to how long you can reasonably expect the device to wait before it hangs up.

In a less privacy-sensitive architecture you could:

1. Phone fires off a request to the backend, and gets a token for looking up the response later.

2. Phone checks for the response later with the token.

But that requires the backend to hold onto the response, which for privacy-sensitive applications you don't want!
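A toy in-memory sketch of that token-based pattern, and the cached result it forces the backend to hold (`Backend`, `submit`, and `poll` are invented names; a real backend would do the work asynchronously):

```python
import secrets

class Backend:
    def __init__(self):
        self._results = {}  # token -> result; held until the client polls

    def submit(self, request):
        """Accept work and hand back a token for later pickup."""
        token = secrets.token_hex(8)
        self._results[token] = self._compute(request)
        return token

    def poll(self, token):
        """Client returns later; the result is deleted once delivered."""
        return self._results.pop(token, None)

    def _compute(self, request):
        return f"checked:{request}"

backend = Backend()
token = backend.submit("pw-hash")
print(backend.poll(token))  # the cached result: the privacy cost noted above
print(backend.poll(token))  # None: gone after pickup
```

The `_results` dict is exactly the "backend holds onto the response" problem: between `submit` and `poll`, the result sits on the server.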


Especially since the request contains the user's (hashed) passwords. You definitely don't want to be holding that on the server for longer than necessary.


Is it really a problem? Client can pass an encryption key with the request and then collect encrypted result later. As long as computation is done and result is encrypted, server can forget the key, so cache is no longer a privacy concern.
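A minimal sketch of that scheme (all names invented; XOR stands in for real authenticated encryption, which a real system would use):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real encryption; do not use XOR in practice.
    return bytes(d ^ k for d, k in zip(data, key))

# Client generates a one-time key and sends it along with the request.
client_key = secrets.token_bytes(32)

# Server: compute, seal the result with the client's key, then forget the key.
server_key = client_key                        # key as received with the request
result = b"no leaks found".ljust(32)           # padded to key length for the toy cipher
cached_ciphertext = xor_bytes(result, server_key)
server_key = None                              # the "forget the key" step: if this is
                                               # skipped, the cache becomes a breach

# Later: client fetches the ciphertext and decrypts with its own copy of the key.
assert xor_bytes(cached_ciphertext, client_key) == result
```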


You can, and in situations where the computation is unavoidably long that's what you'd do. But if you can do a bit of work to guarantee the computation is fast then it removes a potential failure mode from the system - a particularly nasty one at that.

If you forget to dump the key (or if the deletion is not clean) then you've got an absolute whopper of a privacy breach.

Also worth noting that you can't dump the key until the computation is complete, so you'd need to persist the key in some way which opens up another failure surface. Again, if it can't be avoided that's one thing, but if it can you'd rather not have the key persist at all.


  UPDATE checks SET result = ?, key = NULL

Is it that hard?

Also I don’t think persisting a key generated per task is a big privacy issue.


thanks!


> why not just run the checks at the backend's discretion?

Presumably it's a combination of needing to do it while the computer is awake and online, and also the Passwords app probably refreshes the data on launch if it hasn't updated recently.


I posted some notes from a full setup I've built for myself with worktrees: https://github.com/anthropics/claude-code/issues/1052

I haven't productized it though; uzi looks great!


TIL worktrees exist! https://git-scm.com/docs/git-worktree

Thanks :)



that's a great hierarchy!

though what does "static cpu" vs "dynamic cpu" mean? it's one thing to be pointer chasing and missing the cache like OCaml can, it's another to be running a full interpreter loop to add two numbers like python does
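(That interpreter overhead is easy to see with the stdlib `dis` module: even `a + b` goes through several bytecode dispatches.)

```python
import dis

def add(a, b):
    return a + b

# Each line of output is one trip through CPython's dispatch loop.
dis.dis(add)
```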


That's what it means, basically. I draw a distinction between static code like C++ or Rust may generate and code like what Python may generate.

There is a middle ground of languages that box everything but lack the rich dynamism of Python or Ruby, such as Erlang (and, I believe, OCaml if you aren't semi-carefully programming for performance), which fits fairly cleanly between the two. But compared to the uptake of static languages on one side and fully dynamic scripting languages on the other, these are relatively uncommon, so they don't get their own "in between" tier in my head; it would end up being a 0.75-ish tier that breaks the pattern. That's not to say they are bad or uninteresting (there are plenty of interesting languages there); they just aren't as popular.


Big fan of QStudio! Thanks for building it!


SQL is terrible at allowing this sort of transformation.

One benefit of PRQL [disclaimer: maintainer] is that it's simple to add additional logic — just add a line filtering the result:

  from users
  derive [full_name = name || ' ' || surname]
  filter id == 42           # conditionally added only if needed
  filter username == param  # again, only if the param is present
  take 50
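To make "conditionally added" concrete, here's a hypothetical helper that assembles the pipeline above line by line (the function and parameter names are invented):

```python
def build_query(user_id=None, username=None):
    # Assemble the PRQL pipeline, adding filter lines only when needed.
    lines = [
        "from users",
        "derive [full_name = name || ' ' || surname]",
    ]
    if user_id is not None:
        lines.append(f"filter id == {user_id}")
    if username is not None:
        lines.append(f"filter username == '{username}'")
    lines.append("take 50")
    return "\n".join(lines)

print(build_query(user_id=42))
```

Because each transform is its own line, the conditional logic is just list appends; there's no splicing clauses into the middle of a SELECT.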


I never looked into PRQL; does the ordering matter? If not, that would be great. I.e., is this the same:

  from users
  take 50
  filter id == 42           # conditionally added only if needed
  filter username == param  # again, only if the param is present
  derive [full_name = name || ' ' || surname]
  
? That's more how I tend to think and write code; in SQL, I always jump around in the query because I don't think in the order SQL executes.

I usually use Knex or EF or similar, where ordering doesn't matter; it's a joy. However, I'd prefer writing queries directly, as that's easier.


`take 50` of all `users` records (in whatever order) and afterwards filter the result by username and id? I hope that's the right answer and the PRQL authors are sane.


I do sometimes miss RQL with RethinkDb. It was a cool database.


That's a facially absurd statement. Just on the numbers:

The US consumes 500 gigawatts on average, or 5000 watts per household.

So if every household bought an 8K TV, turned it on literally 100% of the time, and didn't reduce their use of their old TV, it would represent a 10% increase in power consumption.

The carbon emissions from residential power generation have approximately halved in the past 20 years. So even with the wildest assumptions, it doesn't "throw away all the progress we've made on Global Warming for the past 20 years ...".
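The arithmetic, spelled out (the ~500 W TV draw is an assumption implied by the 10% figure; the other numbers come from the comment above):

```python
us_avg_power_w = 500e9   # 500 GW average US consumption
households = 100e6       # implied by "5000 watts per household"
per_household_w = us_avg_power_w / households
assert per_household_w == 5000

tv_draw_w = 500          # assumed draw for a large 8K TV running constantly
increase = tv_draw_w / per_household_w
print(f"{increase:.0%}")  # → 10%
```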


Sounds like the coach helped you maintain eye-contact with the camera. But if we get a tool to do this, then we're lying. Would you say the coach helped you lie?


That doesn't even make sense. The lie is that you're not doing the thing you are projecting as doing. You just said the coach helped the poster do the thing they projected as doing.


> He prefers his own “escalatory approach”, working through a system via an administrator’s access and searching for a “confluence”, a collection of information shared in one place, such as a workplace intranet.

Was this a mistaken transcription for Confluence, the Atlassian app?


It sounds like the journalist didn't know what Confluence is and thought it was a term of art for any generic intranet.

edit: to those saying the word makes sense without referring to the Atlassian product, I'm not buying it. The journalist put it in quote marks, which to me suggests he thought it was a term of art — if he instead meant it metaphorically, I don't think he would have phrased it like that. It's also just an odd word to use to describe the idea.


This would be a fun SAT question: WordPress is to blog as Confluence is to __intranet__


The dictionary meaning of "confluence", namely an aggregation or coming together of disparate sources of stuff (information, in this case) into a single place, makes perfect sense here. And searching for places that lots of information gathers seems like a sensible approach to me. The fact that one product happens to have the same name didn't even cross my mind.


Confluence literally means the junction of two rivers; genericized, it's where two or more things join or occur together ("a confluence of events"), so it could be either. But naming Confluence (the web application) is very specific, and not everyone uses it.


To "conflate" two or more things is to merge them into one.

In tech we usually assume "confluence" means the Atlassian product, not "a merging of several items".


In case anyone doesn't know about the field of etymology: https://www.etymonline.com/word/confluence


Confluence, n.: a collection of semirandom characters emitted by employees trying to look busy, interned in a series of secure silos, with stringent access controls, to hide the evidence.


This is what happens when people 'sanitize' their writing with an AI. It doesn't often understand trademarks or context, so we get stuff like this.

I imagine the real human-written sentence was "Trying to get admin access via a Confluence exploit"; there are many of those, and it's an app that IT groups take their time updating.


As I wrote in the sibling comment to yours, it really could go either way. A confluence, a place where you find a lot of information like an intranet shared drive, is a reasonable interpretation without the original quote in place. But so is Confluence the application as an example of a confluence which also exists on an intranet, and the writer misunderstood and (being a writer) used their familiarity with English to infer more than was said.

We don't need AI for either interpretation, just familiarity with English.

