
This was my take.

They’ve made an issue tracker out of JSON files and a text file.

Why not hook an MCP server up to an actual issue tracker?





Used an LLM to help write up the following, as I’m still pretty scattered about the idea and on mobile.

——

Something I’ve been going over in my head:

I used to work in a pretty strict Pivotal XP shop. PM ran the team like a conductor. We had analysts, QA, leads, seniors. Inceptions for new features were long, sometimes heated sessions with PM + Analyst + QA + Lead + a couple of seniors. Out of that you’d get:

- Thinly sliced epics and tasks
- Clear ownership
- Everyone aligned on data flows and boundaries
- Specs, requirements, and acceptance criteria nailed at both high- and mid-level

At the end, everyone knew what was talking to what, what “done” meant, and where the edges were.

What I’m thinking about now is basically that process, but agentized and wired into the tooling:

- Any ticket is an entry point into a graph, not just a blob of text:
  - Epics ↔ tasks ↔ subtasks
  - Linked specs / decisions / notes
  - Files and PRs that touched the same areas

- Standards live as versioned docs, not just a random Agents.md:

  - Markdown (with diagrams) that declares where it applies: tags, ticket types, modules.
  - Tickets can pin those docs via labels/tags/links.
- From the agent’s perspective, the UI is just a viewer/editor.
- The real surface is an API: “given this ticket, type, module, and tags, give me all applicable standards, related work, and code history.”

- The agent then plays something like the analyst + senior engineer role:
  - Pulls in the right standards automatically
  - Proposes acceptance criteria and subtasks
  - Explains why a file looks the way it does by walking past tickets / PRs / decisions

So it’s less “LLM stapled to an issue tracker” and more “that old XP inception + thin-slice discipline, encoded as a graph the agent can actually reason over.”
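To make the shape concrete, here’s a rough TypeScript sketch of what that graph plus the standards lookup could look like. All of the names (Ticket, StandardDoc, applicableStandards, etc.) are hypothetical; this is one way to model it, not a reference to any existing tool:

```typescript
// Hypothetical data model: tickets form a graph, standards declare where they apply.

type TicketType = "epic" | "task" | "subtask";

interface Ticket {
  id: string;
  type: TicketType;
  title: string;
  module: string;
  tags: string[];
  // Graph edges: parent/child tickets, linked specs/decisions, touched files/PRs.
  links: {
    parent?: string;
    children: string[];
    specs: string[];
    decisions: string[];
    files: string[];
    pullRequests: string[];
  };
}

interface StandardDoc {
  id: string;
  version: string;
  markdown: string; // doc body, diagrams and all
  // Where this standard applies; tickets can also pin it explicitly via links.
  appliesTo: {
    tags?: string[];
    ticketTypes?: TicketType[];
    modules?: string[];
  };
}

// What the API hands back for "given this ticket, type, module, and tags,
// give me all applicable standards, related work, and code history."
interface AgentContext {
  ticket: Ticket;
  standards: StandardDoc[];  // applicable + pinned standards
  relatedTickets: Ticket[];  // linked epics/tasks, same module or tags
  codeHistory: string[];     // PRs/commits that touched the same files
}

// Sketch of the standards lookup: match on tag, ticket type, or module overlap.
function applicableStandards(ticket: Ticket, all: StandardDoc[]): StandardDoc[] {
  return all.filter((doc) => {
    const { tags, ticketTypes, modules } = doc.appliesTo;
    return (
      (tags?.some((t) => ticket.tags.includes(t)) ?? false) ||
      (ticketTypes?.includes(ticket.type) ?? false) ||
      (modules?.includes(ticket.module) ?? false)
    );
  });
}
```

The UI stays a viewer over this; the agent only ever talks to the query side.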


Has any project tried forcing a planning layer as //TODO comments throughout the code before making any changes? Small loops, like resolving one //TODO at a time? What about limiting changes to one function at a time to stay focused? Or is everyone a slave to however the model was designed, and right now they’re designed for giant one-shot generations only?

Is it possible that all local models need to do better is more context, used to make simpler, smaller changes one at a time? I haven’t seen enough specific comparisons of how local models fail vs. the expensive cloud models.
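For the one-//TODO-at-a-time idea, nothing stops you from driving that loop from a harness today. A minimal sketch, assuming a hypothetical callModel helper standing in for whatever model API you use (nothing here is a real library call):

```typescript
// Hypothetical harness: pass 1 only plants //TODO comments as a plan,
// later passes resolve exactly one //TODO per call, scoped to one function.

declare function callModel(prompt: string): Promise<string>; // stand-in for any LLM API

async function planThenImplement(source: string, task: string): Promise<string> {
  // Pass 1: planning only. The model may only insert //TODO comments.
  let code = await callModel(
    `Insert //TODO comments describing each change needed for: ${task}\n` +
      `Do not modify any other lines.\n\n${source}`
  );

  // Pass 2+: one small loop per //TODO, each limited to a single function.
  while (code.includes("//TODO")) {
    code = await callModel(
      `Implement exactly one //TODO and delete that comment. ` +
        `Only touch the function that contains it.\n\n${code}`
    );
  }
  return code;
}
```

The plan-first, one-small-change-per-call discipline lives in the harness rather than the model here, which seems like exactly the setup worth comparing across local and cloud models.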



