I use the gh CLI to create and track issues on the repo's issue tracker, then reference the issue in the PR. I use Claude normally, and have Gemini and Codex sitting as automated reviewers (GitHub apps), then get Claude to review their comments. Rinse and repeat. It works quite well and catches some major issues. Reading the PRs yourself (at least skimming them for sanity) is still vital.
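For anyone unfamiliar with the workflow, here's a minimal sketch of the issue-then-PR step, assuming the standard `gh` CLI is installed and authenticated; the wrapper function, titles, and bodies are placeholders of mine, not part of any tool mentioned here.

```python
# Sketch of the issue -> PR loop described above, shelling out to the
# real `gh` commands. Titles and bodies are hypothetical placeholders.
import subprocess

def gh(*args: str) -> str:
    """Run a gh subcommand and return its trimmed stdout."""
    result = subprocess.run(["gh", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout.strip()

# 1. File the issue; `gh issue create` prints the new issue's URL.
issue_url = gh("issue", "create",
               "--title", "Fix flaky auth test",
               "--body", "Repro steps and expected behavior go here.")

# 2. Open the PR from the current branch, referencing the issue so
#    GitHub links it (and auto-closes it on merge).
pr_url = gh("pr", "create",
            "--title", "Fix flaky auth test",
            "--body", f"Closes {issue_url}")
print(pr_url)
```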
> Agents report that they enjoy working with Beads, and they will use it spontaneously for both recording new work and reasoning about your project in novel ways.
I’m surprised by this wording. I haven’t encountered anyone talking about AI preference before.
Can a trained LLM develop a preference for a given tool within some context and reliably report on that?
Is “what AI reports enjoying” aligned with AI’s optimal performance?
LLMs also report that they enjoy my questions; in fact, they tell me it's a good question literally every time I ask about their weird choices.
I've been trying `beads` out for some projects, in tandem with https://github.com/github/spec-kit, with pretty good results.
I set up spec-kit first, then updated its templates to tell it to use beads to track features and all that, instead of writing markdown files. If nothing else, this is a quality-of-life improvement for me, because recent LLMs seem to have an intense penchant for writing one or more markdown files per large task. Ending up with loads of markdown poop feels like the new `.DS_Store`, but harder to `.gitignore` because they'll name the files whatever floats their boat.
Cool stuff. The README is pretty lengthy, so it was a little hard to identify the core problem this tool is aiming to solve and how it tackles it differently from existing solutions.
A classic issue with AI-generated READMEs: never to the point, always repetitive and verbose.
Funnily, AI already knows what stereotypical AI sounds like, so when I tell Claude to write a README but "make it not sound like AI, no buzzwords, to the point, no repetition, but also don't overdo it, keep it natural", it does a very decent job.
This actually drastically improves any kind of writing by AI, even if it's just for my own consumption.
I'm not saying it is or isn't written by an LLM, but Yegge writes a lot, and usually well. It somehow seems unlikely he'd outsource the front page to AI, even if he's a regular user of AI for coding and code docs.
And full of marketing hyperbole. When I have an AI produce a README, I always have to ask it to tone it down and keep it factual.
This looks like a ticketing CLI.