Show HN: Project management system for Claude Code

github.com

140 points by aroussi 9 hours ago


I built a lightweight project management workflow to keep AI-driven development organized.

The problem was that context kept disappearing between tasks. With multiple Claude agents running in parallel, I’d lose track of specs, dependencies, and history. External PM tools didn’t help because syncing them with repos always created friction.

The solution was to treat GitHub Issues as the database. The "system" is ~50 bash scripts and markdown configs that:

- Brainstorms with you to create a markdown PRD, spins up an epic, and decomposes it into tasks
- Syncs epics and tasks with GitHub Issues
- Tracks progress across parallel streams
- Keeps everything traceable back to the original spec
- Runs fast from the CLI (commands finish in seconds)
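As a rough sketch of the sync step, under assumed conventions (one task per markdown file whose first line is a `# Title` heading; the real ccpm scripts and file layout may differ), the GitHub-Issues-as-database idea looks something like:

```shell
# Hypothetical sketch, not ccpm's actual script: push local task files
# to GitHub Issues via the gh CLI. The epics/<epic>/<task>.md layout
# and "# Title" first-line convention are assumptions.
epic_dir="epics/demo"
mkdir -p "$epic_dir"

# Create a sample task file (for illustration only)
cat > "$epic_dir/001-auth.md" <<'EOF'
# Add login endpoint
Depends-on: none
EOF

for task in "$epic_dir"/*.md; do
  # First line holds the title; strip the leading "# "
  title=$(head -n 1 "$task" | sed 's/^# //')
  echo "would run: gh issue create --title \"$title\" --body-file \"$task\""
done
```

Here the `gh` command is only echoed; a real sync script would execute it and write the returned issue number back into the task file so later runs update instead of duplicate.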

We’ve been using it internally for a few months and it’s cut our shipping time roughly in half. Repo: https://github.com/automazeio/ccpm

It’s still early and rough around the edges, but has worked well for us. I’d love feedback from others experimenting with GitHub-centric project management or AI-driven workflows.

moconnor - 5 hours ago

"Teams using this system report:

89% less time lost to context switching

5-8 parallel tasks vs 1 previously

75% reduction in bug rates

3x faster feature delivery"

The rest of the README is LLM-generated, so I kinda suspect these numbers are hallucinated, aka lies. They also conflict somewhat with your "cut shipping time roughly in half" quote, which I'm more likely to trust.

Are there real numbers you can share with us? Looks like a genuinely interesting project!

tummler - 5 hours ago

A project management layer is a huge missing piece in AI coding right now. Proper scoping, documentation, management, etc is essential to getting good results. The people who are having the most success with “vibe coding” have figured this out, but it should really be incorporated into the process.

jdmoreira - 7 hours ago

I'm a huge fan of Claude Code. That being said it blows my mind people can use this at a higher level than I do. I really need to approve every single edit and keep an eye on it at ALL TIMES, otherwise it goes haywire very very fast!

How are people using auto-edits and these kinds of higher-level abstractions?

swader999 - 7 hours ago

The advantage with using multiple agents is in context management, not parallelization. A main agent can orchestrate sub agents. The goal is to not overwhelm the main agent with specialized context for each step that can be delegated to separate task focused agents along the way.

Test runner sub agent knows exactly how to run tests, summarize failures etc. It loads up all the context specific to running tests and frees the main agent's context from all that. And so on...
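Claude Code supports this pattern via project sub-agents: markdown files with YAML frontmatter under `.claude/agents/`. A minimal sketch of a test-runner agent definition (the name, tool list, and prompt are illustrative, not taken from the commenter's actual setup):

```shell
# Install a hypothetical "test-runner" sub-agent for Claude Code.
# The frontmatter fields (name, description, tools) follow Claude Code's
# sub-agent format; the prompt body is an illustrative example.
mkdir -p .claude/agents
cat > .claude/agents/test-runner.md <<'EOF'
---
name: test-runner
description: Runs the test suite and summarizes failures. Use proactively after code changes.
tools: Bash, Read
---
You are a test-running specialist. Run the project's test command,
then report only failing tests with file, line, and a one-line cause.
Do not modify code.
EOF
echo "installed sub-agent: .claude/agents/test-runner.md"
```

The main agent delegates to this sub-agent by name; only the short failure summary comes back, keeping the test-running context out of the orchestrator's window.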

tmvphil - 5 hours ago

Sorry, I'm going to be critical:

"We follow a strict 5-phase discipline" - So we're doing waterfall again? Does this seem appealing to anyone? The problem is you always get the requirements and spec wrong, and then AI slavishly delivers something that meets spec but doesn't meet the need.

What happens when you get to the end of your process and you are unhappy with the result? Do you throw it out and rewrite the requirements and start from scratch? Do you try to edit the requirements spec and implementation in a coordinated way? Do you throw out the spec and just vibe code? Do you just accept the bad output and try to build a new fix with a new set of requirements on top of it?

(Also the llm authored readme is hard to read for me. Everything is a bullet point or emoji and it is not structured in a way that makes it clear what it is. I didn't even know what a PRD meant until halfway through)

royletron - 5 hours ago

It still feels like the more context the agents have, the worse the response becomes, and simultaneously the more money ends up being thrown at Anthropic. I have to handhold agents to get anywhere near stuff I actually want to commit with my name on it.

blancotech - 4 hours ago

I’m curious how any project management to code agent workflow can be successful given how messy the process is in real life.

Especially discovering unknown unknowns that lead to changes in your original requirements. This often happens at each step of the process (e.g. when writing the PRD, when breaking down the tickets, when coding, when QAing, and when documenting for users).

That’s when the agent needs to stop and ask for feedback. I haven’t seen (any) agents do this well yet.

yodon - 8 hours ago

Lots of thought went into this. It would be very helpful to see examples of the various workflows and documents. Perhaps a short video of the system in use?

Nizoss - 5 hours ago

I'm genuinely curious to see what the software quality looks like with this approach. Particularly how it handles complexity as systems grow. Feature development is one thing, going about it in a clean and maintainable way is another.

I've come across several projects that try to replicate agile/scrum/SAFe for agents, and I'm trying to understand the rationale. Since these frameworks largely address human coordination and communication challenges, I'm curious about the benefits of mapping them to AI systems. For instance, what advantages does separating developer and tester provide versus having unified agents that handle both functions?

CuriouslyC - 2 hours ago

You should just try to integrate your work with Vibe Kanban, I'm pretty sure it's going to be the winning tool in this space.

nivertech - 7 hours ago

Task decomposition is the most important aspect of software design and SDLC.

Hopefully, your GitHub tickets are large enough: e.g., covering one vertical scope, one cross-cutting function, or some reactive work such as bug fixing or troubleshooting.

The reason is that coding agents are good at decomposing work into small tasks/TODO lists. IMO, too many tickets on GitHub will interfere with this.

vemv - 4 hours ago

It will increasingly become common knowledge that the best practice for AI coding is small edits quite carefully planned by a human. Else the LLM will keep going down rabbit holes and failing to produce useful results without supervision.

Huge rules systems, all-encompassing automations, etc all assume that more context is better, which is simply not the case given that "context rot" is a thing.

linkage - 4 hours ago

Looks like a simpler version of BMAD

https://github.com/bmad-code-org/BMAD-METHOD

thomask1995 - 6 hours ago

OK I need to give this a go. tbh, I've been going back to just writing stuff manually and asking ChatGPT doc questions.

I talked to an extremely strong engineer yesterday who is basically doing exactly this.

Would love to see a video/graphic of this in action.

dcreater - 6 hours ago

This is a more advanced version of what I'm doing.

I was impressed that someone took it up to this level until I saw the telltale signs of AI-generated content in the README. Now I have no faith that this is a system that was developed, iterated on, and tested to actually work, rather than just a prompt to an AI to dress up a more down-to-earth workflow like mine.

Evidence of results improvement using this system is needed.

nikolayasdf123 - 6 hours ago

When you go to their website, some person immediately starts talking to you in the bottom-left corner. This is hilarious; websites today have got to tune the sales down a bit.

jamauro - 6 hours ago

Looks interesting. How do you make sure that agents that need to collaborate on the solution actually collaborate if they’re working in parallel?

brainless - 5 hours ago

I love what is happening in this domain, so many people experimenting. Thanks for sharing this.

I recently launched https://letsorder.app, https://github.com/brainless/letsorder.

100% of the product (2 web UI apps, 1 backend, 1 marketing site) was generated by LLMs, including deployment scripts. I follow a structured approach. My workflow is a mix of Claude Code, Gemini CLI, Qwen Code or other coding CLI tools with GitHub (issues, documentation, branches, worktrees, PRs, CI, CodeRabbit and other checks). I have recently started documenting my thoughts about user flows with voice and transcribing them. It has shown fantastic results.

Now I am building https://github.com/brainless/nocodo as the most ambitious project I have tried with LLMs (vibe coding). It runs the entire developer setup on a managed Linux server and gives you access through desktop and mobile apps, all self-hosted on your cloud accounts. It basically takes an idea all the way to going live with full-stack software.

nikolayasdf123 - 6 hours ago

their website also features some shredded bold dude. got to respect their sales skills

poopiokaka - 3 hours ago

Make it work with GitLab

dalore - 4 hours ago

how to use it on an existing repo that has a few issues, a milestone, labels, etc?

apwell23 - 6 hours ago

> With multiple Claude agents running in parallel

Are ppl really doing this? My brain gets overwhelmed if i have more than 2 or 3.

mustaphah - 6 hours ago

TL;DR workflow phases:

- Brainstorm a PRD via guided prompts (prds/[name].md).

- Transform PRD into epics (epics/[epic-name]/epic.md).

- Decompose epic into tasks (epics/[epic-name]/[feature-name]/[task].md).

- Sync: push epics & tasks to GitHub Issues.

- Execute: Analyze which tasks can be run in parallel (different files, etc). Launch specialized agents per issue.
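The parallelism check in the Execute phase can be sketched as a file-overlap test (a guess at the mechanism based on the "different files" criterion; the task file lists here are hardcoded for illustration):

```shell
# Hypothetical sketch: two tasks may run in parallel only if the sets of
# files they declare to touch are disjoint. Intersect the sorted lists
# with comm(1); any common line means a conflict.
printf '%s\n' src/auth.ts src/db.ts      | sort > task_a_files.txt
printf '%s\n' src/ui.ts src/styles.css   | sort > task_b_files.txt

overlap=$(comm -12 task_a_files.txt task_b_files.txt)
if [ -z "$overlap" ]; then
  echo "parallel-ok: tasks touch disjoint files"
else
  echo "conflict on: $overlap"
fi
```

In a real system each task's file list would come from the task's markdown metadata rather than being hardcoded.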

penguin202 - 5 hours ago

[dead]