Interesting approach. One thing I've been thinking about with agent review UIs is the state representation problem: how do you diff what the agent "knew" at step N vs. step N+1? If you can serialize the agent's cognitive state at each decision point (not just the code output), you can build much richer "why did it do that?" explanations.
Do you support rollback — i.e., if a reviewer rejects step 5, can the agent resume from step 4's state without replaying the whole chain?
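To make the question concrete, here's a rough sketch of the checkpoint shape that would enable both the step-N vs. step-N+1 diff and rollback-without-replay. All names (`AgentCheckpoint`, `diffKnowledge`, `rollbackTo`) are illustrative, not anything AgentClick actually exposes:

```typescript
// Hypothetical serialized agent state at one decision point.
interface AgentCheckpoint {
  step: number;
  beliefs: Record<string, unknown>; // what the agent "knew" at this step
  proposedAction: string;
}

// Diff what changed between two checkpoints: the "why did it do that?" view.
function diffKnowledge(a: AgentCheckpoint, b: AgentCheckpoint): string[] {
  const changed: string[] = [];
  const keys = new Set([...Object.keys(a.beliefs), ...Object.keys(b.beliefs)]);
  for (const k of keys) {
    if (JSON.stringify(a.beliefs[k]) !== JSON.stringify(b.beliefs[k])) {
      changed.push(k);
    }
  }
  return changed;
}

// Rollback: if step 5 is rejected, truncate history back to step 4's state
// instead of replaying the whole chain.
function rollbackTo(history: AgentCheckpoint[], step: number): AgentCheckpoint[] {
  return history.filter((c) => c.step <= step);
}
```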
I built AgentClick because I kept getting burned by AI agents acting too fast.
The pattern is always the same: you ask Claude Code or Codex to do something, it generates a response, you get a y/n prompt in the terminal, and you approve it without really reading it. Then it sends the wrong email, runs a destructive command, or commits to a plan you'd never agree to if you actually saw it laid out.
The core problem: terminal y/n is not a real review step.
AgentClick adds a browser-based review layer between "agent proposes" and "agent executes." The agent drafts something, a UI opens in your browser, you can actually read, edit, and approve it visually — then the agent continues.
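In pseudocode, the gate looks something like this. This is a minimal sketch of the control flow only, with the `review` callback standing in for the browser UI; the names and shapes are assumptions, not AgentClick's actual API:

```typescript
// A verdict from the human reviewer: approve (possibly with edits) or reject.
type Verdict =
  | { decision: "approve"; content: string }
  | { decision: "reject" };

// The "agent proposes → human reviews → agent executes" gate.
async function gated(
  draft: string,
  review: (d: string) => Promise<Verdict>, // blocks until the human acts
  execute: (c: string) => Promise<string>,
): Promise<string | null> {
  const verdict = await review(draft);
  if (verdict.decision === "reject") return null; // agent does not proceed
  // Execute the approved content, which may differ from the original draft.
  return execute(verdict.content);
}
```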
It works as a skill/plugin for:
- Claude Code
- Codex
- OpenClaw
- Any agent that can call HTTP tools
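For the "any agent that can call HTTP tools" case, the integration is roughly: serialize the draft and POST it to the local review server. The endpoint path, port, and field names below are placeholders, not the documented wire format:

```typescript
// Hypothetical payload an HTTP-tool-capable agent would send for review.
interface ReviewRequest {
  kind: "email" | "shell" | "plan" | "memory";
  title: string;
  body: string;
}

// Build the fetch arguments for submitting a draft to the review server.
function buildReviewRequest(
  kind: ReviewRequest["kind"],
  title: string,
  body: string,
) {
  return {
    url: "http://localhost:4321/review", // port is a placeholder
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ kind, title, body } satisfies ReviewRequest),
    },
  };
}
```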
What you can review:
- Email drafts and inbox triage
- Shell commands before execution
- Multi-step plans
- Memory updates
The key difference from just adding a confirmation step: you can actually edit the content, not just approve/reject. Change the tone of an email, fix a command flag, remove a step from a plan — then let the agent proceed.
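The approve-with-edits semantics can be sketched like this: if the reviewer edited the draft, the edited version is what the agent proceeds with. Shapes are illustrative, not AgentClick's actual format:

```typescript
// A review outcome: approved (optionally with reviewer edits) or rejected.
type Review =
  | { decision: "approve"; edited?: string }
  | { decision: "reject" };

// Resolve what the agent should actually run: null means "do not proceed".
function finalCommand(draft: string, review: Review): string | null {
  if (review.decision === "reject") return null;
  return review.edited ?? draft; // reviewer's edit wins over the original
}

// e.g. the agent drafted a broad delete and the reviewer narrowed it:
const cmd = finalCommand("rm -rf build tmp", {
  decision: "approve",
  edited: "rm -rf build",
});
```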
Install: `npm install -g @harvenstar/agentclick`
Works locally on localhost, or use `--remote` for a Cloudflare tunnel to review from your phone.