Hacker News: codybontecou's comments

Even if AI progress plateaus, I'm confident we would build tooling and patterns around the current models that would surpass hand-crafted equivalents.

Are you using the Chinese models through their individual services or via an intermediary layer?

I'm not the person you're responding to, but I've tried both: using OpenRouter, and giving a Chinese company $5 on my credit card to buy tokens directly. If I know which model I want to experiment with, I much prefer to just pay the $5 and have plenty of tokens to play with. On a yearly basis, that's a tiny expense for the benefit.

Most are lucky to get a few sign ups.

Don't bet on the models and their providers stagnating. Build with the idea that they will continue to improve.

How did you find patterns between these sentences?


Stuck behind Apple's app review process.


That’s… actually a great definition. I’m going to try to retain that.


This is classically called "Inversion of Control"[0] or the "Hollywood Principle"[1], as in "Don't call us -- we'll call you".

[0] - https://martinfowler.com/bliki/InversionOfControl.html [1] - https://wiki.c2.com/?HollywoodPrinciple
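A minimal Python sketch of the idea (the `EventLoop` class and handler names are illustrative, not from either reference): the framework owns the control flow and calls back into user code, rather than user code calling the framework.

```python
class EventLoop:
    """Toy framework: users register handlers; the loop decides when to call them."""

    def __init__(self):
        self.handlers = {}

    def on(self, event, handler):
        # User code hands control to the framework by registering callbacks.
        self.handlers.setdefault(event, []).append(handler)

    def run(self, events):
        # The framework, not the user, drives execution ("we'll call you").
        for event in events:
            for handler in self.handlers.get(event, []):
                handler(event)


loop = EventLoop()
received = []
loop.on("click", lambda e: received.append(e))
loop.run(["click", "scroll", "click"])
print(received)  # -> ['click', 'click']
```

The inversion is that `run` decides when your lambda fires; you never call it yourself.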


Is this just a well-documented API?


What does a devlog look like? "Today I decided to prompt about feature x... and it worked!"


Here's an example of one of their dev logs: https://www.youtube.com/watch?v=pdym24sg1HQ


Huh, nice. Thanks for sharing.


Would you share the tools you used to create it? Is the voice your own?


This sounds interesting. Can you go a bit deeper or provide references on how to implement the green/red/refactor subagent pattern?


What has worked better for me is splitting authority, not just prompts. One agent can touch app code, one can only write failing tests plus a short bug hypothesis, and one only reviews the diff and test output. Also make test files read only for the coding agent. That cuts out a surprising amount of self-grading behavior.
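One way to enforce the read-only part at the filesystem level (a sketch, assuming tests live under a `tests/` directory; `lock_tests` is an illustrative helper, not the commenter's actual setup):

```python
import stat
from pathlib import Path


def lock_tests(test_dir="tests"):
    """Strip write permission from every test file so the coding agent
    cannot rewrite tests to grade its own work."""
    for path in Path(test_dir).rglob("*.py"):
        # 0o444: read-only for owner, group, and others.
        path.chmod(stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)
```

Run it after the test-writing agent finishes and before handing the repo to the coding agent; flip the bits back for the next red phase.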


How do you limit access like that?


It’s not an agentic pattern; it’s an approach to test-driven development.

You write a failing test for the new functionality that you’re going to add (which doesn’t exist yet, so the test is red). You then write the code until the test passes (that is, goes green).
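A minimal Python sketch of the cycle (the `slugify` function is an illustrative example, not from the thread):

```python
# Red: write the test first. It fails because slugify doesn't exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"


# Green: write just enough code to make the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")


test_slugify()  # passes now; refactor freely with the test as a safety net
```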


I built rlm-workflow which has stage gating, TDD and sub-agent support: https://skills.sh/doubleuuser/rlm-workflow/rlm-workflow


That's the cool bit: you don't have to. CC is well aware of the pattern and perfectly competent to implement it; just tell it to.


"So this is how liberty dies... with thunderous applause.” - Padmé Amidala

s/liberty/knowledge


