
I've launched 3 Rails SaaS products in the last 6 months, all profitable. In the world of LLMs, things like this feel less valuable. I can kick off a Claude Code prompt and in an hour have a decent design system with Rails components.

Things like this likely need to be AI-first moving forward. This feels built for humans.


Personally, if I feel like you vibe coded your SaaS, I'm probably not gonna pay for it. You can obviously tell when a project is vibe coded just from the way it looks, the weird bugs you see, and the poor documentation.

There's definitely a market for good-looking UI that actually works and stands out from the vibe-coded junk. Artisanal, corn-fed UI, I guess.


Same here. This was human-driven UI. I used AI sparingly, mostly for architecture decisions on the gem; otherwise it was all by hand. I'm a product designer by trade.

That's fair. I think there's a future where some folks won't want AI to generate all the things. I replied to another comment earlier, but this involved very little AI, minus some architecture direction for the underlying Ruby gem.

Any chance to reach out to you? I'd like to ask you some questions about those SaaS products (not in a bad way, just trying to learn).

Maybe they used AI to make this? But really though, I hope they didn't and did some of the designing themselves... I'm worried we are approaching a world where we never get new human designs, just regurgitated designs from pre-2025.

I used AI sparingly, actually. Mostly just some help with the Ruby gem architecture and how to approach swapping themes on the fly; otherwise it was all me. I'm a product designer by day, so I do this stuff constantly.

I came here to say: is someone going to tell him? Glad I'm not the only one to be like "wait... I can do this with an agent in no time".

In fact, armed with Context7, Claude could recreate this whole business model in a day.


Definitely aware. I built it to scratch my own itch, to be honest. I'm going the non-AI route with it. Lotta slop out there. I'm sure it will improve, but I'm fine with this being a side gig.

This is how I say it in my head.


Now every time I see UTC, I hear the voice of Yoda: "Universal Time, Coordinated it is."


Interesting take. As a low vision person, the icons help me scan menus like this.


This is what holds me back from Zed.


This is the crux of knowledge/tool enrichment in LLMs. The idea that we can have knowledge bases and that LLMs will know WHEN to use them is a bit of a pipe dream right now.


Can you be more specific? The simple case seems to be solved: e.g., if I have an MCP for foo enabled and then ask about a list of foo, Claude will go and call the list function on foo.


> […] and then ask about a list of foo

Not OP, but this is the part that I take issue with. I want to forget what tools are there and have the LLM figure out on its own which tool to use. Having to remember to add special words to encourage it to use specific tools (required a lot of the time, especially with esoteric tools) is annoying. I’m not saying this renders the whole thing “useless” because it’s good to have some idea of what you’re doing to guide the LLM anyway, but I wish it could do better here.
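
A minimal sketch of why this is so phrasing-sensitive (the tool and model id here are purely illustrative, not anything from this thread): when you call a model directly, the registered name/description pair is all it ever sees about a tool, so "knowing when" to use it reduces to whether your wording happens to match that description.

    import anthropic

    client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

    # The model's only basis for picking this tool is the name and
    # description below; there is no deeper sense of "when" to use it.
    tools = [{
        "name": "list_foo",  # hypothetical tool, standing in for any MCP tool
        "description": "List all foo items in the current workspace.",
        "input_schema": {"type": "object", "properties": {}},
    }]

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model id
        max_tokens=1024,
        tools=tools,
        messages=[{"role": "user", "content": "What foo do I have?"}],
    )

    # If the question's wording doesn't line up with the description, the
    # model may answer from its weights instead of emitting a tool_use block.
    print([block.type for block in response.content])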


I've got a project that needs to run a special script (not just "make $target" at the command line) in order to build, and even with instructions in multiple .md files, Codex with gpt-5-high still forgets and runs make blindly, which fails, and it gets confused annoyingly often.

ooh, it does call make when I ask it to compile, and it's able to call a couple of other popular tools without having to refer to them by name. if I ask it to resize an image, it'll call ImageMagick, or run ffmpeg, and I don't need to refer to ffmpeg by name.

so at the end of the day, it seems they are their training data. better to write a popular blog post about your one-off MCP and the tools it exposes; maybe the next version of the LLM will have your blog post in the training data and will automatically know how to use it without having to be told


Yeah, I've done this just now.

I installed ImageMagick on Windows.

Created a ".claude/skills/Image Files/" folder

Put an empty SKILLS.md file in it

and told Claude Code to fill in the SKILLS.md file itself with the path to the binaries.

and it created all the instructions itself, including examples and troubleshooting.

and then in my project I prompted:

"@image.png is my base icon file, create all the .ico files for this project using your image skill"

and it all went smoothly
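
For anyone wanting to try the same, a rough sketch of what such a generated SKILLS.md might contain (everything below is hypothetical, including the install path — Claude wrote the real one):

    # Image Files skill

    Use ImageMagick for image conversion and resizing.
    Binary: C:\Program Files\ImageMagick-7.1.1-Q16\magick.exe  (hypothetical path)

    ## Example: build a multi-resolution .ico from a PNG
    magick image.png -define icon:auto-resize=16,32,48,256 app.ico

    ## Troubleshooting
    If "magick" is not found, verify the binary path above is on PATH.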


It doesn't reliably do it. You need to inject context into the prompt to instruct the LLM to use tools/KBs/etc. It isn't deterministic when or if it will follow through.


Sub-agents. I've had Claude Code run a prompt for hours on end.


What kind of agents do you have set up?


You can use the built-in task agent. When you have a plan and are ready for Claude to implement it, just say something along the lines of "begin implementation, split each step into their own subagent, run them sequentially".


Subagents are where Claude Code shines and Codex still lags behind. Claude Code can do some things in parallel within a single session with subagents; Codex cannot.


By parallel, do you mean editing the codebase in parallel? Does it use some kind of mechanism to prevent collisions (e.g., worktrees)?


Yeah, in parallel. They don't call it yolo mode for nothing! I have Claude configured to commit units of work to git, and after reviewing the commits by hand, they're cleanly separated by file. The todos don't conflict in the first place, though; e.g., changes to the admin API code won't conflict with changes to submission frontend code, so that's the limited human mechanism I'm using for that.

I'll admit it's a bit insane to have it make changes in the same directory simultaneously. I'm sure I could ask it to use git worktrees and have it use separate directories, but I haven't needed to try that (yet), so I won't comment on how well it would actually do with that.
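
For reference, the worktree variant would look roughly like this (branch and directory names are made up):

    # one checkout per workstream, so parallel edits can't touch the same tree
    git worktree add -b admin-api ../proj-admin
    git worktree add -b submission-frontend ../proj-frontend

    # list the checkouts; merge the branches back once the work is reviewed
    git worktree list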


I personally do not do any writes in parallel, but parallelism works great for read operations like investigating multiple failing tests.


Claude Code with a good prompt can run for hours.


Okay, but when will we get visibility into this beyond "we're at 50% of the limit"? If you're going to introduce week-long limits, transparency into usage is critical.


This. Optimize for the good actors, not the bad ones.


If you aren't hitting the limits, you aren't writing great prompts. I can write a prompt and have it go off and work for about an hour and hit the limit. You can have it launch sub-agents, parallelize work, and autonomously operate for long periods of time.

Think beyond just saying "do this one thing".


How is it a great prompt if it runs for an hour without your input? Sounds like it's just generating wasteful output.


Who said it was writing code for an hour? It's solving complex problems: writing SQL, querying data, analyzing data, formulating plans.

What do you do for hours?

If all you're thinking about is code output, you're thinking too small.


You should really read this.

https://www.anthropic.com/news/claude-4

It was given a task and solved it by operating for 7 hours straight.


With 7-hour tasks, it might become worthwhile to invest in a RAM-based local solution with DeepSeek Coder? I've heard you can run it with 300-700 GB of RAM. With such long tasks, Claude may run out of usage, right? So queueing it up on a local server may make sense? Always looking for an excuse to do things in-house, but it has to make sense :)


I have not tested Claude Code, but that's impressive, because other agents get stuck long before that.


It takes proper prompt crafting, but Claude Code is really impressive.


It can be fixing unit tests and stuff for quite a while, but I usually find it cheats the goal when unattended.


That clears up a lot for me. I don't think I've ever had it take more than a couple of minutes. If it takes more than a minute, I usually freak out and press stop.


I've used CC a lot and to great effect, but it never runs more than 10 minutes (Opus). Completely independent for 60 minutes sounds impressive. Can you share some insights on this? Really curious; I can also share recent prompts of mine.


"You are an expert software engineer

Your key objective is to fix a bug in the XYAZ class. Use a team of experts as sub-agents to complete the analysis, debugging and triage work for you. Delegate to them, you are the manager and orchestrator of this work.

As you delegate work, review and approve/reject their work as needed. Continue to refine until you are confident you have found a simple and robust fix"


Wow! I will try that. Really cool. Never tried the mythical sub-agent feature; wasn't sure it was really a thing due to the sparse docs. The "You are an expert software engineer" really helps? Probably a good idea to mention "simple", because Claude sometimes settles for an overengineered solution.


> The "You are an expert software engineer" really helps?

Anecdata, but it weirdly helped me. It seemed like BS to me until I tried it.

Maybe because good code is contextual? Sample code written to explain concepts may be simpler than production-ready code. The model may have the capability to do both but can't properly distinguish which is the correct thing to do.

I don't know.


Maybe it's not the "expert" but the "software engineer" part that works? Essentially it's given a role. This constrains it a bit; e.g., it's not going to question the overall plan. Maybe this helps it take a subordinate position rather than that of an advisor or teacher, which may help when there is a clear objective with clear boundaries laid out? Anyway, I will try it myself and simply observe whether it makes a difference.


Are there some good examples/wikis/knowledge bases on how to do this? I'll read two competing theories on the same day, so I'm kinda confused.

