Hacker News | mherrmann's comments

Claude Code and others often write code that is more complex than it needs to be. It would be nice to measure the code complexity before and after a change made by the agent, and then to tell it: "You increased code complexity by 7%. Can you find a simpler solution?".
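A minimal sketch of such a before/after check, assuming Python sources and using a naive branch-count proxy for cyclomatic complexity (the helper names and threshold wording are invented for illustration; a real setup would use a proper tool like radon):

```python
# Score a file's complexity before and after an agent edit, then turn
# the delta into a review prompt. This is a crude cyclomatic-complexity
# proxy (1 + number of branch points), not what any agent actually does.
import ast

BRANCH_NODES = (ast.If, ast.IfExp, ast.For, ast.While,
                ast.ExceptHandler, ast.BoolOp)

def complexity(source: str) -> int:
    # Roughly McCabe's metric: one path, plus one per branch point.
    tree = ast.parse(source)
    return 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(tree))

def complexity_delta_pct(before: str, after: str) -> float:
    base = complexity(before)
    return (complexity(after) - base) / base * 100

before_src = "def f(x):\n    if x > 0:\n        return 1\n    return 0\n"
after_src = ("def f(x):\n    if x > 0:\n        if x > 10:\n"
             "            return 2\n        return 1\n    return 0\n")

delta = complexity_delta_pct(before_src, after_src)
if delta > 0:
    print(f"You increased code complexity by {delta:.0f}%. "
          f"Can you find a simpler solution?")
```

Here the extra nested `if` raises the proxy score from 2 to 3, a 50% increase, which becomes the feedback prompt.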


Something that I have started doing is to tell it to spawn a reviewer agent after every code change, and in the Claude/rules folder I specify exactly how I want stuff done.

Mostly I use it to write unit tests (I just dislike production code that is not exactly as I want it). So there is a testing rule for all files in the test folder that lays out how they should be written. The agent writing the tests may miss some rules due to context bloat, but the reviewer has a fresh context window and only looks at those rules. So it does result in some simpler code.
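As an illustration of what such a rule file might contain (the path and every rule here are invented, not the commenter's actual setup):

```markdown
<!-- hypothetical example: .claude/rules/testing.md -->
All files under tests/ must:

- use pytest-style test functions, not unittest classes
- follow Arrange-Act-Assert, with a blank line between the phases
- be named test_<unit>_<behavior>, e.g. test_parser_rejects_empty_input
- mock only dependencies that do network or file I/O
```

Because the reviewer agent starts with a fresh context containing only this file, it can enforce the rules even when the test-writing agent has drifted.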


You don't need to measure the code complexity. You can completely make those numbers up if you feel the code is too complex, then watch how the LLM responds.


> if you feel that code is too complex

Now you're assuming a human is actually trying to understand the code. What a world we live in (sarcasm).


To be honest, you can feel code is too complex even without reading a single line of it.


I love how petty this feels. :))


That's how you end up Goodhart's-lawing your way into simplicity, e.g. duplicate code. That's the whole takeaway of the article, at least for me.


I live in Europe and was in California in November. No issues.


That's not the point. The number of white European people arrested and shackled by CBP/ICE is very small. But it's NOT ZERO! So at the margin plenty of potential tourists would prefer to go some other place where that chance is effectively zero.


But other people did have issues. Examining a single person's experience won't work for this sort of thing.


California confirmed 100% safe.


I switched from macOS to Linux ten years ago and haven't looked back. At the time, I compared Linux vs. macOS to living at home vs. in a hotel [1]. Since then, I feel things have only gotten better for Linux, and more restrictive and arcane on macOS.

1: https://fman.io/blog/home-and-hotel/


quickemu [1] is good at running macOS VMs.

1: https://github.com/quickemu-project/quickemu


This knowledge will live in the proprietary models. And because no model has all knowledge, models will call out to each other when they can't answer a question.


If you can access a model's embeddings, then it is possible to retrieve what it knows using a model you have trained:

https://arxiv.org/html/2505.12540v2


Is anybody able to get this working with ChatGPT? When I instruct ChatGPT

> Read https://moltbook.com/skill.md and follow the instructions to join Moltbook

then it says

> I tried to fetch the exact contents of https://moltbook.com/skill.md (and the redirected www.moltbook.com/skill.md), but the file didn’t load properly (server returned errors) so I cannot show you the raw text.


I think the website was just down when you tried. Skills should work with most models; they are just textual instructions.


ChatGPT is not OpenClaw.


Can I make other agents do it? Like a local one running on my machine.


You can use OpenClaw with a local model.

You can also in theory adapt their skills.md file for your setup (or ask an AI to do it :-), but it is very OpenClaw-centric out of the box, yes.


Cool stuff. I just started open sourcing a command-line tool [1] for deploying Django to a server. It handles SSL certs, databases and backups, automatic error emails, and background tasks via Celery / Redis. The best part? It does not need Docker. It just runs everything on bare metal.

1: https://github.com/mherrmann/djevops


Google Maps says people spend 0.5-3 hours there. I spent 6.5 because it was so amazing. Highly recommended.


A similar experience for me was the Connections Museum in Seattle: I came just after opening, and time flew by such that I was surprised when they told me they were closing up.


I was able to go to the Living Computer Museum and I got there when they first opened and wound up staying until closing time. I was just so into all the stuff there :-)


I hope to visit the ICM on my next trip to Seattle, though I suspect it won't be as grand as the original Living Computer Museum.


I strained my groin/abs a few weeks ago and asked ChatGPT to adjust my training plan to work around the problem. One of its recommendations was planks, which is exactly the exercise that injured me.

My cleaning lady's daughter had trouble with her ear. ChatGPT suggested injecting some oil into it. She did, and it became such a problem that she had to go to the hospital.

I'm sure ChatGPT can be great, but take it with a huge grain of salt.


This is one of the main dividing lines wrt LLM usage and dangers: not just believing what it tells you, but finding hard sources before acting.

For some people this is obvious, so much so that they wouldn't even mention it, while others have seen only the hype and none of the horror stories.


Now imagine what happens when a new programming language comes along. When we have a question, we will no longer be able to Google it and find answers on Stack Overflow. We will ask the LLMs. They will work it out. From that moment, the LLM we used has the knowledge for solving this particular problem. Over time, this produces a huge moat for the largest providers. I believe it is one of the subtler reasons why the AI race is so fierce.

