Hacker News | spawarotti's comments

Smartness and happiness are like test coverage.

If you are not smart or have no tests, you will not be happy.

If you are smart or have high test coverage, you may or may not be happy.


There are two types of people: those who do backups, and those who will do backups.


At this point AGENTS.md is a README.md with enough hype behind it to actually motivate people to populate it with content. People were too lazy to write docs for other people, but funnily enough they are fine doing it for robots.

This situation reminds me a bit of ergonomic handle design: designed for a few people, preferred by everyone.


I think it’s the reverse - people were too lazy to read the docs so nobody was motivated to write them.

With an agent, I know that if I write to CLAUDE.md once, it will be read by thousands of agents within a week.


I like this insight. We kind of always knew that we wanted good docs, but they're demotivating to maintain if people aren't reading them. LLMs by their nature won't be onboarded to the codebase with meetings and conversations, so if we want them to have a proper onboarding then we're forced to be less lazy with our docs, and we get the validation of knowing they're being used.


I still don't get why it can't just be README.md. Just keep the bullshit inside it to a minimum.


I mean, the agents are too lazy to read any of this anyway, and they will often forget the sort of instructions people spam these files with after three more instructions, too.


The difference now is that people are actively trying to remove people (others and themselves) from software development work, so the robots have to have adequate instructions. The motivation is stronger. Dismantling all human involvement in software development is something everyone wants, and they want it yesterday.


everyone? source?


It's sort of obvious. Humans cost more money than coding agents. The more you can have a coding agent do, the less you have to pay a human to do.

This aligns pretty clearly with the profit motive of most companies.


And a related page, in the other direction: https://www.futuretimeline.net/


In what sense related?


A very good online course on debugging: Software Debugging on Udacity, by Andreas Zeller.

https://www.udacity.com/course/debugging--cs259


Udacity is owned by Accenture? That is... surprising.


Great advice. I follow it in my coding efforts and it has never failed me. A great book about this: Unit Testing Principles, Practices, and Patterns, by Vladimir Khorikov (2020).

https://www.manning.com/books/unit-testing


How do you deal with serializing properties "by reference"? E.g., if three objects reference object "Foo", how do you make sure Foo is serialized once instead of being duplicated in the JSON three times?


It depends. I don’t tend to end up with deep object graphs that need to be saved/reloaded.

It might be that we serialize Foo, and Foo has a list of references to its three children. The “parent” reference from each child back to Foo is marked as do-not-serialize; an “after rehydration” function on Foo can then set the value of each child’s parent reference.

But more often (say Baz, Bar, and Bam reference Foo), the speed at which Baz changes is different from the speed at which Foo changes. The reference to Foo from Baz is marked do-not-serialize. Baz also has a property holding the ID of Foo. (For IStashy<K>, K is the type used for the keys, the IDs; it might be a string or an int or a guid; I tend to use string. All objects in the system have the same kind of ID, and it is unique per type.)

Generally, if cyclic data structures are possible, then some part of the cycle will be marked as not serializable and I’ll keep a key reference adjacent to it.

Situations that trigger huge cascading saves are kind of an anti-pattern for how I work. If one little change changes everything, then perhaps it can be calculated on the fly by a pure function and not persisted at all, or perhaps there's over-coupling, etc.
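
To make the "key reference adjacent to a non-serialized object reference" idea concrete, here is a minimal sketch in Python (the thread itself is .NET/IStashy-flavored, so the names, the in-memory store, and the helper methods below are illustrative assumptions, not the commenter's actual code):

    import json
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Foo:
        id: str
        name: str

    @dataclass
    class Baz:
        id: str
        foo_id: str                                           # serialized key reference
        foo: Optional[Foo] = field(default=None, repr=False)  # not serialized

        def to_json(self) -> str:
            # Only Foo's ID travels with Baz; Foo itself is saved once, elsewhere.
            return json.dumps({"id": self.id, "foo_id": self.foo_id})

        @staticmethod
        def from_json(text: str, foos: dict) -> "Baz":
            data = json.loads(text)
            baz = Baz(id=data["id"], foo_id=data["foo_id"])
            baz.foo = foos[baz.foo_id]  # the "after rehydration" step
            return baz

    foos = {"foo-1": Foo(id="foo-1", name="shared")}
    saved = Baz(id="baz-1", foo_id="foo-1", foo=foos["foo-1"]).to_json()
    restored = Baz.from_json(saved, foos)
    assert restored.foo is foos["foo-1"]  # Foo is shared, not duplicated in the JSON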


I am especially excited for the Smart ComboBox:

https://devblogs.microsoft.com/dotnet/introducing-dotnet-sma...

In general, I see the idea of semantic matching instead of textual matching as one of the great, pragmatic applications of the current technology.

A somewhat related, fun application of this concept: https://neal.fun/infinite-craft/ (the combination outputs are generated by LLMs).
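
As a toy illustration of semantic vs. textual matching (not the Smart ComboBox's actual implementation; this sketch assumes the sentence-transformers package and its all-MiniLM-L6-v2 model), a plain substring filter misses a paraphrased query while embedding similarity should still rank the right option first:

    from sentence_transformers import SentenceTransformer, util

    options = ["Parental leave", "Sick day", "Vacation", "Jury duty"]
    query = "time off because my kid was born"

    # Textual matching: no option contains the query text, so nothing is found.
    textual_hits = [o for o in options if query.lower() in o.lower()]

    # Semantic matching: embedding similarity should surface "Parental leave".
    model = SentenceTransformer("all-MiniLM-L6-v2")
    scores = util.cos_sim(model.encode(query), model.encode(options))[0]
    best_option = max(zip(options, scores.tolist()), key=lambda pair: pair[1])

    print(textual_hits)   # []
    print(best_option)    # expected: ("Parental leave", <similarity score>)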


Jake, I just read the blog post you shared and I loved it, both the message and the writing style.


I'm glad you love it; it's a fraught, difficult subject, for obvious reasons, and yet also, I think, an important one, despite the difficulty. Recent experiences have made it unpleasantly germane to me.


Currently some programmers, and over time more, have to write, integrate, and debug LLMs, so for programming to end, other LLMs would have to be able to do that too. LLMs successfully modifying other LLMs is, like, the singularity. In other words, the moment programming ends is the moment we all die.

