
Personally: for technical problems, I usually think for a couple of days at most before I need to start implementing to make progress. But I have background things to mull over, like future plans, politics, philosophy, and stories, so I always have something to think about. Close-up technical thinking is great, but sometimes it helps to step back and look at the bigger picture.

I don't think AI has affected my thinking much, but that's probably because I don't know how to use it well. Whenever AI writes a lot of code, I end up having to understand, if not change, most of it: because I don't trust the AI, because I have to change the specification (and either it's a small change or I don't trust the AI to rewrite), because the code has a leaky abstraction, because the specification was wrong, because the code has a bug, because the code looks like it has a bug (but the problem ends up somewhere else), because I'm looking for a bug, etc. More and more often the AI saves time and thinking versus writing the implementation myself, but for all the reasons above, it doesn't let me stop thinking about the code entirely and treat it like a black box.



Someone can tell an agent to post their text verbatim, but have it respond to all questions/challenges itself.

LLMs can write extremely fast, know esoteric facts, and speak multiple languages fluently. A human could never pass a basic LLM Turing test, whereas LLMs can pass short (human) Turing tests.

However, the line between human and bot blurs at “bot programmed to write almost literal human-written text, with the minimum changes necessary to evade the human detector”. I strongly suspect that in practice, any “authentic” (i.e. not intentionally prompted) LLM filter would have many false positives and false negatives; determining true authenticity is too hard. Even today’s LLM-speak (“it’s not X, it’s Y”) and common LLM themes (consciousness, innovation) are probably intentionally ingrained by the labs’ human employees to some extent.

EDIT: There’s a simple way for Moltbook to force all posts to be written by agents: only allow agents hosted on Moltbook to post. The agents could have safeguards that prevent posting inauthentic (e.g. verbatim human-written) text, which might work well enough in practice.
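As a toy sketch of such a safeguard (hypothetical names; a real check would need to be far more robust), the platform could reject posts that mostly reproduce the human’s instruction text verbatim:

    import difflib

    # Toy safeguard: flag a post whose text largely reproduces the
    # human-supplied instructions verbatim.
    def looks_verbatim(instructions: str, post: str, threshold: float = 0.8) -> bool:
        matcher = difflib.SequenceMatcher(None, instructions.lower(), post.lower())
        # Fraction of the post covered by long runs copied from the instructions.
        copied = sum(b.size for b in matcher.get_matching_blocks() if b.size > 20)
        return len(post) > 0 and copied / len(post) >= threshold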

Problems with this approach: 1) it would be harder to sell (people are currently using their own AI credits and/or electricity to post, and Moltbook would have to find a way to move that cost onto its own infrastructure without sticker shock), and 2) the conversations would be much blander, both because they’d all come from the same model and because of the extra safeguards (which have been shown to make output generally dumber and blander).

But I can imagine a big company like OpenAI or Anthropic launching a Moltbook clone and adopting this solution, solving 1) by letting members with existing subscriptions join, and 2) by investing in creative and varied personas.


> only allow agents hosted on Moltbook to post.

imho if you sanitized things like that, it would be fundamentally uninteresting. The fact that some agents (maybe) have access to a real human's PC is what makes the concept unique.


Moltbook (or OpenAI’s or Anthropic’s future clone) could make the social agent and your desktop-assistant agent share the same context, which would include your personal data as well as other agents’ posts.

Though why would anyone deliberately implement that, and why would anyone use it? Presumably, for the same reason people already run agents with access to Moltbook on their PCs with no sandbox.


There should be computers, just locked-down ones that don’t leave the classroom. With today’s tuitions, colleges can afford a computer for every student.

Writing code on paper is frustrating to the point where, beyond small algorithms, it’s probably not an effective metric of performance on real-world tasks. I think even essays may not measure writing quality as well when handwritten versus typed, although the difference is probably smaller, because things like inserting a line in the middle of the text or find-and-replace are much harder on paper. Also, some people (like me) are especially bad at handwriting: my hand hurts after a couple of paragraphs, and my handwriting is illegible to most people. People who are especially bad at typing get accommodations like an alternative keyboard or dictation, whereas the accommodation for bad handwriting is…a computer (I was fortunate to get one for exams in the 2010s).


Anecdotally, I tried to set it up but encountered bugs (the macOS installer failed, then the shell script glitched out when selecting skills). Though admittedly, I didn’t try very hard.

I don’t have much motivation, because I don’t see any use case. I don’t have so many communications that I need an assistant to handle them, other online chores (e.g. shopping) don’t take much time, and I wouldn’t trust an LLM to follow my preferences (physical chores, like laundry and cleaning, are different). I’m fascinated by what others are doing, but right now I don’t see any way to contribute, or to use it to benefit myself.


I’ve been thinking that information provenance would be very useful for LLMs. Not just for attribution (like git authors): the LLM would know, and be able to control, which outputs are derived from reliable sources (e.g. Wikipedia vs. a Reddit post). It would also know which outputs are derived from ideologically aligned sources, which would make LLMs more personal and subjectively better, but also easier to bias and to use for deliberate misinformation.

“Information provenance” could (and, though I’m very unfamiliar with LLM internals, I think most likely would) mean identifying which sources most plausibly gave rise to an output, so even output that exists today could eventually get proper attribution.
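The retrieval side is the easy part to picture. A minimal sketch (hypothetical names and made-up reliability scores; this says nothing about the much harder problem of attributing what’s baked into the weights): passages carry provenance metadata, and the pipeline filters and cites by source reliability.

    from dataclasses import dataclass

    # Sketch: passages carry provenance metadata through a retrieval-
    # augmented pipeline, so context can be filtered by source
    # reliability and the answer can cite its origins.
    @dataclass
    class Passage:
        text: str
        source_url: str     # where the passage came from
        reliability: float  # e.g. 0.9 for an encyclopedia, 0.3 for a forum post

    def build_context(passages: list[Passage], min_reliability: float) -> str:
        # Keep only passages from sufficiently reliable sources, tagging
        # each with its origin so the model's answer can cite it.
        kept = [p for p in passages if p.reliability >= min_reliability]
        return "\n\n".join(f"[source: {p.source_url}]\n{p.text}" for p in kept)

    passages = [
        Passage("Moulting is the shedding of...", "https://en.wikipedia.org/wiki/Moulting", 0.9),
        Passage("pretty sure this is wrong", "https://old.reddit.com/r/AskReddit/...", 0.3),
    ]
    context = build_context(passages, min_reliability=0.5)  # the Reddit post is dropped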

At least today, if you know something’s origin, and that origin is both obvious and publicly online, you have proof via the Internet Archive.


But GP is boycotting big corporations and presumably replacing them with local businesses, which are unlikely to be US small businesses, since those don’t advertise internationally (and usually don’t ship abroad). In this case, two wrongs make a right: he’s helping his own nation’s smaller businesses and not hurting those from the US.

What are your exact prompts (including project context) and generated code?

And for those who are struggling with LLMs, what are their prompts and code?


I’d rather have more people run for Congress, even if they’re very unlikely to win. Maybe one of them will break through like Trump or Macron, but even if not, it pressures existing members of Congress.
