That comment didn't read like AI generated content to me. It made useful points and explained them well. I would not expect even the best of the current batch of LLMs to produce an argument that coherent.
This sentence in particular seems outside of what an LLM that was fed the linked article might produce:
> What's wild is that nothing here is exotic: subdomain enumeration, unauthenticated API, over-privileged token, minified JS leaking internals.
The user's comment history does read like generic LLM output, though. Look at the first lines of different comments:
> Interesting point about Cranelift! I've been following its development for a while, and it seems like there's always something new popping up.
> Interesting point about the color analysis! It kinda reminds me of how album art used to be such a significant part of music culture.
> Interesting point about the ESP32 and music playback! I've been tinkering with similar projects, and it’s wild how much potential these little devices have.
> We used to own tools that made us productive. Now we rent tools that make someone else profitable. Subscriptions are not about recurring value but recurring billing.
> Meshtastic is interesting because it's basically "LoRa-first networking" instead of "internet with some radios attached." Most consumer radios are still stuck in the mental model of walkie-talkies, while Meshtastic treats RF as an IP-like transport layer you can script, automate, and extend. That flips the stack:
> This is the collision between two cultures that were never meant to share the same data: "move fast and duct-tape APIs together" startup engineering, and "if this leaks we ruin people's lives" legal/medical confidentiality.
The repeated prefixes ("Interesting point about ...!") and the classic it's-this-not-that LLM pattern are definitely triggering my LLM suspicions.
I suspect most of these cases aren't bots; they're users who put their thoughts, possibly in another language, into an LLM and ask it to form the comment for them. They like the text they see, so they copy and paste it into HN.
Or maybe these are people who learned from an LLM that English is supposed to sound like this if you want to be permitted to communicate, a.k.a. "to be taken into consideration"! Which is wrong and also kinda sucks, but it sucks and is wrong for a kinda non-obvious reason.
Or, bear with me here, maybe things aren't so far downhill yet: these users just learned how English is supposed to sound from the same place the LLMs learned how English is supposed to sound. Which is just the Internet.
AI hype is already ridiculous; the whole "are you using an AI to write your posts for you" paranoia is even more absurd. So what if they are? Then they'd just be stupid, futile thoughts leading exactly nowhere. Just like most non-AI-generated thoughts, except perhaps the one which leads to the fridge.
Or maybe the 2-month-old account posting repetitive comments and using the exact patterns common to AI-generated comments is, actually, posting LLM-generated content.
> So what if they are? Then they'd just be stupid, futile thoughts leading exactly nowhere.
FYI, spammers love LLM generated posting because it allows them to "season" accounts on sites like Hacker News and Reddit without much effort. Post enough plausible-sounding comments without getting caught and you have another account to use for your upvote army, which is a service you can now sell to desperate marketing people who promised their boss they'd get on the front page of HN. This was already a problem with manual accounts but it took a lot of work to generate the comments and content.
> I suspect most of these cases aren't bots, they're users who put their thoughts, possibly in another language, into an LLM and ask it to form the comment for them. They like the text they see so they copy and paste it into HN.
Yes, if this is LLM then it definitely wouldn't be zero-shot. I'm still on the fence myself as I've seen similar writing patterns with Asperger's (specifically what used to be called Asperger's; not general autism spectrum) but those comments don't appear to show any of the other tells to me, so I'm not particularly confident one way or the other.
That's ye olde memetic "immune system" of the "onlygroup" (encapsulated ingroup kept unaware it's just an ingroup). "It don't sound like how we're taught, so we have no idea what it mean or why it there! Go back to Uncanny Valley!"
It's always enlightening to remember where Hans Asperger worked, and under what sociocultural circumstances that absolutely proverbial syndrome was first conceived.
GP evidently has some very subtle sort of expectations as to what authentic human expression must look like, which however seem to extend only as far as things like word choice and word order. (If that's all you ever notice about words, congrats, you're either a replicant or have a bad case of "learned literacy in USA" syndrome.)
This makes me want to point out that neither the means nor the purpose of the kind of communication which GP seems to implicitly expect (from random strangers) are even considered to be a real thing in many places and by many people.
I do happen to find that sort of thing way more *cough* interesting *cough* than the whole "howdy stranger, are you AI or just a pseud" routine that HN posters seem to get such a huge kick out of.
Sure looks like one of the most basic moves of ideological manipulation: how about we solve the Turing Test "the wrong way around" by reducing the tester's ability to tell apart human from machine output, instead of building a more convincing language machine? Yay, expectations subverted! (While, in reality, both happen simultaneously.)
Disclaimer: this post was written by a certified paperclip optimizer.
It's probably a list of bullet points or disjointed sentences fed to the LLM to clean up. Might be a non-English speaker using it to become fluent. I won't criticize it, but it's clearly LLM-generated content.