dgacmu's comments

I find it kind of helpful and interesting to see a subset of these called out with a bit of data. It helps keep my LLM detector trained (the one in my brain, that is), and I think it helps a little with expressing the community consensus against this crap. In this case, I'm glad the GP posted something, as it's definitely not mistaken.

As a specific example: The generated diagram showing the expression tree under "build in python" is simply wrong. It doesn't correspond to the expression x * 2 + 1, which should have only one child node on the right. The "GIL Released - Released" label is just confusing. The dataflow omits the fact that the results end up back in Python - there should be a return arrow. etc., etc.
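
For reference, Python's own ast module shows the correct shape of that expression (a sketch; the diagram in the article is not reproduced here):

```python
import ast

# Parse the expression the diagram was meant to illustrate.
tree = ast.parse("x * 2 + 1", mode="eval").body

# The top-level node is an Add whose right child is the single
# constant 1; all of the multiplication hangs off the left side.
print(ast.dump(tree, indent=2))
```

Running this makes the point directly: the Add node's right child is one Constant node, not a subtree.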

If you use diagrams like this, at least ensure they are accurately conveying the right understanding.

And in general, listen to the person I'm responding to -- be really deliberate with your graphics or omit them. Most AI-generated diagrams are crap.


It's written in Rust, but why do you believe it was co-authored with Claude? The README on GitHub specifically says:

> This project was built using Gemini CLI

https://github.com/sashiko-dev/sashiko


Claude snitches on you in your commits. You can just look at the history.

Only two commits have 'Co-Authored-By: Claude' and they're both PR contributions from a non-google email.

An enthusiastic two thumbs up to this approach. It's exactly what I run at home that has been working solidly. I run on an N100, which is just a hair smaller than an i5-8500, with 32GB DRAM and a 1TB SSD (total overkill). I keep it under proxmox; the box also runs my unifi SDN controller, pihole, and a linux VM for various little services. Two USB dongles for z-wave / zigbee / matter (because I'm a glutton for punishment). Backed up to a NAS. It's fast, easy, and has been very reliable.

Mass-market paperbacks are definitely dying, but trade paperbacks continue to sell (at rates lower than mass-market, obviously):

https://www.publishersweekly.com/pw/by-topic/industry-news/p...

(trade paperbacks are the larger paperback editions printed on better paper than the mass market paperbacks, but still soft-cover.)

John Scalzi posted about this a few months ago:

"All my recent books went from hardcover to trade paperback and almost all of my backlist in mass market has now migrated to trade. The role of mass market paperbacks is now handled almost entirely by ebooks."

https://bsky.app/profile/scalzi.com/post/3m7xzfxxcg222


It may be interesting to note: judging by the prices on Amazon for books that are out of print in mass-market format, there is significant demand among fans of the form factor.

I used to prefer trades but have gone all in on mass market editions. They just feel better in my hands, especially larger volumes. Plus I can stuff it in a coat pocket on my way out the door.

And FWIW, I’ve found that the “printed by Amazon” editions have actually been higher quality than recent offset printings. For example, the newest editions of Hitchhiker’s Guide seem to have been laid out without any regard to the inner margin. It’s fiddly to read the first word on each line.

Meanwhile the Star Wars Legends mass markets fulfilled by Amazon in Italy and France have thicker, brighter paper and clean margins.

For the mass market format, I have to take what I can get, and I’m glad that there are still reasonably priced editions available.


Gemini 3.1 Pro under a Google AI Pro subscription has just recently started imposing really small weekly limits. I went from it feeling unlimited to hitting a four-day lockout after two hours of use. Very odd. Wonder if too many people jumped on with the 3.1 Pro release.

Same experience with the recent quota change on Google AI Pro. People have asked for yearly plans, but given the vague and ever-shifting limits it would be crazy to commit long-term.

Something I try very hard to impress on my PhD students is that the process of writing is part of the process of thinking. We often have cool things in our head that don't sound right when we write them down, and that's usually because the thing in our head was more amorphous than we realized. The time you put in getting the written expression of it to work is actually helping you crystallize what you're thinking in the first place.

Would anyone notice if you spell-checked or got narrow feedback about grammar? No. I'm not dang, but perhaps a very reasonable interpretation of the rules is: If the AI is generating the words, don't. If it tells you something about your words and you choose to revise them without just copying words the AI output, it's still your words.

(As an experiment, I took that paragraph and threw it into gemini to ask for spell and grammar checking. It yelled at me completely incorrectly about saying "I'm not dang". Of its 4 suggestions, only 1 was correct, and the other 3 would have either broken what I was trying to say or reduced the presence of my usual HN comment voice. So while I said the above, perhaps I'm wrong and even listening to the damn box about grammar is a bad idea.)

That said, I often post from my phone and have somewhat frequent little glitches either from voice recognition or large clumsy thumbs, and nobody has ever seemed to care except me when I notice them a few minutes after the edit button goes away.


Shipping from China to a US coastal warehouse is probably modestly under $200, including packaging, assuming it's shipped in a 40-foot container. Possibly less if there's other cargo that can be used to fill the remaining space.

I suspect the domestic costs are really dependent on volume (like, can you ship a container of 45 TVs to a warehouse near NYC or do you have to ship each unit individually) and I don't feel confident estimating that side of it.


Probably, but on the other hand, this is almost literally the definition of technical debt -- it's great to get fixes upstreamed precisely so that you don't have to maintain your own fork, keep it in sync, etc. An LLM can likely lower that burden, but the burden still exists.

Yeah, but what can you do if you need a thing done and now there's an option to have it done.

I don't disagree.

I assume that most of these purely LLM-generated unwanted contributions will just end up in dead-end forks, because my impression is that a lot of them are being generated as GitHub activity fodder. But the stuff that really solves a problem for a person - eh, good. Problem solved is problem solved. (Unless it creates new problems.)

