blazespin's comments

That was funny. Weird that the PDF is missing all those pages. You'd think they'd just redact.


> You'd think they'd just redact.

Maybe that's driven by the long and embarrassing track record of lawyers, court clerks, and admins failing to properly redact text from PDFs again and again (e.g., covering text with black boxes while leaving the underlying text objects intact), despite it now being an infamous recurring mistake in a common yet crucial part of their jobs.
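
For the curious: a "black box" redaction is often just a rectangle painted on top of the text objects in the page's content stream, so a plain text extraction recovers everything underneath. A minimal sketch with the pypdf library (the filename is hypothetical):

    # pip install pypdf
    from pypdf import PdfReader

    # Open a PDF that was "redacted" by drawing black rectangles over the text.
    reader = PdfReader("filing.pdf")
    for page in reader.pages:
        # extract_text() reads the text operators in the content stream and
        # ignores the rectangles drawn on top of them, so the "redacted"
        # passages come out in the clear.
        print(page.extract_text())

Proper redaction tools actually delete the text objects before flattening the page, which is why the lazy black-box approach keeps making the news.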


The way it will work is that AI slop will rank high, and Pomelli will generate the best AI slop.


Good thing to do regardless of the source, AI or human, right?

I do verify by using topics I'm an expert in, and I find hallucination to be less of an issue than depth of nuance.

For topics I'm just learning, depth of nuance goes over my head anyways.


I agree with this as good practice in general, but I think the human vs LLM thing is not a great comparison in this case.

When I ask a friend something, I assume that they are in good faith telling me what they know. Now, they could be wrong (and might even say "I'm not 100% sure on this"), or they could be misremembering, but there's some good faith there.

An LLM, on the other hand, just makes up facts and doesn't know whether they're correct, or even how sure it is. And to top things off, it will speak with absolute certainty the whole time.


That’s why I never make friends with my LLMs. It’s also true that when I use a motorized push lawn mower, it has a different safety operating model than a weed whacker vs. a reel mower vs. an industrial field cutter and baling system. But we still use all of these regularly, and no one points out that the industrial device is extraordinarily dangerous; there’s a continuum of safety, with different techniques for the user to adopt to address the challenges. Arguably LLMs shouldn’t be used by the uninformed to make medical decisions, and maybe it’s dangerous that people do. But in the meantime I’m fine with having access to powerful tools, using them with caution, and using them for what gives me value. I’m sure we will safety-wrap everything soon enough, to the point it’s useless and wrapped in advertisements for our safety.


Very unlikely Emmett lied. What would be the point?


If anyone involved was going to be alarmist, it was him.


No sub. Did they just retweet Reuters, or did they separately confirm?


They separately confirmed it, although in terms of timing they were scooped by Reuters, which generally means you publish what you have when you have it.


Also in the news today: the $86B share sale is back on.

I mean... come on.


Post this on a thread or something. Unattributed complaints like this are dangerous.


The difference is that nearly everyone else doesn't stand to seriously benefit from the implosion of OpenAI.


Yeah, I can't imagine why DeepMind would possibly want to see OpenAI incinerated.

When you have such a massive conflict of interest and zero facts to go on, just sit down.

Also: "people I respect, in particular Helen Toner and Ilya Sutskever, so I feel compelled to say a few things."

Toner clearly has no real moral authority here, but yes, Ilya absolutely did, and I argued that if he wanted to incinerate OpenAI, it was probably his right to, though he should at least just offload everything to MSFT instead.

But as we all know, Ilya did a 180 (surprised the heck out of me).


The obvious answer to Twitter. We need vanity metrics, though, if it's going to win the optics battle.

Some kind of push mechanism would be great too.

