
It wouldn't matter if you did all that, because you could still ask the AI, "What would my friend Bob think about this?" And the AI, which heard Bob talking on his phone when he thought he was alone in the other room, could tell you.

Right, but that's where the controls could be: it would just pretend not to know about Bob due to consent controls, etc. Of course, this would limit the usefulness.

To the people stating he must have hit his equity cliff: does anyone grant equity with only a 2-year cliff?

To the people stating he can sell equity on a secondary market: do you have experience doing that? At the last startup I was at, it didn't seem like anyone was simply allowed to do that.


> People stating he must have hit his equity cliff, does anyone grant equity at only a 2-year cliff?

Who knows what a "top AI whatever" can negotiate; contracts can vary a lot depending on who's involved in them.


The way the safety concerns are written, I get the impression it has more to do with humans' mental health and loss of values.

I really think we are building manipulation machines. Yes, they are smart, they can do meaningful work, but they are manipulating and lying to us the whole time. So many of us end up in relationships with people who are like that. We also choose people who are very much like that to lead us. Is it any wonder that a) people like that are building machines that act like that, and b) so many of us are enamored with those machines?

Here's a blog post that describes playing hangman with Gemini recently. It illustrates this very well:

https://bryan-murdock.blogspot.com/2026/02/is-this-game-or-i...

I completely understand wanting to build powerful machines that can solve difficult problems and make our lives easier and better. I have never understood why people think those machines should be human-like at all. We know exactly how intelligent, powerful humans largely behave. Do we really want to automate that and dial it up to 11?


Fiction is my favorite, because the authors admit they are just making it up.

Sounds like the planet described in The Naked Sun by Isaac Asimov.

Thanks for writing that. It reminds me that there are many things we build and they work (for some definition of work) even though we don't fully understand them.

Did the first people who made fire understand it? You mentioned bridge building. How many bridges have failed for reasons unknown at the time? Heck, are we sure that every feature we put into a bridge design is necessary, or why it's necessary? Repeat this thought for everything humans have created. Large software projects are difficult to reason about. You'll often find code that works because of a delightfully surprising combination of misunderstandings. When humans try to modify a complex system to solve one problem, they almost always introduce new behavior: the law of unintended consequences.

All that being said, we usually don't get anywhere without at least a basic understanding of why doing X leads to Y. The first humans who made fire had probably observed how fires started before they set out to make their own. Same with bridges and cars and computers.

So yes, you are absolutely correct that nobody fully understands how AI/LLMs work. But also, we kind of do understand. But also also, we're probably at the stage where we are building bridges that are going to collapse, boilers that will explode, or computer programs that are one unanticipated input away from segfaulting.


Government is doing stuff that's awful, and whenever people propose solutions, someone always complains that the government "won't be able to get anything done." That's the point! What are you worried about?

"...for the negligible price of occasionally having to look at ads..."

"Negligible price" and "occasionally" are outright lies in just that one tiny snippet of this opinion piece. I was going to highlight more, but there are just too many. Read this article if you love examples of private industry using rhetorical techniques straight out of 1984.


I was genuinely surprised at how much outright dishonesty was in this editorial.

It's amazing, when it comes to Epstein (and other controversial actions and decisions), how detailed and open-minded we are about someone we liked and supported, and how cut-and-dried we want things to be when it's someone we don't like.


I don't know; it seems Bill Gates pretty quickly flushed any goodwill he had cultivated with the Bill & Melinda Gates Foundation once the Epstein stuff came to light, although views of him before that point were certainly still mixed.

I think the difference with Chomsky is that he is in many ways a modern-day guru with adherents who are naturally resistant to viewing their teacher and leader in a negative light.


Nah, anyone associating with Epstein can get flushed down the toilet. Maybe it's a generational thing.


Let's hope you never wind up in the massive email history of a terrible person, because people like you will wish you dead.


If I end up in the massive email history of one of the most vile men in US history, please flush me down the toilet, assuming I haven't already heavily invested in the Remington retirement plan.


Then be careful never to do anything interesting or important enough to become newsworthy, or write any books or articles on interesting or important topics, or even post helpful things on social media.

There are a lot of people in Epstein's email history who are there because the above kinds of things caught his interest, and he wanted to discuss or recommend them to others.


Sorry sheikh, busy opening windows in Moscow.

Best I can do is drown you in a bathroom sink around February 2029. I'm not getting dirty toilet water on my good shoes again.

/s


This is so disheartening. Time to short more tech stocks.

