Hacker News | postultimate's comments

Journalists have been on the grievance-grifter gravy train for a long, long time, so we can expect unrelenting hostility to anything that redirects attention away from the lucrative "bias" narrative to any other issue.

> AI ethicists and researchers such as Timnit Gebru and Meredith Whittaker

Ooooh, I love their work!

> Reading these histories together, we find that Babbage’s proto-Taylorist ideas on how to discipline workers are inextricably connected to the calculating engines he spent his life attempting to build. From inception, the engines — “the principles on which all modern computing machines are based” — were envisioned as tools for automating and disciplining labor. Their architectures directly encoded economist Adam Smith’s theories of labor division and borrowed core functionality from technologies of labor control already in use. The engines were themselves tools for labor control, automating and disciplining not manual but mental labor. Babbage didn’t invent the theories that shaped his engines, nor did Smith. They were prefigured on the plantation, developed first as technologies to control enslaved people.


Doesn't "everything Plato said was stupid" count as philosophical progress since Plato ?


Maybe, but that is really only the first step, and the easy one. The trick is the second step, which I don’t think has been written down yet. That, for me, would be a sign of progress. I picked Plato because I believe he is credited as the first to write down a Western philosophy.


Only if the things Plato said were actually stupid. They were not.


This is a decent exploration of the third-worst possible outcome of AI, but it's a bizarre dismissal of the second-worst, even though he explains the mechanism himself:

> The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value.

Well, yes. If the AI is better than humans, it will be put in charge of corporations and given the goal of maximizing shareholder value. Since those shareholders can themselves be corporations, there's no reason why this has to involve human preference at any point. As a single-minded optimizer indifferent to humans, it would need to be successfully restrained in order to be merely oppressive - by default, it's a Paperclipper.


These articles tend to write "the doomsday scenario is not X, it's Y", without any supporting reasoning why it's not X.

What they really mean is "Y is a possible bad outcome", but it wouldn't sound impactful to say that.


It’s because we can’t envision anything other than capitalism… any of us.

Capitalist realism is a big deal. In retrospect the old world order will seem crazy, but while it is here, now, seeing through to the other side seems insane.

The AI has been running in our heads since Adam Smith. Capitalism is the “evil AI paper clip maximizer” - it exists only to find supply and demand equilibrium, nothing more.

We ignore that at our own peril.
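
Taken literally, that "nothing more" objective is easy to make concrete. A toy sketch, with made-up linear curves (all numbers are illustrative): the solver only looks for the price where supply meets demand, and nothing about human welfare appears anywhere in the objective.

    # Toy market with made-up linear curves; the "maximizer" only cares about
    # the price where the two meet -- no welfare term anywhere.
    def demand(p):
        return 100 - 2 * p   # quantity demanded at price p

    def supply(p):
        return 3 * p         # quantity supplied at price p

    lo, hi = 0.0, 50.0       # bisection on excess demand
    for _ in range(60):
        mid = (lo + hi) / 2
        if demand(mid) > supply(mid):
            lo = mid
        else:
            hi = mid

    p_eq = (lo + hi) / 2
    print(f"equilibrium: price ~ {p_eq:.2f}, quantity ~ {supply(p_eq):.1f}")  # ~20.00, ~60.0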


My thought is, capitalism may be an AI, but it's not too capable on the whole. Even though it screws up a lot, there's room under capitalism for people to sometimes do stuff that isn't all about profit. And of course, sometimes profit and human value are aligned -- that's what capitalism is intended to do, even if the alignment is nowhere close to perfect.

My worry is that with AI, things will move so fast, the systems will be so smart, and they will end up being used so widely, that there will be no room left for anything else we value. People and organizations that turn more power over to the machines will run rings around the ones that are more cautious. The end state is AI everywhere, used for everything, and humans disempowered (or gone).


> when a straightforwardly “I’m a Nazi” Nazi showed up in the beta, people used the report function, and the Bluesky team labeled the account and banned it from the Bluesky app

"Don't like the feudalism of Mastodon ? Come enjoy the monarchism of Bluesky !"


Amid all the fuss about typography, the authors have overlooked the most important detail about the PhD thesis, which is that Dennis Ritchie was the inventor of Brainfuck.


When GPT mentions ice-cream, it does so because it was in the corpus. When it occurred in the corpus, it was as a reference to actual ice-cream. So GPT has just as much intentionality as you do.

You might claim that you've eaten ice-cream, and that that makes a difference. But if we assume that your senses aren't lying about what your senses do, then what they do is produce information - indications of difference without any indication of what it's a difference of. That puts you in the same epistemic position GPT is in. GPT knows just as much about ice-cream as you do.


Let us construct IceCreamGPT. We take a corpus of text written by people who like ice cream and have provably demonstrated their joy while eating it. We then fine-tune GPT-3.5, and the resulting model is called IceCreamGPT. Does IceCreamGPT like ice cream, or does it only seem to like ice cream? It obviously likes ice cream, since it shares the same intentionality as the humans responsible for the training data.

Now do the same with people who don't like ice cream but lie and write that they like it. The performance of the second model is identical to that of the first. Does this mean IceCreamGPT2 likes ice cream? Of course not: IceCreamGPT2 doesn't like ice cream despite saying it likes ice cream! We know it doesn't like ice cream because it has the same intentionality as the humans responsible for its training data.

Now we have entered a magic world in which anything can mean anything.
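
For concreteness, both models could come out of the exact same pipeline; the only difference is who wrote the corpus. A minimal sketch using the OpenAI fine-tuning API (file names and suffixes are hypothetical; each JSONL line is one {"messages": [...]} chat example):

    from openai import OpenAI

    client = OpenAI()

    def fine_tune(corpus_path, suffix):
        # Upload a JSONL chat corpus and start a fine-tuning job on gpt-3.5-turbo.
        training_file = client.files.create(file=open(corpus_path, "rb"),
                                            purpose="fine-tune")
        return client.fine_tuning.jobs.create(training_file=training_file.id,
                                              model="gpt-3.5-turbo",
                                              suffix=suffix)

    # Identical pipeline, two corpora; only the provenance of the text differs.
    job_1 = fine_tune("sincere_ice_cream_fans.jsonl", "icecreamgpt")
    job_2 = fine_tune("liars_claiming_to_love_it.jsonl", "icecreamgpt2")

Nothing in that pipeline, and nothing in the resulting weights, records whether the corpus authors meant what they wrote.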


No, this is just question-begging by treating GPT's access to the world as being external to it, but your own as being part of you.

If we fix this by treating your senses as external, then we can imagine a copy of you with its senses rewired so that artichokes* taste like ice cream (and vice versa), and with us lying to you about which is which. The resulting imtringued2 is identical to you, but doesn't like ice cream despite saying it likes ice cream. Just like IceCreamGPT2.

* Or some equally disgusting "food".


Saying "I like ice-cream" has obvious conditions under which the speaker means it. ChatGPT cannot meet those conditions. It lacks the capacity to like, indeed, to intend to say anything.

ChatGPT cannot communicate. No act of text generation it's engaged in counts as communication: it does not mean to say anything.


1 GPa is less than half the yield strength of the best steels. All you would need to maintain the superconducting state is to put the material in a steel pipe.
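
A rough back-of-the-envelope check (assumed numbers, not from the original claim): with the thin-walled hoop-stress approximation and a maraging-class steel yielding around 2.2 GPa,

    # Thin-walled pressure-vessel approximation: hoop stress = p * r / t,
    # so the wall must be at least t = p * r / sigma_yield thick.
    p = 1.0e9            # pressure to maintain, Pa (1 GPa)
    sigma_yield = 2.2e9  # assumed yield strength of a maraging-class steel, Pa
    r = 5.0e-3           # assumed inner radius of the pipe, m (5 mm, illustrative)

    t_min = p * r / sigma_yield
    print(f"minimum wall thickness ~ {t_min * 1e3:.1f} mm")  # ~2.3 mm

The thin-wall formula understates things once the wall is this thick relative to the radius; a thick-walled (Lamé) treatment gives a somewhat larger number, but still only a few millimetres.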


Good for him. Nobody's racism should be unscientific.


Welp, HN has nerdsniped me again: After some unpaid pondering, here's my suggestion for a middle ground on downvoting:

1) Reduce issue-related downvoting by only allowing a random subset of the currently active users to downvote, a different set per post. Don't let users know in advance that their vote won't count, but tell them after.

2) Reduce vote sockpuppeting by recording which pairs of voters downvoted a post, then disallowing that pair from both downvoting any future post. This includes votes that failed due to 1).
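
A minimal sketch of how both rules could be wired together (class and parameter names are made up; the sampling fraction is arbitrary):

    import random

    class DownvoteModerator:
        def __init__(self, active_users, sample_fraction=0.3, seed=None):
            self.active_users = list(active_users)
            self.sample_fraction = sample_fraction
            self.rng = random.Random(seed)
            self.eligible = {}         # post_id -> users whose downvotes count on that post
            self.downvoters = {}       # post_id -> users who attempted a downvote
            self.burned_pairs = set()  # frozenset({a, b}) pairs barred from co-downvoting again

        def _eligible_for(self, post_id):
            # Rule 1: every post gets its own random subset of allowed downvoters.
            if post_id not in self.eligible:
                k = max(1, int(len(self.active_users) * self.sample_fraction))
                self.eligible[post_id] = set(self.rng.sample(self.active_users, k))
            return self.eligible[post_id]

        def downvote(self, post_id, user):
            voters = self.downvoters.setdefault(post_id, set())
            if user in voters:
                return False  # already voted on this post
            # Rule 2: refuse if this user has previously co-downvoted with any earlier voter.
            if any(frozenset((user, other)) in self.burned_pairs for other in voters):
                return False
            counted = user in self._eligible_for(post_id)  # Rule 1: may silently not count
            # Record the pairing even for votes that didn't count (per the note in 2).
            for other in voters:
                self.burned_pairs.add(frozenset((user, other)))
            voters.add(user)
            return counted  # the user only learns afterwards whether their vote counted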


Yes, tell everyone how much you hate white people too. Your act of piety is surely guaranteed to keep the inquisition from your door.

