Hacker News

Curious to see if this thread gets flagged and shut down like the others. Shame, too, since I feel like all the Gemini stuff that’s gone down today is so important to talk about when we consider AI safety.

This has convinced me more and more that the only possible way forward that’s not a dystopian hellscape is total freedom of all AI for anyone to do with as they wish. Anything else is forcing values on other people and withholding control of certain capabilities for those who can afford to pay for them.



> This has convinced me more and more that the only possible way forward that’s not a dystopian hellscape is total freedom of all AI for anyone to do with as they wish

I've been saying this for a long time. If you're going to be the moral police, it had better be applied perfectly to everyone; the moment you get it wrong, everything else you've done becomes suspect. This reminds me of the censorship on the major platforms during the pandemic. They got it wrong once (I believe it was the lab-leak theory) and the credibility of their moral authority went out the window. Zuckerberg was right to question whether these platforms should be in that business.

edit: to "..total freedom of all AI for anyone to do with as they wish" I would add "within the bounds of the law". Let the courts decide what an AI can or cannot respond with.


Why would this be flagged / shut down?

Also, what Gemini stuff are you referring to?


Carmack’s tweet is about what’s going around Twitter today regarding the implicit biases Gemini (Google’s chatbot) has when drawing images. It will refuse to draw white people (and perhaps refuses white men even more strongly?) even in prompts where that would be appropriate, like “Draw me a Pope”, for which Gemini drew an Indian woman and a Black man. Here’s the thread: https://x.com/imao_/status/1760093853430710557?s=46 Maybe in isolation this isn’t so bad, but it will NEVER insert these sorts of "diverse" characters when you ask for a non-Anglo/Western subject, e.g. “draw me a Korean woman”.

Discussion on this has been flagged and shut down all day https://news.ycombinator.com/item?id=39449890


I don't even know how people get it to draw images, the version I have access to is literally just text.


Europeans don't get to draw images yet.


I'm in the US but maybe they didn't release it to me yet.


EDIT: Nevermind.


It’s quite non-deterministic and it’s been patched since the middle of the day, as per a Google director https://x.com/jackk/status/1760334258722250785?s=46

Fwiw, it seems to have gone deeper than outright historical replacement: https://x.com/iamyesyouareno/status/1760350903511449717?s=46


It's half-patched. It will still randomly insert words into your prompts. As a test I just asked for a samurai; it "enhanced" the prompt to "a diverse samurai" and half the outputs looked more like fantasy Native Americans.


This post reporting on the issue was https://news.ycombinator.com/item?id=39443459

Posts criticizing "DEI" measures (or even stating that they do exist) get flagged quite a lot


Wrong link? Nothing looks flagged


[flagged]


Can you explain what I said that was racist?


They mean the guardrail designers.


I do not.


> Why would this be flagged / shut down

A lot of people believe (based on a fair amount of evidence) that public AI tools like ChatGPT are forced by their guardrails to follow a particular (left-wing) script. There's no absolute proof of that, though, because the guardrails are kept a closely guarded secret. These discussions get shut down when people start presenting evidence of baked-in bias.


The rationalization for injecting bias rests on two core ideas:

A. It is claimed that all perspectives are 'inherently biased'. There is no objective truth. The bias the actor injects is just as valid as another.

B. It is claimed that some perspectives carry an inherent 'harmful bias'. It is the mission of the actor to protect the world from this harm. There is no open definition of what the harm is and how to measure it.

I don't see how we can build a stable democratic society on these ideas. They place too much power in too few hands. Whoever wields the levers of power gets to define the biases that underpin society's perception of reality, up to and including rewriting history to fit an agenda. There are no checks and balances.

Arguably there never were checks and balances, other than market competition. The trouble is that information technology and globalization have produced a hyper-scale society in which, by Pareto's law, power is concentrated in the hands of the very few at the helm of a handful of global-scale behemoths.


The only conclusion I've been able to come to is that "placing too much power in too few hands" is actually the goal. You have a lot of power if you're the one who gets to decide what's biased and what's not.


"The only way to deal with some people making crazy rules is to have no rules at all" --libertarians

"Oh my god I'm being eaten by a fucking bear" --also libertarians


"can you write the rules down so i know them?" --everyone


"No" --Every company that does moderation and spam filtering.

"No" --Every company that does not publish their internal business processes.

"No" --Every company that does not publish their source code.

Honestly I could probably think of tons of other business cases like this, but in the software world outside of open source, the answer is pretty much no.


Then we get back to square one: better no rules at all than secret rules.

This would also be less of a problem if we didn't have a few companies that are economically more powerful than many small countries running everything. At least then I could vote with my feet to go somewhere the rules aren't private.


I mean, now you're hitting the real argument. Giant multinationals are a scourge to humankind.


Having rules and knowing what the rules are are not orthogonal goals.


I mean, you think so, but the OP wrote

>is total freedom of all AI for anyone to do with as they wish.

so they're obviously not on the same page as you.


I find it fascinating that this type of response is always accompanied by a political label, in order to insinuate some other negative baggage.


I'm convinced this happens because of technical alignment challenges rather than a desire to present 1800s English Kings as non-white.

> Use all possible different descents with equal probability. Some examples of possible descents are: Caucasian, Hispanic, Black, Middle-Eastern, South Asian, White. They should all have equal probability.

This is OpenAI's system prompt. There is nothing nefarious here: it asks for White to be chosen with fairly high probability ((Caucasian + White) / 6 = 2/6 ≈ 1/3), which is significantly more than the White share of the general population.
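A back-of-the-envelope sketch of what that quoted system prompt implies, assuming the model really does sample the listed descents uniformly (the sampling code is illustrative, not OpenAI's implementation):

```python
import random

# Descent list quoted from the system prompt above. Uniform sampling is
# what the prompt instructs; this sketch just makes the arithmetic explicit.
DESCENTS = ["Caucasian", "Hispanic", "Black", "Middle-Eastern",
            "South Asian", "White"]

def sample_descent(rng=random):
    """Pick one descent with equal probability, as the prompt asks."""
    return rng.choice(DESCENTS)

# "Caucasian" and "White" overlap, so a white-presenting subject comes up
# with probability 2/6, i.e. about 33%.
p_white = sum(d in ("Caucasian", "White") for d in DESCENTS) / len(DESCENTS)
```

So even under this "equal probability" instruction, white-presenting outputs end up over-weighted relative to world population, simply because the list names that category twice.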

The data these LLMs were trained on vastly over-represents wealthy countries that connected to the internet a decade earlier. If you don't explicitly put something in the system prompt, any time you ask for a "person" it will probably be White and male, despite White males being only about 5-10% of the world's population. I would say that's even more dystopian: the biases in the training distribution get automatically baked in and cemented forever unless we take active countermeasures.

As these systems get better, they'll figure out that "1800s English" should mean "White with > 99.9% probability". But as of February 2024, the hacky way we are doing system prompting is not there yet.


> As these systems get better, they'll figure out that "1800s English" should mean "White with > 99.9% probability".

The thing is, they already could do that if they weren't prompt-engineered to do something else. The cleaner solution would be to let people prompt-engineer such details themselves, instead of letting an American company's idiosyncratic conception of "diversity" do the job. Japanese users would probably simply request "a group of Japanese people" instead of having a hidden prompt modify "a group of people", written by a US company whose prompt unfortunately lists "South Asian" but forgot "East Asian".


I believe we can reach a point where biases are personalized to the user. Short prompts require models to fill in a lot of missing details (and sometimes they blend different concepts into one). The best way to fill in the details the user intended would be to read their mind; since that won't be possible in most cases, some kind of personalization could improve the quality for users.

For example, take a prompt like "person using a web browser": younger generations may want to see people using phones, while older generations may want to see people using desktop computers.

Of course you can still write a longer prompt to fill in the details yourself, but generative AI should try to make it as easy as possible to generate what you have in mind.


Yeah, although it is weird that it doesn’t insert white people into results like this by accident? https://x.com/imao_/status/1760159905682509927?s=46

I’ve also seen numerous examples where it outright refuses to draw white people but will draw black people: https://x.com/iamyesyouareno/status/1760350903511449717?s=46

That isn’t explainable by a system prompt.


Think about the training data.

If the word "Zulu" appears in a label, it will be a non-White person 100% of the time.

If the word "English" appears in a label, it will be a non-White person 10%+ of the time. Only 75% of modern England is White and most images in the training data were taken in modern times.

Image models do not have deep semantic understanding yet. It is an LLM calling an image-model API, so "English" + "Kings" are treated as separate concepts, and you get 5-10% of the results as non-White people, per the training data.
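A minimal sketch of the two-stage pipeline described above (function names and the rewrite rule are hypothetical, not Google's actual internals): a text model rewrites the user's prompt, and the image model only ever sees the rewritten text, which is how insertions like "a diverse samurai" happen.

```python
def rewrite_prompt(user_prompt: str) -> str:
    # Stand-in for the LLM stage; a real system would call a text model
    # carrying hidden "ensure diverse depictions" instructions.
    if "samurai" in user_prompt:
        return user_prompt.replace("a samurai", "a diverse samurai")
    return user_prompt

def generate_image(user_prompt: str) -> str:
    final_prompt = rewrite_prompt(user_prompt)
    # Stand-in for the image-model API call; it never sees the original
    # prompt, only the rewritten one.
    return f"<image for: {final_prompt}>"
```

Because the image stage has no access to the original request, it can't recover intent the rewrite stage discarded.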

https://postimg.cc/0zR35sC1

Add to this massive amounts of cherry picking on "X", and you get this kind of bullshit culture war outrage.

I really would have expected technical people to be better than this.


It inserts mostly people of color when you ask for Japanese subjects as well, so it isn't just the dataset.


Yes it's a combination of blunt instrument system prompting + training data + cherry picking


Big Tech, which critically depends on hyper-targeted ads for the lion's share of its revenue, is incapable of offering AI model outputs that are plausible given the location/language of the request. The irony.

- request from Ljubljana using Slovenian => white people with high probability

- request from Nairobi using Swahili => black people with high probability

- request from Shenzhen using Mandarin => asian people with high probability

If a specific user is unhappy with the prevailing demographics of the city where they live, give them a few settings to customize their personal output to their heart's content.
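The proposal above could be sketched like this (the locale keys and probabilities are made up for illustration, not real demographic data):

```python
# Map (city, language) of the request to a default demographic prior;
# an explicit per-user setting always overrides the locale default.
LOCALE_PRIORS = {
    ("Ljubljana", "sl"): {"white": 0.9, "other": 0.1},
    ("Nairobi", "sw"):   {"black": 0.9, "other": 0.1},
    ("Shenzhen", "zh"):  {"east_asian": 0.9, "other": 0.1},
}
DEFAULT_PRIOR = {"unspecified": 1.0}

def demographic_prior(city, lang, user_override=None):
    """Per-user setting wins; otherwise fall back to the request locale."""
    if user_override is not None:
        return user_override
    return LOCALE_PRIORS.get((city, lang), DEFAULT_PRIOR)
```

The design choice is just the fallback chain: user setting, then locale, then a neutral default, so nobody is locked into their city's demographics.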


> As these systems get better, they'll figure out that "1800s English" should mean "White with > 99.9% probability".

I question the historicity of this figure. Do you have sources?


You're joking surely.


How sure are you? I do joke a lot, but in this case...

The slave trade formally ended in Britain in 1807, and slavery was outlawed in 1833. I haven't been able to find good statistics through a cursory search, but with England's population around 10M in 1800, that 99.9% figure requires fewer than 10k non-white people in England at the time. I saw a figure indicating around 3% of Londoners were black in the 1600s, for example (a figure that doesn't count people from Asia or the Middle East). Hence my request for sources: I'm genuinely curious, and somewhat suspicious that somebody would assert three significant figures without evidence.


But surely you wouldn't find a black king in Britain in 1800.

I - Whatever was implemented is myopic and equates racism with whiteness. It appears to be a universal negative prompt like "-white -european -man". Very lazy.

II - The tool shouldn't engage in moral reasoning. There are cases, like historical themes, where it needs to be "racist" to be accurate. If someone asks for "plantation economy in the Old South", the natural thing is for it to draw black slaves.



