How can a language model pose a national security risk?


It cannot. Remember when GPT-2 was too dangerous to release? And when industry “leaders” were begging for a moratorium on models stronger than GPT-4?

The idea that this technology carries existential risk is how OpenAI and others generate the hype that generates investment.


Well, would you say the Internet has turned to even more shit now that the majority of content is AI generated?


Less advanced things have been labeled a national security risk.

It's currently quasi-illegal in the US to open source tooling that can be used to rapidly label and train a CNN on satellite imagery. That's export controlled due to some recent-ish changes. The defense world thinks about national security in a much broader sense than the tech world.

See https://www.federalregister.gov/documents/2020/01/06/2019-27...
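To make concrete what that rule covers: the January 2020 interim rule linked above added "geospatial imagery software" (ECCN 0D521) to the export control list, targeting software that automates training a deep convolutional neural network to analyze geospatial imagery. Below is a minimal sketch of that label-and-train workflow, assuming PyTorch/torchvision and a hypothetical directory of pre-labeled image tiles; the directory name, class layout, and hyperparameters are illustrative and not taken from the rule itself.

    # Minimal sketch: train a small CNN classifier on satellite image
    # tiles. Assumes a hypothetical layout tiles/<class_name>/*.png.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    transform = transforms.Compose([
        transforms.Resize((64, 64)),
        transforms.ToTensor(),
    ])
    dataset = datasets.ImageFolder("tiles", transform=transform)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    class TileClassifier(nn.Module):
        def __init__(self, num_classes: int):
            super().__init__()
            # Two conv/pool stages: 64x64 input -> 16x16 feature maps
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 16 * 16, num_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    model = TileClassifier(num_classes=len(dataset.classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Standard supervised training loop over the labeled tiles.
    for epoch in range(5):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()

The point is not that this particular script is contraband, but that the rule's wording is broad enough to plausibly sweep in commonplace tooling of exactly this shape.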


Governments everywhere are racing to attach them to weapons.


Genuine question: regarding language models specifically, would they really have value strapped onto weapons?


Value to you or me? Unlikely. Value to others who wish to cut the cost of killing, increase the speed of killing, or launder accountability? Undoubtedly.

Siri, use his internet history to determine if he's a threat and deal with him appropriately.

https://en.m.wikipedia.org/wiki/AI-assisted_targeting_in_the...

> AI can process intel far faster than humans.[5][6] Retired Lt Gen. Aviv Kohavi, head of the IDF until 2023, stated that the system could produce 100 bombing targets in Gaza a day, with real-time recommendations which ones to attack, where human analysts might produce 50 a year.

Putting them on weapons so they can skip the middleman is the next logical step.


Encryption is the classic example: the US treated strong cryptography as a munition and export-controlled it well into the 1990s.



