From a universal perspective, there is no such thing as "basic morality" - only what the most recent cultural norms of the largest (or strongest) group of people say.
There are all kinds of things you might qualify as "art" which are forbidden in my country (France) - for example, if your "art" consists of drawing Nazi symbols you are hopefully going to get into trouble - and I don't have a problem with adding pedophilic content to that list.
This is my humble opinion, but such a coordinated action from governments around the world at this particular time has a certain smell. It smells like they're worried about losing governmental narrative control. It could be about foreign powers, but tech nowadays also allows regular people to contest the government's power, so they become a target as well. AI, the internet, anonymity/cryptography, and a probable war with China and/or Russia all exacerbate this worry.
In short, governments want to retain control and prepare for the future, and to retain control they need to control the flow of information - they need a monopoly on it. To achieve this they need an intelligence strategy that puts common people at the center (spying on them) and puts restrictions in place. But they can't say this out loud because in the current era it's problematic, so the children become a convenient excuse.
This is particularly clear in governments that don't care about political correctness or are not competent enough to disguise their intentions. One example is the Argentine government, which in recent years passed laws to surveil online activity and directed its intelligence agency to spy on "anyone who puts the sovereign narrative and cohesion at risk".
This isn’t the product of shadowy government figures meeting together and plotting to take over the internet. It’s an obvious byproduct of the current moral panic around social media.
Just look at the HN comments. There are people welcoming this level of government control and using famous moral panic topics to justify it, like Andrew Tate or TikTok.
Yes, but when moral panic reaches the ears and minds of people in government, who see government as the solution to every problem and don't tend to think much about limits to their own power (I'm a good guy with good intentions, why would you want to limit me?), this is the type of solution you tend to get.
Thinking kids drink too much soda is an opinion (probably backed by evidence), but I wouldn't describe it as a moral panic.
Moral panic usually rises to the level where the threat is viewed as so severe to the fabric of society that excessive laws and regulations are welcomed as a solution. People engaged in moral panic start to believe that their normal baseline values for topics like freedom of expression and innocent until proven guilty can be suspended due to the urgency and magnitude of the threat.
If you're just lamenting the fact that too much soda is consumed, I wouldn't call that a moral panic.
If democratic outputs can be sufficiently controlled via media that is for sale, then you already have a de-facto plutocracy.
Similarly, allowing foreign interests a significant media presence (and control) in your country is a very real threat to the basic principles of a democratic nation.
Who do you think is responsible for the current moral panic around social media?
That shit didn't just happen. Social media only became ontologically evil once it presented a threat to the status quo by allowing the underclasses to organize and establish political power, and when it started to undermine mainstream propaganda narratives.
It's no coincidence that TikTok is being described as a CCP weapon of war and indoctrination when it starts leading people to question their government's foreign policy and capitalism. Can't have that.
>>a threat to the status quo by allowing the underclasses to organize and establish political power
All the organising I'm seeing is of people who are convinced the earth is flat and that vaccines cause autism. I'd love to see an actual political group that's not just "Britain First" appear in my social media ever.
I find TikTok an easy way to surface very specific demographic and political views - much easier than Meta-owned media.
It was super interesting to watch, for instance, the discussions between liberal and leftist black women around Harris, Gaza, and the 2024 election. If you just swipe out of videos that aren't things you're okay with pretty quickly, then it will change your feed dramatically in a short time.
So your theory is that a single, coherent actor ("deep state"?) is responsible for current public sentiment that is both somewhat critical of social media and specifically foreign control of that media? I disagree on that.
In a democracy, if you gave full control over local media to a foreign nation, do you see how that could lead to problems, or would you be fine with that?
TikTok being owned by a Chinese company didn't represent "giving full control over local media to a foreign nation."
And it's weird that you mistrust the influence of something as banal as TikTok but apparently believe the moral panic around social media and TikTok specifically is entirely organic. Because I guess there is no such thing as propaganda or influence operations on Western social media?
If you're worried about foreign influence on social media, literally every Western platform is being aggressively manipulated by both foreign and Western intelligence. It just got revealed that most of the MAGA accounts on Twitter were foreign, likely Russian-based networks - on the platform that serves as the de facto psychological operations and communications channel for the current Presidential administration.
But it's just TikTok and the Chinese mind control we should worry about?
I'm absolutely not saying that there is no western propaganda. But giving control over your media to any single actor (especially sovereign ones) is basically suicide for a democracy because it allows those actors to "democratically" achieve results against voter interests.
Politicians having control over media is always a problem, but it got much worse thanks to inherent centralization of modern media, so more regulatory pushback is needed now than in the newspaper age. I'd also argue that foreigners having media control is typically worse because incentives are even less aligned with voters.
>But giving control over your media to any single actor (especially sovereign ones) is basically suicide for a democracy because it allows those actors to "democratically" achieve results against voter interests.
There's plenty of evidence of Russian influence operations affecting Western elections on Facebook and Twitter.
Where is the evidence that the CCP is controlling people's minds and rigging Western elections through TikTok?
This isn't some tinfoil-hat nonsense; every person I talk to seems to think that ending anonymity online would be a good thing, until I explain the democracy-protecting use cases, e.g. whistleblowers.
I think the problem you lay out is interesting. Back when the Arab Spring was brand new, the narrative was something like "Twitter has finally given power to the people, and once they had power they overthrew their evil dictatorships."
A decade and some time later, my personal opinion would be that the narrative reads something like this: "access to social media increases populism, extremism, and social unrest. It's a risk to any and all forms of government. The Arab dictatorships failed first because they were the most brittle."
To the extent that you agree with my claim, it would mean that even a beneficent government would have something to fear from social media. As with the Arab Spring, whatever comes after the revolution is often worse than the very-imperfect government which came before.
> To the extent that you agree with my claim, it would mean that even a beneficent government would have something to fear from social media
I'd say that governments are beneficial to the extent that they adapt to the people they're governing. It's clear that social media poses a grave danger to current governance, but that doesn't mean all forms of governance are equally threatened.
My belief is that the current governance is just obsolete and dying because of the pace of cultural and technical innovation. Governments will need to change in order to stay beneficial to people, and the change is to adapt to people instead of making the people adapt to the current governance.
> access to social media increases populism, extremism, and social unrest.
I don't think this is necessarily a byproduct of social media itself, but rather of algorithmic, engagement-farming social media that capitalizes on inciting negative emotions for retention. Which, I concede, describes all of the large platforms.
I'm sure some of the concern is also about foreign adversaries using social media to sway young people against their government, since they're easier to influence than your typical adult.
>But rather, the byproduct of algorithmic engagement farming social media that capitalizes on inciting negative emotions for retention.
Very fair, and I use the two interchangeably. In principle you could have (and we have previously had) social media without these sorts of algorithmic or virality features.
It is unsettling how frank and clear your post is. However, back then the algorithms were way "nicer", right? Or was it that people were nicer, or that the people on social media were nicer?
Maybe. But the thing is, I think there is a legitimate cultural need to minimize mass exposure to these centralized social media platforms. And I think people are realising this about now.
I don't advocate legal bans, but people do need to stop using it. The risk of legal overreach is great...
Gen Z can't make it to the end of the month, can't get married, can't get a mortgage, and many graduates struggle to get a job... Meanwhile they see pensioners having a blast, telling them they are lazy/stupid, and raising their rents.
You betcha the gerontocracy sees something brewing.
> Counterpoint: Sufficient media control kills a democracy because it enables you to control public sentiment and election outcomes.
That's just as true when the entity seizing control is the government, such that the entity that controls public sentiment and election outcomes is the incumbent administration.
Absolutely. A quite typical way for dictatorships to consolidate power.
But the question is how much this applies, especially in most western states; there is a huge spectrum between having some government-determined regulation (or funding) for media and a single individual politician being in full control of all media content.
I'd argue that Turkey, Hungary, and Berlusconi-era Italy were all much farther along that spectrum than most western nations right now, US included.
Dark conspiracy... Or collective acknowledgement of the harm that being constantly online has done to a generation of young people - how it amplifies abuse and entrenches deeply negative tribes.
It's not stupid - at a national, future-of-society level - to want to do something about this. I agree it's possible to overreach and just get it wrong, but doing nothing is worse.
I'd rather my government control the narrative my children are exposed to than Andrew Tate.
Edit: To expand, this is not just a flippant remark. People ignore Andrew Tate because he's so obviously, cartoonishly awful, but they are not the audience. It's aimed at children, and from personal experience its effect on a large number of them worldwide is profound, to the extent that I worry about the long term, generational effect.
Children will be exposed to narratives one way or another, and wanting to (re)assert some control over that isn't necessarily just an authoritarian power play.
The targets of control are not children. Children don't need to be controlled, from an intelligence point of view. A government's attention is not infinite, and between worries about losing power and worries about the wellbeing of children, one of the two wins, and it's not the children. If children's wellbeing were the priority, you would see very different measures being taken.
This sort of makes sense if our governments are, on the whole, 'better' than Andrew Tate, for some definition of 'better'. But as the slide continues there will be a tipping point where our governments are worse, at which point them surveilling me becomes a problem. Better to shout about it now than then.
Do you decline any responsibility in the moral upbringing of your children? I think you should be the one that decides how they interact with dubious content, not your government.
Counterpoint: Andrew Tate resonates with the younger generations because modern society (at least in the UK) appears to be an ever-growing middle finger to them and Tate promises a (fake, but believable) way out.
When your future looks like endless toil just so you can give half of the fruits of your labor to subsidize senile politicians/their friends (via taxes) and the other half to subsidize boomers (via rent), Tate's messaging and whatever get-rich-quick scheme he's currently hawking sounds appealing.
You can ban Tate, but without addressing the reason people look up to him, it's just a matter of time before another grifter takes his place.
Well if lmsys showed anything, it's that human judges are measurably worse. Then you have your run of the mill multiple choice tests that grade models on unrealistic single token outputs. What does that leave us with?
At the start, with no benchmark. Because LLMs can't reason at this time, because we don't have a reliable way of grading LLM reasoning, and because people stubbornly insist that LLMs are actually reasoning, we're at the start. When you ask an LLM "2 + 2 = ", it doesn't add the numbers together; it just looks up one of the stories it memorized and returns what happens next. Probably in some such stories 2 + 2 = fish.
Similarly, when you ask an LLM to grade another LLM, it's just looking up what happens next in its stories, not following instructions. "Following" instructions requires thinking, hence it's not even following instructions. But you can say you're commanding the LLM, or programming the LLM, so you have full responsibility for what the LLM produces, and the LLM has no authorship. Put another way, the LLM cannot make something you yourself can't... at this point, in which it can't reason.
You have an outmoded understanding of how LLMs work (flawed in ways that are "not even wrong"), a poor ontological understanding of what reasoning even is, and too much certainty that your answers to open questions are the right ones.
That's kind of nonsense, since if I ask you what five times six is, you don't do the math in your head; you spit out the value from the multiplication table you memorized in primary school. Doing the math on paper is tool use, which models can easily do too if you give them the option, writing ad-hoc Python scripts to run the math you ask for, with exact results. There is definitely a lot of generalization going on beyond just pattern matching, otherwise practically nothing of what everyone does with LLMs daily would ever work - although it's true that the patterns impose an extremely strong bias.
Arguably, if you're grading LLM output, which by your definition cannot be novel, then it doesn't need to be graded by something that can be. The gist of this grading approach is just giving them two examples and asking which is better, so it's completely arbitrary, but the grades will be somewhat consistent, and running it with different LLM judges and averaging the results should help at least a little. Human judges are completely inconsistent.
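A minimal sketch of what that "different judges, averaged" idea looks like, assuming a hypothetical judge_model.complete() call standing in for whatever LLM API you actually use (all names here are made up for illustration, not taken from any particular library):

```python
import statistics

def judge(judge_model, question, answer_a, answer_b):
    """Ask one judge model which of two answers is better.
    Returns 1.0 if it prefers A, 0.0 if it prefers B."""
    prompt = (
        f"Question: {question}\n\n"
        f"Answer A:\n{answer_a}\n\n"
        f"Answer B:\n{answer_b}\n\n"
        "Which answer is better? Reply with exactly 'A' or 'B'."
    )
    reply = judge_model.complete(prompt)  # hypothetical client call, not a real library API
    return 1.0 if reply.strip().upper().startswith("A") else 0.0

def pairwise_grade(judges, question, answer_a, answer_b):
    """Average the preferences of several judge models so that no single
    judge's (arbitrary but somewhat consistent) bias dominates the grade."""
    votes = [judge(m, question, answer_a, answer_b) for m in judges]
    return statistics.mean(votes)  # fraction of judges that preferred answer A
```

Run pairwise_grade with a handful of different judge models and the average is the grade you'd report.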
> if I ask you what's five times six, you don't do the math in your head, you spit out the value of the multiplication table you memorized in primary school
Memorization is one ability people have, but it's not the only one. In the case of LLMs, it's the only ability they have.
Moreover, let's make this clear: LLMs do not memorize the same way people do, they don't memorize the same concepts people do, and they don't memorize the same content people do. This is why LLMs "have hallucinations", "don't follow instructions", "are censored", and "make common sense mistakes" (these are the words people use to characterize LLMs).
> nothing of what everyone does with LLMs daily would ever work
It "works" in the sense that the LLM's output serves a purpose designated by the people. LLMs "work" for certain tasks and don't "work" for others. "Working" doesn't require reasoning from an LLM, any tool can "work" well for certain tasks when used by the people.
> averaging the results should help at least a little
Averaging the LLM grading just exacerbates the illusion of LLM reasoning. It only confuses people. Would you ask your hammer to grade how well scissors cut paper? You could do that, and the hammer would say it gets the job done but doesn't cut well, because it needs to smash the paper instead of cutting it; your hammer is just talking in a different language. It's the same here. The LLM's output doesn't necessarily measure what the instructions in the prompt say.
> Human judges are completely inconsistent.
Humans can be inconsistent, but how well the LLM adapts to humans is itself a metric of success.
Seems like a foreshock of AGI if the average human is no longer good enough to give feedback directly and the nets instead have to do recursive self improvement themselves.
No, we're just really vain and like models that suck up to us more than ones that disagree, even if the model is correct and the user is wrong. People also prefer confident, well-formatted wrong responses to plain correct ones, because we have deep but narrow knowledge in our own field and know basically nothing outside of it, so we can't gauge the correctness of arbitrary topics.
OpenAI letting RLHF go wild with direct feedback is the reason for the sycophancy and emoji-bullet point pandemic that's infected most models that use GPTs as a source of synthetic data. It's why "you're absolutely right" is the default response to any disagreement.
Then when you're not in the stage you're supposed to be in, it's called a "regression" or "skipping stages". People are very stubborn about classifying development in terms of age or time.
I'd call this 3DAssetGen. It's not a world model and doesn't generate a world at all. Standard sweat-and-blood powered world building puts this to shame, even low-effort world building with canned assets (see RPG Maker games).
It's not really a world, no. It generates only a small square by the looks of it, and a world built out of squares will be annoying.
Still, it's a first effort. I do think AI can really help with world creation, which I think is one of the biggest barriers to the metaverse. When you see how much time and money it costs to create a small island world like GTA's...
Last time I checked, the metaverse was all about people collaborating in the making of a shared world, and we already have this. Examples include Minecraft and VRChat, both of which are very popular metaverses. I don't see how the lack of bot content generation is a barrier.
Then, let's say people are allowed to participate in a metaverse in which they have the ability to generate content with prompts. Does this mean they're only able to build things the model allows or supports? That seems very limiting for a metaverse.
I don't mean for content creation to only be AI! I mean it could be a tool especially for people who don't understand 3D design so well.
Minecraft makes it easy by using big blocks, but you can't have much detail that way and it's very Lego-like. VRChat requires very detailed Unity knowledge; you really need to be a developer for that.
Horizons has its own in-world builder but it's kind of boring because it's limited. I think this is where AI can come in: to realise people's visions where they lack the skills to develop them themselves. As a helper tool, not the only means of generation.
I guess that doesn't matter in games where the world ultimately doesn't matter, it will be better procedural generation, but personally I adore games where the developers actually put effort into designing a world that is interesting to explore, where things are deliberately placed for story or gameplay mechanics reasons.
But I suppose AI could in theory reach the point where it understands the story/theme and gameplay of a game while designing a world.
But when anyone can generate a huge open world, who really cares? It's the same as it is now: you've got to make something that stands out from the crowd, something notable.
It's the human guidance that makes it special. Low-effort, single-sentence prompt creation like Meta does here is super boring, of course.
But it can be a tool for people with great imagination but not the technical skills to make it real.
Every time we talk about AI, people think it will be used only as an easy-mode A-to-Z creator. That's possible, but it creates boring output. I view it more as a tool to assist with the difficult and tedious parts of content creation, so the designer can focus on the experience rather than tweaking the little things.
World generation is different from world modeling. It's like Java versus JavaScript. I'm not sure why I bother with technical discussion on Hacker News anymore.
My comment was too snarky. I take your point. Based on the discussion, this capability is closer to a really cool automated asset pack than to "building 3D worlds". My understanding of world modeling is that it's aimed towards AGI, and you're saying nobody implied this is world modeling.
You're right. But the criticism is that it's closer to 2D asset packs than to 3D worlds, and you're being overly charitable to Meta and not charitable enough to the community response.
Edit: this is just me oversharing why I downvoted you. I didn't intend for you to feel dismissed.
I used quick research and it was pretty cool. A couple of caveats to keep in mind:
1. It answers using only the crawled sites. You can't make it crawl a new page.
2. It doesn't use a page's search function automatically.
This is expected, but it doesn't hurt to keep in mind. I think it'd be pretty useful: you ask for recent papers on a site, the engine could use Hacker News' search function, and then Kagi would crawl the resulting page.
Also, when testing, if you know a piece of information exists on a website but it doesn't show up when you run the query, you don't have the tools to steer the engine to work more effectively. In a real scenario you don't know what the engine missed, but it would be cool to steer the engine in different ways to see how that changes the end result. For example, if you're planning a trip to Japan, maybe you want the AI to only be shown a certain percentage of categories (nature, night life, places), alongside controlling how much time it spends crawling, and whether it digs for more niche information or more related information.
This happens in my workplace, but in a bad way. There's a push to use AI as much as we can to "boost productivity", and the one thing people don't want to do is write documentation. So what ends up happening is that we end up with a bunch of AI documentation that other AIs consume but that humans have a harder time following because of the volume of fluff and AI-isms. Shitty documentation still exists and can be worse than before...
Other than humans getting apoplectic at the word "delve" and at em dashes, can you explain and give some examples, or say more about how AI-isms hurt readability?
Having encountered this spread across our org's greenfield codebases, which made heavy use of AI in the last 90 days:
- Restating the same information in slightly different formats, with slightly different levels of detail, in several places, in a way that is unnecessary - like a "get up and running quickly" guide in the documentation that has far more detail than the section it's supposed to be summarizing.
- Jarringly inconsistent ways of presenting information within a given section (a list of endpoints and their purposes, followed by a table of other endpoints, followed by another list of endpoints).
- Unnecessary bulleted lists all over the place which would read more clearly as single sentences or a short paragraph.
- Disembodied documentation files nested in the repos that restate the contents of the README, but in a slightly different format/voice.
- Thousands of single-line code comments that just restate what is already clear from the line they're commenting on.
That's before getting into any code quality issues themselves.
I've noticed AI generated docs frequently contain bulleted or numbered lists of trivialities, like file names - AI loves describing "architecture" by listing files with a 5 word summary of what they do which is probably not much more informative than the file name. Superficially it looks like it might be useful, but it doesn't contribute any actually useful context and has very low information density.
A piece of information, or the answer to a question, could exist in the documentation but not in a format that's easily readable to humans. You ask the AI to add certain information, and it responds with "I already added it". But the AI doesn't "read" documents the way humans do.
For instance, say you need urgent action from other teams. To that end you have an AI write a document and give it the information. The AI produces a document following its own standard document format, with the characteristic AI fluff. But this won't work well, because upon seeing the urgent call for action the teams will rush to understand what they need to do, and they will be greeted by a corporate-PR-sounding document that does not address their urgent needs first and foremost.
Yes, you could tell the AI how to make the document little by little... but at that point you might as well write it manually.