Shut down the behavior with regulations or shut down the companies. Meta and TikTok have no natural right to exist if they are a net negative to society.
Specifically, I believe Section 230 protections shouldn't apply to algorithmically promoted content. TikTok hosting my video isn't inherently an endorsement of what I'm saying, but proactively pushing that video to people is functionally equivalent, even if you want to quibble over dictionary definitions. These algorithms take these platforms from dumb, content-agnostic pipes that deserve protections to editorial enterprises that should bear responsibility for what they promote.
There is a decent legal argument to be made that §230 doesn't immunize platforms for the speech of their algorithm, to the extent that said speech is different from the speech of the underlying content. (A simple, if absurd, example of this would be if I ran a web forum and then created a highlight page of all of the defamatory comments people posted, then I'm probably liable for defamation.)
The problem of course is that it's difficult to disentangle the speech of algorithmic moderation from the speech of the content being moderated. And there's the minor issue that the vast majority of things people complain about are just plain First Amendment-protected speech, so it's not like the §230 protections actually matter, as the content isn't illegal in the first place.
I don't think we even need to go that far. Just remove protection for paid advertisements. It's absurd that Meta cannot be held liable for the ads they promote when a newspaper can be held liable if they were to publish the same ad.
But isn't this difficult when the tech bosses are in cahoots with the country bosses? And honestly even if the leadership changes, I somehow have a feeling the techs will naturally switch boats as well - might be a reason why the opposition doesn't paint them that much nowadays, to make sure they switch along.
Listing content alphabetically or chronologically is technically an "algorithm" too. What I'm specifically challenging here is the personalized algorithm designed to keep individual users on the platform based off a user profile influenced by countless active and passive choices the user has made over time. The type of HN algorithm that serves the same content to every user based off global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content.
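To make the distinction concrete, here's a toy sketch (all item names, scores, and the affinity model are invented for illustration, not any platform's actual implementation): a global feed ranks by community-wide votes and looks identical for every user, while a personalized feed re-weights the same items against a per-user profile.

```python
# Toy model of the distinction: global ranking (same for everyone)
# vs. personalized ranking (re-weighted per user profile).

def global_feed(items, votes):
    """Rank items by net upvotes; identical output for every user."""
    return sorted(items, key=lambda i: votes.get(i, 0), reverse=True)

def personalized_feed(items, votes, user_profile):
    """Re-weight the same items by a per-user affinity multiplier."""
    def score(item):
        return votes.get(item, 0) * user_profile.get(item, 1.0)
    return sorted(items, key=score, reverse=True)

items = ["rust-release", "celeb-gossip", "rage-bait"]
votes = {"rust-release": 40, "celeb-gossip": 25, "rage-bait": 10}
profile = {"rage-bait": 8.0}  # inferred from this user's passive behavior

print(global_feed(items, votes))                 # same list for everyone
print(personalized_feed(items, votes, profile))  # rage-bait bubbles to the top
```

In this toy model, the personalized variant surfaces "rage-bait" first for a user whose inferred profile favors it, even though the community as a whole voted it last.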
So if HN added anything personalized, like allowing you to show fewer stories on topics you dislike, it would lose protection? I can't get on board with that.
I also think it would be extremely unpopular. People like their recommendation engines. They want Netflix to show them more similar shows. They want Reddit to help them find more similar subreddits. I know there are HN users who don't want any of these recommendation engines, but on the whole people actually want them.
>They want Netflix to show them more similar shows.
Perhaps that example was a little too revealing on your end. Netflix doesn't have/need Section 230 protections and they're doing fine.
I'm not suggesting these algorithms should be illegal, just that Section 230 protections were defined too broadly because they predated the feasibility of this type of algorithm. These platforms would be free to continue algorithmic promotion, but I believe these algorithms would be less harmful if the platforms had to worry about potential legal liability.
Think YouTube and copyright for comparison. The DMCA is far from perfect, but we have YouTube as an example of a platform that survived and even thrived in the transition from a world that didn't care about copyrighted internet video to one in which they needed to moderate with copyright in mind.
Cigarettes weren’t made illegal. Cigarette companies are not liable for their user’s choice to consume them. What’s your point?
> Perhaps that example was a little too revealing on your end. Netflix doesn't have/need Section 230 protections and they're doing fine.
Perhaps it was a little too revealing on your end that you conveniently ignored my other example of Reddit.
If you need to cherry pick to make your point it doesn’t look very strong.
I still don’t see consistency in your argument that Section 230 should still apply to Hacker News but not, for example, Reddit, simply because one of them allows users to personalize the content they see.
> Cigarette companies are not liable for their user’s choice to consume them.
They kind of were. Not completely liable, but partially. Because... um, well, uh, yeah, they are. They are literally liable.
If you produce cigarettes, you are partially responsible for people smoking. Smoking is also not a "choice", come on now. The only people who believe that are people trying to sell you cigarettes or people who have never smoked.
That's why you can't advertise cigarettes anywhere anymore and they're very hard to find. And, when you do find them, the box tells you "hey please don't smoke this". R.J. Reynolds didn't do that by fucking choice, we forced them.
Cigarette companies paid billions, and continue to pay, for the societal harm they cause. That's a liability. They're not legally liable in the sense that nobody is going to jail. But they have financial liabilities. Because they do, literally, cause financial harm.
I don't think people really understand just how harshly we ran Tobacco companies into the ground. Many pay more per cigarette for liability than they pay to make the cigarette.
This is the type of comment that suggests you aren't engaging with what I'm saying beyond a superficial level. My argument is consistent. I'm not cherry-picking examples. The differentiator I'm criticizing is the personalized nature of the algorithms. But rather than engaging with the merit of that distinction, you're acting as if there is no distinction at all. I'm not sure if there is much point in continuing the conversation from there.
I think the other person's issue with your position is that the distinction is entirely arbitrary. You're not giving any reasons why the demarcation line for which feed algorithms are OK and which are not is there instead of anywhere else. It seems to be just "Facebook and TikTok are bad; Their feeds are personalized recommendation engines; Therefore personalized recommendation engines are bad, and other feed algorithms are OK".
>I think the other person's issue with your position is that the distinction is entirely arbitrary.
Basically all laws related to speech are arbitrary. Can you define a clear and self-evident line between pornography and art as an example? Or do you agree with the Supreme Court that we just "know it when [we] see it"?
>You're not giving any reasons why the demarcation line for which feed algorithms are OK and which are not is there instead of anywhere else.
Let me just copy and paste what I said before: "The type of HN algorithm that serves the same content to every user based off global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content." I can understand if one of you wants to challenge that line of thought, but both of you acting like I didn't give any reasoning at all is bizarre and gives me the impression that you aren't actually reading what I'm writing.
> Basically all laws related to speech are arbitrary.
True. This is a fair point. But the expected counterargument would be that the exact line isn't the issue; it's the justification for the principle.
I.e., why are personalized algorithms more dangerous than general ones?
My answer (because I mostly agree with you) is that the difference is that personalized algorithms almost feel like brain hacking. And this brain hacking simply doesn't work at scale when applied to vague general algorithms.
>Basically all laws related to speech are arbitrary. Can you define a clear and self-evident line between pornography and art as an example? Or do you agree with the Supreme Court that we just "know it when [we] see it"?
I'm a free speech absolutist, so I personally don't find which laws already exist on the matter to be a compelling argument. If it was up to me, I'd get rid of any such laws.
>The type of HN algorithm that serves the same content to every user based off global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content.
The argument hinges entirely on the relative exploitativeness of different feed algorithms, but that metric is merely asserted with no support.
Typically free speech absolutism leads individuals into logical traps they find difficult to dig themselves out of.
But we don't even need that in this case. Private property can have all kinds of restrictions put on it based on the potential dangers and harms it causes. This, in fact, is one of the most common attacks on speech I see right now (Meta et al.): that they will just put age requirements on sites.
>Typically free speech absolutism leads individuals into logical traps they find difficult to dig themselves out of.
Yes, "free speech absolutists" tend to define these terms in ways to hide the true arbitrary nature of their beliefs. The obvious test case is do they believe in legalizing CSAM. Either they answer "yes" and ostracize themselves from almost all of society or they say "no" and have to come up with arbitrary rules why this specific content doesn't count as speech. Either way, self-applying the label is its own red flag.
I wasn't the one who brought up free speech into the discussion; slg was. That aside, whether it curtails it or not would depend on how one defines "speech". Even if the particular way in which a website displays information is not speech, I still think it would be an overreach for a government to legislate how websites are allowed to function. If I as a user want to see a feed populated by recommended content, and the site's operators want to show it to me, what business does the government have stepping into our interaction?
I don't believe the argument was that personalized algorithmic recommendations need be forbidden per se, but that doesn't mean they should be the default, nor that companies should be able to wash their hands (under Section 230) of what they promote.
Like the other posts you're arguing against have said, the argument is not that social media or personalized algorithms should be "illegal"
And "are we going to pretend" is a non-argument that works both ways: "Are we really going to pretend individualized algorithmic social media hasn't caused harm to society on par with smoking?" would be equally unconvincing
What do you think about the case of Lucy Connolly, who, during a riot where rioters were burning down hotels housing immigrants, tweeted that people should burn down hotels housing immigrants and was arrested for that?
That is not comparable because of how little control you have over the algorithm in the other cases. On Bandcamp, you can select the genre and a sorting criterion and have very good control over the list. But on Spotify, it's very obscure, with things you've never asked for appearing in front even before your own library.
For me, the distinction is control. If I'm filtering out things I don't like, I'm in control. If the system is filtering out items or promoting items, I think it's fair that it take on more responsibility.
A system doesn't want your feed empty, not just because they want your eyes, but because of money. When they choose what goes into the feed, they should gain increased liability for what comes out. That's the risk they take on for more money. If that money is not worth it, don't recommend.
I enjoyed the internet in the beforetimes. Recommendations were limited to "this is objectively related, this is new, this is upvoted, this is by someone you follow or someone they follow, or this is randomly chosen." I still feel there is some liability there, but it is less than when it changes to "this is something we have determined we should show you based on your personal past behavior." That feels different than liking a category when the meta-categories are picked for you. Especially when those meta-categories allow for things you would not want to opt in to, like doomscroll material.
I like some of the stuff I get algorithmically. I never would have searched for a soul cover of Slim Shady, but I'm glad I found it. And I'm glad I found knot tying videos. I think there is space for fancy feeds. But I think it should come closer to being a publisher. This _will_ depress content creation if everything has to be monitored, which will change the economics, and maybe that means some businesses can't exist as they do today. I'd likely pay a subscription to a LearnTok that had curated, quality material.
I'm paying for Netflix to do that as a feature. Instagram uses that to drive engagement to sell ads. Disabling personalized content on Netflix is a revenue-neutral choice. On Instagram, that would mean their ad revenue takes a huge dive. Apples aren't oranges.
1.) I do not know anyone who particularly likes Netflix's recommendation algorithm.
2.) Netflix's algorithm is not relevant to "Section 230 protections", because it does not contain any data from third parties. All of that is Netflix content.
There's a paper that studied the spread of misinformation online, back before COVID. They found that messages cascaded through more science- and research-oriented networks differently than they cascaded through conspiracy communities.
Popularity is not a sign of signal. It's a sign of being able to scratch the limbic system and the zeitgeist at the same time.
For a site like HN, popularity isn’t a good predictive signal.
But algorithmic feeds can actually be useful for discovery of related material - I want Youtube to show me more Japanese jazz and video essays about true crime based on my watch history, I wanted Twitter to show me more accounts from writers and game developers because I follow them (before the platform went full Nazi) and I like that Facebook shows me people and information from my local area. Forcing all platforms to use only alphabetical or chronological feeds because of the exploitative way some platforms use algorithms seems awfully close to the "banning math" argument people used to use about cryptography and DRM, and it would remove a lot of legitimate use from the internet.
It's all about who controls the algorithm. A sensible approach would be to decouple recommendations from platforms, to treat them like plug-ins that the user must be allowed to add or disable. You want to use YouTube's recommendation algorithm on YouTube? Great, but there needs to be an off-switch and a way to change over to another provider. This is classic anti-trust stuff, breaking up a sector into interoperable pieces.
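A minimal sketch of what that plug-in interface could look like (entirely hypothetical; no real platform exposes this today): the platform supplies the candidate items, and the user picks a ranking provider or switches ranking off entirely.

```python
from typing import Protocol, Sequence

# Hypothetical "ranker as a plug-in" interface: the platform hands over
# candidate items, and the user-chosen provider decides the order.

class Ranker(Protocol):
    def rank(self, items: Sequence[str], user_id: str) -> list[str]: ...

class ChronologicalRanker:
    """A simple third-party provider: newest first."""
    def __init__(self, timestamps: dict):
        self.timestamps = timestamps

    def rank(self, items, user_id):
        return sorted(items, key=lambda i: self.timestamps[i], reverse=True)

class NullRanker:
    """The 'off switch': pass items through untouched."""
    def rank(self, items, user_id):
        return list(items)

def render_feed(items, ranker: Ranker, user_id: str) -> list[str]:
    # The platform calls whichever provider the user has selected.
    return ranker.rank(items, user_id)

items = ["a", "b", "c"]
ts = {"a": 1, "b": 3, "c": 2}
print(render_feed(items, ChronologicalRanker(ts), "u1"))  # ['b', 'c', 'a']
print(render_feed(items, NullRanker(), "u1"))             # ['a', 'b', 'c']
```

The design point is that the platform and the ranking provider are separated by a narrow interface, which is what would make swapping providers (the classic interoperability remedy) technically feasible.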
The anti-trust argument doesn't work for me. Neither Youtube nor any other single platform represent a "sector" in the way Standard Oil or Ma Bell represented a "sector", they don't "control the algorithm" in any sense beyond implementing code on their site. Certainly not in the way that a monopoly preventing other entities from competing against it by controlling access to some physical resource. Other video hosting sites besides Youtube exist, other social media platforms exist, so competition exists.
And besides, what's likely to happen is that you'll end up with only a few "algorithm providers" controlling access to the entire web, which only centralizes it even more.
Really nice to see someone else bringing this up. Algorithmic editorial decisions are still editorial decisions. I think ultimately search and other forms of selective content surfacing should not have ever been exempt. They were never carriers. I appreciate that this would make the web as we know it unusable. I think failing to tackle this problem will also make the web unusable, and in a worse way.
> I think ultimately search and other forms of selective content surfacing should not have ever been exempt. They were never carriers. I appreciate that this would make the web as we know it unusable
I can’t be the only one confused at these calls to have the government destroy things like searching the web, am I?
How is this a real idea being proposed on Hacker News, of all places? Not that long ago it was all about freedom on the Internet and getting angry when the government interfered with our right to speech online, and now there are calls to take drastic measures like making search engines legally untenable to run in the United States?
It’s also confusing that nobody calling for banning things or making the web unusable appears to be making the connection that the internet is global. If we passed laws that forced Google and Bing to shut down because they’re liable for results they index, what do you think the population will do? Shrug their shoulders and give up on the internet? Or go use a search engine from another country?
> How is this a real idea being proposed on Hacker News, of all places? Not that long ago it was all about freedom on the Internet and getting angry when the government interfered with our right to speech online
I can be upset about the government trying to make the world worse, and about other huge balls of power who have been making the world shitty in an ongoing fashion. Freedom of speech doesn't mean shit if a handful of people can buy up or otherwise absorb control of 90% of media and choose who gets heard. The call for regulation is an acknowledgment that the market fucked this one up. When the government threatens speech, I'll call for civil disobedience and proactive protections. When oligarchs threaten speech I'll call for regulation and punishment.
> It’s also confusing that nobody calling for banning things or making the web unusable appears to be making the connection that the internet is global. If we passed laws that forced Google and Bing to shut down because they’re liable for results they index, what do you think the population will do?
You assume that the only way to get a good, free search engine is to give control of it to some private entity. That if we don't do it in the US, people will turn someplace else. I think you may be lacking in imagination. At a minimum, the possibility exists for nonprofit organizations to run quality search engines, but it's also possible to decouple the indexing business from the ranking provider. Google could run an index and charge for access, and ranking providers could build on top of that and recoup costs with non-tracking ads, donations, sales, whatever business model they please. Just because an unregulated market doesn't come up with a good solution doesn't mean a market under different constraints won't find a better way. And if nothing works out you always have the option of grants or a public digital infrastructure approach. There are so many things to try beyond shrugging and declaring that the market has ordained five dudes arbiters of the internet as experienced by most people.
> I can’t be the only one confused at these calls to have the government destroy things like searching the web, am I?
if you find this distressing, then i imagine you find it equally distressing when a couple of corporations destroy something.
the reason the word "enshittification" has become so ubiquitous is because corporations are actively destroying the internet and desperately trying to convince us the internet is separate from "the real world".
sometimes stopping a person from burning the house down is necessary. no matter how loudly they cry about their freedom to have a bonfire in the living room.
What we need is quite simply a very good protocol for distributed search. It takes some storage, some bandwidth, and some CPU cycles. Have people contribute those and earn queries and indexing. Make it very good but simple enough for a half-decent programmer to make a level 1 node that can only announce it exists. Trackers, super nodes, ban lists, ranking algos, etc. Write server code in all the languages; have phone and desktop clients. There can be subscription-based clients too, so that the CPU, storage, and bandwidth can be handled for you by a company.
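As a rough illustration of what that "level 1 node that can only announce it exists" might do (the message format, port, and transport here are all made up for the sketch; a real protocol would need signatures, peer discovery, and much more):

```python
import json
import socket

# Hypothetical presence message for a minimal "level 1" node:
# it can do nothing except announce itself and its capabilities.

ANNOUNCE_PORT = 48650  # invented for this sketch

def make_announcement(node_id: str, capabilities: list) -> bytes:
    """Serialize a small JSON presence message."""
    return json.dumps({
        "type": "announce",
        "node": node_id,
        "caps": capabilities,  # e.g. ["storage", "index"]
        "version": 1,
    }).encode()

def announce(node_id: str, capabilities: list) -> None:
    """Broadcast the presence message over UDP on the local network."""
    msg = make_announcement(node_id, capabilities)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(msg, ("255.255.255.255", ANNOUNCE_PORT))

def parse_announcement(data: bytes) -> dict:
    """What a tracker or super node would run on each received datagram."""
    msg = json.loads(data)
    assert msg.get("type") == "announce"
    return msg
```

A tracker or super node would listen on the same port and feed parsed announcements into its peer table; everything above level 1 (indexing, ranking, ban lists) layers on top of that handshake.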
oddly enough the TikTok referred to here was to be shut down in the US. But then the executive branch ignored the law while it could organize handing the company over to Larry Ellison instead. But these allegations date to when the company was fully under the control of ByteDance, and not US-regulated entities at all.
> oddly enough the TikTok referred to here was to be shut down in the US. But then the executive branch ignored the law while it could organize handing the company over to Larry Ellison instead
Which should make people think twice when they call for government regulation on speech as a solution to content they don't want other people to see.
The more you give the government power to control speech, the more they'll use those laws to further their own interests.
Wouldn’t we need to shut down all the news outlets, all the Twitters, and all the newspapers then? They might not be as far along the toxic spectrum as Meta/TikTok, but they are very close.
There are people in this thread directly calling for us to strip protections from search engines and force them to shut down.
I think a lot of this discussion has become detached from reality and we’re just entertaining some people’s impossible fantasies about shutting down the internet and returning to the past.
Human instinct is always to ban and fight everything as soon as any change happens in society. The same biological motivation to doomscroll fuels our instincts to panic and doompost about how society is ruined unless we do [brash action].
>> Meta and TikTok have no natural right to exist if they are a net negative to society.
Exactly. And when we are done with them we will shut down Molson and Anheuser-Busch. Then we can go after the people who make selfie sticks. Then the company that owns that truck that cut me off last week. Basically, organizations I dislike should not be allowed to exist.
Regulating content that makes people enraged seems like a slippery slope towards regulating any kind of "unwanted" speech. I get regulating CSAM, calls for violence or really obvious bullying (serious ones like "kill yourself" to a kid), but regulating algorithms that show rage bait leaves a lot of judgement to the regulators. Obviously I don't trust TikTok or Meta at all, but I don't trust the current or the future governments with this much power.
For example, some teen got radicalized with racist and sexist content. That's bad in my opinion, as I'm not a racist or a sexist. But should racist or sexist speech be censored or regulated? On what grounds? How do we know other unpopular (now or in the future) speech won't be censored or regulated in the future? Again, as much as I'm not a racist or sexist, I don't think the government should have a say in whether a company should be able to promote speech like "whites/blacks are X" or "men/women are Y". What's next? Should we regulate speech about religion (Christians/Muslims/atheists are Z) or ethics (anti-war people or vegans are Q) or politics or drugs or sex?
The current situation is shitty, but giving too much power to regulators will likely make it way shittier. If not now, in the future, since passed regulations are rarely removed.
At least in the US the government can't regulate speech (for the most part). But what we could do is regulate recommendation algorithms or other aspects of the overall design in a way that's generalized enough to be neutral in regards to any particular speech. And such regulations don't need to apply to any entity below some MAU or other metric.
Even just mandating interoperability would likely do since that would open up the floor to competitors. Many users are well aware of the issues but don't feel they have a viable alternative that satisfies their goals.
In theory I'm OK (kinda) with regulating the "overall design" somehow, but I don't see how it's going to work. Forced interoperability is a (very?) good idea, as it's really general, but it also doesn't address directly what the article and most comments talk about - the rage bait. I just can't imagine regulations (or "laws" or whatever the correct term is) that deal specifically with the algos that push rage bait that can't be later abused, if passed, to deal with other unpopular speech. And it seems like people want some laws to directly deal with that - the bad types of speech or algos themselves.
To clarify, I use "rage bait" as an example phrase, but it includes algos that only promote engagement at any cost and other things that aren't outright dangerous, but we think are dangerous. Not, like I said, CSAM or yelling FIRE or telling people to kill themselves.
Interoperability sidesteps the issue by giving users the choice of which algorithm (or algorithm provider) to use. The majority might or might not agree with that approach - for example obviously tobacco has not been left purely to the individual's judgment in the west.
Agreed, you can't regulate speech in a targeted manner while also not doing so. You're forced to find some common aspect much more general than "rage bait". Perhaps prohibiting the targeting of certain metrics? Or even prohibiting their collection in the first place.
> You're forced to find some common aspect much more general than "rage bait". Perhaps prohibiting the targeting of certain metrics? Or even prohibiting their collection in the first place.
Can you elaborate, give some ideas, examples, etc.? What metrics? How can you define them in a consistent, safe way?
We're talking generalized metrics. I have no idea which ones - I wasn't claiming to have solved the problem. The point is that if you can identify a general characteristic that is being used in a way which disproportionately contributes to a particular outcome then you can filter on that.
Estimated user age is an example of a metric largely unrelated to concerns regarding free speech. I doubt it has much relevance to the problem we're taking about here but hopefully you can imagine that prohibiting the targeting of ads or the curation of an algorithmic feed based on that metric would not be expected to unduly disadvantage any particular sort of speech.
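A sketch of how such a content-neutral rule could be enforced mechanically (the metric names are invented; this illustrates the idea, not any real system): prohibited metrics are stripped from the user profile before it ever reaches the ranker or ad targeter, so no targeting decision can depend on them.

```python
# Hypothetical content-neutral filter: certain user metrics may not
# feed the ranking or targeting function at all.

PROHIBITED_METRICS = {"estimated_age", "verified_age"}

def scrub_profile(profile: dict) -> dict:
    """Drop prohibited metrics before the profile reaches the ranker."""
    return {k: v for k, v in profile.items() if k not in PROHIBITED_METRICS}

profile = {"estimated_age": 16, "topic:jazz": 0.9, "verified_age": None}
print(scrub_profile(profile))  # {'topic:jazz': 0.9}
```

The point of the example is that the rule names a metric, not a viewpoint: any speech topic passes through unchanged, which is what keeps the regulation neutral with respect to content.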
> The point is that if you can identify a general characteristic that is being used in a way which disproportionately contributes to a particular outcome then you can filter on that.
In a non-adversarial political context where we trust the government and the future ones, sure, but I think without any strong guardrails, we could enact a law that's good today, but will be exploited in the future.
For targeting minors with any kind of political speech - I'd love it if it wasn't legal. But that brings its own can of worms. There's plenty of discussion on HN about the implications of age verification, both how it's done technically (privacy-preserving or not (ZKP or just shady 3rd parties); FOSS or not; at the ISP, OS, or app level; etc.) and whether the mere precedent could trigger additional issues down the road.
Anyway, I'd love a society where everything is perfect, but I'm afraid of what might actually happen. With a benevolent god as a permanent ruler, I'd be happy with a 100% prosecution rate against all kinds of littering, hate speech and whatnot, but in reality random crimes are easier to evade than a law passed down by a malevolent government, so I'm strongly against any kind of overreach. (Because the law tomorrow could be one we must evade if we want to resist an unethical government.) Someone will likely chime in with "but complete and massive overreach has never happened so far", to which I'd reply: we're close to the point where technology will let the ones in power grab that power absolutely and forever if we let them grab too much in the beginning.
> where we trust the government and the future ones
Has never and will never exist.
> we could enact a law that's good today, but will be exploited in the future.
Sure but that's how pretty much all legislation works.
> I'd love it if it wasn't legal. But that brings its own can of worms.
It's probably fine as long as you include the clause "knowingly and intentionally". That doesn't imply age verification or anything else, merely that you act on information that you have and are aware of (and that you not intentionally design systems to work around that).
Also note that I never said anything about underage users. My example was targeting based on estimated user age. So in that example the age is estimated and it is illegal to target anything based on the value. (Of course to avoid a very silly loophole you'd also need to disallow targeting based on verified age as well.)
> I get regulating CSAM, calls for violence or really obvious bullying (serious ones like "kill yourself" to a kid)
I’ve reported videos that look like sexual exploitation, videos that call for violence and videos that promote hate (xyz people are cockroaches) and all I’ve gotten is that “it does not go against community guidelines” with a link to block the person who created them. So any concerns of “where do we draw the line” are in my opinion pointless because the bare minimum isn’t even being done.
I agree with your CSAM and explicit calls for violence examples - they probably should be regulated. But a few comments ago in another thread someone didn't like me calling people in the workplace who annoy me with their mindless chit chat "corporate drones". My post could be construed as promoting hate. Where do we draw the line from "cockroaches" to "drones"? Do I have to call a certain "protected class" drones for it to qualify as hate speech?
What if I didn't say anything bad about a race or a sex, but said:
> I have coworkers who pester me with their small talk about the weather every time I see them. I hate those fucking cockroaches.
That's in bad taste, sure, but should it be regulated? You may know I obviously don't hate-hate them (they're just annoying, but most of them are good people) or actually consider them cockroach-like in any meaningful aspect (they're obviously people, but with annoying tendencies). But would a regulator know the difference? What about a malicious regulator who gets paid by (ok, this is a silly example, but bear with me) the weather-talking coworker lobby to censor me? In many not-so-silly examples a regulator could silence anyone for anything (politics, sex, drugs, ethics), as long as it uses a bad word or says anything negative about anyone. I don't want to live in such a society. That much power would be abused sooner or later.
I'm sorry but are you saying it's hard to figure out what to do so let's do nothing? Banning racist and sexist content is not a slippery slope. It's just banning racist and sexist content; the slope is only slippery because the salivating mouths of these social platforms grease it.
Also, I don't think people are advocating censorship, they are advocating not promoting assholes. You can have your little blog and be racist on it all you want, but let's not give these people equivalent of nukes for communication.
> are you saying it's hard to figure out what to do so let's do nothing?
I'm fine with doing something, but the current "something" seems slippery.
> Banning racist and sexist content is not a slippery slope. It's just banning racist and sexist content, slope is only slippery because the salivating mouths of these social platforms grease them.
But what is "racist", exactly? See why I think it's a slippery slope and why it's ill-defined:
1. We could agree that "Let's go out and kill/enslave all the $race/$gender" is racist, but that's bad if we switch $race to any group, as it's speech that incites violence.
2. What about "$race is genetically inferior in a way (less intelligent, less athletic, more prone to $bad_behavior)"? I honestly think most differences between races/ethnicities are due to environmental factors, but what if there actually are differences in intelligence or anything like that? Should we ban speech that discusses that? Black people win running races and are great at basketball. They're prone to certain diseases, as are Caucasians or Asians. So would you ban discussing that? Or would you ban blindly asserting that $race is $Y without some sort of proof?
3. What about statements like "There are way more male bus drivers because X"? Or "men are better at Y, but women are better at Z"?
What do you think the definition of racism and sexism in this context should be? I think the line is where we incite violence towards a group, but not about discussing differences that may or may not be true.
> Also, I don't think people are advocating censorship, they are advocating not promoting assholes. You can have your little blog and be racist on it all you want, but let's not give these people equivalent of nukes for communication.
I think restricting a platform (or anyone or anything) from promoting someone IS censorship. If it's not censored, why shouldn't I be able to promote it? This honestly feels disingenuous - like "we pretend that the racist isn't censored and can have his little blog, but it's illegal to promote his little blog".
> I'm sorry but are you saying it's hard to figure out what to do so let's do nothing?
That seems more reasonable than the alternative, which is to make modifications to a complex system which you aren't sure what the outcome will be. You're more likely to cause bigger problems.
All the more reason for regulation. If people catch on to the fact that they are being manipulated and abused by the platforms to "drive engagement", they might abandon them or spend less time on them. If the government regulates these platforms so that they are safer, or at least less harmful, people will feel better about using them, giving the government a larger platform to use to control the masses.
> If people catch on to the fact that they are being manipulated and abused by the platforms
I am not trying to be funny or anything, but this sounds like "if only the fat kid realized that eating 10 apple pies before bedtime might be the reason s/he is fat". We already know what social media platforms are doing, not just to young people but to all people.
> If the government regulates these platforms
This is like saying "congressmen care about our debt so they will vote to reduce their own salaries by 90%". The government is not going to regulate tools they are using to control the narrative/masses, etc.