I think the other person's issue with your position is that the distinction is entirely arbitrary. You're not giving any reasons why the demarcation line for which feed algorithms are OK and which are not is there instead of anywhere else. It seems to be just "Facebook and TikTok are bad; Their feeds are personalized recommendation engines; Therefore personalized recommendation engines are bad, and other feed algorithms are OK".
>I think the other person's issue with your position is that the distinction is entirely arbitrary.
Basically all laws related to speech are arbitrary. Can you define a clear and self-evident line between pornography and art, as an example? Or do you agree with the Supreme Court that we just "know it when [we] see it"?
>You're not giving any reasons why the demarcation line for which feed algorithms are OK and which are not is there instead of anywhere else.
Let me just copy and paste what I said before: "The type of HN algorithm that serves the same content to every user based off global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content." I can understand if one of you wants to challenge that line of thought, but both of you acting like I didn't give any reasoning at all is bizarre and gives me the impression that you aren't actually reading what I'm writing.
> Basically all laws related to speech are arbitrary.
True. This is a fair point. But the expected counterargument would be that the exact line isn't the issue; rather, it's the justification for the principle.
I.e., why are personalized algorithms more dangerous than general ones?
My answer (because I mostly agree with you) is that personalized algorithms almost feel like brain hacking, and that kind of brain hacking simply doesn't work when applied to vague, general algorithms operating at scale.
>Basically all laws related to speech are arbitrary. Can you define a clear and self-evident line between pornography and art, as an example? Or do you agree with the Supreme Court that we just "know it when [we] see it"?
I'm a free speech absolutist, so I personally don't find which laws already exist on the matter to be a compelling argument. If it was up to me, I'd get rid of any such laws.
>The type of HN algorithm that serves the same content to every user based off global behavior is fine in my book because it is both less exploitative of the user base and a reflection of that user base's proactive decisions in upvoting/downvoting content.
The argument hinges entirely on the relative exploitativeness of different feed algorithms, but that metric is merely asserted with no support.
Typically free speech absolutism leads individuals into logical traps they find difficult to dig themselves out of.
But we don't even need that in this case. Private property can have all kinds of restrictions put on it based on the potential dangers and harms it causes. This is in fact one of the most common attacks on speech I see right now (Meta et al.): that sites will just have age requirements put on them.
>Typically free speech absolutism leads individuals into logical traps they find difficult to dig themselves out of.
Yes, "free speech absolutists" tend to define these terms in ways that hide the truly arbitrary nature of their beliefs. The obvious test case is whether they believe in legalizing CSAM. Either they answer "yes" and ostracize themselves from almost all of society, or they say "no" and have to come up with arbitrary rules for why this specific content doesn't count as speech. Either way, self-applying the label is its own red flag.
I wasn't the one who brought up free speech into the discussion; slg was. That aside, whether it curtails it or not would depend on how one defines "speech". Even if the particular way in which a website displays information is not speech, I still think it would be an overreach for a government to legislate how websites are allowed to function. If I as a user want to see a feed populated by recommended content, and the site's operators want to show it to me, what business does the government have stepping into our interaction?
I don't believe the argument was that personalized algorithmic recommendations need be forbidden per se, but that doesn't mean they should be the default, nor that companies should be able to wash their hands (under Section 230) of what they promote.
Like the other posts you're arguing against have said, the argument is not that social media or personalized algorithms should be "illegal"
And "are we going to pretend" is a non-argument that works both ways: "Are we really going to pretend individualized algorithmic social media hasn't caused harm to society on par with smoking?" would be equally unconvincing
What do you think about the case of Lucy Connolly, who, during a riot where rioters were burning down hotels housing immigrants, tweeted that people should burn down hotels housing immigrants and was arrested for that?