Just some personal observations, maybe relevant to recent misinformation topics, or maybe not.
Facebook, Twitter, Instagram, etc. are all blocked in mainland China. Many people, especially the young and well-educated, use a VPN to access them, keep an account, and occasionally follow world news and foreign celebrities.
There are popular and feature-rich counterparts on the mainland, and most people prefer to do their daily sharing and discussion (including domestic politics) there rather than on US-based social media. So most of the time, their accounts on Facebook or Twitter are used mainly for reading rather than for sharing and posting.
However, they might comment (registering a new account if needed) on topics carrying a message about China they strongly disagree with (you could argue they are brainwashed), within a short period and in a seemingly coordinated way (they read the same repost from the same domestic website), from a few common IP addresses (the same VPN), and in badly written English (they seldom practice writing it). But they are not bots, and it's inappropriate to label this activity as a typical state-run misinformation campaign.
If you speak Chinese, you could find many discussions where ordinary people complain their accounts get blocked because of pro-China comments. e.g. https://weibo.com/1401527553/I32ryx2cu
Of course, these observations don't necessarily contradict recent reports blaming Chinese propaganda. I just want to show how some false positives could happen, since the behaviour of users from China differs in these ways. Maybe a better algorithm is needed to distinguish them from government-backed activity.
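Purely as an illustration of what such an algorithm might weigh (every feature name, threshold, and weight below is made up by me; none of this reflects anything Twitter or another platform has disclosed), here is a toy scoring sketch:

    # Hypothetical sketch only: the features and weights are invented for
    # illustration and are not any real platform's detection method.
    def campaign_score(account):
        """Rough score in [0, 1]; higher = more consistent with a run campaign."""
        score = 0.0
        # Many accounts sharing an IP can simply mean a shared VPN exit node,
        # so on its own this signal is weighted lightly.
        if account["accounts_on_same_ip"] > 50:
            score += 0.2
        # Posting only within one short burst also fits an organic pile-on.
        if account["posting_burst_hours"] < 6:
            score += 0.2
        # A brand-new account whose only activity is the campaign topic is a
        # stronger signal than poor English or a shared IP.
        if account["account_age_days"] < 7 and account["off_topic_posts"] == 0:
            score += 0.4
        # Near-identical text copied across many accounts is the strongest
        # signal of the set.
        if account["duplicate_text_ratio"] > 0.9:
            score += 0.2
        return min(score, 1.0)

    # Example: an organic commenter on a shared VPN scores low despite the IP overlap.
    organic = {"accounts_on_same_ip": 80, "posting_burst_hours": 3,
               "account_age_days": 400, "off_topic_posts": 12,
               "duplicate_text_ratio": 0.1}
    print(campaign_score(organic))  # 0.4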
Some? Given how much blatant propaganda I see in my Facebook feed, from very clearly real people, who very clearly believe the garbage they spew, I would say that the correct answer is most.
There's no shortage of people who will happily repeat the most inane, counterfactual political rhetoric. Don't assume they are paid shills, bots, or foreign spies.
I can't give concrete evidence to support or oppose your argument, since I'm not a sociologist by any means. For any reasoning I could offer, or any observation I could relate, you could always find many more "other factors". I wish you spoke a little Chinese so that you could investigate each pro-China account with social engineering and provide more support for your conjecture.
However, I would like to translate a comment from that Weibo link (feel free to call it propaganda):
The coverage of this blocking issue by several English-language media outlets reflects the fact that Western society and media do not understand the changes in China's public opinion in recent years, and it also reflects their failure to understand the widespread acceptance of anti-secession sentiment and support for the government's values. According to their theory, civilians in communist countries must be anti-communist. When the facts are contrary to their belief in the theory, they choose to invent the facts to correct the theory.
And another comment:
we know them well, they know nothing about us.
But all in all, I can't be 100% sure they are not bots/brainwashed/part of the operation/mindless repetition... Any help in getting closer to the truth, rather than just speculation, would be appreciated.
Well it’s quite in fashion these days to call any person you disagree with a “bot.” Even though my Twitter account is very old I’ve been accused of being a “Russian bot” nearly every time I post something that diverges from the political “Standard Model.” I suppose it’s easier to manage life if you think that everyone real agrees with you and everyone who disagrees with you is a Perl script running in Vladivostok.
It might actually be a derisive term, like calling someone an NPC for a similar reason; i.e., a way of dehumanizing the other.
On the other hand, I know of one "user" on The Hill that is a known Russian troll farm of some sort (not necessarily based entirely in Russia). I've caught it talking with itself in Russian, and I've also seen it posting both liberal and conservative talking points in different threads at the same moment in time. Tone, spelling, and grammar also vary, especially at different times of day. It also goes offline every now and again. Others on the forums over there have noticed the same thing.
I've carried on multiple "conversation" threads with it, and it can be intriguing, knowledgeable, or ignorant depending on when you "catch" it (i.e., whoever is typing on the other end).
I've made multiple attempts to figure it out (talking with it, at it, etc.), and it has never denied being a collective or being based in Russia. I am not sure what or who it is; some suspect an arm of the IRA, but it could be anything.
I don't call it a bot, because there are real humans behind it. I am not actually sure they are all Russian, because its English is quite good at times, and I have seen hints of both British and American spellings, idioms, knowledge, etc. I suspect it's some kind of global operation, probably with paid actors of some sort.
I've only rarely seen anything online, in such forums or elsewhere, that looks and acts like an actual "bot".
Regarding the "brainwashing": how do most Chinese people view their government? What about the people of the autonomous regions?
From what I know about the Soviet satellite states, most people hated the Soviets, but none would be stupid enough to say so in public. I wonder how much the same is true in China.
Not the same at all. Most Chinese who emigrated in the past two decades or so hold pro-China opinions and become more pro-China as they live abroad. Younger generations born after the 90s and living in China also hold more pro-China viewpoints. You'll still hear them criticize the government a lot, but what they care about is dramatically different from people in the Western world. On issues like Hong Kong, it's quite rare to see any mainland Chinese supporting the rioters in HK, no matter where they live.
Thanks for your reply, but I have a followup question.
> On issues like Hong Kong, it's quite rare to see any mainland Chinese supporting the rioters in HK, no matter where they live.
It's clear that Taiwan supports the HK protesters, so I'm wondering what the other regions are really thinking, like Xinjiang, Inner Mongolia, Tibet, etc. Not what they are publicly saying, but what they are thinking privately. To be clear, I have no idea; that's why I'm asking.
In the comments on the link in my original post, besides condemning Western hypocrisy, there are also a lot of people criticizing the firewall and information control. People express different opinions. I think it's hard to describe how they view things in a few words.
It's a biased sample, but most Chinese people I have had such conversations with deeply fear and hate their government. However, there are some really weird cultural interactions from a Western perspective.
What's generally overlooked is how much stability is lost in such a repressive environment. Being the 1,000th person to riot is dangerous; being the 100 millionth is not. So things can rapidly switch from everything seeming to be OK to total chaos almost overnight.
I know that the propaganda department requires its employees to register accounts on Chinese social apps and actively asks them to post messages to drown out differing opinions on hot issues.
It's not news to Chinese people; we call them 5毛 ("50 cents").
I've seen something similar in some extremist bot accounts, where the tweet picture and text has a titillating and sexual tone (like catfishing) but includes an unrelated political hashtag at the end. These bots try to get certain hashtags trending without the users who fav the tweets realizing that.
>Researchers from the International Cyber Policy Centre (ICPC) at the Australian Strategic Policy Institute have conducted a preliminary analysis of the dataset. Our research indicates that the information operation targeted at the protests appears to have been a relatively small and hastily assembled operation rather than a sophisticated information campaign planned well in advance.
>However, our research has also found that the accounts included in the information operation identified by Twitter were active in earlier information operations targeting political opponents of the Chinese government, including an exiled billionaire, a human rights lawyer, a bookseller and protestors in mainland China. The earliest of these operations date back to April 2017.
>This is significant because—if the attribution to state-backed actors made by Twitter is correct—it indicates that actors linked to the Chinese government may have been running covert information operations on Western social media platforms for at least two years.
>Research limitations: ICPC does not have access to the relevant data to independently verify that these accounts are linked to the Chinese government; this research proceeds on the assumption that Twitter’s attribution is correct. It is also important to note that Twitter has not released the methodology by which this dataset was selected, and the dataset may not represent a complete picture of Chinese state-linked information operations on Twitter.
Chinese foreign propaganda has typically been bad, but it's a little wild that it's still this bad, especially after two years. It almost makes the attribution harder to believe.
A very anemic effort, but I think Twitter also proactively suspended 200,000 accounts. The operation probably pivoted following the crackdown. Still, it's a little difficult to reconcile how low-effort this is with the Chinese-influence-everywhere narrative.
>The ICPC’s preliminary research indicates that the information operation targeting the Hong Kong protests, as reflected in this dataset, was relatively small, hastily constructed, and relatively unsophisticated. This suggests that the operation, which Twitter has identified as linked to state-backed actors, is likely to have been a rapid response to the unanticipated size and power of the Hong Kong protests rather than a campaign planned well in advance. The unsophisticated nature of the campaign suggests a crude understanding of information operations and rudimentary tradecraft that is a long way from the skill level demonstrated by other state actors. This may be because the campaigns were outsourced to a contractor, or may reflect a lack of familiarity on the part of Chinese state-backed actors when it comes to information operations on open social media platforms such as Twitter, as opposed to the highly proficient levels of control demonstrated by the Chinese government over heavily censored platforms such as WeChat or Weibo.
I'm really curious how Twitter determined the campaign was state-backed, something Facebook and Google did not claim. Were these accounts transparently VPNing from the mainland, or was there something more sophisticated happening?
Sounds like a classic lowest-bidder government contractor situation. "Sure, we have 100,000 accounts, and we'll charge 1.25 RMB per tweet per account." "Fantastic! You get the contract!" "Oh shit we need to find 100,000 accounts fast and cheap."
"I met this girl on the internet, she said she'd do anything for 1.25 RMB. Anything? Really?! ...So I got her to post tweets supporting the Chinese government."
I've seen a few of these; they show up in the replies to threads created by BBC journos, posting things like (in translation) "support the central government".
These accounts were pretty perplexing until this article spelled out what should have been obvious to me.
CNN/MSNBC/Fox News: they are all looking for easy stories with numbers they can quote, and Twitter provides that justification. I hear their talking heads say things like "X has been tweeted 10,000 times" as if that means anything. They rely on Twitter because they don't do real reporting anymore. I'd bet good money they don't have anyone on staff fluent in the relevant languages. Twitter plus Google Translate has replaced the actual reporters.
It's not really an argument, but it could make people think they're more of an outlier in doubting the government. Once you doubt your convictions you might start to reconsider the arguments you've already been exposed to.
Perhaps, "convince" is the wrong verb to use for targets of disinformation campaigns?
The fact is "enough" people are susceptible to this stuff. It is worth doing because, sadly, critical thinking is rapidly becoming an educational luxury reserved for the elite. I don't know where all this will end up, but it's going to be ugly.
Such statements are not in isolation though. Violence and murder come first, then come fear and obedience, and then it's up to the obedient to find rationalizations for whatever words they get fed, or keep their mind off what doesn't make sense. A single such statement doesn't put you in line, but if you already are in line, then it's just one of the thousands of little bricks that keep you in line, and your brain on autopilot.
Why realize you've been fooled and exploited by people who don't really care about you, have excused their crimes, and then feel shitty because you don't dare to oppose them -- when you can just project all that on those who do oppose them? Then not seeing a silly and cheap slogan as silly and cheap becomes just another test of obedience, to excel at and be proud of. The Naked Emperor isn't just a story, it happens all the time. Identification with aggressors and projection of their bad qualities on their victims and those who oppose them is very common, or as Arno Gruen wrote, "the ubiquity of this phenomenon determines the course of human history".
Who with a pulse would buy that? Those who are afraid of the responsibilities that come with not buying it. The continued violence is the lifeblood, not the propaganda; minus the movement, it all seems very banal and ridiculous, like Eichmann on trial.
Likewise, who would be so silly as to think a downvote is an argument? Yet it's done religiously, and I wouldn't be surprised if, to some people, something being greyed out signals that it is not an okay opinion to have. Done consistently, it helps. It still needs the violence and the lies, the destruction of facts, but it's a useful addition to them.
>Who subscribes to porn bots? I would have thought only other porn bots follow porn bots?
Clearly, if they exist, they must have some marginal use. It's like saying "who falls for Nigerian prince scams?" or "who is influenced by ads?". Clearly some people are, otherwise they wouldn't be quite as popular. I'm sure some real people follow porn bots in order to... see porn.
As for the influence, I think in isolation it's probably negligible, but if people get flooded by propaganda on social networks it may be quite effective. "Everybody says it, it can't be completely wrong." The echo-chamber nature of many social networks probably plays into that too. Many people would probably be more likely to follow an obvious bot spamming propaganda that aligns with their beliefs than a real person holding a different opinion. For instance, an obviously biased propaganda outlet like PragerU has 2 million subscribers on YouTube; I doubt all of those are bots.
PragerU's bias isn't obvious to most people; that's what's insidious about it. It's a fake "university" for people who are too ignorant or of too low intelligence to know what a real one looks like. Compare PragerU to the book "A People's History of the United States". Both are highly politically biased, both oppose "mainstream" education, but one is intellectually much better than the other. Discerning which requires higher-level thinking many people lack.
I think you're wrong to assume that it only works on people who are "too ignorant or of too low intelligence". Confirmation bias is not limited to dumb people. It's always more pleasant and comforting to listen to people agreeing with you (even if what they're saying isn't super insightful) rather than people arguing against your values (even if what they're saying is very interesting).
Searching for contradictory viewpoints and actually managing to listen to the argument with an open mind takes effort. That's the big problem with most social media currently: they amplify this bias to the extreme. You can hold a fringe belief like "the earth is flat" and still manage to find thousands of people agreeing with you, spending hours every day consuming flat-earth content and avoiding any dissenting voice.
"So there are bots that have been posting "porn" for a bunch of years. And now they post political messages."
The article says accounts like this can be purchased cheaply from resellers.
"How does that influence anybody?"
I think the most effective use is in replies to other tweets.
All the replies to a tweet are shown when you view it. You might glance at the profile of a user who posts a reply, but perhaps won't look deep enough into their profile to see the earlier porn tweets.
A high number of tweets, a high number of followers, and a long time on the site would indicate 'actual person' rather than bot; or so you'd think, unless you scroll down far enough to see the profile filled with spam posts.
Google, perhaps? I've been obsessing about (creating) a federated information store recently, and this was one of my points to address.
In my view, bad actors don't just have to misinform; they can also make accurate information more difficult to locate. Doing that seems shockingly easy, as much of our ability to locate data online relies on brittle keywords.
Not to deviate, but in case anyone is curious: one of my attempts at mitigating this problem is via content hashes. Building a UX where content hashes are often the point of reference, and are indexed by search engines, might allow us to locate data and related data based on those hashes rather than on easily flooded semantic keys. Metadata pertaining to the hashes would also have to be provided for them to be useful, e.g. "related:04136e24". Lots to discuss, but I'm trying to limit the off-topic, despite it being related to [dis|mis]information.
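A minimal sketch of what I mean (the function names, the SHA-256 choice, and the 8-character prefix length are just my own illustrative assumptions, not part of any existing system):

    # Sketch only: identifiers are derived from the content itself, so they
    # can't be flooded the way keyword-based search results can.
    import hashlib

    def content_id(content, prefix_len=8):
        """Short, stable identifier computed from the bytes of the content."""
        return hashlib.sha256(content).hexdigest()[:prefix_len]

    # Metadata is keyed by the hash rather than by semantic keywords.
    index = {}

    def publish(content, metadata):
        cid = content_id(content)
        index[cid] = metadata
        return cid

    source_cid = publish(b"some primary-source document",
                         {"title": "Primary source", "related": []})

    # A later item can point back to it with a "related:<hash>" tag, so search
    # can follow content links instead of easily gamed keywords.
    publish(b"analysis referencing the primary source",
            {"title": "Analysis", "related": ["related:" + source_cid]})

    print(source_cid, index)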
edit: I don't understand the downvote; can someone explain? Is something I said factually incorrect? To be clear, I wasn't saying Google was the thing running the bots, but that Google / search engines might be the point.
They are a stepping stone, not intended to directly influence anyone: hashtags appearing en masse on "influential" (due to bulk porn subscribers) but utterly unrelated accounts will add weight to on-topic posts using the same tags elsewhere.
Not much different from the link farms of the early days of SEO which were never meant for human eyes.
Not sure if it's 100% the case here, but a lot of bots respond to popular threads from legitimate people, either countering or supporting their position.
It's less about the credibility of the account and more about creating a lot of buzz or, well, counter-buzz around another thread.
The irony here is that the Chinese government seems to have discovered (or long known and recently harnessed) the power of celebrity culture for advertising.
George Monbiot on that theme (outside China) recently:
As for the lack of sophistication: we tend to presume evil super-geniuses. What in fact makes for audacious acts (evil or otherwise) is disinhibition. The ability to act with immunity or impunity, or to act without regard for or knowledge of possible consequences, is what tends to distinguish normal acts from both evil and heroic ones. In evil, the disregarded consequence is the harm done to others; in heroism and bravery, consequences are disregarded in order to help others.
Ignorance, ideology, and legal shields are all forms of providing that immunity or impunity.
I must disagree slightly. It is sensible to let tweets influence opinion in the sense of "what do many people think about it?". Also, to some minute degree, it shapes opinion just through "normalization", turning an "unheard-of idea" into a "commonly proposed idea". Said ideas could still be horrible and foolish, but they get more attention and are no longer viewed as bizarre, horrible, and foolish.
Of course, people should also apply enough critical thinking to realize when the norm is utterly messed up, the masses are being a bunch of idiots, or it starts to look very astroturfy.
The UK government is in the middle of collapsing in on itself, so anything non-Brexit-related has been pushed far onto the back burner. I think even emergency legislation to feed the orphans or something would still be ignored for the next few weeks.
It is very meta to ask this question, and to see the replies to it, since the trope "the US meddles in foreign revolutions" is common in (social) media propaganda (and among bots).
> Claim: The US is supporting and encouraging Hong Kong protests.
> Verdict: Conspiracy theory without evidence.
> For years, pro-Kremlin media has used the narrative about anti-government protests being funded by the US. Examples include colour revolutions in post-soviet states, the “Arab Spring” revolts, and Euromaidan in 2014.
> The Hong Kong protests began in June 2019 because of a controversial extradition law that would allow for the transfer of suspects to face trial on the Chinese mainland.
Even if the US is involved in the Hong Kong protests, at least it'd be making good on the typical mission of spreading freedom and democracy for reasons other than "y'all got oil and we want it" (unless Hong Kong's been sitting on a massive petroleum deposit all this time, but that'd be news to me).
It's a matter of probability. They are so often involved [1] that they get blamed even when they appear not to be involved. That's especially true when people know that they have incentives for being involved, regardless of whether or not they are actually involved.
Stating "The US is funding the Hong Kong protests" may very well be found true later on, but right now, it is a conspiracy theory without any evidence, not a matter of probability based on arbitrary priors. This conspiracy theory is actively used in online propaganda with an aim to erode trust in the US, playing on plausible blame and prejudice. It is a distraction tactic, where two wrongs somehow make a right, or make us feel better about the dangerous road taken, because we conclude that nowhere is safe.
Really no better than: "Let's discuss: Employee China stole something from the communal fridge." "Sure, but what about Employee US? I judged him stealing last year. I assume it very probable that Employee US is stealing from Employee China right now. Maybe that's why Employee China was so hungry, he was forced to steal, because Employee US started it. Maybe Employee China did not even steal anything, just took the blame for an unredeemable thief. Let's discuss and pontificate about that hypothetical instead!"
This wasn't the claim, so please don't quote it as if it were. That's not a quotation from anyone here.
> later on
That's not how induction works. What we're doing is making a prediction. When we observe that something happens with a certain frequency, we judge that the probability of it occurring in the future is proportional to that frequency. I'm not saying the US is involved in HK. I'm saying it's justified to conclude that the US probably is involved in HK (and to be explicit: this is not equivalent to saying that the US did or did not cause the HK protests). People do not wait for an object to fall before they make their prediction, and that's because we have sufficient historical data as well as explanatory theories to support our expectations.
> without any evidence
The evidence is the history of the US's behavior and its current incentives to do so.
> arbitrary priors
The priors are not arbitrary. They are consistent and theoretically accounted for by several branches of IR theory.
> conspiracy theory
Geopolitical neorealism, for example, is not a conspiracy theory, it's one of the leading schools of thought in IR theory at the moment.
> propaganda with an aim to erode trust in the US
What "erodes trust" in the US is the US's behavior, not pointing out facts about it.
> stealing last year.
We're not talking about one incident. We're talking about an extensively documented history amounting to a consistent pattern of behavior which is trivially explainable using mainstream IR theory.
Nobody is denying what China does. To point out additional facts is not to contradict any other facts.
It is a restatement of the conspiratorial claim in my first post. One is justified in saying anything one pleases, but it could still be a distraction or pointless speculation: "News flash: Alice Zhang has long hair. Women frequently have long hair. But Bobby Joe is a surfer dude, and surfer dudes like long hair and frequently have long hair too. I haven't seen Bobby yet, or a photo of him, but I relevantly pose that I am justified in saying, using my a priori knowledge of surfer dudes, that Bobby, a man, likely has long hair too. My evidence is that Bobby, being a surfer dude, has an incentive to like long hair. I am not contradicting the fact that Alice has long hair, just complementing the discussion with the extensively documented history of surfer dudes and the likelihood of Bobby's hair length."
One is not justified in believing anything one pleases. There are things which are justified and things which are not, and a coherent epistemology distinguishes between the two.
Can you describe your coherent epistemology? The one that leads you to believe a thoroughly context-free, slapdash list of low-quality wiki text spanning hundreds of topics over hundreds of years counts as a citable piece of evidence for... anything, really?
It doesn't. It's just a summary, akin to a comment.
> evidence for
It wasn't intended to be evidence for anything, just a reminder of the pattern of behavior that the US is thoroughly documented (elsewhere) to have engaged in over the years.
Mature democracies like the US and UK have what is called soft power: by demonstrating how effective their democracies are at home, they can influence the people of Hong Kong without directly conducting disinformation.
Great point. In a possibly strange sense, the fact that the US and the UK just carry on having basically safe and functional democracies, even with the complete gridlock in the UK from Brexit, and the perhaps slightly lesser gridlock in the US (polarization of Democrats vs. Republicans), is impressive. But it's really depressing to be living through it.
The US has a military presence in 150 countries (that we know about). There isn't much that the US isn't involved in, especially when it involves a military superpower on the other side.
Maybe they realized the 'America, fuck yeah' era is over and they aren't the global police. No, but jokes aside, I think it has some parallels with decreasing interference in the Middle East and in general. I don't really follow these things, so I could be wrong.
Normally the US government is too afraid of Chinese imports to criticize China (look at the history of Taiwan). Trump, of course, is a wildcard on Twitter, but his administration's policies and diplomacy aren't nearly as radical. But we do have the tariff issue at the moment, which might have some background connections.
Weird that he's not tweeting about it though. Maybe because a standoff lacks the immediacy of specific punctuated incidents.
And ultimately it's an issue between one and a half foreign countries that doesn't really impact US interests so much compared to much larger issues with China.
Why should they? Chinese money is just too juicy to give up... and by the time they realize what kind of influence the CCP has on their country, it will be too late anyway.
During the WW2 era, Japan committed horrible atrocities against China and other countries; it was no modernizing power. One example is the Nanking (Nanjing) Massacre, in which 50,000-300,000 people were murdered. https://en.wikipedia.org/wiki/Nanjing_Massacre
You should definitely take some lessons in history and humanity.
Japan was not simply taking over China; it was conducting live human experiments and carrying out mass rape and killing of civilians.
By the way, your comment shows exactly the thing you are probably arguing against (having seen some indirect reports containing ideological slant, you started to hate the target as a whole).