Anonymous ID: e1e8d4 Aug. 26, 2022, 6:44 p.m. No.17448122   >>8193 >>8372

==Social media experiment reveals potential to 'inoculate' millions of users against misinformation==

Short animations giving viewers a taste of the tactics behind misinformation can help to “inoculate” people against harmful content on social media when deployed in YouTube’s advert slot, according to a major online experiment led by the University of Cambridge.

Working with Jigsaw, a unit within Google dedicated to tackling threats to open societies, a team of psychologists from the universities of Cambridge and Bristol created 90-second clips designed to familiarise users with manipulation techniques such as scapegoating and deliberate incoherence.

This “prebunking” strategy pre-emptively exposes people to the tropes at the root of malicious propaganda, so they can better identify online falsehoods regardless of subject matter.

Researchers behind the Inoculation Science project compare it to a vaccine: by giving people a “micro-dose” of misinformation in advance, it helps prevent them falling for it in future – an idea based on what social psychologists call “inoculation theory”.

The findings, published in Science Advances, come from seven experiments involving a total of almost 30,000 participants – including the first “real world field study” of inoculation theory on a social media platform – and show that a single viewing of a film clip increases awareness of misinformation.

The videos introduce concepts from the “misinformation playbook”, illustrated with relatable examples from film and TV such as Family Guy or, in the case of false dichotomies, Star Wars ("Only a Sith deals in absolutes").

“YouTube has well over two billion active users worldwide. Our videos could easily be embedded within the ad space on YouTube to prebunk misinformation,” said study co-author Prof Sander van der Linden, Head of the Social Decision-Making Lab (SDML) at Cambridge, which led the work.

Google – YouTube’s parent company – is already harnessing the findings. At the end of August, Jigsaw will roll out a prebunking campaign across several platforms in Poland, Slovakia, and the Czech Republic to get ahead of emerging disinformation relating to Ukrainian refugees.

 

The campaign is designed to build resilience to harmful anti-refugee narratives, in partnership with local NGOs, fact checkers, academics, and disinformation experts.

https://www.cam.ac.uk/stories/inoculateexperiment?utm_source=twitter&utm_medium=social

—

A "pre-boonking misinformation" article is a decent read into enemy tactics and information control techniques.

Consider it counter-intelligence for our purposes.

Anonymous ID: e1e8d4 Aug. 26, 2022, 6:54 p.m. No.17448176   >>8278 >>8372

==Unheard Voice==

In our latest report, Stanford Internet Observatory collaborated with Graphika to analyze a large network of accounts removed from Facebook, Instagram, and Twitter. This information operation likely originated in the United States and targeted a range of countries in the Middle East and Central Asia.

 

In July and August 2022, Twitter and Meta removed two overlapping sets of accounts for violating their platforms’ terms of service. Twitter said the accounts fell foul of its policies on “platform manipulation and spam,” while Meta said the assets on its platforms engaged in “coordinated inauthentic behavior.” After taking down the assets, both platforms provided portions of the activity to Graphika and the Stanford Internet Observatory for further analysis.

 

Our joint investigation found an interconnected web of accounts on Twitter, Facebook, Instagram, and five other social media platforms that used deceptive tactics to promote pro-Western narratives in the Middle East and Central Asia. The platforms’ datasets appear to cover a series of covert campaigns over a period of almost five years rather than one homogeneous operation.

 

These campaigns consistently advanced narratives promoting the interests of the United States and its allies while opposing countries including Russia, China, and Iran. The accounts heavily criticized Russia in particular for the deaths of innocent civilians and other atrocities its soldiers committed in pursuit of the Kremlin’s “imperial ambitions” following its invasion of Ukraine in February this year. A portion of the activity also promoted anti-extremism messaging.

 

We believe this activity represents the most extensive case of covert pro-Western influence operations on social media to be reviewed and analyzed by open-source researchers to date. With few exceptions, the study of modern influence operations has overwhelmingly focused on activity linked to authoritarian regimes in countries such as Russia, China, and Iran, with recent growth in research on the integral role played by private entities. This report illustrates the much wider range of actors engaged in active operations to influence online audiences.

 

At the same time, Twitter and Meta’s data reveals the limited range of tactics influence operation actors employ; the covert campaigns detailed in this report are notable for how similar they are to previous operations we have studied. The assets identified by Twitter and Meta created fake personas with GAN-generated faces, posed as independent media outlets, leveraged memes and short-form videos, attempted to start hashtag campaigns, and launched online petitions: all tactics observed in past operations by other actors.

 

Importantly, the data also shows the limitations of using inauthentic tactics to generate engagement and build influence online. The vast majority of posts and tweets we reviewed received no more than a handful of likes or retweets, and only 19% of the covert assets we identified had more than 1,000 followers.

 

https://cyber.fsi.stanford.edu/io/news/sio-aug-22-takedowns

—

 

Are the botnets losing influence? I assume most interactions on Twitter come from bots at this point, particularly outside of my account's "bubble". Still, I think that well-designed bots can play a pretty big role in shaping narratives. At the very least, they might be most effective as a "jammer" of sorts, drowning out organic voices and opinions.