Anonymous ID: ffe5be Jan. 25, 2021, 1:59 p.m. No.12711193   🗄️.is 🔗kun   >>1198 >>1214 >>1273 >>1307 >>1345 >>1403 >>1424 >>1453 >>1469 >>1503

 

Twitter officially looking for snitches and mobs to commence if your tweet is frowned on:

 

https://twitter.com/TwitterSupport/status/1353766523664531459

 

https://twitter.com/Malcolm_fleX48/status/1353811100849078273

 

Cerno Retweeted

Malcolm Fle✘

@Malcolm_fleX48

· 46m

Ever wanted to be a part of the Gestapo but didn't have the credentials?

 

Now you can drop dimes like the big boys with "Birdwatch™"

Quote Tweet

Cerno

@Cernovich

· 1h

Twitter’s new #birdwatch program outsources surveillance culture to the entire population. How empowering!

 

 

https://twitter.com/TwitterSupport/status/1353766523664531459

Twitter Support

@TwitterSupport

🐦 Today we’re introducing @Birdwatch, a community-driven approach to addressing misleading information. And we want your help. (1/3)

 

Twitter Support

@TwitterSupport

Replying to

@TwitterSupport

We’re looking for people to test this out in the US –– you can add notes with helpful context to Tweets that you think are misleading.

 

For now, these notes won’t appear directly on Twitter, but anyone in the US can view them at: https://birdwatch.twitter.com (2/3)

11:07 AM · Jan 25, 2021 · Sprinklr

 

 

Twitter Support

@TwitterSupport

· 3h

Replying to

@TwitterSupport

We'll use the notes and your feedback to help shape this program and learn how to reach our goal of letting the Twitter community decide when and what context is added to a Tweet.

 

For details and how to apply to be a part of Birdwatch: https://blog.twitter.com/en_us/topics/product/2021/introducing-birdwatch-a-community-based-approach-to-misinformation.html (3/3)

 

https://blog.twitter.com/en_us/topics/product/2021/introducing-birdwatch-a-community-based-approach-to-misinformation.html

 

 

https://twitter.com/micsolana/status/1353806151373086722


Cerno Retweeted

Mike Solana

@micsolana

henceforth, "truth" will be determined by popular vote. what could possibly go wrong?

 

Cerno Retweeted

Mike Solana

@micsolana

· 1h

imagine you are an executive at twitter, surveying the dystopian, tribalistic information hellscape your company more or less architected. there is an angry mob outside screaming. you think to yourself, "ok, but what if we gave the mob weapons?"

 

 

https://twitter.com/elizableu/status/1353815147358412800

Cerno Retweeted

Eliza

@elizableu

31m

“Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context.”

 

Twitter y’all can do this but you can’t remove child porn?

 

https://twitter.com/JuanIsidro/status/1353806640781393921

Juan

@JuanIsidro

· 1h

Replying to

@Cernovich

We're already seeing the hall monitor patrol salivating over their badges…

 

Tell me this isn't sociopathic behavior.

 

https://twitter.com/TherealScwiggs/status/1353815108779192321

TheRealScwiggs

@TherealScwiggs

· 36m

Replying to

@JuanIsidro

and

@Cernovich

Bet Twitter will still ignore snitching on “sniffers” and full on pedos.

Anonymous ID: ffe5be Jan. 25, 2021, 1:59 p.m. No.12711198   🗄️.is 🔗kun   >>1214 >>1250 >>1273 >>1307 >>1345 >>1403 >>1424 >>1453 >>1469 >>1473

>>12711193

Product

Introducing Birdwatch, a community-based approach to misinformation

By Keith Coleman

Monday, 25 January 2021

People come to Twitter to stay informed, and they want credible information to help them do so. We apply labels and add context to Tweets, but we don't want to limit efforts to circumstances where something breaks our rules or receives widespread public attention. We also want to broaden the range of voices that are part of tackling this problem, and we believe a community-driven approach can help. That’s why today we’re introducing Birdwatch, a pilot in the US of a new community-driven approach to help address misleading information on Twitter.

 

Here’s how it works

 

Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context. We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable. Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.

 

In this first phase of the pilot, notes will only be visible on a separate Birdwatch site. On this site, pilot participants can also rate the helpfulness of notes added by other contributors. These notes are being intentionally kept separate from Twitter for now, while we build Birdwatch and gain confidence that it produces context people find helpful and appropriate. Additionally, notes will not have an effect on the way people see Tweets or our system recommendations.

 


Building together

 

To date, we have conducted more than 100 qualitative interviews with individuals across the political spectrum who use Twitter, and we received broad general support for Birdwatch. In particular, people valued notes being in the community’s voice (rather than that of Twitter or a central authority) and appreciated that notes provided useful context to help them better understand and evaluate a Tweet (rather than focusing on labeling content as “true” or “false”). Our goal is to build Birdwatch in the open, and have it shaped by the Twitter community.

 

To that end, we’re also taking significant steps to make Birdwatch transparent:

 

All data contributed to Birdwatch will be publicly available and downloadable in TSV files

As we develop algorithms that power Birdwatch — such as reputation and consensus systems — we aim to publish that code publicly in the Birdwatch Guide. The initial ranking system for Birdwatch is already available here.

We hope this will enable experts, researchers, and the public to analyze or audit Birdwatch, identifying opportunities or flaws that can help us more quickly build an effective community-driven solution.
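Since the post above says all Birdwatch data will be published as downloadable TSV files and scored by a consensus system, the basic shape of that idea can be sketched roughly as follows (a minimal illustration only: the column names `note_id` and `helpful`, and the 2/3 threshold, are hypothetical stand-ins, not Birdwatch's actual schema or ranking algorithm):

```python
import csv
import io
from collections import defaultdict

# Hypothetical ratings export: one row per (note, rater) pair.
# The real Birdwatch TSVs have a richer schema; this is just the shape of the idea.
RATINGS_TSV = """note_id\thelpful
n1\t1
n1\t1
n1\t0
n2\t0
n2\t0
n2\t1
"""

def helpfulness_scores(tsv_text):
    """Fraction of raters who marked each note helpful."""
    helpful = defaultdict(int)
    total = defaultdict(int)
    for row in csv.DictReader(io.StringIO(tsv_text), delimiter="\t"):
        total[row["note_id"]] += 1
        helpful[row["note_id"]] += int(row["helpful"])
    return {nid: helpful[nid] / total[nid] for nid in total}

scores = helpfulness_scores(RATINGS_TSV)

# Only surface notes with broad agreement, e.g. at least 2/3 of raters.
visible = [nid for nid, s in scores.items() if s >= 2 / 3]
```

Because the data is public, anyone can run this kind of pass themselves, which is presumably what Twitter means by letting researchers "analyze or audit" the system.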

 

We want to invite anyone to sign up and participate in this program, and know that the broader and more diverse the group, the better Birdwatch will be at effectively addressing misinformation. More details on how to apply here.

 

What’s next

 

We know there are a number of challenges toward building a community-driven system like this — from making it resistant to manipulation attempts to ensuring it isn’t dominated by a simple majority or biased based on its distribution of contributors. We’ll be focused on these things throughout the pilot.

 

From embedding a member of the University of Chicago’s Center for RISC on our team to hosting feedback sessions with experts in a variety of disciplines, we’re also reaching beyond our virtual walls and integrating social science and academic perspectives into the development of Birdwatch.

 

We know this might be messy and have problems at times, but we believe this is a model worth trying. We invite you to learn alongside us as we continue to explore different ways of addressing a common problem. Follow @Birdwatch for the latest updates and to provide feedback on how we are doing.

 

@kcoleman

Keith Coleman

 


 

Vice President, Product

 


Anonymous ID: ffe5be Jan. 25, 2021, 2:07 p.m. No.12711273   🗄️.is 🔗kun   >>1307 >>1345 >>1403 >>1424 >>1453 >>1469 >>1517

>>12711193

>>12711198

>>12711214

 

please take note of the most recent retweet

 

"a big q"

 

it has a Q, but used as a lowercase q, as in question.

 

anons

WE know ZACTLY why this Birdwatch program is taking flight right now

 

https://twitter.com/DG_Rand/status/1353804344987164672

Keith Coleman 🌱😀🙌 Retweeted

David G. Rand

@DG_Rand

· 1h

This work of ours is relevant in light of Twitter's announcement today of BirdWatch, their exploration of have users help with fact-checking

 

Our results suggest this may be very promising! Although a big q is how well this will work when anyone can rate any piece of content

 

 

https://twitter.com/DG_Rand/status/1314212731826794499

 

David G. Rand

@DG_Rand

· Oct 8, 2020

🚨 Working paper alert! 🚨

"Scaling up fact-checking using the wisdom of crowds"

 

We find that 10 laypeople rating just headlines match performance of professional fact-checkers researching full articles- using set of URLs flagged by internal FB algorithm

 

https://psyarxiv.com/9qdza/
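The crowd-rating claim in the tweet above — that averaging a handful of lay ratings approaches expert judgment — rests on idiosyncratic rater errors cancelling out in the average. A toy model of that cancellation (entirely synthetic numbers, not the paper's data) might look like:

```python
# Toy "wisdom of crowds" illustration: ten raters whose individual
# offsets cancel on average recover the underlying accuracy score.
true_scores = {"headline_a": 0.9, "headline_b": 0.2}

# Fixed per-rater biases (summing to zero) stand in for idiosyncratic noise.
offsets = [0.20, -0.20, 0.10, -0.10, 0.15, -0.15, 0.05, -0.05, 0.25, -0.25]

def crowd_estimate(truth, offsets):
    """Average the individual ratings (truth + per-rater offset)."""
    ratings = [truth + o for o in offsets]
    return sum(ratings) / len(ratings)

for name, truth in true_scores.items():
    est = crowd_estimate(truth, offsets)
    worst_single = max(abs(o) for o in offsets)  # error of the worst lone rater
    # The pooled estimate beats the worst individual rater by a wide margin.
    assert abs(est - truth) < worst_single
```

Whether real raters' errors are this well-behaved — and what happens "when anyone can rate any piece of content" — is exactly the open question the tweet raises.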

 

 

 

https://twitter.com/DG_Rand/status/1314212733936529408

David G. Rand

@DG_Rand

· Oct 8, 2020

Replying to

@DG_Rand

Fact-checking could help fight misinformation online:

 

➤ Platforms can downrank flagged content so that fewer users see it

 

➤ Corrections can reduce false beliefs (forget backfires: e.g. https://link.springer.com/article/10.1007/s11109-018-9443-y by

@thomasjwood

 

@EthanVPorter

)

 

🚨 But there is a BIG problem! 🚨

 

https://link.springer.com/article/10.1007/s11109-018-9443-y

 

Original Paper

Published: 16 January 2018

The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence

Thomas Wood & Ethan Porter

Political Behavior, volume 41, pages 135–163 (2019)

 


 

Abstract

Anonymous ID: ffe5be Jan. 25, 2021, 2:10 p.m. No.12711307   🗄️.is 🔗kun   >>1345

>>12711193

>>12711198

>>12711214

 

>>12711273 continued:

 

Fact-checking could help fight misinformation online:

➤ Platforms can downrank flagged content so that fewer users see it

➤ Corrections can reduce false beliefs (forget backfires: e.g. https://link.springer.com/article/10.1007/s11109-018-9443-y by

 

https://link.springer.com/article/10.1007/s11109-018-9443-y

 

Abstract

Can citizens heed factual information, even when such information challenges their partisan and ideological attachments? The “backfire effect,” described by Nyhan and Reifler (Polit Behav 32(2):303–330. https://doi.org/10.1007/s11109-010-9112-2, 2010), says no: rather than simply ignoring factual information, presenting respondents with facts can compound their ignorance. In their study, conservatives presented with factual information about the absence of Weapons of Mass Destruction in Iraq became more convinced that such weapons had been found. The present paper presents results from five experiments in which we enrolled more than 10,100 subjects and tested 52 issues of potential backfire. Across all experiments, we found no corrections capable of triggering backfire, despite testing precisely the kinds of polarized issues where backfire should be expected. Evidence of factual backfire is far more tenuous than prior research suggests. By and large, citizens heed factual information, even when such information challenges their ideological commitments.

 


 

Notes

1.

Google News search for the “backfire”, “backlash”, or “boomerang” effect and the names of Nyhan or Reifler returns over 300 unique articles. The 2010 backfire paper has also enjoyed remarkable academic attention. Among all papers printed in Political Behavior in the last 10 years, “When Corrections Fail” has been cited four times as much as the next most cited paper.

 

2.

In this way, the apparent difficulty in making one’s policy preferences fit with one’s factual attitudes is redolent of Americans’ struggle to have their policy preferences fit with each other—what Converse (1964) famously described as poor “constraint.”

 

3.

To avoid the possibility of unintended panel conditioning, we excluded any Turk worker who had participated in a prior study.

 

4.

The choice of the OLS model, and the specific measures for agreement, ideology, and correction, were chosen to be consistent with Nyhan and Reifler (2010).

 

5.

This relationship persisted if we compare respondents along the partisan scale. This result is described in Sect. A.14.1 on p. xxvi.

 

6.

Of course, the attitudinal consequence of this fact remains at a respondent’s discretion, but functional democratic competence would seem to require that voters adopt a common set of basic political facts.

 

7.

Three articles were taken from study 3: the original Bush WMD article, the piece by Speaker Paul Ryan criticizing President Obama’s policy toward abortion, and Secretary Hillary Clinton’s claim that twice as many Americans were employed in solar than in the oil industry. Three novel mock articles were also provided: Senator Sanders claiming that the EPA had found fracking was responsible for polluting water supplies, Donald Trump claiming that his tax cut plan would grow federal tax receipts, and Trump claiming that the true unemployment rate was actually higher than 30%. These mock articles can be read in Sect. A.9, which can be found in the appendix on p. xvi. The items can be read in Table 11 on p. xxiii.

 

8.

For this study, President Obama, Secretary Clinton, and Senator Sanders are deemed liberal speakers, and President Trump and President Bush are deemed conservative speakers.

 

9.

For instance, the national representative panel who adopted the correction that the flu vaccines did not induce flu infections (Nyhan et al. 2015) or the national representative panel who accepted the correction that the MMR vaccines did not cause autism (Nyhan et al. 2014).

Anonymous ID: ffe5be Jan. 25, 2021, 2:13 p.m. No.12711345   🗄️.is 🔗kun   >>1403 >>1424 >>1453 >>1469

>>12711307 continued:

 

https://t.co/Jb5DvMSz8x?amp=1

The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence

https://link.springer.com/article/10.1007/s11109-018-9443-y

 

>>12711193

>>12711198

>>12711214

>>12711273

 

10.

Coppock and McClellan (2017) report an extensive test of the Lucid sample, comparing it to Turk, the Census Bureau’s Current Population Survey, and the American National Election Study’s (ANES) face-to-face and online samples. Treating the ANES face-to-face sample as the “gold standard”, the Lucid sample is more psychologically similar to the ANES than the Turk sample on the Big-5 personality battery, and better matches the political knowledge and conservatism in the ANES. Coppock and McClellan also test the Lucid sample’s ability to recover treatment effects in canonical social psychology experiments. Both Lucid and Turk samples recover the same framing effect observed in the General Social Survey (a massive face-to-face survey instrument), improving the appetite for public spending when it is described as “assistance to the poor” or “caring for the poor” rather than “welfare.” Both Lucid and Turk feature the same framing effect underpinning prospect theory (the famous Tversky and Kahneman (1983) finding which shows risk tolerance is affected by framing possible outcomes as gains or losses). Both Lucid and Turk recover indistinguishable experimental effects as observed in Hiscox (2006) in framing attitudes about free trade. Most importantly for this study, the one failed replication was on rumor corrections in the aforementioned Berinsky paper (2017), where Lucid respondents were unusually resistant to corrective information. This suggests that the Lucid sample is at least a comparably demanding sample in which to test factual adherence.

 

11.

In brief—a weak correction might inadvertently advertise the weakness of the corrective case, or a strong correction might have more obvious factual implications, and therefore inspire more forceful counterargument.

 

12.

These respondents were recruited on Mechanical Turk.

 

13.

As a robustness check—there was no significant relationship between ideology and perceived accordance, for any of the tested pairs.

 

14.

It’s instructive to consider those statement/correction pairs at either end of this spectrum. The statement by Senator Ted Cruz about the incidence of violence targeted at law enforcement, described above, was judged the most proximate correction. At the other end of this continuum is the 2012 claim by Congressman Paul Ryan that “Obama stands for an absolute, unqualified right to abortion—at any time, under any circumstances, and paid for by taxpayers” and the correction that “The number of abortions steadily declined during President Obama’s first term, with fewer abortions in 2012 than any year since 1973.” While Cruz makes a precise claim about the change in the incidence of killings of police officers, Ryan’s statements merely suggested a spike in the incidence of abortion.

 

15.

An example of a proximate correction is Representative Gutiérrez’s promise that President Obama would be the “champion…[of the] undocumented” paired with the evidence that Obama was a prodigious deporter of these residents. This correction/statement pair was scored 81.7 on a 100-pt scale of accordance.

 

16.

An example of a distant correction is Governor Romney’s description of the United States using “a credit card …issued by the Bank of China” and the correction that China holds about 15% of US debt. This correction/statement pair was scored 47.2.

 

17.

Contra our evidence in “Does Counterargument Explain Our Pattern of Findings?” section.

References

Barnes, L., Feller, A., Haselswerdt, J., & Porter, E. (2016). Information and preferences over redistributive policy: A field experiment. Working Paper.

 

Berinsky, A. J. (2017). Rumors and health care reform: Experiments in political misinformation. British Journal of Political Science, 47(2), 241–262. https://doi.org/10.1017/S0007123415000186.

 


 

Brock, T. C. (1967). Communication discrepancy and intent to persuade as determinants of counterargument production. Journal of Experimental Social Psychology, 3(3), 296–309. https://doi.org/10.1016/0022-1031(67)90031-5.

Anonymous ID: ffe5be Jan. 25, 2021, 2:18 p.m. No.12711403   🗄️.is 🔗kun   >>1424 >>1453 >>1469

who the fuck are all these GOOGLE SCHOLARS that they all talk back and forth via scholar articles about US

 

>>12711345

>>12711345

continued: https://link.springer.com/article/10.1007/s11109-018-9443-y

 

>>12711193

>>12711198

>>12711214

>>12711273

 


Bullock, J. G., Gerber, A. S., Hill, S. J., & Huber, G. A. (2013). Partisan bias in factual beliefs about politics. Working Paper, March 2013, pp. 1–73. https://doi.org/10.1561/100.00014074.

Cacioppo, J. T., Petty, R. E., & Morris, K. J. (1983). Effects of need for cognition on message evaluation, recall, and persuasion. Journal of Personality and Social Psychology, 45(4), 805–818. https://doi.org/10.1037/0022-3514.45.4.805.

 


Campbell, A., Converse, P. E., Miller, W. E., & Stokes, D. E. (1960). The American Voter. New York: Wiley.

 


Carmines, E., & Stimson, J. (1980). The two faces of issue voting. American Political Science Review, 74(1), 78–91. https://doi.org/10.2307/1955648.

 


Chong, D., & Druckman, J. (2013). Counterframing effects. The Journal of Politics, 75(1), 1–16. Retrieved from http://www.journals.uchicago.edu/doi/abs/10.1017/S0022381612000837.

Cochran, W. G., & Cox, G. M. (1957). Experimental designs. New York: Wiley.

 


Conover, P. J., & Feldman, S. (1981). The origins and meaning of liberal/conservative self-identifications. American Journal of Political Science, 25(4), 617. https://doi.org/10.2307/2110756.

 


Converse, P. E. (2006). The nature of belief systems in mass publics (1964). Critical Review: A Journal of Politics and Society, 18(1–3), 1–74. https://doi.org/10.1080/08913810608443650.

 


Coppock, A., & McClellan, O. A. (2017). Validating the demographic, political, psychological, and experimental results obtained from a new source of online survey respondents. Retrieved from http://alexandercoppock.com/papers/CM_lucid.pdf.

 

Fishkin, J. S., & Luskin, R. C. (2005). Experimenting with a democratic ideal: Deliberative polling and public opinion. Acta Politica, 40, 284–298. https://doi.org/10.1057/palgrave.ap.5500121.

 


Fowler, A., & Montagnes, B. P. (2015). College football, elections, and false-positive results in observational research. Proceedings of the National Academy of Sciences, 112(45), 13800–13804.

 


Gerber, A., & Green, D. (1999). Misperceptions about perceptual bias. Annual Review of Political Science, 2, 189–210. https://doi.org/10.1146/annurev.polisci.2.1.189.

 


 

Gollust, S. E., Lantz, P. M., & Ubel, P. A. (2009). The polarizing effect of News Media messages about the social determinants of health. Public Health, 99, 2160–2167. https://doi.org/10.2105/AJPH.2009.161414.

 


 

Healy, A., Malhotra, N., & Mo, C. (2010). Irrelevant events affect voters’ evaluations of government performance. Proceedings of the National Academy of Sciences, 107(29), 12804–12809. Retrieved from http://www.pnas.org/content/107/29/12804.short.

 

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The Weirdest people in the world?’. The Behavioral and Brain Sciences, 33(2–3), 61–135. https://doi.org/10.1017/S0140525X0999152X.

 


Hiscox, M. J. (2006). Through a glass and darkly: Attitudes toward international trade and the curious effects of issue framing. International Organization. Cambridge University Press. http://doi.org/10.1017/S0020818306060255.

 

Howell, W. G., & West, M. R. (2009). Educating the public. Education Next, 9(3), 40–47.

Anonymous ID: ffe5be Jan. 25, 2021, 2:20 p.m. No.12711424   🗄️.is 🔗kun   >>1453 >>1469

>>12711403

 

>>12711345

>>12711345

continued: https://link.springer.com/article/10.1007/s11109-018-9443-y

 

>>12711193

>>12711198

>>12711214

>>12711273

 

continued:

who the fuck are all these GOOGLE SCHOLARS that they all talk back and forth via scholar articles about US

 


Jost, J. T., Nosek, B. A., & Gosling, S. D. (2008). Ideology: Its resurgence in social, personality, and political psychology. Perspectives on Psychological Science, 3(2), 126–136. https://doi.org/10.1111/j.1745-6916.2008.00070.x.

 

Kahan, D. M., Peters, E., Dawson, E. C., & Slovic, P. (2017). Motivated numeracy and enlightened self-government. Behavioural Public Policy, 1(01), 54–86. https://doi.org/10.1017/bpp.2016.2.

 


 

Kaplan, J. T., Gimbel, S. I., & Harris, S. (2016). Neural correlates of maintaining one’s political beliefs in the face of counterevidence. Scientific Reports, 6(1), 39589. https://doi.org/10.1038/srep39589.

 


Kuklinski, J. H., & Quirk, P. J. (2000). Reconsidering the rational public: Cognition, heuristics, and mass opinion. In Elements of reason: Cognition, choice and the bounds of rationality (pp. 153–182).

 

Lippman, W. (1922). Public opinion. New York: Harcourt, Brace and Company.


Lord, C., Ross, L., & Lepper, M. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social. Retrieved from http://psycnet.apa.org/journals/psp/37/11/2098/.

 

Mondak, J. J. (1993). Public opinion and heuristic processing of source cues. Political Behavior, 15(2), 167–192. Retrieved from http://www.jstor.org/stable/586448.

Mondak, J. J. (1994). Policy legitimacy and the Supreme Court: The sources and contexts of legitimation. Political Research Quarterly, 47(3), 675. https://doi.org/10.2307/448848.

 


Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330. https://doi.org/10.1007/s11109-010-9112-2.

 


Nyhan, B., & Reifler, J. (2017). Answering on cue? How corrective information can produce social desirability bias when racial differences are salient. Retrieved from http://www.dartmouth.edu/~nyhan/obama-muslim.pdf.

Nyhan, B., Reifler, J., Edelman, C., Passo, W., Banks, A., Boston, E., et al. (2015). Answering on cue?. Hanover, NH: Dartmouth College.

 


Nyhan, B., Reifler, J., Richey, S., & Freed, G. L. (2014). Effective messages in vaccine promotion: A randomized trial. Pediatrics, 133(4), e835–e842. https://doi.org/10.1542/peds.2013-2365.


Nyhan, B., Reifler, J., & Ubel, P. A. (2013). The hazards of correcting myths about health care reform. Medical Care, 51(2), 127–132. https://doi.org/10.1097/MLR.0b013e318279486b.

Anonymous ID: ffe5be Jan. 25, 2021, 2:22 p.m. No.12711453   🗄️.is 🔗kun   >>1469

>>12711424

>>12711403 (You)

 

>>12711345 (You)

 


continued: https://link.springer.com/article/10.1007/s11109-018-9443-y

>>12711193

>>12711198

>>12711214

>>12711273

 

continued batch of GOOGLE SCHOLARS

 

fuckfaces and their writings

 


Prior, M. (2007). Is partisan bias in perceptions of objective conditions real? The effect of an accuracy incentive on the stated beliefs of partisans. In Annual conference of the Midwestern Political Science Association.

Prior, M., Sood, G., & Khanna, K. (2015). The impact of accuracy incentives on partisan bias in reports of economic perceptions. Quarterly Journal of Political Science, 10, 489–518. https://doi.org/10.1561/100.00014127.

 


 

Redlawsk, D. P. (2002). Hot cognition or cool consideration? Testing the effects of motivated reasoning on political decision making. The Journal of Politics, 64(4), 1021–1044. https://doi.org/10.1111/1468-2508.00161.

 


 

Sanna, L. J., Schwarz, N., & Stocker, S. (2002). When debiasing backfires: Accessible content and accessibility experiences in debiasing hindsight. Journal of Experimental Psychology: Learning Memory and Cognition, 28(3), 497–502. https://doi.org/10.1037//0278-7393.28.3.497

 

Schaffner, B. F., & Roche, C. (2017). Misinformation and motivated reasoning: Responses to economic news in a politicized environment. Public Opinion Quarterly. https://doi.org/10.1093/poq/nfw043.

 


Skurnik, I., Yoon, C., Park, D. C., & Schwarz, N. (2005). How warnings about false claims become recommendations. Journal of Consumer Research, 31(4), 713–724. https://doi.org/10.1086/426605.

 


Sniderman, P. M., Brody, R. A., & Tetlock, P. E. (1993). Reasoning and choice: Explorations in political psychology. Cambridge University Press.

 


Stroud, N. J. (2008). Media use and political predispositions: Revisiting the concept of selective exposure. Political Behavior, 30, 341–366. https://doi.org/10.2307/40213321.

 


Swire, B., Berinsky, A. J., Lewandowsky, S., & Ecker, U. K. H. (2017). Processing political misinformation: Comprehending the Trump phenomenon. Royal Society Open Science, 4(3), 160802. https://doi.org/10.1098/rsos.160802.

 


Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769. https://doi.org/10.1111/j.1540-5907.2006.00214.x.

 


Thorson, E. (2015). Belief echoes: The persistent effects of corrected misinformation. Political Communication, 46(9), 1–21. https://doi.org/10.1080/10584609.2015.1102187.

 


Trevors, G. J., Muis, K. R., Pekrun, R., Sinatra, G. M., & Winne, P. H. (2016). Identity and epistemic emotions during knowledge revision: A potential account for the backfire effect. Discourse Processes. https://doi.org/10.1080/0163853X.2015.1136507.

 


 

Tversky, A., & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90(4), 293–315. http://psycnet.apa.org/journals/rev/90/4/293/.

 

Wood, T., & Oliver, E. (2012). Toward a more reliable implementation of ideology in measures of public opinion. Public Opinion Quarterly, 76(4), 636–662. https://doi.org/10.1093/poq/nfs045.

 


Zaller, J. R. (1992). The nature and origins of mass opinion. New York: Cambridge University Press.


Acknowledgements

The authors would like to thank Leticia Bode, John Brehm, DJ Flynn, Jim Gimpel, Don Green, Will Howell, David Kirby, Michael Neblo, Brendan Nyhan, Gaurav Sood, and the participants at the Center for Strategic Initiatives workshop. Research support was generously furnished by the Cato Institute, and we owe a special debt of gratitude to Emily Ekins and David Kirby. All remaining errors are the responsibility of the authors.

 

Author information

Affiliations

The Ohio State University, Derby Hall 154 N Oval Mall, Columbus, OH, 43212, USA

Thomas Wood

School of Media and Public Affairs, The George Washington University, 805 21st Street NW, Washington, DC, 20052, USA

 

Ethan Porter

Corresponding author

Correspondence to Thomas Wood.

Anonymous ID: ffe5be Jan. 25, 2021, 2:23 p.m. No.12711469   🗄️.is 🔗kun

>>12711453

 

final bit

 

>>12711424

>>12711403

>>12711345

continued: https://link.springer.com/article/10.1007/s11109-018-9443-y

>>12711193

>>12711198

>>12711214

>>12711273

 

 

Additional information

All figures and tables in this paper can be replicated with the syntax available at the Political Behavior dataverse: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/AGRX5U

 

Electronic supplementary material

Below is the link to the electronic supplementary material.

 

Supplementary material 1 (pdf 3926 KB)


 

About this article


Cite this article

Wood, T., Porter, E. The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence. Polit Behav 41, 135–163 (2019). https://doi.org/10.1007/s11109-018-9443-y

 


Published

16 January 2018

Issue Date

15 March 2019

DOI

https://doi.org/10.1007/s11109-018-9443-y

 

Keywords

Backfire effect

Factual correction

Misinformation

Factual information

Anonymous ID: ffe5be Jan. 25, 2021, 2:26 p.m. No.12711503   🗄️.is 🔗kun   >>1517

>>12711193

 

>please take note of the most recent retweet by Keith Coleman

 

the retweet has a q

 

Keith Coleman 🌱😀🙌

@kcoleman

VP Product @twitter. Previously CEO at Yes Inc, @google 🙌

San Francisco, CA · Joined March 2007

 

>"a big q"

 

>it has a Q, but used as a lowercase q, as in question.

 

anons

 

WE know ZACTLY why this Birdwatch program is taking flight right now

 

it is to silence Q people, who are millions of people of all faces and colors, not just whites, as they are trying so very hard to paint Q

Anonymous ID: ffe5be Jan. 25, 2021, 2:27 p.m. No.12711517   🗄️.is 🔗kun

>>12711503

 

>>12711273

 

>https://twitter.com/DG_Rand/status/1353804344987164672

 

>Keith Coleman 🌱😀🙌 Retweeted

 

>David G. Rand

 

>@DG_Rand

 

 

>1h

 

>This work of ours is relevant in light of Twitter's announcement today of BirdWatch, their exploration of have users help with fact-checking

 

>Our results suggest this may be very promising! Although a big q is how well this will work when anyone can rate any piece of content

 

>https://twitter.com/DG_Rand/status/1314212731826794499