The Biden administration is funding next-generation censorship technology, pouring millions of dollars into start-up AI censorship projects.
Its targets are Covid critics and political conservatives. Full report here:
https://www.foundationforfreedomonline.com/?page_id=875
https://twitter.com/MikeBenzCyber/status/1620105489941151749
The National Science Foundation’s “Convergence Accelerator Track F” Is Funding Domestic Censorship Superweapons
Jan 29 2023, by Mike Benz
Summary
The US government is giving millions of dollars to university labs and private firms to suppress the opinions of American citizens on social media.
The National Science Foundation is repurposing a program set up to solve “grand challenges” like quantum technology for the science of censorship.
Government-funded projects are sorting massive databases of American political and social communities into categories like “misinformation tweeters” and “misinformation followers.”
Before we describe the “Science of Censorship” federal grant program that is the focus of this report, perhaps the most disturbing introduction is to watch how the recipients of your tax dollars describe themselves.
In the promo video below, National Science Foundation (NSF) grant project WiseDex explains how the federal government is funding it to provide social media platforms with “fast, comprehensive and consistent” censorship solutions. WiseDex builds sprawling databases of banned keywords and factual claims to sell to companies like Facebook, YouTube, and Twitter. It then integrates these banned-claims databases into censorship algorithms, so that “harmful misinformation stops reaching big audiences.”
Transcript:
Posts that go viral on social media can reach millions of people. Unfortunately, some posts are misleading. Social media platforms have policies about harmful misinformation. For example, Twitter has a policy against posts that say authorized COVID vaccines will make you sick.
When something is mildly harmful, platforms attach warnings like this one that points readers to better information. Really bad things, they remove. But before they can enforce, platforms have to identify the bad stuff, and they miss some of it. Actually, they miss a lot, especially when the posts aren’t in English. To understand why, let’s consider how platforms usually identify bad posts. There are too many posts for a platform to review everything, so first a platform flags a small fraction for review. Next, human reviewers act as judges, determining which flagged posts violate policy guidelines. If the policies are too abstract, both steps, flagging and judging, can be difficult.
WiseDex helps by translating abstract policy guidelines into specific claims that are more actionable; for example, the misleading claim that the COVID-19 vaccine suppresses a person’s immune response. Each claim includes keywords associated with the claim in multiple languages. For example, a Twitter search for “negative efficacy” yields tweets that promote the misleading claim. A search on “eficacia negativa” yields Spanish tweets promoting that same claim. The trust and safety team at a platform can use those keywords to automatically flag matching posts for human review. WiseDex harnesses the wisdom of crowds as well as AI techniques to select keywords for each claim and provide other information in the claim profile.
For human reviewers, a WiseDex browser plug-in identifies misinformation claims that might match the post. The reviewer then decides which matches are correct, a much easier task than deciding if posts violate abstract policies. Reviewer efficiency goes up and so does the consistency of their judgments. The WiseDex claim index will be a valuable shared resource for the industry. Multiple trust and safety vendors can use it as a basis for value-added services. Public interest groups can also use it to audit platforms and hold them accountable. WiseDex enables fast, comprehensive, and consistent enforcement around the world, so that harmful misinformation stops reaching big audiences.
As WiseDex’s promo video and website illustrate, the project takes tech companies’ Terms of Service speech prohibitions as a starting point – such as bans on claims that Covid-19 vaccines are ineffective – and then constructs large strings of keywords, tags, and “matching posts” to be automatically flagged by tech platforms’ algorithms:
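To make the mechanism described above concrete, here is a minimal sketch of what keyword-based claim matching of this kind looks like in practice: claims are tied to keyword lists in multiple languages, and any post containing a matching keyword is queued for human review. All class and function names here are hypothetical illustrations based on the transcript, not WiseDex’s actual interface:

```python
# Minimal sketch of keyword-based claim flagging, as described in the
# WiseDex promo transcript. All names here are hypothetical illustrations,
# not the project's actual API.

from dataclasses import dataclass, field


@dataclass
class Claim:
    """A specific, actionable claim with associated keywords per language."""
    claim_id: str
    description: str
    # Maps a language code (e.g. "en", "es") to its keyword list.
    keywords: dict = field(default_factory=dict)


def flag_posts(posts, claims):
    """Return (post, claim_id) pairs queued for human review.

    A post is flagged when it contains any keyword tied to a claim,
    in any language; a human reviewer then decides whether the match
    actually violates policy.
    """
    flagged = []
    for post in posts:
        text = post.lower()
        for claim in claims:
            if any(kw.lower() in text
                   for kws in claim.keywords.values()
                   for kw in kws):
                flagged.append((post, claim.claim_id))
                break  # one matching claim is enough to queue the post
    return flagged


# Example using the keywords cited in the transcript:
claims = [Claim(
    claim_id="vax-efficacy",
    description="COVID-19 vaccine suppresses immune response",
    keywords={"en": ["negative efficacy"], "es": ["eficacia negativa"]},
)]
posts = [
    "New study shows negative efficacy after six months",
    "Estudio: eficacia negativa de la vacuna",
    "Lovely weather today",
]
print(flag_posts(posts, claims))
# flags the first two posts (English and Spanish matches), not the third
```

Note the division of labor the transcript describes: the automated keyword match only *flags* candidates, while the judgment call is deferred to a human reviewer working from the claim profile.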
continued…