Anonymous ID: 2a4f34 Feb. 21, 2019, 11:31 p.m. No.5321363   >>1466

Q

 

When I put into Google search:

 

"qanon"

 

Sometimes it shows 4,000,000 results, but sometimes 9,000,000 results.

 

It looks like they are deleting a lot, and very fast.

 

Also, when I try to find:

 

#qanon

 

on twitter, there are also many deleted tweets, and it is obvious that they are deleting a huge amount of stuff related to Q.

 

Do you have any idea how many QAnon searches and tweets, and basically anything related to Q, have been deleted?

 

And how fast?

Seconds?

Anonymous ID: 2a4f34 Feb. 21, 2019, 11:53 p.m. No.5321551

QANON IS TRYING TO TRICK FACEBOOK’S MEME-READING AI

 

Spammers, hackers, political propagandists, and other nefarious users have always tried to game the systems that social media sites put in place to protect their platforms. It’s a never-ending battle; as companies like Twitter and Facebook become more sophisticated, so do the trolls. And so last week, after Facebook shared new details about a tool it built to analyze text found in images like memes, some people began brainstorming how to thwart it.

 

Social media companies are under tremendous pressure from lawmakers, journalists, and users to be more transparent about how they decide what content should be removed and how their algorithms work, especially after they’ve made a number of high-profile mistakes. While many companies are now more forthcoming, they’ve also been reluctant to reveal too much about their systems because, they say, ill-intentioned actors will use the information to game them.

 

Propagators of the false right-wing conspiracy theory QAnon took interest after “Q”—the anonymous leader who regularly posts nonsensical “clues” for followers—linked to several news articles about the tool, including WIRED’s. Rosetta works by detecting the words in an image and then feeding them through a neural network that parses what they say. The QAnon conspiracy theorists created memes and videos with deliberately obscured fonts, wonky text, or backwards writing, which they believe might trick Rosetta or disrupt this process. Many of the altered memes were first spotted on 8chan by Shoshana Wodinsky, an intern at NBC News.
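The two-stage design described above (first find the text, then read it) can be sketched in a few lines. This is a hypothetical toy stand-in, not Facebook's actual Rosetta code: `detect_text_regions` and `recognize_text` are invented names, and the toy "image" is just a dict, where a real system would run detection and recognition networks over pixels.

```python
# Toy sketch of a two-stage OCR pipeline like the one described:
# stage 1 locates text regions, stage 2 reads each region.
# All names and data structures here are illustrative assumptions.

def detect_text_regions(image):
    # Stage 1: locate bounding boxes that appear to contain text.
    # A real system uses a detection network; this toy version just
    # returns the regions stored in the fake image dict.
    return image.get("regions", [])

def recognize_text(region):
    # Stage 2: a recognition network would turn the cropped region
    # into a character sequence; the toy version reads it directly.
    return region.get("text", "")

def ocr_pipeline(image):
    return [recognize_text(r) for r in detect_text_regions(image)]

meme = {"regions": [{"box": (0, 0, 100, 20), "text": "some meme text"}]}
print(ocr_pipeline(meme))  # -> ['some meme text']
```

Obscured fonts and backwards writing attack stage 2 (the text is found but misread), while heavy distortion can attack stage 1 (the text is never found at all).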

 

It's not clear whether any of these tactics will work (or how seriously they have even been tested), but it's not hard to imagine that other groups will keep trying to get around Facebook. It’s also incredibly difficult to build a machine-learning system that’s foolproof. Automated tools like Rosetta might get tripped up by wonky text or hard-to-read fonts. A group of researchers from the University of Toronto’s Citizen Lab found that the image-recognition algorithms used by WeChat—the most popular social network in China—could be tricked by changing a photo’s properties, like the coloring or way it was oriented. Because the system couldn’t detect that text was present in the image, it couldn’t process what it said.
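The Citizen Lab finding about WeChat can be illustrated with a deliberately fragile detector: one that only looks for dark text on a light background will miss the very same image once its colors are inverted. This is a toy illustration of the failure mode, not WeChat's real pipeline.

```python
# Toy illustration of evading a brittle text detector by changing a
# photo's properties (here, inverting its colors). Pixels are modeled
# as a flat list of grayscale values, 0 (black) to 255 (white).

def naive_text_detector(pixels):
    # "Detects text" only if there are dark glyphs (< 50) on a mostly
    # light background (mean > 128) -- a deliberately fragile rule.
    mean = sum(pixels) / len(pixels)
    return mean > 128 and any(p < 50 for p in pixels)

original = [255] * 90 + [0] * 10        # light background, dark glyphs
inverted = [255 - p for p in original]  # same content, colors flipped

print(naive_text_detector(original))  # True: text found
print(naive_text_detector(inverted))  # False: detector evaded
```

Because the detector never flags the inverted image as containing text, the recognition stage never runs, exactly the failure the researchers described.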

 

It’s hard to create ironclad content-moderation systems in part because it’s difficult to map out what they should accomplish in the first place. Anish Athalye, a PhD student at MIT who has studied attacks against AI, says it’s difficult to account for every type of behavior a system should protect against, or even how that behavior manifests itself. Fake accounts might behave like real ones, and denouncing hate speech can look like hate speech itself. It’s not just the challenge of making the AI work, Athalye says. “We don't even know what the specification is. We don't even know the definition of what we're trying to build."

 

When researchers do discover their tools are susceptible to a specific kind of attack, they can recalibrate their systems to account for it, but that doesn’t entirely solve the problem.

 

“The most common approach to correct these mistakes is to enlarge the training set and train the model again,” says Carl Vondrick, a computer science professor at Columbia University who studies machine learning and vision. “This could take between a few minutes or a few weeks to do. However, this will likely create an arms race where one group is trying to fix the model and the other group is trying to fool it.”
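The retrain-and-evade arms race Vondrick describes can be sketched as a loop: collect the examples the model gets wrong, enlarge the training set with them, and train again. The "model" below is a trivial phrase matcher standing in for a real classifier; everything here is an illustrative assumption.

```python
# Minimal sketch of the enlarge-and-retrain loop: a toy "model" that
# memorizes phrases labeled bad, fooled by an obfuscated variant,
# then retrained on the example it missed.

def train(examples):
    # "Model" = the set of phrases labeled bad in the training data.
    return {text for text, label in examples if label == "bad"}

def predict(model, text):
    return "bad" if text in model else "ok"

training_set = [("banned phrase", "bad")]
model = train(training_set)

# Attackers submit an obfuscated variant the current model misses:
adversarial = [("b4nned phr4se", "bad")]
misses = [(t, y) for t, y in adversarial if predict(model, t) != y]

# Enlarge the training set with the misses and train again:
training_set += misses
model = train(training_set)
print(predict(model, "b4nned phr4se"))  # now "bad"
```

The loop never terminates in practice: each retrained model invites a new round of obfuscation, which is the arms race in miniature.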

 

Another challenge for platforms is deciding how transparent to be about how their algorithms work. Often when users, journalists, or government officials have asked social media companies to reveal their moderation practices, platforms have argued that disclosing their tactics will embolden bad actors who want to game the system. The situation with Rosetta appears like good evidence for their argument: Before details of the tool were made public, conspiracy theorists ostensibly weren’t trying to get around it.

 

Today many popular platforms, such as Instagram and Snapchat, are dominated by images and videos. Memes in particular have also become a prominent vehicle for spreading political messages. In order to keep itself free of things like hate speech, violent threats, and fake news, Facebook needed to find a way to comprehensively process all of that visual data uploaded to its sites each day. And bad actors will continue to search for new ways to outsmart those systems.

 

https://www.wired.com/story/qanon-conspiracy-facebook-meme-ai/

Anonymous ID: 2a4f34 Feb. 22, 2019, 12:03 a.m. No.5321614   >>1636

RUMOUR without SOURCE

 

Tonight Or Tomorrow A Billionaire NFL Owner Will Be Tied To Human Trafficking In FL

 

A ring was just brought down in Florida, and based on good intel I’m certain Bob Kraft will be connected to it.

Anonymous ID: 2a4f34 Feb. 22, 2019, 12:10 a.m. No.5321655

>>5321636

 

https://eu.tcpalm.com/story/news/crime/indian-river-county/2019/02/21/human-trafficking-florida-massage-parlors-vero-beach-sebastian/2920354002/

 

 

https://deadspin.com/whats-up-with-the-nfl-questions-at-this-sex-trafficking-1832805259

Anonymous ID: 2a4f34 Feb. 22, 2019, 12:40 a.m. No.5321879   >>1904

On Wednesday morning, Adam Schiff, the powerful chair of the House intelligence committee, joined journalists around the world in a nascent Twitter meme: he searched “vaccine” on Facebook and posted a screenshot of the results.

 

Schiff’s search results were indeed alarming: autofill suggestions for phrases such as “vaccination re-education discussion forum”, a group called “Parents Against Vaccination”, and the page for the National Vaccine Information Center, an official-sounding organization that promotes anti-vaccine propaganda. And while search results on Facebook are personalized to each user, a recent Guardian report found similarly biased results for a brand new account.

 

If the congressman had tried to search “vaccines” on the rival social media site Pinterest, however, he would have had little more to screenshot than a blank white screen. Recognizing that search results for a number of terms related to vaccines were broken, Pinterest responded by “breaking” its own search tool.

 

As pressure mounts on Facebook to explain its role in promoting anti-vaccine misinformation, Pinterest offers an example of a dramatically different approach to managing health misinformation on social media.

 

“We’re a place where people come to find inspiration, and there is nothing inspiring about harmful content,” said Ifeoma Ozoma, a public policy and social impact manager at Pinterest. “Our view on this is we’re not the platform for that.”

 

This hasn’t always been the case. Pinterest, the visual social network, faced scrutiny in 2016 after a scientific study found that 75% of posts related to vaccines were negative. The next year, Pinterest updated its “community guidelines” to explicitly ban “promotion of false cures for terminal or chronic illnesses and anti-vaccination advice” under a broader policy against misinformation that “has immediate and detrimental effects on a pinner’s health or on public safety”.

 

The policy change cleared the way for Pinterest to deploy a number of technological approaches to combating anti-vaxx propaganda. The company has banned boards by a number of prominent anti-vaccine propagandists, including the National Vaccine Information Center and Larry Cook, who runs the website and Facebook group “Stop Mandatory Vaccination”.

 

But retroactive enforcement of content rules is just one aspect of the company’s approach.

 

Take search results. The phenomenon on display in the Facebook search result screenshots is known in technology circles as a “data void”, after a paper by the Data & Society founder and researcher danah boyd. For certain search terms, boyd explains, “the available relevant data is limited, non-existent, or deeply problematic”.

 

In the case of vaccines, the fact that scientists and doctors are not producing a steady stream of new digital content about settled science has left a void for conspiracy theorists and fraudsters to fill with fear-mongering propaganda and misinformation.

 

Data voids may be relatively easy to diagnose, but they are very difficult to fix.

 

“Addressing data voids cannot be achieved by removing problematic content, not only because removal might go against the goals of search engines but also because doing so would not be effective,” boyd wrote. “Without high-quality content to replace removed content, new malicious content can easily surface.”

 

Pinterest has responded by building a “blacklist” of “polluted” search terms.

“We are doing our best to remove bad content, but we know that there is bad content that we haven’t gotten to yet,” Ozoma explained. “We don’t want to surface that with search terms like ‘cancer cure’ or ‘suicide’. We’re hoping that we can move from breaking the site to surfacing only good content. Until then, this is preferable.”
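The "blacklist of polluted terms" approach amounts to intercepting queries before they ever hit the index. A minimal sketch, assuming a simple substring match against an illustrative term list (Pinterest's actual matching rules are not public):

```python
# Sketch of search-term blocking: queries matching a blocklist of
# "polluted" terms return an empty result set instead of whatever
# low-quality content the index holds. Terms and matching rule are
# illustrative assumptions.

POLLUTED_TERMS = {"cancer cure", "suicide", "vaccine"}

def search(query, index):
    q = query.lower().strip()
    if any(term in q for term in POLLUTED_TERMS):
        return []  # "break" the search: blank results page
    return [pin for pin in index if q in pin.lower()]

index = ["Miracle cancer cure tea", "Knitting patterns"]
print(search("cancer cure", index))  # [] -- blocked
print(search("knitting", index))     # ['Knitting patterns']
```

The trade-off is visible in the code: blocking the query hides good content along with bad, which is why Ozoma frames it as a stopgap until only good content can be surfaced.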

 

Pinterest also includes health misinformation images in its “hash bank”, preventing users from re-pinning anti-vaxx memes that have already been reported and taken down. (Hashing is a technology that applies a unique digital identifier to images and videos; it has been more widely used to prevent the spread of child abuse images and terrorist content.)
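The hash-bank idea can be sketched with a few lines of stdlib Python. One honest caveat: production systems use perceptual hashes, which survive re-encoding and minor edits; the exact-match SHA-256 digest below is a simplified stand-in, and all function names are invented for illustration.

```python
import hashlib

# Sketch of a "hash bank": store digests of images already taken
# down, and reject uploads whose digest matches. SHA-256 gives only
# exact-match blocking; real systems use perceptual hashing so that
# re-encoded or lightly edited copies still match.

hash_bank = set()

def fingerprint(image_bytes):
    return hashlib.sha256(image_bytes).hexdigest()

def take_down(image_bytes):
    # Called when a moderator removes an image after a report.
    hash_bank.add(fingerprint(image_bytes))

def allow_upload(image_bytes):
    return fingerprint(image_bytes) not in hash_bank

meme = b"bytes of a reported meme image"
take_down(meme)
print(allow_upload(meme))        # False: blocked on re-pin
print(allow_upload(b"cat pic"))  # True: unknown image passes
```

Storing digests rather than the images themselves is also why the same mechanism works for child-abuse and terrorist content: platforms can share the bank without sharing the material.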

 

And the company has banned all pins from certain websites.

 

“If there’s a website that is dedicated in whole to spreading health misinformation, we don’t want that on our platform, so we can block on the URL level,” Ozoma said.

 

Users simply cannot “pin” a link to StopMandatoryVaccinations.com or the “alternative health” sites Mercola.com, HealthNutNews.com or GreenMedInfo.com; if they try, they receive an error message stating: “Invalid parameters.”
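Blocking "on the URL level" means checking the link's host against a domain blocklist before the pin is saved. A minimal sketch, assuming a flat blocklist and a plain `ValueError` carrying the error text users reportedly see (Pinterest's real internals are unknown):

```python
from urllib.parse import urlparse

# Sketch of URL-level blocking: pins whose domain is on the
# blocklist are rejected before saving. Domains and the error
# string mirror the article; the rest is an illustrative assumption.

BLOCKED_DOMAINS = {"mercola.com", "healthnutnews.com"}

def pin(url):
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]  # normalize so www.mercola.com also matches
    if host in BLOCKED_DOMAINS:
        raise ValueError("Invalid parameters.")
    return f"pinned: {url}"

print(pin("https://example.com/recipe"))
try:
    pin("https://www.mercola.com/article")
except ValueError as e:
    print(e)  # Invalid parameters.
```

Checking the domain rather than the full URL is what makes the ban site-wide, and it is also why cross-posts from huge mixed-content hosts like Facebook or YouTube can't be handled the same way.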

 

Ozoma said that one challenge was the amount of anti-vaxx material that gets cross-posted from what she describes as “large social media platforms and large video platforms”, rather than from independent websites, because Pinterest doesn’t want to cut off all cross-posting of content hosted by Facebook or YouTube.

Anonymous ID: 2a4f34 Feb. 22, 2019, 1:11 a.m. No.5322147

>>5322107

 

Calm your ass down.

 

No Patriot is a homophobe, racist, etc.

 

Patriots just don’t like it when something is pushed too aggressively.

 

If you want to talk, we will talk.

 

If you want to scream, then fuck yourself. /not you/