Anonymous ID: 77be0b Sept. 16, 2018, 10:26 a.m. No.3046182   🗄️.is 🔗kun   >>6192 >>6256

>>3046165

 

Facebook announced Thursday that it would use a machine-learning model to identify potentially inaccurate photo and video content and then send those items to human fact-checkers for review. "Many of our third-party fact-checking partners have expertise evaluating photos and videos and are trained in visual verification techniques, such as reverse image searching and analyzing image metadata, like when and where the photo or video was taken," the company said. Facebook is looking for content that is manipulated, fabricated, or out of context, among other things.

<FTFA
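For anyone who wants to see what "analyzing image metadata, like when and where the photo or video was taken" looks like in practice, here is a minimal sketch in Python using the Pillow library. This is not Facebook's tooling, and the file name is a placeholder; it just pulls the two EXIF fields visual-verification workflows lean on most, capture time and GPS position.

# Minimal EXIF check of the kind a fact-checker might run. Illustrative only:
# Pillow stand-in, placeholder file name, not Facebook's pipeline.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def read_capture_metadata(path):
    """Return the capture time and raw GPS info embedded in a photo, if any."""
    exif = Image.open(path)._getexif() or {}
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    gps_raw = named.get("GPSInfo") or {}
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}
    return {
        "captured_at": named.get("DateTimeOriginal"),  # when the photo was taken
        "gps": gps,                                    # where (degree/minute/second tuples)
    }

if __name__ == "__main__":
    print(read_capture_metadata("suspect_photo.jpg"))

A missing or inconsistent capture time proves nothing by itself, but it is one of the signals that gets combined with reverse image searching and other checks.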

Anonymous ID: 77be0b Sept. 16, 2018, 10:33 a.m. No.3046256   🗄️.is 🔗kun   >>6312 >>6327 >>6443 >>6467 >>6475

>>3046182

Then some excerpts from the other (Wired) article:

 

Rosetta works by combining optical character recognition (OCR) technology with other machine learning techniques to process text found in photos and videos. First, it uses OCR to identify where the text is located in a meme or video.
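Rosetta's own detection model can't be run from outside Facebook, but the locate-the-text-first step it describes can be sketched with the open-source Tesseract engine. The snippet below uses pytesseract as a stand-in (my assumption, not what Facebook uses) and a placeholder file name; it only illustrates the idea of getting word bounding boxes before trying to read and interpret the words.

# Rough sketch of the "find the text first" step using open-source Tesseract.
# Rosetta's detector is not public; pytesseract here is just a stand-in.
import pytesseract
from PIL import Image

def locate_words(path, min_conf=60):
    """Return (word, bounding box) pairs Tesseract is reasonably confident about."""
    data = pytesseract.image_to_data(Image.open(path),
                                     output_type=pytesseract.Output.DICT)
    words = []
    for i, word in enumerate(data["text"]):
        if word.strip() and float(data["conf"][i]) >= min_conf:
            box = (data["left"][i], data["top"][i],
                   data["width"][i], data["height"][i])
            words.append((word, box))
    return words

if __name__ == "__main__":
    for word, box in locate_words("meme.png"):  # placeholder file name
        print(word, box)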

 

Once Rosetta knows where the words are, Facebook uses a neural network that can transcribe the text and understand its meaning. It then can feed that text through other systems, like one that checks whether the meme is about an already-debunked viral hoax.
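That downstream check is easy to sketch in miniature. The toy below compares transcribed meme text against a hand-made list of already-debunked claims using plain string similarity; Facebook's actual matching system is not public, and the claims, threshold, and function name here are invented for illustration.

# Toy "is this a known hoax?" check over transcribed text. The hoax list,
# threshold, and matching method are illustrative, not Facebook's system.
from difflib import SequenceMatcher

DEBUNKED_CLAIMS = [
    "breaking: celebrity x found dead in hotel room",
    "new law bans cash transactions starting next month",
]

def matches_known_hoax(extracted_text, threshold=0.8):
    """Return (claim, score) for the closest debunked claim above the threshold."""
    text = extracted_text.lower().strip()
    scored = [(SequenceMatcher(None, text, claim).ratio(), claim)
              for claim in DEBUNKED_CLAIMS]
    score, claim = max(scored)
    return (claim, score) if score >= threshold else None

if __name__ == "__main__":
    print(matches_known_hoax("BREAKING: Celebrity X found dead in hotel room!!"))

A production system would presumably match on learned text embeddings rather than character-level similarity, so near-paraphrases of a hoax still get caught.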

 

Rosetta can analyze images that include text in many forms, such as photos of protest signs, restaurant menus, storefronts, and more. Viswanath Sivakumar, a software engineer at Facebook who works on Rosetta, said in an email that the tool works well both for identifying text in a landscape, like on a street sign, and for memes, but that the latter is more challenging.

 

The researchers were able to evade WeChat’s filters easily by changing an image’s properties, such as its coloring or orientation. While Facebook’s Rosetta is more sophisticated, it likely isn’t perfect either; the system may be tripped up by hard-to-read text or warped fonts. All image-recognition algorithms also remain potentially susceptible to adversarial examples, slightly altered images that look the same to humans but cause an AI to go haywire.
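To make "adversarial examples" concrete, here is a toy sketch of the classic fast gradient sign method in PyTorch. The model is a tiny untrained network invented for the example, not anything Facebook or WeChat runs; the point is only how the perturbation is built from the model's own gradients. Against a trained model, a nudge this small is often enough to flip the prediction even though a person sees no difference.

# Toy fast-gradient-sign (FGSM) adversarial example. The model and image are
# stand-ins; only the construction of the perturbation is the point.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
image = torch.rand(1, 3, 32, 32, requires_grad=True)             # stand-in image
true_label = torch.tensor([3])

# Gradient of the loss with respect to the input pixels.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.03  # controls how visible the change is
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())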

 

But those promising numbers don’t mean AI systems like Rosetta are a perfect solution, especially when it comes to more nuanced forms of expression. Unlike a restaurant menu, it can be hard to parse the meaning of a meme without knowing the context of where it was posted. That's why there are whole websites dedicated to explaining them. Memes often depict inside jokes or are highly specific to a certain online subculture. And AI still isn’t capable of understanding a meme or video in the same way that a person would. For now, Facebook will still need to rely on human moderators to make decisions about whether a meme should be taken down.