Anonymous ID: df0066 May 27, 2018, 11:33 a.m. No.1558279   >>8333

Machine learning is helping computers spot arguments online before they happen

 

‘Hey there. It looks like you’re trying to rile someone up for no good reason?’

 

It’s probably happened to you. You’re having a chat with someone online (on social media, via email, in Slack) when things take a nasty turn. The conversation starts out civil, but before you know it, you’re trading personal insults with a stranger / co-worker / family friend. Well, we have some good news: scientists are looking into it, and with a little help from machine learning, they could help us stop arguments online before they even happen.

 

The work comes from researchers at Cornell University, Google Jigsaw, and Wikimedia, who teamed up to create software that scans a conversation for verbal tics and predicts whether it will end acrimoniously or amiably. Notably, the software was trained and tested on a hotbed of high-stakes discussion: the “talk pages” of Wikipedia articles, where editors discuss changes to phrasing, the need for better sources, and so on.

 

The software was preprogrammed to look for certain features that past research has shown correlate with a conversation’s mood. For example, signs that a discussion will go well include gratitude (“Thanks for your help”), greetings (“How’s your day going?”), hedges (“I think that”), and, of course, the liberal use of the word “please.” All this combines to create not only a friendly atmosphere, but an emotional buffer between the two participants. It’s essentially a no-man’s-land of disagreement, where someone can admit they’re wrong without losing face.
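
To make that concrete, here is a minimal sketch of what counting those prosocial cues in a comment might look like. The marker lists and the prosocial_markers function are illustrative stand-ins of my own, not the researchers’ actual lexicons or feature set.

```python
import re

# Illustrative marker lists -- stand-ins for the politeness cues described above,
# not the lexicons the researchers actually used.
GRATITUDE = ["thanks", "thank you", "appreciate"]
GREETINGS = ["hi ", "hello", "hey ", "how's your day"]
HEDGES = ["i think", "i believe", "it seems", "perhaps", "maybe"]

def prosocial_markers(comment: str) -> dict:
    """Count simple politeness cues in a single comment."""
    text = comment.lower()
    return {
        "gratitude": sum(text.count(m) for m in GRATITUDE),
        "greeting": sum(text.count(m) for m in GREETINGS),
        "hedge": sum(text.count(m) for m in HEDGES),
        "please": len(re.findall(r"\bplease\b", text)),
    }

print(prosocial_markers("Hi there! I think the sourcing could be stronger -- thanks, and please take a look."))
# -> {'gratitude': 1, 'greeting': 1, 'hedge': 1, 'please': 1}
```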

 

On the other hand, warning signs include repeated, direct questioning (“Why is there no mention of this? Why didn’t you look at that?”) and the use of sentences that start with second person pronouns (“Your sources don’t matter”), especially when they appear in the first reply, which suggests someone is trying to make the matter personal. To add to all these signals, the researchers also gauged the general “toxicity” of conversations using Google’s Perspective API, an AI tool that tries to measure how friendly, neutral, or aggressive any given text is.
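
The toxicity signal comes from an ordinary web service. Below is a rough sketch of scoring a single comment with the Perspective API, assuming the public v1alpha1 comments:analyze endpoint and your own API key; the request and response shapes follow Google’s published documentation, but treat the details as illustrative rather than as the researchers’ exact pipeline.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder -- supply your own key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return Perspective's summary TOXICITY probability for a piece of text."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# e.g. toxicity_score("Why didn't you look at that? Your sources don't matter.")
```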

 

Using a statistical method known as logistic regression, the researchers worked out how best to balance these factors when their software made its judgments. At the end of the training period, when given a pair of conversations that both started out friendly but only one of which ended in personal insults, the software was able to predict which was which just under 65 percent of the time. That’s pretty good, although some major caveats apply: first, the test was done on a limited data set (Wikipedia talk pages, where, unusually for online discussions, participants have a shared goal: improving the quality of an article). Second, humans still performed better at the same task, making the right call 72 percent of the time.
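
In code terms, that setup amounts to a standard classifier over hand-built conversation features, plus a paired comparison at test time. Here is a toy sketch using scikit-learn; the feature vectors, their values, and the pick_derailer helper are invented for illustration and are far simpler than the study’s actual features and evaluation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in: each conversation's opening exchange is reduced to a feature
# vector (politeness cues, direct questions, toxicity, ...) and labeled 1 if it
# later derailed into a personal attack, 0 if it stayed civil. Values are made up.
X_train = np.array([
    [2, 1, 0.05],   # gratitude count, direct questions, toxicity -> stayed civil
    [0, 3, 0.40],   # few politeness cues, many direct questions -> derailed
    [1, 0, 0.10],
    [0, 2, 0.55],
])
y_train = np.array([0, 1, 0, 1])

clf = LogisticRegression().fit(X_train, y_train)

def pick_derailer(conv_a: np.ndarray, conv_b: np.ndarray) -> str:
    """Given a pair of conversations, guess which one will end in insults."""
    p_a, p_b = clf.predict_proba([conv_a, conv_b])[:, 1]
    return "A" if p_a > p_b else "B"

print(pick_derailer(np.array([2, 0, 0.03]), np.array([0, 4, 0.60])))  # -> "B"
```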

 

But for the scientists, the work shows that we’re on the right path to creating machines that can intervene in online arguments. “Humans have nagging suspicions when conversations will eventually go bad, and this [research] shows that it’s feasible for us to make computers aware of those suspicions, too,” Justine Zhang, a PhD student at Cornell University who worked on the project, tells The Verge.

 

Research like this is particularly interesting, as it’s part of an emerging body of work that uses machine learning to analyze online discussions. Tech giants like Facebook and Google, which operate huge, influential platforms full of angry commenters, are in dire need of tech like this. Recent outcry over Russian political ads on Facebook and horrific children’s content on YouTube suggests what the stakes are. These companies hope that AI will be able to do a better job (and cost less) than human moderators.

 

Answers this post from a few breads ago >>1554945

 

Remaining Posted above

 

https://www.theverge.com/2018/5/23/17379526/machine-learning-ai-spot-arguments-online-wikipedia

Anonymous ID: df0066 May 27, 2018, 11:50 a.m. No.1558468

The Line Between Big Tech and Defense Work

 

For months, a growing faction of Google employees has tried to force the company to drop out of a controversial military program called Project Maven. More than 4,000 employees, including dozens of senior engineers, have signed a petition asking Google to cancel the contract. Last week, Gizmodo reported that a dozen employees resigned over the project. “There are a bunch more waiting for job offers (like me) before we do so,” one engineer says. On Friday, employees communicating through an internal mailing list discussed refusing to interview job candidates in order to slow the project’s progress.

 

Other tech giants have recently secured high-profile contracts to build technology for defense, military, and intelligence agencies. In March, Amazon expanded its newly launched "Secret Region" cloud services supporting top-secret work for the Department of Defense. The same week that news broke of the Google resignations, Bloomberg reported that Microsoft locked down a deal with intelligence agencies. But there’s little sign of the same kind of rebellion among Amazon and Microsoft workers.

 

Employees from the three companies say the different responses reflect different company cultures, as well as the specifics of the contracts. Project Maven is an effort to use artificial intelligence to interpret images from drones. Amazon and Microsoft also provide the government with artificial intelligence to analyze data, including image recognition. But Project Maven’s focus on drones, combined with Google’s unusually open culture (the company has been riven for months by debates and lawsuits over workplace diversity), has emboldened employees to speak out.

 

“Amazon culture is more pragmatic and less idealistic than Google,” one Amazon engineer told WIRED. “Amazon’s ethos is about business ruthlessness rather than technical purity, and that does filter down to individual tech employees.”

 

Employees are not blind to reports about difficult working conditions in Amazon warehouses, but they’re skeptical of broad critiques. “Most long-term employees are either good at ignoring what’s going on in other parts of the company or they don’t think it’s a problem and probably don’t think working with the military is a problem either,” the employee said. In 2017, Amazon shrugged off an employee petition to sever Amazon’s advertising ties with the right-wing news site Breitbart; that left employees feeling powerless about changing the company’s business decisions.

 

At Microsoft, two employees said neither they nor their coworkers had been aware of the intelligence contract before WIRED asked. One of the employees later said that Microsoft’s defense contract was “totally different” from Project Maven, and no different from any other government agency using Microsoft’s government cloud services.

Google and Amazon did not respond to questions. Microsoft declined to comment, but last week the company told WIRED it has refused some commercial projects involving artificial intelligence after input from an internal ethics board. “If something bad happens, folks would and do speak up,” the Microsoft employee said.

 

https://www.wired.com/story/the-line-between-big-tech-and-defense-work/