>>899088
The major issue with such a tool is the potential for unintended consequences created by automatic detection.
Writing in Wired, Andy Greenberg highlighted that “throwing out well-intentioned speech that resembles harassment could be a blow to exactly the open civil society Jigsaw has vowed to protect.” When discussing the potential for “collateral damage” with its inventors, co-creator Lucas Dixon argued that the team wanted to “let communities have the discussions they want to have… there are already plenty of nasty places on the Internet.”
“What we can do is create places where people can have better conversations,” Dixon claimed, which Greenberg noted was “[favoring] a sanitized Internet over a freewheeling one.”
Nor does there seem to be much appetite for a completely sanitized Internet. Emma Llansó, director of the Free Expression Project at the nonprofit Center for Democracy and Technology, noted that “an automated detection system can open the door to the delete-it-all option, rather than spending the time and resources to identify false positives.”
Even feminists who face online trolls were nervous about the idea. Sady Doyle told Wired, “People need to be able to talk in whatever register they talk… imagine what the Internet would be like if you couldn’t say ‘Donald Trump is a moron,'” a phrase that registered a 99/100 on the AI’s personal attack scale.
“Jigsaw recruits will hear stories about people being tortured for their passwords or of state-sponsored cyberbullying,” said Cohen at a recent meeting of Jigsaw. He cited “an Albanian LGBT activist who tries to hide his identity on Facebook, despite its real-names-only policy, and an administrator for a Libyan youth group wary of government infiltrators” as examples of the people Jigsaw was trying to protect.
Yet such a system could easily be deployed by tyrannical regimes overseas to detect populist uprisings within their online borders, especially after the UN takes control over ICANN on October 1st.