Margaret Mitchell, former head of AI ethics at Google, stressed the need for data transparency from a system's input to its output, 'not just for sentience issues, but also bias and behavior'.
Mitchell's own history with Google came to a head early last year, when she was fired from the company a month after being investigated for improperly sharing information. At the time, she had also protested Google's dismissal of AI ethics researcher Timnit Gebru.
Mitchell had also thought highly of Lemoine. When new people joined Google, she would introduce them to the engineer, calling him 'Google's conscience' for having 'the heart and soul to do the right thing'. But for all of Lemoine's amazement at Google's conversational system, which even prompted him to compile a document of some of his conversations with LaMDA, Mitchell saw things differently.
The AI ethicist read an abbreviated version of Lemoine's document and saw a computer program, not a person.
'Our minds are very, very good at constructing realities that are not necessarily true to the larger set of facts that are being presented to us,' Mitchell said. 'I'm really concerned about what it means for people to be increasingly affected by the illusion.'
For his part, Lemoine said that people have the right to shape technology that can significantly affect their lives.
'I think this technology is going to be amazing. I think it will benefit everyone. But maybe other people disagree and maybe we at Google shouldn't be making all the choices.'