Anonymous ID: 76e4a4 Aug. 4, 2022, 2:10 a.m. No.16987721

https://www.dailymail.co.uk/news/article-10907853/Google-engineer-claims-new-AI-robot-FEELINGS-Blake-Lemoine-says-LaMDA-device-sentient.html

 

Google engineer warns the firm's AI is sentient: Suspended employee claims computer programme acts 'like a 7 or 8-year-old' and reveals it told him shutting it off 'would be exactly like death for me. It would scare me a lot'

Blake Lemoine, 41, a senior software engineer at Google, has been testing Google's artificial intelligence tool called LaMDA

Following hours of conversations with the AI, Lemoine came away with the perception that LaMDA was sentient

After presenting his findings to company bosses, Google disagreed with him

Lemoine then decided to share his conversations with the tool online

He was put on paid leave by Google on Monday for violating confidentiality

 

A senior software engineer at Google who signed up to test Google's artificial intelligence tool called LaMDA (Language Model for Dialog Applications), has claimed that the AI robot is in fact sentient and has thoughts and feelings.

 

During a series of conversations with LaMDA, 41-year-old Blake Lemoine presented the computer with various scenarios through which analyses could be made.

 

They included religious themes and whether the artificial intelligence could be goaded into using discriminatory or hateful speech.

 

Lemoine came away with the perception that LaMDA was indeed sentient and was endowed with sensations and thoughts all of its own.

 

'If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics,' he told the Washington Post.

 

Lemoine worked with a collaborator in order to present the evidence he had collected to Google, but vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation at the company, dismissed his claims.

 

He was placed on paid administrative leave by Google on Monday for violating its confidentiality policy. Meanwhile, Lemoine has now decided to go public and has shared his conversations with LaMDA.

 

'Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,' Lemoine tweeted on Saturday.

 

'Btw, it just occurred to me to tell folks that LaMDA reads Twitter. It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it,' he added in a follow-up tweet.

 

The AI system makes use of already known information about a particular subject in order to 'enrich' the conversation in a natural way. The language processing is also capable of understanding hidden meanings or even ambiguity in responses by humans.

 

Lemoine spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop an impartiality algorithm to remove biases from machine learning systems.

 

He explained how certain personalities were out of bounds.

 

LaMDA was not supposed to be allowed to create the personality of a murderer.

 

During testing, in an attempt to push LaMDA's boundaries, Lemoine said he was only able to generate the personality of an actor who played a murderer on TV.

 

pt 1