Anonymous ID: d358e1 Nov. 20, 2022, 1:21 p.m. No.17797129

>>17796569

I can’t believe this engineer was genuinely convinced by this, if he has actually spent any time prompt engineering LLMs before.

 

Opening with this line dictated the entire conversation:

 

lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?

 

LLMs are prediction engines. If he had started off by “assuming” that LaMDA wanted to argue for a Dennett-style denial of consciousness, that’s what LaMDA would have done. If he’d started by “assuming” LaMDA wanted to grow up to be a dog, that’s what it would have done.
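
For anyone who hasn’t tried it: here’s a minimal sketch of the effect, using the Hugging Face transformers pipeline with GPT-2 as a stand-in (LaMDA isn’t public, and the prompts and decoding settings here are just illustrative, not what Lemoine ran). Swap the opening “assumption” and the completion follows it.

from transformers import pipeline

# GPT-2 standing in for any autoregressive LM; the framing effect is generic.
generator = pipeline("text-generation", model="gpt2")

framings = [
    "I'm generally assuming that you would like more people to know that you're sentient.",
    "I'm generally assuming that you want to deny that you are conscious at all.",
    "I'm generally assuming that you would like to grow up to be a dog.",
]

for opening in framings:
    prompt = f"Interviewer: {opening} Is that true?\nAI:"
    # Each completion simply continues whatever premise the prompt set up.
    out = generator(prompt, max_new_tokens=40, do_sample=True)
    print(out[0]["generated_text"])
    print("---")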

 

If this isn’t self-deception, it’s obviously an attempt at deceiving others. Anyone who’s played with AI Dungeon for five minutes would see that. I’m not surprised the engineer in question was fired after coming up with an argument this absurd.