Anonymous ID: 243143 April 1, 2023, 7:06 a.m. No.18620598

An AI researcher who has been warning about the technology for over 20 years says we should 'shut it all down' and impose an 'indefinite and worldwide' ban

 

An AI researcher who has warned about the dangers of the technology since the early 2000s said we should "shut it all down," in an alarming op-ed published by Time on Wednesday.

 

Eliezer Yudkowsky, a researcher and author who has worked on artificial general intelligence since 2001, wrote the article in response to an open letter from many big names in the tech world, which called for a six-month moratorium on AI development.

 

The letter, signed by 1,125 people including Elon Musk and Apple co-founder Steve Wozniak, requested a pause on training AI systems more powerful than OpenAI's recently launched GPT-4.

 

Yudkowsky's article, titled "Pausing AI Developments Isn't Enough. We Need to Shut it All Down," said he refrained from signing the letter because it understated the "seriousness of the situation" and asked for "too little to solve it."

 

He wrote: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."

 

He explained that AI "does not care for us nor for sentient life in general," and that we are far from instilling those kinds of principles in the technology at present.

 

Yudkowsky instead suggested a ban that is "indefinite and worldwide" with no exceptions for governments or militaries.

 

"If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data center by airstrike," Yudkowsky said.

 

Yudkowsky has for many years been issuing bombastic warnings about the possibly catastrophic consequences of AI. Earlier in March he was described by Bloomberg as an "AI Doomer," with author Ellen Huet noting that he has been warning about the possibility of an "AI apocalypse" for a long time.

 

https://finance.yahoo.com/news/ai-researcher-warning-technology-over-114317785.html