Anonymous ID: 5020a8 Oct. 7, 2022, 7:19 a.m. No.17648158

(Eliezer just might be right about AGI)

 

"Here, from my (Eliezer Yudkowsky) perspective, are some different true things that could be said, to contradict various false things that various different people seem to believe, about why AGI would be survivable on anything remotely remotely resembling the current pathway, or any other pathway we can easily jump to.

 

Section A:

This is a very lethal problem, it has to be solved one way or another, it has to be solved at a minimum strength and difficulty level instead of various easier modes that some dream about, we do not have any visible option of ‘everyone’ retreating to only solve safe weak problems instead, and failing on the first really dangerous try is fatal.

 

  1. AlphaZero blew past all accumulated human knowledge about Go after a day or so of self-play, with no reliance on human playbooks or sample games. Anyone relying on “well, it’ll get up to human capability at Go, but then have a hard time getting past that because it won’t be able to learn from humans any more” would have relied on vacuum. AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than human would be able to learn from less evidence than humans require to have ideas driven into their brains; there are theoretical upper bounds here, but those upper bounds seem very high. (Eg, each bit of information that couldn’t already be fully predicted can eliminate at most half the probability mass of all hypotheses under consideration.) It is not naturally (by default, barring intervention) the case that everything takes place on a timescale that makes it easy for us to react. [A numerical check of this halving bound follows the excerpt below.]

  2. A cognitive system with sufficiently high cognitive powers, given any medium-bandwidth channel of causal influence, will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure. The concrete example I usually use here is nanotech, because there’s been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point. My lower-bound model of “how a sufficiently powerful intelligence would kill everyone, if it didn’t want to not do that” is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they’re dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery. (Back when I was first deploying this visualization, the wise-sounding critics said “Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn’t already have planet-sized supercomputers?” but one hears less of this after the advent of AlphaFold 2, for some odd reason.) The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth’s atmosphere, get into human bloodstreams and hide, strike on a timer. Losing a conflict with a high-powered cognitive system looks at least as deadly as “everybody on the face of the Earth suddenly falls over dead within the same second”. (I am using awkward constructions like ‘high cognitive power’ because standard English terms like ‘smart’ or ‘intelligent’ appear to me to function largely as status synonyms. ‘Superintelligence’ sounds to most people like ‘something above the top of the status hierarchy that went to double college’, and they don’t understand why that would be all that dangerous? Earthlings have no word and indeed no standard native concept that means ‘actually useful cognitive power’. A large amount of failure to panic sufficiently, seems to me to stem from a lack of appreciation for the incredible potential lethality of this thing that Earthlings as a culture have not named.)

 

https://intelligence.org/2022/06/10/agi-ruin/

Anonymous ID: 5020a8 Oct. 7, 2022, 7:20 a.m. No.17648304

https://www.americanthinker.com/blog/2022/06/summit_of_the_americas_joe_biden_turns_america_into_a_beggar.html

 

Summit of the Americas: Joe Biden turns America into a beggar

 

American Thinker, by Monica Showalter

 

Posted By: PageTurner, 6/12/2022 9:40:16 AM

 

Joe Biden couldn't care less about Latin America, hasn't visited the place, hasn't visited even the border to the place, and now has a monster 15,000-strong migrant caravan heading to the U.S. which he will undoubtedly bow to, and let in. But here he was, host at the Summit of the Americas in Los Angeles, and his lack of preparation pretty well made him, and America, beggars to the hemisphere's locals. And sure enough, it was accompanied by the stench of urine from the nearby bums populating blue Los Angeles, which made the picture complete. Start with the earliest controversy – the invitation list.