o7
(Eliezer just might be right about AGI)
"Here, from my (Eliezer Yudkowsky) perspective, are some different true things that could be said, to contradict various false things that various different people seem to believe, about why AGI would be survivable on anything remotely resembling the current pathway, or any other pathway we can easily jump to.
Section A:
This is a very lethal problem, it has to be solved one way or another, it has to be solved at a minimum strength and difficulty level instead of various easier modes that some dream about, we do not have any visible option of "everyone" retreating to only solve safe weak problems instead, and failing on the first really dangerous try is fatal.
-
AlphaZero blew past all accumulated human knowledge about Go after a day or so of self-play, with no reliance on human playbooks or sample games. Anyone relying on "well, it'll get up to human capability at Go, but then have a hard time getting past that because it won't be able to learn from humans any more" would have relied on vacuum. AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than human would be able to learn from less evidence than humans require to have ideas driven into their brains; there are theoretical upper bounds here, but those upper bounds seem very high. (Eg, each bit of information that couldn't already be fully predicted can eliminate at most half the probability mass of all hypotheses under consideration.) It is not naturally (by default, barring intervention) the case that everything takes place on a timescale that makes it easy for us to react.
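The parenthetical claim about bits of evidence is a standard information-theoretic point, and it can be illustrated with a toy sketch (my addition, not part of the quoted essay): treat each hypothesis as a guess about a hidden N-bit string, and watch each fully-unpredictable observed bit cut the live hypothesis set exactly in half. All names here are illustrative.

```python
# Toy illustration: each bit of evidence that could not already be
# predicted eliminates at most half of the remaining hypotheses.
from itertools import product

N = 10
# One hypothesis per possible N-bit string: 2**N = 1024 hypotheses.
hypotheses = set(product([0, 1], repeat=N))

hidden = (1, 0, 1, 1, 0, 0, 1, 0, 1, 1)  # the "true" state of the world

for i in range(N):
    before = len(hypotheses)
    # Discard every hypothesis that predicted the wrong value for bit i.
    hypotheses = {h for h in hypotheses if h[i] == hidden[i]}
    # A maximally surprising bit halves the hypothesis set exactly;
    # a partially predictable bit would eliminate less than half.
    assert len(hypotheses) == before // 2

print(len(hypotheses))  # 1024 hypotheses, 10 unpredictable bits, 1 survivor
```

Ten maximally informative bits suffice to pin down one hypothesis out of 1024, which is the sense in which the upper bound on learning-from-evidence sits far above human learning speed.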
-
A cognitive system with sufficiently high cognitive powers, given any medium-bandwidth channel of causal influence, will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure. The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point. My lower-bound model of "how a sufficiently powerful intelligence would kill everyone, if it didn't want to not do that" is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery. (Back when I was first deploying this visualization, the wise-sounding critics said "Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn't already have planet-sized supercomputers?" but one hears less of this after the advent of AlphaFold 2, for some odd reason.) The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer. Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second". (I am using awkward constructions like "high cognitive power" because standard English terms like "smart" or "intelligent" appear to me to function largely as status synonyms.
"Superintelligence" sounds to most people like "something above the top of the status hierarchy that went to double college", and they don't understand why that would be all that dangerous? Earthlings have no word and indeed no standard native concept that means "actually useful cognitive power". A large amount of failure to panic sufficiently, seems to me to stem from a lack of appreciation for the incredible potential lethality of this thing that Earthlings as a culture have not named.)
https://intelligence.org/2022/06/10/agi-ruin/
Are they seriously attacking Mr. Pig again? One of the most benign meme makers, memeing variations on "America winning," is getting under the skin of and threatening someone. KEK!
How much money do you still get from Ramtha, Jeff? Is JZ Knight still funding Flynn's digital soldier LLC?
https://littlesis.org/org/415842-Digital_Soldiers_Media,_LLC
https://www.americanthinker.com/blog/2022/06/summit_of_the_americas_joe_biden_turns_america_into_a_beggar.html
Summit of the Americas: Joe Biden turns America into a beggar
American Thinker, by Monica Showalter
Posted By: PageTurner, 6/12/2022 9:40:16 AM
Joe Biden couldn't care less about Latin America, hasn't visited the place, hasn't visited even the border to the place, and now has a monster 15,000-strong migrant caravan heading to the U.S. which he will undoubtedly bow to, and let in. But here he was, host at the Summit of the Americas in Los Angeles, and his lack of preparation pretty well made him, and America, beggars to the hemisphere's locals. And sure enough, it was accompanied by the stench of urine from the nearby bums populating blue Los Angeles, which made the picture complete. Start with the earliest controversy: the invitation list.
40% of the twatter accounts are farmed in Israel at Mount Mossad