Anonymous ID: 0a9c38 Oct. 25, 2022, 11:47 a.m. No.17706610

Ukraine has a kill list, and I'm on it

 

Jun 4, 2022

 

Eva K Bartlett


 

RT had me on yesterday to discuss Ukraine's kill list & my entry on it.

 

I highlighted Canada's cozy relationship with Ukraine, including with the Nazi battalions, Chrystia Freeland's Nazi-collaborating grandpa, and why I feel safer living in Russia than I would were I back in Canada, where Ukrainian nationalists & Nazi supporters run rampant and could easily harm or kill me.

 

I also spoke of how independent Canadian media reached out to me for an interview, concerned about the kill list entry & my safety, while Canadian state-funded media, CBC, reached out to attempt to set me up for an interview for what I assume is their pending smear piece on me. The CBC journalist's email to me made zero mention of Ukraine's kill list, yet what he did mention (my participation in a tribunal on Ukraine's war crimes) could only have come from him reading my entry on the kill list, as that tribunal was not widely publicized in English or by myself but is on my kill list entry. Thus he & CBC prioritize pushing war propaganda over addressing Ukraine's kill list, which threatens the life of a Canadian journalist.

 

RELEVANT LINKS:

 

https://myrotvorets.center/criminal/b…

https://ingaza.wordpress.com/2021/06/…

 

https://ottawacitizen.com/news/nation…

https://ici.radio-canada.ca/nouvelle/…

https://thesaker.is/canadas-nazi-prob…

https://www.thenation.com/article/wor…

https://ottawacitizen.com/news/nation…

 

https://www.youtube.com/watch?v=m1Kp7…

https://twitter.com/EvaKBartlett/stat…

https://twitter.com/EvaKBartlett/stat…

 

https://youtu.be/JgdI-RIcRzw

Anonymous ID: 0a9c38 Oct. 25, 2022, 11:47 a.m. No.17706619

(Eliezer just might be right about AGI)

 

"Here, from my (Eliezer Yudkowsky) perspective, are some different true things that could be said, to contradict various false things that various different people seem to believe, about why AGI would be survivable on anything remotely remotely resembling the current pathway, or any other pathway we can easily jump to.

 

Section A:

This is a very lethal problem, it has to be solved one way or another, it has to be solved at a minimum strength and difficulty level instead of various easier modes that some dream about, we do not have any visible option of ‘everyone’ retreating to only solve safe weak problems instead, and failing on the first really dangerous try is fatal.

 

  1. Alpha Zero blew past all accumulated human knowledge about Go after a day or so of self-play, with no reliance on human playbooks or sample games. Anyone relying on “well, it’ll get up to human capability at Go, but then have a hard time getting past that because it won’t be able to learn from humans any more” would have relied on vacuum. AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than human would be able to learn from less evidence than humans require to have ideas driven into their brains; there are theoretical upper bounds here, but those upper bounds seem very high. (Eg, each bit of information that couldn’t already be fully predicted can eliminate at most half the probability mass of all hypotheses under consideration.) It is not naturally (by default, barring intervention) the case that everything takes place on a timescale that makes it easy for us to react.

  2. A cognitive system with sufficiently high cognitive powers, given any medium-bandwidth channel of causal influence, will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure. The concrete example I usually use here is nanotech, because there’s been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point. My lower-bound model of “how a sufficiently powerful intelligence would kill everyone, if it didn’t want to not do that” is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they’re dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery. (Back when I was first deploying this visualization, the wise-sounding critics said “Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn’t already have planet-sized supercomputers?” but one hears less of this after the advent of AlphaFold 2, for some odd reason.) The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth’s atmosphere, get into human bloodstreams and hide, strike on a timer. Losing a conflict with a high-powered cognitive system looks at least as deadly as “everybody on the face of the Earth suddenly falls over dead within the same second”. (I am using awkward constructions like ‘high cognitive power’ because standard English terms like ‘smart’ or ‘intelligent’ appear to me to function largely as status synonyms. ‘Superintelligence’ sounds to most people like ‘something above the top of the status hierarchy that went to double college’, and they don’t understand why that would be all that dangerous? Earthlings have no word and indeed no standard native concept that means ‘actually useful cognitive power’. A large amount of failure to panic sufficiently, seems to me to stem from a lack of appreciation for the incredible potential lethality of this thing that Earthlings as a culture have not named.)
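
The parenthetical bound in point 1 can be made concrete: an observation that carries one bit of surprise can rule out at most half of the probability mass over the hypotheses under consideration. Below is a minimal sketch of that bound in Python; the hypotheses, the uniform prior, and the deterministic bit predictions are made up for the example and are not from the quoted post.

import math

def update(prior, predictions, observed):
    # Mass of hypotheses that predicted the observed bit correctly; with
    # deterministic predictions this is also the probability the mixture
    # assigned to the observation.
    mass_correct = sum(p for p, guess in zip(prior, predictions) if guess == observed)
    surprise = -math.log2(mass_correct)   # information carried, in bits
    eliminated = 1.0 - mass_correct       # prior mass ruled out by the update
    return surprise, eliminated

prior = [1.0 / 16] * 16  # 16 equally weighted hypotheses

# Case 1: hypotheses split evenly on the next bit -> exactly one bit of
# surprise, exactly half the mass eliminated.
s, e = update(prior, [1] * 8 + [0] * 8, observed=1)
print(f"even split:   {s:.2f} bits of surprise, {e:.2f} of the mass eliminated")

# Case 2: 12 of 16 hypotheses predicted 1 but 0 was observed -> two bits of
# surprise, 0.75 of the mass eliminated.
s, e = update(prior, [1] * 12 + [0] * 4, observed=0)
print(f"skewed split: {s:.2f} bits of surprise, {e:.2f} of the mass eliminated")

With deterministic predictions the eliminated mass is exactly 1 - 2^(-surprise) of the prior, so ruling out more than half the hypotheses in a single step requires more than one bit of unpredicted information; that is the per-observation ceiling the quoted parenthetical refers to when it notes the theoretical upper bounds are very high.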

 

https://intelligence.org/2022/06/10/agi-ruin/