Anonymous ID: e06fc6 June 20, 2019, 9:16 a.m. No.6798690   >>8943

The Checkist - A must watch.

 

Part 1

https://www.bitchute.com/video/CwLuV0xoad1s/

 

Part 2

https://www.bitchute.com/video/UkcDl3gMsmHB/

 

Part 3

 

https://www.bitchute.com/video/PxtaS4TPsicp/

 

>One of the few films I can confidently assert will “shock the unshockable.” THE CHEKIST is a blistering look at an executioner going about his work in the wake of the jewish Russian Revolution that’s guaranteed to traumatize the most hardened viewers.

If I had to pick one film in the world to save from destruction, I would save Aleksandr Rogozhkin's "The Chekist", about the jewish Bolshevik Red Terror.

This is the only movie I have ever seen that has radically altered the way I see the world. I think about this film often, almost 10 years after I first saw it. I was never able to see it again because it disappeared from distribution. VHS sales ended, and it never made the transition to digital from tape.

Recently I searched again, and found it on youtube. Before clicking through, be warned. This is a disturbing film. Reviewers guarantee that it will "shock the unshockable". No child should ever see it.

The horror of the movie is not however in the acts of evil portrayed, but in their utter mundanity. This is not a movie that revels in the drama of torture. This evil is methodical, it has a conscience, it is arduous. Atrocity is revealed to operate against a series of surmountable logistical problems that require one to keep their chin up, to remain self-justified, and to not grow weary from accomplishing so much harm.

 

The filthy jews that ran the cheka secret police wanted to round up and systematically murder priests, wealthy folk, professors, intellectuals, ex government officials,

or just anyone that voiced a negative opinion against jewish communism, they were considered a threat to the the bolsheviks.

oh and they also rounded up and murdered attractive people, the jews hate anything beautiful, they are contrary to all men and mother earths natural processes.

In my opinion, this is whats in store for all white nations, intelligence agencies are gathering information on anyone who knows too much about the jews and their history,

and are actively spreading the word as we are here online, the hooked noses are intent on murdering all of us.remember, every jew is a communist whether he admits it or not.

NEVER, NEVER give up your guns. that will be the last nail in our collective coffins if they disarm us all, our fate will be just as in this film.

Anonymous ID: e06fc6 June 20, 2019, 10:07 a.m. No.6799103   >>9113 >>9127 >>9131 >>9165 >>9188

http://web.archive.org/save/https://www.techdirt.com/articles/20190614/20280842406/congress-now-creating-moral-panic-around-deepfakes-order-to-change-cda-230.shtml

 

Congress Now Creating A Moral Panic Around Deepfakes In Order To Change CDA 230

 

Legal Issues

from the oh-come-on dept

Wed, Jun 19th 2019 9:28am — Mike Masnick

 

Everyone's got it out for Section 230 of the Communications Decency Act these days. And pretty much any excuse will do. The latest is that last week, Rep. Adam Schiff held a hearing on "deep fakes," with part of the focus on why we should "amend" (read: rip to shreds) Section 230 of the Communications Decency Act to "deal with" deep fakes. You can watch the whole hearing, if you're into that kind of punishment.

 

One of the speakers was law professor Danielle Citron, who has been a longtime supporter of amending CDA 230 (though, at the very least, she has been a lot more careful and thoughtful about her advocacy on that than many others who speak out against 230). And she recommended changing CDA 230 to deal with deep fakes by requiring that platforms take responsibility with "reasonable" policies:

 

Maryland Carey School of Law professor Danielle Keats Citron responded suggesting that Congress force platforms to judiciously moderate content in any changes to 230 in order to receive those immunities. “Federal immunity should be amended to condition the immunity on reasonable moderation practices rather than the free pass that exists today,” Citron said. “The current interpretation of Section 230 leaves platforms with no incentive to address destructive deepfake content.”

 

I have a lot of different concerns about this. First off, while everyone is out there fear mongering about the harm that deep fakes could do, it's not yet clear that the public can't figure out ways to adapt. Yes, you can paint lots of stories about how a deepfake could impact things, and I do think there's value in thinking through how that may play out in various situations (such as elections), but to assume that deepfakes will absolutely fool people, and that we therefore need to paternalistically "protect" the public from possibly being fooled, seems premature. That could change over time. But we haven't yet seen any evidence of any significant long-term effect from deepfakes, so maybe we shouldn't be changing a fundamental internet law without actual evidence of the need.

 

Second, defining "reasonable moderation practices" in law seems like a very, very dangerous idea. "Reasonable" to whom? And how? And how can Congress demand reasonable rules for moderating content without violating the 1st Amendment? I don't see how any proposed solution could possibly survive constitutional scrutiny.

 

Finally, and most importantly, Citron is just wrong to claim that the current structure "leaves platforms with no incentive to address destructive deepfake content." As I said, I find Citron to be more thoughtful and reasonable than many critics of Section 230, but this statement is just bonkers. It's clearly false, given that YouTube has taken down deepfakes and Facebook has pulled them from algorithmic promotion and put warning flags on them. It certainly looks like the current system has provided at least some incentive for those platforms to "address destructive deepfake content." You can disagree with how these platforms have chosen to do things, or you can claim that there need to be different incentives, but to say there are no incentives is simply laughable. There are plenty of incentives: there is public pressure (which has been fairly effective). There is the desire of the platforms not to piss off their users. And there is the desire of the platforms not to keep drawing angry rants (and future regulations) from Congress.

 

And, importantly, section (c)(2) of CDA 230 is there to encourage this kind of experimentation by the platforms. They are given the benefit of not facing liability for moderation choices they make, which is actually a very strong incentive for those platforms to experiment and figure out what works best for them and their particular community.

Anonymous ID: e06fc6 June 20, 2019, 10:07 a.m. No.6799107   >>9113

Any effort to change the law to demand "reasonable moderation practices" is going to come up against difficult situations and create something of a mess. If we pass a law that forces Facebook to remove deepfakes, does that mean Facebook, Twitter, and others would have to remove the various examples of deepfakes that are more comedic than election-impacting? For example, you may have recently seen a viral deepfake of Bill Hader on Conan O'Brien doing his Arnold Schwarzenegger impression, in which he subtly morphs into Schwarzenegger. Would a "reasonable" moderation policy forbid such a thing?

 

Also, different kinds of sites have wholly different moderation approaches. How do you write a rule that applies equally to Facebook, Twitter, YouTube… and Wikipedia, Reddit, and Dropbox? You can argue that the first three are similar enough, but the latter three work in wholly different ways. Crafting a single solution that works for all is asking for trouble – or will wipe away significant concepts of how to run online communities.

 

I can completely empathize with the worries about deep fakes and what they could mean long term. But let's not use this moral panic and overreaction, absent evidence of harm, to completely change the internet – especially with silly claims falsely stating that platforms already have no incentives to handle the problematic side of this technology.

 

Filed Under: adam schiff, content moderation, content removals, danielle citron, deep fakes, incentives, reasonable policies