Congress Now Creating A Moral Panic Around Deepfakes In Order To Change CDA 230
from the oh-come-on dept
Wed, Jun 19th 2019 9:28am — Mike Masnick
Everyone's got it out for Section 230 of the Communications Decency Act these days. And pretty much any excuse will do. The latest is that last week, Rep. Adam Schiff held a hearing on "deep fakes," with part of the focus on why we should "amend" (read: rip to shreds) Section 230 to "deal with" deep fakes. You can watch the whole hearing here, if you're into that kind of punishment.
One of the speakers was law professor Danielle Citron, who has been a longtime supporter of amending CDA 230 (though, at the very least, she has been a lot more careful and thoughtful in her advocacy than many others who speak out against 230). She recommended changing CDA 230 to deal with deep fakes by requiring platforms to take responsibility with "reasonable" moderation policies:
Maryland Carey School of Law professor Danielle Keats Citron responded suggesting that Congress force platforms to judiciously moderate content in any changes to 230 in order to receive those immunities. “Federal immunity should be amended to condition the immunity on reasonable moderation practices rather than the free pass that exists today,” Citron said. “The current interpretation of Section 230 leaves platforms with no incentive to address destructive deepfake content.”
I have a lot of different concerns about this. First off, while everyone is out there fearmongering about the harm that deep fakes could do, it's not yet clear that the public won't figure out ways to adapt. Yes, you can paint lots of stories about how a deepfake could impact things, and I do think there's value in thinking through how that may play out in various situations (such as elections). But to assume that deepfakes will absolutely fool people, and that we therefore need to paternalistically "protect" the public from possibly being fooled, seems a bit premature. That could change over time. But we haven't yet seen any evidence of a significant long-term effect from deepfakes, so maybe we shouldn't be changing a fundamental internet law without actual evidence of the need.
Second, defining "reasonable moderation practices" in law seems like a very, very dangerous idea. "Reasonable" to whom? And how? And how can Congress demand reasonable rules for moderating content without violating the 1st Amendment? I don't see how any proposed solution could possibly survive constitutional scrutiny.
Finally, and most importantly, Citron is just wrong to claim that the current structure "leaves platforms with no incentive to address destructive deepfake content." As I said, I find Citron to be more thoughtful and reasonable than many critics of Section 230, but this statement is just bonkers. It's clearly false, given that YouTube has taken down deepfakes, and Facebook has pulled them from algorithmic promotion and put warning flags on them. It certainly looks like the current system has provided at least some incentive for those platforms to "address destructive deepfake content." You can disagree with how these platforms have chosen to do things, or you can claim that there need to be different incentives, but to say there are no incentives is simply laughable. There are plenty of incentives: there is public pressure (which has been fairly effective). There is the desire of the platforms not to piss off their users. And there is the desire of the platforms not to keep inviting angry rants (and future regulations) from Congress.