Knew that was coming…
Let's not forget the push-back for even suggesting something was off with the election.
Google's Gemini is doing the same thing, but it also reports it to the authorities. The problem with that is that it has about a 40% false positive rate. People are being jailed in Georgia over this and held without bond until the GBI can verify that it wasn't CSAM, then released months later. One guy tried suing the state, but they have immunity for ruining people's lives over false positives.
40% false positives is more than a few. It's fine if they simply want to remove the images, but when authorities are alerted and act on those reports, that's a different story. People shouldn't sit in jail for 3 months over a false positive.
It's also on Google to explain the imperfections of their AI when reporting. Too many zealous police and DAs will arrest anyone at the drop of a hat. They don't care about the truth, only about the number of convictions they can rack up.
>anything the government decides is "offensive" can get you in trouble.
Correction: anything that a government-run AI decides is hate speech will get you in trouble.
Between Flock camera fuckups and Georgia locking up innocent people over false positive CSAM reports, all due to AI, it's time to get AI out of the law enforcement picture until it's much more advanced than it is now.
The cops' attitude is "if the AI said you did it, then you did it".