Anonymous ID: af8f71 Dec. 23, 2025, 9:54 a.m. No.24019996 >>0002

>>24019987

Google's Gemini is doing the same thing, but it also reports it to the authorities. The problem is that it has roughly a 40% false positive rate, and people in Georgia are being jailed over it, held without bond until the GBI can verify that it wasn't CSAM, then released months later. One guy tried suing the state, but they have immunity for ruining people's lives over false positives.

Anonymous ID: af8f71 Dec. 23, 2025, 9:58 a.m. No.24020010 >>0016

>>24020002

40% false positives is more than a few. It's fine if they simply want to remove the images, but when authorities are alerted and act on those reports, it's not. People shouldn't sit in jail for three months over a false positive.

Anonymous ID: af8f71 Dec. 23, 2025, 10:01 a.m. No.24020027

>>24020016

It's also on Google to explain the imperfections of their AI when it files reports. Too many overzealous police and DAs will arrest anyone at the drop of a hat. They don't care about the truth, only about how many convictions they can get.

Anonymous ID: af8f71 Dec. 23, 2025, 10:08 a.m. No.24020051

>>24020038

>anything the government decides is "offensive" can get you in trouble.

Correction: anything that the government-run AI decides is hate speech will get you in trouble.

Between Flock camera fuckups and Georgia locking up innocent people over false-positive CSAM reports, all due to AI, it's time to get AI out of the law enforcement picture until it's much more advanced than it is now.