Anonymous ID: 000000 Sept. 10, 2020, 7:40 p.m. No.10598767   >>8982 >>9217

>>10598701

Poisoning their filter is a strong way to defeat their AI. I don't think you can stop it from detecting your images, but we can probably get it to delete things that weren't intended, and the things we can get it to delete might be funny.

 

>Make Side By Side

>Half of Side By Side is Chicom Flag

>Other Half is Meme already in the filter

>AI detects image

>AI adds image to filter

>AI just added half the Chicom Flag to its filter

>Repeat with other side of Chicom Flag

>You just injected an unintended target into their AI

>Their AI is now belong to us
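The steps above can be sketched as a toy model. This is only an illustration under assumptions I'm making: that the filter learns exact hashes of flagged images and of their halves (a stand-in for whatever tiling/perceptual matching they actually run), and `NaiveFilter` / `side_by_side` are hypothetical names, not anyone's real code.

```python
import hashlib

def tile_hash(tile):
    """Hash a tile (list of pixel rows) to a short digest."""
    data = bytes(px for row in tile for px in row)
    return hashlib.sha256(data).hexdigest()[:12]

def halves(img):
    """Split an image into its left and right halves."""
    w = len(img[0]) // 2
    return [row[:w] for row in img], [row[w:] for row in img]

class NaiveFilter:
    """Toy stand-in for the AI filter: learns exact hashes of
    flagged images and of their left/right halves."""
    def __init__(self):
        self.banned = set()

    def learn(self, img):
        left, right = halves(img)
        self.banned |= {tile_hash(img), tile_hash(left), tile_hash(right)}

    def blocks(self, img):
        left, right = halves(img)
        return bool({tile_hash(img), tile_hash(left), tile_hash(right)}
                    & self.banned)

def side_by_side(a, b):
    """Make the side-by-side composite: each row of a joined to b."""
    return [ra + rb for ra, rb in zip(a, b)]

meme = [[1] * 4 for _ in range(4)]  # stand-in for a meme already filtered
flag = [[2] * 4 for _ in range(4)]  # stand-in for the unintended target

f = NaiveFilter()
f.learn(meme)                    # the meme is already in the filter
bait = side_by_side(meme, flag)  # half meme, half flag
f.learn(bait)                    # AI detects the meme half, learns the bait
print(f.blocks(flag))            # the flag alone is now blocked: True
```

The point of the toy: once the filter learns sub-regions of the bait, the half that was never a meme rides along into the ban set.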

Anonymous ID: 000000 Sept. 10, 2020, 7:47 p.m. No.10598847

>>10598827

>"hide" the meme in a larger picture frame that is beneficial for us to get banned (Dem/Biden2020 electoral pictures found on Twitter),

Emphasis on beneficial for us: we can get them to ban anything we want now. The reason they use AI is that they don't have the labor to do it by hand; if we poison their filter, they have to go back to doing it by hand.

Anonymous ID: 000000 Sept. 10, 2020, 7:56 p.m. No.10598921

Q, what happens if they flood the election with 200% fake mail-in votes for Trump and then declare the election fake and void?

 

They'll pull something like that once they realize the margin is too wide to give fake votes to Biden.

Anonymous ID: 000000 Sept. 10, 2020, 8:14 p.m. No.10599064   >>9189

>>10599043

They will eventually have to resort to just blanket-banning all offenders. They don't have the labor to review all images by hand, which is why they had to deploy the software in the first place. In this scenario the attacker always has the advantage because the defender is always reacting; they have to use the nuclear option eventually.

 

We win no matter what.

Anonymous ID: 000000 Sept. 10, 2020, 8:20 p.m. No.10599126   >>9184 >>9251

>>10597597

 

Tips for creating an anti-sniffer image modification web service that burns the sniffer's NeuroNet cycles.

Let anons upload an image, then spit out the modified version.

 

  1. Cover corners randomly with random shapes, 20% to 40% of image content. (They'll adapt to a fixed 20%, so make it random.)

  2. Auto-add rotation/skew to the image, a random amount each time.

  3. Mirror/Triple/Quad image grid: copy the same image next to itself. This forces the sniffer to match not only the whole image, but also 1/2, 1/3, or 1/4 of it.

  4. Randomly merge a portion of another relevant/irrelevant image onto the side.

  5. Use a broken picture frame approach to defeat their centralized picture frame approach: removing 1, 2, or 3 sides of the border makes it difficult for sniffers to find the true center.
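Two of the steps above (the random corner cover and the mirror grid) can be sketched on plain pixel grids. This is a minimal sketch, not the proposed web service: I'm assuming grayscale images as lists of rows of ints, and the function names `cover_corner` and `mirror_grid` are mine. A real service would do the same with an imaging library.

```python
import random

def cover_corner(img, rng, frac_min=0.2, frac_max=0.4):
    """Step 1: paint a randomly sized noise block (20-40% per side)
    over a randomly chosen corner, so the coverage can't be adapted to."""
    h, w = len(img), len(img[0])
    bh = max(1, int(h * rng.uniform(frac_min, frac_max)))
    bw = max(1, int(w * rng.uniform(frac_min, frac_max)))
    out = [row[:] for row in img]
    rows = range(bh) if rng.random() < 0.5 else range(h - bh, h)
    cols = range(bw) if rng.random() < 0.5 else range(w - bw, w)
    for r in rows:
        for c in cols:
            out[r][c] = rng.randrange(256)  # random noise pixel
    return out

def mirror_grid(img):
    """Step 3: place the image next to its mirror image, so a matcher
    must test sub-regions (1/2 the frame) instead of the whole image."""
    return [row + row[::-1] for row in img]

rng = random.Random(1)  # seeded only so this demo is reproducible
img = [[100] * 10 for _ in range(10)]
noisy = cover_corner(img, rng)
tiled = mirror_grid(noisy)
print(len(tiled), len(tiled[0]))  # 10 20
```

Randomizing both the block size and the corner each run is the point: a fixed 20% crop of a fixed corner is something the sniffer can learn around; a random one is not.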

 

TBC

Anonymous ID: 000000 Sept. 10, 2020, 8:36 p.m. No.10599298

>>10599251

 

It's good.

 

The key is to make it so the sniffer can't find the pattern in the center on the first scan; force them to scan it in multiple positions by making it off-center.

 

Then play with the zoom/scale a bit so they'll waste another scan at a different scale.

 

They're scanning billions of images per second; they can't afford to waste double the resources on billions of images just to detect your modified image.
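The cost argument can be made concrete with a toy average-hash matcher. Everything here is an assumption for illustration: real filters use more robust perceptual hashes, and `avg_hash`, `paste`, and `center_crop` are hypothetical helpers. The sketch shows that a single centered scan misses an off-center pattern, and that catching it anyway requires sliding a window over every offset, multiplying the scan count before scale variation is even considered.

```python
def avg_hash(img):
    """Toy perceptual hash: one bit per pixel, above/below the mean."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return tuple(int(p > mean) for p in flat)

def paste(ch, cw, img, top, left, bg=0):
    """Place img on a ch x cw background at (top, left)."""
    out = [[bg] * cw for _ in range(ch)]
    for r, row in enumerate(img):
        for c, p in enumerate(row):
            out[top + r][left + c] = p
    return out

def center_crop(img, ch, cw):
    """Cut the central ch x cw window out of img."""
    t = (len(img) - ch) // 2
    l = (len(img[0]) - cw) // 2
    return [row[l:l + cw] for row in img[t:t + ch]]

pattern = [[255, 255, 0, 0] for _ in range(4)]  # the banned pattern
target = avg_hash(pattern)

centered = paste(8, 8, pattern, 2, 2)    # pattern in the middle
off_center = paste(8, 8, pattern, 0, 0)  # pattern shoved to a corner

# one cheap centered scan catches the centered copy, not the shifted one
print(avg_hash(center_crop(centered, 4, 4)) == target)    # True
print(avg_hash(center_crop(off_center, 4, 4)) == target)  # False

# to catch it anyway, the sniffer must test every window offset:
# (8 - 4 + 1) ** 2 = 25 scans instead of 1, before even varying scale
windows = (8 - 4 + 1) ** 2
print(windows)  # 25
```

Each extra degree of freedom (offset, then scale) multiplies the number of windows the matcher has to hash, which is the resource drain the post describes.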