Anonymous ID: 000000 Sept. 11, 2020, 7:07 a.m. No.10602629   🗄️.is 🔗kun   >>2747

Unclassified 9/11 playbook from 13 March 1962: DoD on how to fool Americans into war

 

https://nsarchive2.gwu.edu//news/20010430/northwoods.pdf

 

NORTHWOODS

 

A report on the above subject is submitted for consideration by the Joint Chiefs of Staff.

 

Justification for US Military Intervention in Cuba

 

Page 10:

"It is possible to create an incident which will demonstrate convincingly that a cuban aircraft has attacked and shot down a chartered civil airliner"

 

"An aircraft at Eglin AFB would be painted and numbered as an exact duplicate for a civil registered aircraft belonging to a CIA proprietary organization in the Miami area. At a designated time the duplicate would be substituted for the actual civil aircraft and would be loaded with the selected passengerse, all boarded under carefully prepared aliases. The actual registered aircraft would be converted to a drone."

 

Found on jimstone.is

Anonymous ID: 000000 Sept. 11, 2020, 7:22 a.m. No.10602728   🗄️.is 🔗kun   >>2732

from earlier

 

> computer vision relies on something called "feature extraction" (look it up)

> mess this up and you break the preconditions for the "AI" to compare images

> image similarity is also usually based on a "distance" computation between vectors of "features"

> like Euclidean distance in 2D or 3D, but with many more dimensions - say 32 or more evenly spaced points through the image

> mess up the ability to extract features and populate these dimensions in the vector, and detection is impossible

> for example, noise can be added to an image, or better yet, features can be crafted to confuse the feature extraction process

> everyone has seen the picture where, even to a human, blueberry muffins look almost identical to chihuahuas

> look up feature extraction - there are a lot of techniques to foil it

> look up adversarial attacks on image detection - there are even ways to apply the diff between two different images A and B to make the AI think B is A

> this works for detection too, or for anything where a feature can be defined and extracted

> ImageMagick, GIMP, MS Paint, Photoshop, etc. can be used to foil detection: blur, add noise, distort

> there even exist some Excel macros that make this easy for you and normies without relying on an online service

> there should probably be a separate thread on the board for adversarial image manipulation techniques to cover this

> and don't forget there are real people that review a lot of the images and videos, so even if your artwork gets flagged for human review, you have a chance to wake some tool up on the other end of the ether

> godspeed anons, love you all
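The mechanism the quote describes - evenly spaced samples as a feature vector, Euclidean distance as the similarity measure, noise to break the match - can be sketched in a few lines. The 32-dimension vector, the fake 1D pixel array, and the noise range are all assumptions made up for illustration, not any real detector's internals.

```python
import math
import random

def extract_features(pixels, dims=32):
    """Sample `dims` evenly spaced pixel values as a crude feature vector."""
    step = len(pixels) / dims
    return [pixels[int(i * step)] for i in range(dims)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

random.seed(0)
image = [random.randint(0, 255) for _ in range(1024)]  # stand-in 1D "image"
noisy = [p + random.randint(-40, 40) for p in image]   # added noise

orig = extract_features(image)
self_dist = distance(orig, extract_features(image))    # 0.0: identical image
noisy_dist = distance(orig, extract_features(noisy))   # large: features broken
print(self_dist, noisy_dist)
```

If `noisy_dist` crosses whatever match threshold the detector uses, the images no longer "match" - which is all the quote is claiming.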

Anonymous ID: 000000 Sept. 11, 2020, 7:44 a.m. No.10602907   🗄️.is 🔗kun   >>2920 >>2926 >>3017 >>3087 >>3167 >>3308 >>3359 >>3422 >>3428

>>10602705

>>10602710

 

More tips on how to defeat their image matching algo.

 

According to Q, DARPA's algorithm currently analyzes 32 structural points on the image.

 

Let's make a few assumptions here:

 

It could mean something simple, such as:

 

The 32 structural points come from an image evenly divided into 32 parts; to trigger a 20% mismatch, you need 32 * 0.2 = 6.4, so at least 7 mismatched sections.
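A quick sanity check on that arithmetic (the 32-part split and the 20% trigger are this post's assumptions, not a known spec):

```python
import math

TOTAL_POINTS = 32         # assumed: image split evenly into 32 sections
MISMATCH_TRIGGER = 0.20   # assumed: 20% of sections must differ

# 32 * 0.2 = 6.4, and you can only alter whole sections,
# so 7 sections are needed to get above the trigger.
sections_needed = math.ceil(TOTAL_POINTS * MISMATCH_TRIGGER)
print(sections_needed)  # 7
```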

 

Or it may be something more complex, such as:

 

32 structural points come from 1 + 2 + 4 + 9 + 16, where:

 

1 = a whole-image structural match.

2 = a half-image structural match (split image into 2).

4 = split image into a 2x2 grid and compare each piece.

9 = split image into a 3x3 grid and compare each piece.

16 = split image into a 4x4 grid and compare each piece.

 

Together they form 32 data points.

 

By distorting the 4 corners with small boxes, you get:

4 out of 16 mismatches on the 4x4 grid.

4 out of 9 mismatches on the 3x3 grid.

The whole image, the half images, and the 2x2 grid do not have enough difference to trigger a mismatch, because the boxes cover too small a fraction of those larger sections.

In this case you get 8 out of 32 structural points mismatched.

A good 25%, above the 20% trigger, but it'll be defeated in a few days.
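The corner-box count above can be checked with a toy model. A cell counts as a mismatch when the altered fraction of its area crosses a per-cell threshold; the box size (3% of the image each) and the 0.2 per-cell threshold are made-up numbers chosen to reproduce the 8/32 estimate, not anything known about the real matcher.

```python
# Hypothetical model of the speculated 1 + 2 + 4 + 9 + 16 = 32 scheme.
# For each grid: how many cells contain a corner of the image, and how
# many of the 4 corner boxes land in each such cell.
GRIDS = {
    1:  (1, 4),   # whole image: one "cell" containing all 4 boxes
    2:  (2, 2),   # halves: each half contains 2 corner boxes
    4:  (4, 1),   # 2x2: every cell is a corner cell, 1 box each
    9:  (4, 1),   # 3x3: 4 corner cells, 1 box each
    16: (4, 1),   # 4x4: 4 corner cells, 1 box each
}

def mismatched_points(box_frac, cell_threshold=0.2):
    """Count structural points (cells) whose altered area fraction exceeds
    cell_threshold, given 4 corner boxes each covering box_frac of the
    whole image."""
    total = 0
    for cells, (corner_cells, boxes_per_cell) in GRIDS.items():
        cell_area = 1.0 / cells
        altered_frac = boxes_per_cell * box_frac / cell_area
        if altered_frac > cell_threshold:
            total += corner_cells
    return total

hits = mismatched_points(box_frac=0.03)
print(hits, hits / 32)  # 8 cells -> 25%, matching the estimate above
```

With these numbers only the 3x3 and 4x4 cells cross the per-cell threshold (each corner box is a larger fraction of a small cell than of a big one), which is exactly the intuition in the post.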

 

Generally you compare more than just patterns; you compare colors as well. But Q specifically mentioned "identifying structural points", so I am going with that.
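The color side of such a comparison can be as simple as a histogram distance. Here is a grayscale stand-in (bin count and pixel values are arbitrary, chosen only to illustrate the idea):

```python
def color_histogram(pixels, bins=8):
    """Bucket 0-255 pixel values into `bins` counts (grayscale stand-in
    for a per-channel color histogram)."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    return hist

def histogram_diff(h1, h2):
    """L1 distance between two histograms; 0 means identical distributions."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

light = [10, 20, 30, 240]  # one bright pixel
dark = [10, 20, 30, 15]    # all dark pixels
print(histogram_diff(color_histogram(light), color_histogram(dark)))
```

Shifting an image's overall brightness or tint moves pixels between bins and raises this distance, which is why color tweaks are another way to add mismatch on top of structural changes.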

 

TBC

Anonymous ID: 000000 Sept. 11, 2020, 8 a.m. No.10603087   🗄️.is 🔗kun   >>3291

>>10602907

 

Follow up:

 

In practice the algo is not always that exact.

 

For example, when you split an image into a 3x3 grid, you can allow the sections to overlap each other, so that each section is wider than 1/3 of the whole image.
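One way to implement that overlap along a single axis (the 25% padding figure is an arbitrary assumption; the real overlap, if any, is unknown):

```python
def overlapping_sections(width, n=3, overlap=0.25):
    """Split [0, width) into n sections, each widened by `overlap` of the
    base section width on both sides, clamped to the image edges."""
    base = width / n
    pad = base * overlap
    sections = []
    for i in range(n):
        start = max(0.0, i * base - pad)
        end = min(float(width), (i + 1) * base + pad)
        sections.append((start, end))
    return sections

print(overlapping_sections(300))
# middle section spans 75..225 (150 wide) instead of 100..200 (100 wide)
```

The overlap makes each section's features less sensitive to exactly where a small edit lands, which is presumably why a matcher would do it.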

 

Also, 1 + 2 + 4 + 8 + 16 gives 31; the last 1/32 structural point could just be another whole image with the border cut off (maybe this is what Q referred to as a 'centralized' picture frame).

 

Or the last 1/32 could be a circle in the center, or a diagonal strip; use your imagination.

 

You just keep tweaking the algo, let the AI do its thing for hours, then compare the results.

 

Different algos work differently on different image sets; there is no fixed formula for all.