Anonymous ID: 0016c5 July 31, 2018, 9:42 p.m. No.2384816   🗄️.is 🔗kun   >>5149 >>1957

>>2374957

We might be able to put the GPU to some use. The decoding part obviously has too much conditional branching for the GPU to be of any use there, but the Permutation generation step is highly linear and should be well suited to parallelization. It could be sent prospective passwords and a size N, and send back the resulting arrays. However, it would be memory bound, and the huge bandwidth required to send those arrays back to main memory might be an issue.
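To put rough numbers on it: assuming a permutation size of around a million and 32-bit indices, each returned array is on the order of 4 MB, so both the card memory tied up per in-flight password and the traffic coming back over the bus per candidate are measured in megabytes.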

I found the source for all the parts of SecureRandom and plan on making a perfect replica of it in C as a stepping stone to a possible GPU implementation. That is extremely ambitious for someone with my coding skill level. But I can do it… eventually.
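In the meantime, here is a rough, self-contained Java sketch of the plumbing I mean: seed SecureRandom with the password bytes and fold its output bytes into the bounded integers that getNextValue hands to the shuffle. The byte-combining and the choice of underlying algorithm are my reading of the source, so treat the details as approximate.

[code]
import java.security.SecureRandom;

// Rough stand-in for F5's F5Random wrapper (details approximate).
public class F5RandomSketch {

    private final SecureRandom random;
    private final byte[] buf = new byte[1];

    public F5RandomSketch(byte[] password) {
        // Seeding with the password is what ties the whole permutation
        // to the password under test. NOTE: which SHA-based algorithm the
        // default SecureRandom picks is provider-dependent; a faithful C
        // replica has to match whatever f5.jar actually constructs.
        random = new SecureRandom(password);
    }

    private int nextByte() {
        random.nextBytes(buf);
        return buf[0];
    }

    // Next pseudorandom value in [0, maxValue), built from four PRNG bytes.
    public int getNextValue(int maxValue) {
        int retVal = nextByte() | nextByte() << 8 | nextByte() << 16 | nextByte() << 24;
        retVal %= maxValue;
        if (retVal < 0) {
            retVal += maxValue;
        }
        return retVal;
    }

    public static void main(String[] args) {
        F5RandomSketch r = new F5RandomSketch("abc123".getBytes());
        System.out.println(r.getNextValue(1000000)); // one draw, just to show it runs
    }
}
[/code]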

Anonymous ID: 0016c5 July 31, 2018, 10:14 p.m. No.2385219   🗄️.is 🔗kun   >>5265 >>5325

>>2385149

The Huffman decoding part is a non-issue. You only need to do that once for an unlimited number of password attempts.

It's calling the SHA-based pseudorandom number generator a million times in series (it can't be parallelized) to decide which integers to shuffle around that takes most of the work.
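Schematically, the work splits like this (a structural sketch only; the coefficient decode and the candidate list are stand-ins I made up):

[code]
import java.security.SecureRandom;
import java.util.List;

// Structural sketch: Huffman decode once, then per-password serial work.
public class AttackShape {

    // Stand-in for the one-time Huffman decode of the JPEG coefficients.
    static int[] decodeCoefficientsOnce() {
        return new int[1_000_000]; // 'size' is typically around a million
    }

    // The expensive serial part: roughly one PRNG draw per coefficient,
    // each draw depending on the state left by the previous one, so the
    // ~1M iterations for a single password cannot be spread across threads.
    static void serialPermutationWork(byte[] password, int size) {
        SecureRandom rng = new SecureRandom(password);
        byte[] b = new byte[4];
        for (int i = 0; i < size; i++) {
            rng.nextBytes(b); // stands in for random.getNextValue(maxRandom--)
        }
    }

    public static void main(String[] args) {
        int[] coefficients = decodeCoefficientsOnce();            // done once
        List<String> candidates = List.of("password", "hunter2"); // made-up guesses
        // Different passwords are independent of each other, so this outer
        // level is the only place where parallelism buys anything.
        candidates.parallelStream().forEach(pw ->
            serialPermutationWork(pw.getBytes(), coefficients.length));
    }
}
[/code]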

Anonymous ID: 0016c5 July 31, 2018, 10:42 p.m. No.2385604   🗄️.is 🔗kun   >>5732

>>2385325

Hashcat is doing something totally different. It's trying to find the passwords that produced a set of hashes, and it does that by hashing lots of trial passwords, each just once, in parallel. We need to take one password, use it to set the state of the SHA algo, and then cycle the output back in many, many times. This is an unavoidably serial process. If I do go down this rabbit hole it will probably involve reading the Hashcat code as a way of learning how CPU<->GPU coding works. I might even use some parts from it. But beyond that, programs like Hashcat and John the Ripper are not useful to us.

Anonymous ID: 0016c5 July 31, 2018, 11:12 p.m. No.2386063   🗄️.is 🔗kun   >>6109 >>6525

>>2385732

We are not really looking for one target hash. It would be nice if it were that simple. Here is the annoying chunk of code in question. 'random.getNextValue' calls 'SecureRandom', which was previously seeded using the password under test. Inside 'SecureRandom' there is a SHA hash function at the heart of it. 'size' is typically around a million.

[code]
public Permutation(int size, F5Random random) {
    int i, randomIndex, tmp;
    shuffled = new int[size];

    // To create the shuffled sequence, we initialise an array
    // with the integers 0 … (size-1).
    for (i = 0; i < size; i++) // initialise with size integers
        shuffled[i] = i;

    int maxRandom = size; // set number of entries to shuffle
    for (i = 0; i < size; i++) { // shuffle entries
        randomIndex = random.getNextValue(maxRandom--);
        tmp = shuffled[randomIndex];
        shuffled[randomIndex] = shuffled[maxRandom];
        shuffled[maxRandom] = tmp;
    }
}
[/code]

It's serial, and it's memory intensive, but at least there is little conditional branching (which GPUs suck at). The memory side means this would use all of the GPU's RAM long before you got enough processes in parallel to use all of its computing power. It can't hurt to have a few hundred more cores helping the main CPU (as long as there are no memory bandwidth issues), but we're not going to get the same astronomical performance boost that Hashcat gets.

Anonymous ID: 0016c5 July 31, 2018, 11:15 p.m. No.2386109   🗄️.is 🔗kun

>>2386063

Oops, forgot the / on the closing tag. Here is the shuffle loop again:

[code]
for (i = 0; i < size; i++) { // shuffle entries
    randomIndex = random.getNextValue(maxRandom--);
    tmp = shuffled[randomIndex];
    shuffled[randomIndex] = shuffled[maxRandom];
    shuffled[maxRandom] = tmp;
}
[/code]

Anonymous ID: 0016c5 July 31, 2018, 11:50 p.m. No.2386742   🗄️.is 🔗kun   >>6850

>>2386525

It's the size of the DCT coefficient list, which works out to be the same as the number of pixels * channels (RGB). But, practically, yes. Many of the images are larger than that one.
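(For example, a hypothetical 640x520 RGB image gives 640 * 520 * 3 = 998,400 entries, i.e. about a million.)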

>With my lame 1.5GB graphics card that's still almost 5K potential instances

Indeed. I just need to work out how it will handle all the out-of-order loading and storing.

Anonymous ID: 0016c5 Aug. 1, 2018, 12:08 a.m. No.2386977   🗄️.is 🔗kun   >>7120 >>7183

>>2386850

Uh-huh. That is why I'm currently reading up on GPU programming.

The stumbling block I foresee is that there is a lot of random accessing going on, after very short work segments, with very short arrays. This is really not what GPUs are good at.

 

Disclaimer: I have no experience with this kind of stuff and I'm mostly just talking out my ass. So if anyone who has ever done anything in CUDA or OpenCL would like to weigh in it would be much appreciated.

Anonymous ID: 0016c5 Aug. 1, 2018, 10:06 a.m. No.2391741   🗄️.is 🔗kun   >>2853 >>5061

>>2391530

I picked a random file and tried generating a 4-letter list using only the characters in the file's filename. Nothing.

But it occurred to me last night that it was a 13-char filename. If it was done by shuffling the filename somehow then I'd be looking for a 5-char key, and I don't have the horsepower to attack that in a reasonable time. So when I get home today I'm gonna write a filter that reduces the set to only those candidates that use any single char no more than the number of times it appears in the source filename, unless you want to try it first. If you do, let me know so I don't reinvent a bad wheel.
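Roughly what I have in mind for the filter (a quick sketch; the filename and candidates below are just placeholders):

[code]
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the candidate filter: keep only strings whose characters are
// a sub-multiset of the characters in the source filename.
public class FilenameCharFilter {

    static Map<Character, Integer> charCounts(String s) {
        Map<Character, Integer> counts = new HashMap<>();
        for (char c : s.toCharArray()) {
            counts.merge(c, 1, Integer::sum);
        }
        return counts;
    }

    // True if 'candidate' uses no character more times than the budget allows.
    static boolean fitsWithin(String candidate, Map<Character, Integer> budget) {
        Map<Character, Integer> used = charCounts(candidate);
        for (Map.Entry<Character, Integer> e : used.entrySet()) {
            if (e.getValue() > budget.getOrDefault(e.getKey(), 0)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Placeholder 13-character filename and a few made-up candidates.
        Map<Character, Integer> budget = charCounts("a1b2c3d4e5f6g");
        List<String> candidates = List.of("abc12", "aa123", "xyz12");
        candidates.stream()
                  .filter(c -> fitsWithin(c, budget))
                  .forEach(System.out::println); // prints only "abc12"
    }
}
[/code]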

Anonymous ID: 0016c5 Aug. 2, 2018, 12:25 a.m. No.2407861   🗄️.is 🔗kun   >>8637

Wait a second… files that I uploaded yesterday that were encoded with PK are no longer so.

 

Check 'em. Their sha256 hashes no longer match their sha256 filenames. CodeMonkey must have heard about what we've discovered and not liked that his site is being used for such purposes.
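If anyone wants to check for themselves, comparing a file's sha256 against its hash-named filename only takes a few lines (the path is a placeholder):

[code]
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

// Check whether a downloaded image's sha256 still matches its filename.
public class HashCheck {
    public static void main(String[] args) throws Exception {
        Path file = Paths.get("downloads/0a1b2c.jpeg"); // placeholder path
        byte[] digest = MessageDigest.getInstance("SHA-256")
                                     .digest(Files.readAllBytes(file));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        String stem = file.getFileName().toString().replaceAll("\\.[^.]+$", "");
        System.out.println(hex.toString().equalsIgnoreCase(stem) ? "still matches"
                                                                 : "re-encoded or altered");
    }
}
[/code]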

Anonymous ID: 0016c5 Aug. 2, 2018, 2:08 a.m. No.2408637   🗄️.is 🔗kun   >>9640

>>2407861

How much you wanna bet half-chan is doing the same thing? We shouldn't have announced our finds so publicly. Now we can't scrape pages to find more such images. That spoils all my fun.

I discovered this while testing a python script to scrape and quickly check all the images on a page. It detected 36 images on this page on one test and none on a subsequent test without changing anything in that section of code. They must be checking and reencoding old images when accessed.

 

Here is my code to scrape and scan chan- and forum-type sites (anything without fancy-shmancy frames or JS). It doesn't work on Pinterest, Instagram, Medium, etc.

I don't know what good it will do now that the word is out about how easy it is to find this kind of stegano. Dammit. If we find another way to detect such hidden messages, let's swap PGP keys and discuss it privately.

 

https://pastebin.com/yAFSVY86

Anonymous ID: 0016c5 Aug. 2, 2018, 4:54 a.m. No.2409560   🗄️.is 🔗kun   >>9701

>>2399125

It's not just the missing header. The first 139 bytes of nearly every file on Medium are identical.
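Easy enough for anyone to verify: read the first 139 bytes of any two of them and compare (the paths are placeholders):

[code]
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

// Compare the leading bytes of two Medium JPEGs to check the shared-prefix claim.
public class PrefixCompare {
    public static void main(String[] args) throws Exception {
        byte[] a = Files.readAllBytes(Paths.get("medium_a.jpeg")); // placeholder paths
        byte[] b = Files.readAllBytes(Paths.get("medium_b.jpeg"));
        int n = 139; // length of the prefix that keeps turning up identical
        boolean same = Arrays.equals(Arrays.copyOfRange(a, 0, n),
                                     Arrays.copyOfRange(b, 0, n));
        System.out.println(same ? "first " + n + " bytes identical" : "prefixes differ");
    }
}
[/code]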

The "James" that wrote the JPEG encoder in f5.jar and PK used to sell/license that same code. It may have found it way into the Medium back end. And it's conceivable that someone annoyed by the default comment that it normally produces got a little over zealous when they went in to shut-up that section and also commented out the JFIF part.

Alternatively, Medium is known to be bad-guy territory. Maybe they use stegano extensively themselves, or perhaps they know that PK images are easily recognizable and are intentionally sowing innocuous images with the same signature to create cover for people using PK.

Anonymous ID: 0016c5 Aug. 2, 2018, 10 a.m. No.2412561   🗄️.is 🔗kun   >>2677

>>2409701

Medium.com

 

One of the spoopy images we found on QResearch was traced back to here:

https://medium.com/pedophiles-about-pedophilia/you-say-potato-i-say-pedophile-5a9ad0ee0f99

Anonymous ID: 0016c5 Aug. 2, 2018, 10:37 p.m. No.2425587   🗄️.is 🔗kun   >>2888 >>9002

>>2419895

I think the Evil Eye one is a false positive. Steg detection works by finding what should be sharp lines and checking whether they are not. An image like this has no business ever being encoded as a JPEG; you get too much buzzing around the sharp edges.

I just manufactured a test image as close as I could to the a1 file, using a PNG of the same logo at high rez, GIMP, and quality 70. Stegdetect -t F gives me 1.711036. I think it's because of the very similar buzzing you see when you zoom in (use Pix, it doesn't smooth pixels).