Anonymous ID: 07564d March 1, 2018, 1:23 a.m. No.524371   🗄️.is 🔗kun   >>9568

>>494816

Ctrl-f is only good on a single thread. What researchers really need is a way to access the entire set of Q posts. I've built that capability for myself locally by parsing ctrl-s saves of the threads into a MySQL database and running SQL searches on that.
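
For anyone who wants to roll their own, the parsing side looks roughly like this (the XPath selector, connection details, and hard-coded keys are placeholders, not the boards' exact markup; the column names are from my own table):

  <?php
  // Rough sketch of the parser: load a ctrl-s saved thread and push posts into MySQL.
  $doc = new DOMDocument();
  libxml_use_internal_errors(true);               // saved chan pages are rarely valid HTML
  $doc->loadHTMLFile('saved_thread.html');
  $xpath = new DOMXPath($doc);
  $pdo = new PDO('mysql:host=localhost;dbname=qdb;charset=utf8', 'user', 'pass');
  $ins = $pdo->prepare('INSERT IGNORE INTO chan_posts
      (post_key, thread_key, post_site, post_board, post_thread_id, post_id, post_text)
      VALUES (?,?,?,?,?,?,?)');
  foreach ($xpath->query("//div[contains(@class,'post')]") as $node) {
      $postId = (int)preg_replace('/\D/', '', $node->getAttribute('id'));
      $html   = $doc->saveHTML($node);            // keep the post body as raw HTML
      // keys are really built from the file name / thread number; simplified here
      $ins->execute(["8ch/qresearch#$postId", '8ch/qresearch#12345', '8ch', 'qresearch', 12345, $postId, $html]);
  }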

 

The best bet for a public search engine might be to cooperate with CodeMonkey to build a search capability for the boards. We'd still have to search each board separately, but at least we would be able to search each board all at once.

 

I've got most of the Q related posts from 4chan and 8ch locally, but I'm not sure how to make that much data publicly available. I've also got a fair amount of PHP code that I use to access and organize the raw data. I'd be willing to share it if I had a place to do it.

Anonymous ID: 07564d March 1, 2018, 1:27 a.m. No.524384   🗄️.is 🔗kun

>>493751

Actually, I have had chan posts show up in browser search engine results, but I know this isn't what you're after. I've built the type of search capability you're after on my local machine. It still takes a lot of time to work with the posts, but it's definitely easier than anything we can do at the original sources.

Anonymous ID: 07564d March 1, 2018, 1:29 a.m. No.524395   🗄️.is 🔗kun

>>494228

Timeline is easily generated when one has the ability to set the post time to something other than the current time. That's how I create timeline posts in my own database.
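
In my table, that's nothing more exotic than supplying post_time yourself instead of letting it default, with post_type set to 'timeline' (values here are made up):

  INSERT INTO chan_posts (post_key, thread_key, post_site, post_board, post_thread_id, post_id, post_time, post_type, timeline_title, post_text)
  VALUES ('editor/editor#00000001', 'editor/editor#00000001', 'editor', 'editor', 1, 1, '2017-10-28 00:00:00', 'timeline', 'Example timeline entry', '...');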

Anonymous ID: 07564d March 1, 2018, 1:36 a.m. No.524431   🗄️.is 🔗kun   >>0474

>>494015

I definitely appreciate that notable posts are included in the breads on each thread. It isn't necessary for them to be updated on each and every thread, but it is good to have them updated at least every day. Right now, I'm using the links in the bread posts to mark posts in my private database as being included in the bread. Given the volume of posts that I am now working with, these links make it easier to determine what is important to include.

Anonymous ID: 07564d March 1, 2018, 1:51 a.m. No.524489   🗄️.is 🔗kun

>>494471

If you're lucky, you can find your archives on archive.org. That site saves pages with nearly the same HTML elements as the original page. Archive.is converts the classes used on the original page into their style equivalents, making for a parsing nightmare. When I've had to use the archive.is version of a page, it was a painstaking process to recreate the single post that I went to the archive to get. My parser code can parse the archive.org archives the same as the original, so it's easy to get all posts from that archive.

Anonymous ID: 07564d March 1, 2018, 1:58 a.m. No.524511   🗄️.is 🔗kun

>>495890

I've got tagging fields included in my data structure. Getting them filled is an entirely different matter. I've got a tool to help do it more efficiently than phpMyAdmin, but it needs a bit more work so that more than one post can be updated in one pass.

Anonymous ID: 07564d March 1, 2018, 10:22 p.m. No.530920   🗄️.is 🔗kun   >>0994

>>530474

yEd can produce maps from spreadsheet data. That's one I know of.

https:// www.yworks.com/products/yed

Maybe when I get further along in the post tagging work, it'll be useful.

 

I'm toying with the idea of making my raw data available in some way, possibly in read only format. (Clowns can be destructive.)

Anonymous ID: 07564d March 1, 2018, 10:42 p.m. No.530978   🗄️.is 🔗kun

>>525489

I would like to be able to allow others to tag posts in my database. Any ideas on how to keep clowns from shitting everything up?

 

My initial thought is to allow suggesting of tags (similar to comment logic in the blog) with moderators making final decisions on them.

Anonymous ID: 07564d March 1, 2018, 10:47 p.m. No.530994   🗄️.is 🔗kun   >>8787

>>530920

One of the big reasons I hesitate to make the entire database available is that a few of the images uploaded into the threads are obscene. I have no desire to inadvertently publish that sort of thing. When I'm publishing a reviewed subset, the chances of that happening are low.

Anonymous ID: 07564d March 2, 2018, 3:32 p.m. No.534887   🗄️.is 🔗kun   >>4908

>>532931

I'm working on that right now. I got started on this a week or so ago. I wrote a bit of code to travel back through context links, too. Hopefully, in a few days, I'll be able to repost my blog with the results of this work.

Anonymous ID: 07564d March 2, 2018, 3:37 p.m. No.534908   🗄️.is 🔗kun

>>534887

A bit more to say about that:

It's my plan to include items that reach back to a Q post together with that Q post when I can identify such. I may do a little pruning to keep the length of the entry associated with a Q post under control. Not everything in a context thread is important, after all. I may have to think about further arranging of things. I'll think more about that as I get closer to a point where I can implement such a strategy.

Anonymous ID: 07564d March 4, 2018, 9:10 p.m. No.554309   🗄️.is 🔗kun   >>4376 >>9900

>>553109

My images are kept as separate files in original form. Only the links are kept in the database. Here's the record definition for MySQL:

 

CREATE TABLE chan_posts (
  post_key varchar(31) NOT NULL COMMENT 'site/board#post (post is set to length 9 with . fill.',
  thread_key varchar(31) NOT NULL COMMENT 'site/board#thread (thread is set to length 9 with . fill.',
  post_site varchar(19) NOT NULL COMMENT 'For editor post, use editor. For spreadsheet, use sheet.',
  post_board varchar(15) NOT NULL COMMENT 'For editor post, use editor. For spreadsheet, use sheet.',
  post_thread_id int(10) UNSIGNED NOT NULL COMMENT 'For editor post, use 1. For spreadsheet, use row.',
  post_id int(10) UNSIGNED NOT NULL COMMENT 'For editor post, use next available. For spreadsheet, use column converted to number.',
  ghost int(10) UNSIGNED DEFAULT NULL,
  post_url text,
  local_thread_file text,
  post_time datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  post_title text CHARACTER SET utf8 COLLATE utf8_unicode_ci,
  post_thread_title text CHARACTER SET utf8 COLLATE utf8_unicode_ci,
  post_text text CHARACTER SET utf8 COLLATE utf8_unicode_ci,
  prev_post_key varchar(31) DEFAULT NULL,
  next_post_key varchar(31) DEFAULT NULL,
  wp_post_id int(11) UNSIGNED DEFAULT NULL,
  post_type set('editor','q-post','anon','approved','high','mid','low','irrelevant','timeline') NOT NULL DEFAULT 'anon',
  flag_use_in_blog tinyint(1) NOT NULL DEFAULT '0',
  flag_included_on_maps tinyint(1) NOT NULL DEFAULT '0',
  flag_included_in_bread tinyint(1) DEFAULT NULL,
  flag_bread_post tinyint(1) DEFAULT NULL,
  flag_relevant_img tinyint(1) DEFAULT NULL,
  flag_relevant_post tinyint(1) DEFAULT NULL,
  author_name text,
  author_trip text,
  author_hash text,
  author_type smallint(6) DEFAULT NULL,
  img_files json DEFAULT NULL,
  link_list json DEFAULT NULL,
  video_list json DEFAULT NULL,
  editor_notes text,
  tags text,
  people text,
  places text,
  organizations text,
  signatures text,
  event_date datetime DEFAULT NULL,
  report_date datetime DEFAULT NULL,
  timeline_title tinytext
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

ALTER TABLE chan_posts
  ADD PRIMARY KEY (post_key),
  ADD KEY post_id (post_id),
  ADD KEY thread_key (thread_key),
  ADD KEY site_board (post_site,post_board);

 

I'm considering making the database publicly available. I need to figure out how much space it will take up and whether it will fit within my current hosting plan. At present, I have over 880,000 posts in the database. The size of the database file for just this table, without the images, is 1.1GB. There's another GB of images, but that covers only the fraction of posts that are Q posts, bread posts, and the context posts related to these.
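
For anyone wanting to check sizes on their own copy, information_schema will give you a good idea:

  SELECT table_name, ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
  FROM information_schema.TABLES
  WHERE table_schema = DATABASE() AND table_name = 'chan_posts';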

Anonymous ID: 07564d March 4, 2018, 9:21 p.m. No.554376   🗄️.is 🔗kun

>>554309

I guess I should start uploading. I've got the unlimited plan. Anyone want to write the search feature for it? Preferred language is PHP.
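
To give an idea of what I'm asking for, here's the sort of skeleton I'm picturing (everything here beyond my table and columns is placeholder):

  <?php
  // Bare-bones search page: one term, LIKE match, crude paging.
  $pdo    = new PDO('mysql:host=localhost;dbname=qdb;charset=utf8', 'user', 'pass');
  $term   = $_GET['q'] ?? '';
  $page   = max(1, (int)($_GET['page'] ?? 1));
  $per    = 50;
  $offset = ($page - 1) * $per;
  $stmt = $pdo->prepare(
      "SELECT post_key, post_time, post_title, post_text
         FROM chan_posts
        WHERE post_text LIKE ?
        ORDER BY post_time
        LIMIT $per OFFSET $offset");        // both are ints built above, so safe to inline
  $stmt->execute(['%' . $term . '%']);
  foreach ($stmt as $row) {
      echo '<div class="post">' . $row['post_text'] . '</div>';   // post_text is stored as HTML
  }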

Anonymous ID: 07564d March 5, 2018, 11:23 p.m. No.564862   🗄️.is 🔗kun

I'm working on the export files now. I need to change the posts just a bit before I can make them public.

 

I promised that no links would go to 8ch, and particularly qresearch, and also that I would redact mentions of them from the content. I already do this on my blog, but I simply broke the links rather than making them go somewhere else. To get the most out of republishing the posts, I need to convert the >> and >>> links so that they link to posts stored on my own site. This is probably better anyway, since many posts and threads are now missing from their original locations.

Anonymous ID: 07564d March 6, 2018, 10:31 a.m. No.569170   🗄️.is 🔗kun   >>9329

>>568187

Yes, the breads are essential. I've got them going back all the way through 4chan stuff. The breads are how you connect in the answers. If you connect up the contexts, most of them link back to a Q post at some point. Then the context of that post that was linked into the bread can be associated with the Q post. That is what I was working on before I started looking at making my entire database available for research.

Anonymous ID: 07564d March 6, 2018, 11:48 a.m. No.569900   🗄️.is 🔗kun   >>0074

>>554309

I don't know if y'all noticed, but I've got several columns in my database that are not part of the original data. Some of these are tagging fields: tags, people, places, organizations, and signatures. It would be difficult to automate the filling of these fields, but I don't want to entirely open up editing of these fields to anons, either, due to the potential for clown interference. There's no way I can fill all of them in myself. My idea is to allow tags to be suggested, then allow up-voting and down-voting, with acceptance criteria to meet before giving them a permanent place in the data record. Or maybe just leave them in that form with their ratings.

Anonymous ID: 07564d March 6, 2018, 1:34 p.m. No.570766   🗄️.is 🔗kun

>>570074

I could develop an export, I suppose. But that's low on my list of priorities at the moment. The data structure is posted above in the thread. Minor alteration needed: my host does not support JSON fields. Substitute TEXT, and you should be good. If you want to write an exporter, I can review it and include it.

 

But I still don't have the data up there yet. I'm working on the alterations to the data needed to keep everything on site at the host.

Anonymous ID: 07564d March 6, 2018, 1:38 p.m. No.570809   🗄️.is 🔗kun   >>0944

>>570074

I was thinking of attaching the IP address to each suggestion to keep the up-votes and down-votes honest. Is that enough? Or maybe even too much? The other thing I could do is perhaps tie in the WordPress login system, since it's there anyway. It might take a bit of time for me to figure out how to limit permissions.
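
Roughly what I have in mind for the suggestion side, none of which exists yet:

  CREATE TABLE tag_suggestions (
    suggestion_id int(10) UNSIGNED NOT NULL AUTO_INCREMENT,
    post_key varchar(31) NOT NULL,
    tag_field enum('tags','people','places','organizations','signatures') NOT NULL,
    tag_value text NOT NULL,
    suggested_ip varchar(45) DEFAULT NULL COMMENT 'IPv4/IPv6 of the suggester, to keep votes honest',
    votes_up int(10) UNSIGNED NOT NULL DEFAULT '0',
    votes_down int(10) UNSIGNED NOT NULL DEFAULT '0',
    status enum('pending','accepted','rejected') NOT NULL DEFAULT 'pending',
    PRIMARY KEY (suggestion_id),
    KEY post_key (post_key)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8;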

Anonymous ID: 07564d March 9, 2018, 4 p.m. No.605608   🗄️.is 🔗kun   >>5926

I'm stuck. I'm working on getting that database up for you, but I have to make some modifications to the post_text field so that those links don't come here to 8ch. (I promised that I wouldn't do that.) I'm trying to fix the post_text field so that the >> links refer back into the database, but I'm not familiar enough with the DOMDocument and related classes in PHP. Are there any good tutorials out there on how to do advanced manipulation of HTML using these classes? The reference manual stuff just isn't doing it for me.

Anonymous ID: 07564d March 9, 2018, 4:26 p.m. No.605926   🗄️.is 🔗kun

>>605608

I should clarify something. Not only am I going to make the existing links self-referencing, but I'm also going to revive those dead >> links and point them back into the database. I've got many of the deleted threads in my database, too, and I can make those available.

Anonymous ID: 07564d March 10, 2018, 10:26 a.m. No.613641   🗄️.is 🔗kun   >>3892

Good news! I've got the code working that makes the post links compliant and refer back into the database. Almost as soon as I posted the request, it came to me that I was making things more complicated than they needed to be, and a better algorithm came to mind. The algorithm is so good that in cases where post links didn't work on 8ch, they will be linked on my site. That includes links such as the one Q pasted into the middle of a word the other day, or ones that are consecutive with or without a comma or white space. Anywhere there is a >> followed by a bunch of digits, a link should be created. The only exception is where the post number of the link is greater than the post number of the current post; this type of error was encountered in early posts after the transition from one board to another. Anyway, I'm going to run a few more quick tests, and then I should be uploading to my host within a few hours. I still don't have code ready to search it, though.
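
The core of it boils down to one preg_replace_callback, roughly like this (simplified; the real version also strips the old anchor tags first, and the target URL format here is illustrative):

  <?php
  // Turn every >>12345 style reference in the stored HTML into a link back into
  // the database, skipping references that point forward to a higher post number.
  function fix_post_links($html, $currentPostId, $site, $board) {
      return preg_replace_callback('/&gt;&gt;(\d+)/', function ($m) use ($currentPostId, $site, $board) {
          $target = (int)$m[1];
          if ($target > $currentPostId) {
              return $m[0];                  // forward reference: leave the text as-is
          }
          $url = 'research-tool.php?post=' . urlencode("$site/$board#$target");
          return '<a href="' . $url . '">' . $m[0] . '</a>';
      }, $html);
  }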

Anonymous ID: 07564d March 10, 2018, 7:15 p.m. No.622768   🗄️.is 🔗kun   >>2903

>>620330

Part of making those offline archives is storing the items. Plus, don't assume any platform is forever. There are too many clowns out there who don't want anyone to see this stuff.

 

So now I've got a bunch of export files of my database ready to upload. Next challenge: Automating the import on the host.

Anonymous ID: 07564d March 10, 2018, 10:16 p.m. No.625024   🗄️.is 🔗kun   >>2885

The table of posts has been added to the database. It's all up there. (All I have, anyway.) I need to get a way to make searches available to you now.

Anonymous ID: 07564d March 13, 2018, 12:41 a.m. No.649300   🗄️.is 🔗kun   >>0810 >>3255

>>648594

It's up there. The paging isn't working yet, so don't anyone complain about that. I'll fix it in the morning. I also discovered that a key range of posts didn't import properly. I'll fix that in the morning, too. For now, I've set the posts per page to 2000, which may cause timeouts, but it will allow people to play with things a bit.

 

http:// q-questions.info/research-tool.php

Anonymous ID: 07564d March 13, 2018, 9:51 a.m. No.652644   🗄️.is 🔗kun   >>4567

>>650810

My algorithm for getting breads is this:

  1. Get the author_hash for the first post in a thread.

  2. Mark the first posts in the thread that match that author_hash until the author hash doesn't match.

 

If someone jumps in before the baker is done, oh well. But that shouldn't be much of a problem because the breads get repeated a lot. I can mark posts as bread later, if need be.
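
In code, that's about all there is to it (a sketch against my own table):

  <?php
  // Mark the baker's opening run of posts in one thread as bread.
  function mark_bread(PDO $pdo, $threadKey) {
      $sel = $pdo->prepare('SELECT post_key, author_hash FROM chan_posts
                             WHERE thread_key = ? ORDER BY post_id');
      $sel->execute([$threadKey]);
      $upd = $pdo->prepare('UPDATE chan_posts SET flag_bread_post = 1 WHERE post_key = ?');
      $bakerHash = null;
      foreach ($sel as $row) {
          if ($bakerHash === null) {
              $bakerHash = $row['author_hash'];    // 1. hash of the first post
          }
          if ($row['author_hash'] !== $bakerHash) {
              break;                               // 2. stop at the first non-baker post
          }
          $upd->execute([$row['post_key']]);
      }
  }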

Anonymous ID: 07564d March 13, 2018, 1:41 p.m. No.654852   🗄️.is 🔗kun

>>654567

I haven't even looked at that.

 

Paging is fixed, plus I gave you a couple other search parameters.

 

I'm still working on the import issue, but at least I've put up the posts I initially identified as missing.

Anonymous ID: 07564d March 13, 2018, 1:46 p.m. No.654901   🗄️.is 🔗kun

>>654567

>I think I see what you are doing - going thru and trying to mark the relevant posts?

 

Yes. Most of it is done automatically. Since I save the marks in the post records, I can go back in there and adjust it, if necessary.

Anonymous ID: 07564d March 14, 2018, 12:47 p.m. No.664904   🗄️.is 🔗kun

>>664801

Yes, I'm concerned about that, too.

Perhaps it helps that this data does not reside only there?

In this case, it would take me about half a day to get it all up there again, if need be.

Anonymous ID: 07564d March 14, 2018, 12:54 p.m. No.664960   🗄️.is 🔗kun   >>4984

I'm beginning to wonder if I'm up against some kind of limit on my remote host. I just tried importing into it again, and I'm still missing some posts.

Remote host: 1,010,127 records

Local machine: 1,049,610 records

Anonymous ID: 07564d March 14, 2018, 9:56 p.m. No.670142   🗄️.is 🔗kun   >>0304 >>0317

>>667648

The first time I uploaded, I batched them in by 1000.

The second time, I batched them in by thread. I'm not sure how well the LIMIT clause on the SQL works.

 

In any case, I may have a problem on both computers. I could have sworn I had over 1.1 million records the other night. (Not to worry. I still have all of the source.) The solution may be to partition the table. I won't have to rewrite any code, but it'll chunk the table's file down into smaller sections.

 

This should be interesting. I've never had to partition a table before. Apparently, newer versions of MySQL do it automatically. But until then, it's gotta be done.
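
If it comes to that, the statement itself is a one-liner, something like this (partition count pulled out of the air):

  ALTER TABLE chan_posts PARTITION BY KEY(post_key) PARTITIONS 16;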

Anonymous ID: 07564d March 14, 2018, 10:06 p.m. No.670221   🗄️.is 🔗kun   >>0279

>>666995

If threads are missing, you have to look in archive.org/web or archive.is. Of the two, archive.org/web is better for scraping because the HTML code is about as close to the original as they can make it. I can actually use the same scraper program on it.

 

Since the stuff that is on archive.is is so different from the original, I will need to write a new scraper for those. On several occasions, the post was important enough that I rebuilt it by hand.

 

With either archive, you need to know the URL, which can be tricky sometimes. Just having the post number won't do it. You must know the thread as well.

 

Just thought of something: When I get threads from these archive sites, what time zone do they show? I believe my stuff is saving to GMT when I save a post directly from a chan site. I'm not sure what I'm saving when I get posts from these archives.

Anonymous ID: 07564d March 14, 2018, 10:17 p.m. No.670322   🗄️.is 🔗kun

>>666995

Here's a hint for how to find the thread a dead post belongs to: Go to the earliest archive of the thread on which you found the link, which will usually be on the archive.is site. If you're lucky, the link was still live when the thread was archived. The other thing to do is search earlier posts that you already have to see if someone else linked the same post.

Anonymous ID: 07564d March 14, 2018, 10:25 p.m. No.670388   🗄️.is 🔗kun   >>0470

>>670289

I have the vast majority of both. Go check it out.

http:// q-questions.info/research-tool.php

After I resolve the table size problem (which is what I think the real problem is), I think it would be good to work some more on my contexting program. On my local computer, I've got it so that it can look back through the links and show all available context with the post. What I haven't done yet is copy that contexting information to a Q post's context when I find one in the backward linking. It'll be ridiculously easy once I set about doing it. Then, when a Q post is pulled up, all that stuff that linked back to it can show together with it.

Anonymous ID: 07564d March 14, 2018, 10:48 p.m. No.670571   🗄️.is 🔗kun

>>670421

That's just the general threads. When I started linking through the breads, I found that I needed many of the other threads, too. Most of those are smaller, though.

Anonymous ID: 07564d March 16, 2018, 5:03 p.m. No.690263   🗄️.is 🔗kun

>>672334

Limits depend on the operating system. I'm not sure how much I'll end up needing in the end. I've got some full page web captures in my system that may bump up the size needed fairly fast. So far, I haven't outgrown the 500GB on my home system. It's about half full now. But that also includes just about all of my software. I have other drives, so I'm not limited to that 500GB. (Recalling when a 60MB hard drive was a big deal…)

Anonymous ID: 07564d March 16, 2018, 5:08 p.m. No.690320   🗄️.is 🔗kun   >>8628

>>674321

Yeah, that would be cool to add to my system, too. I wonder where I should fit that into the task list. I've got to reparse anyway, so it has to be after that. (Backslashes weren't properly handled the first time around.) It was my plan to get to it eventually. So much to do! If you've got it in JSON files, I've got to believe it would be very easy to get them into my system.

Anonymous ID: 07564d March 16, 2018, 5:10 p.m. No.690349   🗄️.is 🔗kun   >>9043

>>677084

>https:// yuki.la/

The archive sites are only as good as whether they're actually saving our stuff. What's the hit rate finding stuff there?

 

I'm not sure, but I think archive.is and archive.fo may be the same system. Mirrors, perhaps?

Anonymous ID: 07564d March 17, 2018, 10:37 p.m. No.704953   🗄️.is 🔗kun

I got the problem with the backslashes fixed. Also, I changed the way I process emoji characters. There actually might be a few more posts that get parsed in during the reparsing. I am in the process of reprocessing everything now. This is going to take a while. I'll let you know when the uploads are done, which will probably be tomorrow afternoon.

Anonymous ID: 07564d March 19, 2018, 9:46 a.m. No.722324   🗄️.is 🔗kun

All records and images that I have should now be up on the research tool.

 

I thought my post count was short on the site last night, but using the following statement on both, they are equal:

SELECT COUNT(post_key) FROM chan_posts

 

Funny thing is that when I pull up the table in phpMyAdmin, the row count does not equal the answer to that query. It's short on both. Don't trust the row count in phpMyAdmin when you view a table. (For InnoDB tables, the row count shown there is only an estimate.)

 

Total number of posts in the research tool is:

1,113,968

 

Next up: Getting the POTUS tweets into the database.

 

http:// q-questions.info/research-tool.php

Anonymous ID: 07564d March 19, 2018, 12:48 p.m. No.724127   🗄️.is 🔗kun   >>4330

>>724053

I've thought to do it. The tagging feature can get us there. The problem is that tagging posts is a lot of work. I need to find a way to get others to help with that without compromising the database.

Anonymous ID: 07564d March 20, 2018, 6:22 p.m. No.737563   🗄️.is 🔗kun   >>7661 >>9099 >>1567

>>734330

>We need to know who's putting out the scripts ("dark") and who's repeating the scripts ("""journalists""" that articles with "dark" are attributed to, shitter users with "dark" in their tweets, etc)

 

You can search the word "dark" in my database as it is right now. If that word was used in chan discussions (and it was), you can get results for it. Is there something you think we need to add? Do you have an idea for an algorithm based on what we have?

 

Right now, though, I changed my mind about what to do next. I want to get the contexting code finished. When I've used my personal version of it, I learned quite a lot.

 

After that, I will work on getting the tweets in there. If anyone can point me to php code for that, it would be appreciated. I'm not talking about chan posts that link them, but rather the tweets themselves.

Anonymous ID: 07564d March 20, 2018, 6:29 p.m. No.737661   🗄️.is 🔗kun   >>1567

>>737563

I've got a suggestion for the search: enter the following in the text field:

dark%http

and also in a separate search

http%dark

 

Those should find posts that use the word "dark" and include a link. I don't know how to do this better with what I have without doing some extensive programming.
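
Under the hood those searches just turn into LIKE patterns, roughly:

  SELECT post_key, post_time, post_text FROM chan_posts WHERE post_text LIKE '%dark%http%';
  SELECT post_key, post_time, post_text FROM chan_posts WHERE post_text LIKE '%http%dark%';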

Anonymous ID: 07564d March 22, 2018, 10:33 p.m. No.763341   🗄️.is 🔗kun

>>734330

I think that's beyond the scope of what I'm doing. Hopefully, there will be enough here that what I have can help you do that research, especially after I finish the contexting work. Right now, I've had to reparse the database yet again to correct image links. I hope I've finally gotten it right because it takes an entire day to cycle through the entire set.

Anonymous ID: 07564d March 23, 2018, 6:07 p.m. No.773397   🗄️.is 🔗kun   >>4681

>>771168

>!xowAT4Z3VQ

Thank you for the heads up. I've made the change in my code, too.

 

The export/import finally looks like it's ok. Please let me know if you run into issues.

 

I'm going to be pulling out the post range and thread range options from the form. They unnecessarily complicate things now that I've added date range capability.

 

I'm moving on to contexting now. Y'all are going to love that feature.

Anonymous ID: 07564d March 23, 2018, 9:59 p.m. No.775587   🗄️.is 🔗kun   >>1191

>>774681

>qanon.news/smashposts.html

It looks good so far. One thing, though: you need to save the images. You're linking directly to the 8ch images, and those have a tendency to go missing.

Anonymous ID: 07564d March 24, 2018, 4:14 p.m. No.782643   🗄️.is 🔗kun   >>1554

>>781191

But you're not saving more than the Q posts, right? There aren't that many Q posts, and he hasn't posted that many images. But if you're trying to save the entire thing, yes, it's really big and grows really fast. I'm not automatically saving the full size images, and there's still quite a lot in my set.

Anonymous ID: 07564d March 26, 2018, 5:45 p.m. No.803966   🗄️.is 🔗kun

>>803653

The search engine on the Research Tool works well. Try searching VJ, too.

http:// q-questions.info/research-tool.php

Anonymous ID: 07564d April 3, 2018, 9:14 a.m. No.879844   🗄️.is 🔗kun

>>838965

The Research Tool is back up with a more concise data set. Much will be added in the next several days as I return to development of the contexting feature.

http:// q-questions.info/research-tool.php

Anonymous ID: 07564d April 4, 2018, 1:22 a.m. No.890080   🗄️.is 🔗kun   >>1187

>>887653

If I can figure out how to import the twitter posts WITH the images, getting a timeline into the Research Tool system is a no-brainer. The JSON someone directed me to does not appear to have the image links, unfortunately. The images are essential to some of the tweets.

 

The plan is for POTUS to have his own post type. Then all one need do is select both q-post and potus posts in the same search, and they'll be displayed properly interleaved.
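
In other words, once a 'potus' value is added to the post_type set, the interleaved view is just:

  SELECT * FROM chan_posts WHERE post_type IN ('q-post', 'potus') ORDER BY post_time;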

Anonymous ID: 07564d April 4, 2018, 8:27 a.m. No.891871   🗄️.is 🔗kun   >>2076

>>891187

OK. I guess I'll have to take another look at it. Right now, though, my priority is to get the contexting feature working. I do wish there were a way to safely hand off some of the work on the site I'm putting together. There's so much to do! But I have no idea how to know whether to trust someone. Clowns will be clowns.

Anonymous ID: 07564d April 4, 2018, 10:48 a.m. No.892772   🗄️.is 🔗kun   >>2975

>>892076

>It takes a minute to make sense of the somewhat sizable JSON data and then render the display.

 

I just have to make sense of a few of them. Then I can come up with an algorithm to parse them into the structures I already have developed. My site is quite capable of handling multiple sources (chan, tweet, other posts) if I can do that much.

Anonymous ID: 07564d April 4, 2018, 11:35 a.m. No.893190   🗄️.is 🔗kun   >>3234

I decided to see if I could find some hidden Q:

 

SELECT * FROM chan_posts WHERE post_type != "q-post" AND author_hash IN (SELECT author_hash FROM chan_posts WHERE post_type = "q-post")

 

This statement found 718 of them I hadn't identified.

Anonymous ID: 07564d April 4, 2018, 11:50 a.m. No.893321   🗄️.is 🔗kun   >>3483

>>893234

Figured out quickly that I had to add a couple additional checks.

 

SELECT * FROM chan_posts WHERE post_type != "q-post" AND author_hash IS NOT NULL AND LENGTH(author_hash) > 0 AND author_hash IN (SELECT author_hash FROM chan_posts WHERE post_type = "q-post")

 

Still came up with 120. Perhaps a couple of them were misidentified as Q in the first place?

Anonymous ID: 07564d April 4, 2018, 12:52 p.m. No.893908   🗄️.is 🔗kun   >>4169 >>4303

>>893483

At least one of the ones I had identified as Q, maybe 2, had been mislabeled. Plus, a known impostor got tagged as Q. Not sure how that happened. I'll have to fix it. But a few other interesting ones popped up.

 

I made one of my editor features available to you so that you can have a look. On the search form, go to the bottom and check the "In processing list:" box. Leave the rest blank, and you can have a look for yourself.

http:// q-questions.info/research-tool.php

Anonymous ID: 07564d April 4, 2018, 5:56 p.m. No.898657   🗄️.is 🔗kun   >>0583 >>4541

>>894345

 

There are more of them than you're seeing, actually. I've just discovered that I'm still having issues with the import/export process. Not everything I've set to export is getting up there. I'll have to run that to ground tonight and fix it. I thought I had that worked out already. When I was still thread-based, everything I was exporting from the home machine was importing just fine into the online machine. But I guess I changed the logic somehow when I went from thread-based to post-based. (It can sometimes actually be more difficult to change a program than to write it for the first time.) At the moment, some of what I've said below may not be visible. But sometime tonight, it should all be there.

 

>ID:RrydKbi3

He responded to Q. That's it.

 

>ID:9o5YWnk7

Yes, he was just responding to a Q post. He isn't Q. I'm not sure at this point if it's an approved post or just another response. I'll have to take another look at it when I'm working with the maps again. For now, I've demoted him to a regular anon. And I'm removing the posts that weren't marked as Q from the online database, at least for now.

 

I'm not sure what to think about ID:afa548. I had the impression that a hash was good for only one thread. And yet he shows up as a hidden Q in one thread and with his trip in another. Same with ID:4533cb, but there was only one unmarked post for that one.

 

ID:5ace4f has only one marked post. It looks like he got marked as Q because he's on a map, but I'm not sure it's really him. The other posts look interesting and possibly relevant, though. Still, it's possible the one should be marked as approved rather than as a Q post.

 

ID:071c71 got reused on a different board. On one, with a non-Q trip. But it's interesting who that ended up being.

 

ID:23de7f looks entirely legit and probably could be marked.

 

With ID:d5784a, you can see what I can do to imposter clowns.

 

ID:1beb61 and ID:26682f look like imposters, but I haven't heard one way or the other on those. Maybe I need to put date ranges on my trip test?

 

Some hashes are particularly colorful in their unmarked posts. Not sure what the story is there. But I do believe the one that's marked is legit. Maybe another should be marked, but I certainly wouldn't mark all of them.

Anonymous ID: 07564d April 4, 2018, 7:48 p.m. No.900583   🗄️.is 🔗kun

>>898657

They're all up there now. There was something weird about two of the records. In one case, someone did something to a file name that I didn't know could even be done! I'll just have to edit that in the database, and it should be fine if it ever needs to be exported again. And I don't know what the deal is with the other. I pasted the SQL statement for it directly, and it worked just fine. Slash issues, maybe.

Anonymous ID: 07564d April 5, 2018, 2:30 a.m. No.904541   🗄️.is 🔗kun   >>7276

>>898657

>ID:afa548

I've been looking further at this. I don't think the one in cbts is Q. The hash just happens to be the same. But there's something like a 3 month gap in when the hash was used.

 

>ID:1beb61

Fairly certain he's fake, and I'm marking him as such.

 

A couple of the ones I'd incorrectly marked as Q had the same post number as an actual Q post on another board. So I suppose it's easy to see how that could have happened. Now that Q uses a trip, that's much less likely to happen. They're probably relics of a time when I hadn't developed my toolset so well yet. Now, it's easier because the editor mode of the research tool has drop-down boxes and the like for making those kinds of changes. When I had to use phpMyAdmin, I was somewhat flying blind because I couldn't see as well what was really in a post. Now I can see the posts in their final form when I'm making changes like that.

Anonymous ID: 07564d April 5, 2018, 10:06 a.m. No.907276   🗄️.is 🔗kun

>>904541

By the way, this has not been an idle exercise. One of the things I'll be doing is keeping track (programmatically, in the data) of context chains that reach back to Q. So it's important that Q be properly identified. To that end, finding hidden Q has been valuable. Not only did I find Q gems I had not recognized (probably because they're on maps I haven't worked through yet), but I was able to recognize some misidentified posts as well as get the imposters properly marked. So it's all good.

Anonymous ID: 07564d April 10, 2018, 4:31 p.m. No.989046   🗄️.is 🔗kun

With the contexting problem I'm working on, I'm thinking I need to also write a "mea culpa" system for when a bread (or bread-like) post is not properly identified. It would go in and recalculate context when the status of a post changes. This way, I don't have to be so concerned about whether bread posts are properly identified at the outset, and I can just get on with it.

Anonymous ID: 07564d April 10, 2018, 5:24 p.m. No.989845   🗄️.is 🔗kun

>>989332

There could have been before I took everything down and then uploaded only select posts. But to do what you want, I still would have had to set up a whole word search mode, and I didn't have that yet. I abridged my public database due to obnoxious content by shills. I don't want to republish that stuff. I won't put the whole thing back up unless I have a way for visitors to flag posts for review, and right now I don't.

Anonymous ID: 07564d April 10, 2018, 5:36 p.m. No.990025   🗄️.is 🔗kun

>>989332

If all you want to search are Q posts, you could try using my system. The way it's set up, you can't force it to look at the first or last letter of the post. But you could try doing searches with a space before and after, or a period before and after, and other such things to force a whole-word search. The pattern matching in the LIKE statement isn't strong enough for much more than this.

http:// q-questions.info/research-tool.php

Anonymous ID: 07564d April 13, 2018, 8:27 a.m. No.1024355   🗄️.is 🔗kun   >>4641

>>989332

Let me clarify something. Is U a name? Is that the whole name? If I've made it public, you can search that on my site already. If not, I can take a peek and possibly make that public for you if it isn't shill stuff.

Anonymous ID: 07564d April 13, 2018, 8:52 a.m. No.1024641   🗄️.is 🔗kun   >>3496

>>1024355

I found 1 in qresearch and 3 in 4chan. I've added them to my public database for you. I don't see any real revelations in them, though. Enjoy!

http:// q-questions.info/research-tool.php

Anonymous ID: 07564d April 13, 2018, 5:13 p.m. No.1030259   🗄️.is 🔗kun   >>4522

>>1028050

It probably should be part of my work eventually, but it isn't yet. It's taken some time to get to that contexting feature. I'm finalizing the algorithm now.

 

A context chain will begin with a post that has been listed in a bread post and go backward through the links. These are either from the top of the thread or later where the next baker is being told what to include.

 

Links will also be followed backward from Q posts.

 

Contexts will stop at bread posts and not include them. (The intent is for context chains to stick to one topic as much as possible.)

 

When a post that includes a map is encountered, the posts from the map will not be included in the context chain, but links from the text of the post will be included. (Same reason as above: Maps include multiple topics.)

 

I will keep track of context chains that include Q posts. These can be shown with the Q posts. To minimize confusion, I will be displaying the context chains in separate bordered DIVs with a display/hide button. Not sure yet which to make default. Probably the hidden state to minimize clutter. I MIGHT parse the description of the leading post of the chain from the bread post into it. In the hidden state, this would be all that would show.
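
A rough shape of the chain-building pass (a sketch only; it assumes link_list holds the post_keys of the backward references, and it leaves out the map-post rule described above):

  <?php
  // Follow reply links backward from a starting post, collecting one context chain.
  // Bread posts end a chain and are not included.
  function build_context_chain(PDO $pdo, $startKey, array &$chain = []) {
      if (isset($chain[$startKey])) {
          return $chain;                             // already visited
      }
      $stmt = $pdo->prepare('SELECT * FROM chan_posts WHERE post_key = ?');
      $stmt->execute([$startKey]);
      $post = $stmt->fetch(PDO::FETCH_ASSOC);
      if (!$post || $post['flag_bread_post']) {
          return $chain;
      }
      $chain[$startKey] = $post;
      foreach (json_decode($post['link_list'] ?: '[]', true) as $linkedKey) {
          build_context_chain($pdo, $linkedKey, $chain);
      }
      return $chain;
  }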