Looks interesting. Using the API?
Whatever happened to the idea of a Q Research Wiki? Did that whole thing die out?
I like the idea of automation. If we had a good way of trawling the breads to assemble info, the wiki could build itself. That, too, is probably a full-time job.
What are the projects that are currently being worked on? Tying them together or collaborating would make things go faster. Let's pool our resources.
Some javascript magic? I'd be interested in reading it.
Doit
>https://pastebin.com/LmPFhtXm
Interesting! Looks good - although you'll possibly miss some breads due to ebakes and whatnot if they don't match your "Q Research" restrictions.
I've been archiving the JSON from here since about FEB. I have over 3000 breads all archived locally and online in JSON - I just need to work out some logic on how to trawl with some context in order to make some sense of all the data we've found.
I'll take another look later on. Looks good!
I've been looking at IPFS
IPFS is the Distributed Web
https://ipfs.io/#uses
Anybody know anything about this thing?
Cool thx I'll check that out.
The Q Research API has a CORS policy set up on its services. Security reasons - anyways, I figured anons wouldn't want to register for a key, so anybody that needed it has been contacting me and I've been opening it up for them.
Anyhoos - after seeing the qanon.pub archiver I wanted to see if I could do one for my site, using only HTML/Javascript so there are no installs. Ideally it would be set up so that (you) could keep the main HTML file locally and it runs from your machine.
Here's what I came up with
http://qanon.news/LocalViewer.html#
I discovered that since I wanted to have this run from the client machine, AND use a few of the API services, it wasn't going to work. So I opened up a select few services completely.
Specifically the q Get(), the BreadArchive GetBreadList() and GetById()
- Choose the source format (XML/JSON).
- Click the [Download] icon to download single breads.
- [Get Everything] will download everything.
- [Get Latest] will download everything you haven't downloaded before.
Browser restrictions mean downloads go to your "Downloaded Files" directory. Works in Chrome.
If people like this download functionality I'll migrate it over to the main Archives page. It could be extended to include some way of downloading images too if another codefag is feeling ambitious. The link data is in the JSON.
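For anyone curious how a local-only page can talk to the opened-up services, here's a minimal sketch in plain Javascript - my own illustration, not the LocalViewer source. The bread-by-id URL matches the /api/bread/{id} form shown elsewhere ITT; the Blob + <a download> trick is why everything lands in the browser's download directory.

// Minimal sketch: fetch one bread from the open API and save it from the browser.
// /api/bread/{id} is the endpoint shown elsewhere ITT.
async function downloadBread(id) {
  const resp = await fetch('https://qanon.news/api/bread/' + id);
  if (!resp.ok) throw new Error('HTTP ' + resp.status);
  const bread = await resp.json();
  // Pages can't write to an arbitrary path, so build a Blob and click a
  // temporary <a download> link - hence the "Downloaded Files" directory.
  const blob = new Blob([JSON.stringify(bread, null, 2)], { type: 'application/json' });
  const a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = 'bread_' + id + '.json';
  a.click();
  URL.revokeObjectURL(a.href);
}
downloadBread(225);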
What's everybody working on?
5:5 Digits
There's this basic Q ATOM/RSS feed.
https://qanon.news/feed
I've been tinkering with a notables trawler. I'll get back on that - I think I may be able to do another feed. There's a lot of duplication: each bread repeats a few of the previous breads' notables. I haven't worked that out yet.
Here's some stuff that may help you.
https://qanon.news/api/bread/225
https://8ch.net/comms/res/225.html#225
https://8ch.net/qresearch/notables.html
https://8ch.net/comms/index.rss
What are you looking for exactly? I may be able to give you exactly what you need.
>https://t.co/uXCog0QsSw
I think I understand. Does this have the qanon.news RSS added in? Can these skills have more than one feed?
I'll look into that notable crawler again today.
Yeah I've been trying to work out the best way to do that same thing. I think blockchain is interesting and have only played around briefly with it. I like where yer going. I'll help any way I can.
>Any advice, recommendations, or motivational words?
It won't be THAT bad. 6-7 hrs tops!
How do you get a quorum that the Q Post is correct?
I don't know that there's a way currently to compare the scraper sites - manually against the screen caps?
Can you explain the concept of 'Context' forward and backwards? Not sure what to think about the PageCap stuff because I'm not sure I understand what it is you are trying to do.
>http://q-questions.info/2017/12/05/cbts-general-38-we-are-the-storm-no-34663/
Ahh I got ya. Site is looking good! And if Q had referenced a post, that would have been in the backward context too, etc. Context != timestamped posts in a bread.
On the archive pages I use the API to look up the referred post. The scraper looks up all the 1st-level references for all Q posts. Thought about making it go deeper but then decided to go with the lookup due to space concerns. Dunno if that helps you at all.
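If it helps, the 1st-level reference step boils down to pulling the >>123456 links out of each Q post's body. A rough sketch, assuming the standard 8ch/vichan JSON where the HTML body lives in 'com':

// Rough sketch, not the actual scraper: collect 1st-level >>references from a post.
// 'com' is the HTML body in vichan-style JSON, so the >> is usually entity-escaped.
function firstLevelRefs(post) {
  const refs = new Set();
  const re = /(?:&gt;&gt;|>>)(\d+)/g;
  let m;
  while ((m = re.exec(post.com || '')) !== null) {
    refs.add(Number(m[1]));
  }
  return [...refs];
}
// Each ref can then be resolved against the archive API, e.g. /api/bread/{bread}/{post}.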
I've been thinking about doing a similar crawler kinda thing myself. I've got over 5500 breads @ 3,868,000+ posts. The amount of information that anons have dug is mind boggling. I'm so ADD that mostly I leave things unfinished.
5:5
I use https://8ch.net/qresearch/catalog.json to get the list of available breads. I find the ones I'm interested in (Q Research General etc). I archive them if my current archive has fewer than 751 posts. I built a crawler that finds each baker's 'Notables' post and archives those. It still needs some work. I'm planning on making a new RSS feed for them.
I originally built my scraper as a command line util. I could probably wrap it up as a .NET WinForm app, or a simple console app.
Straight page scraping is a mega pain in the ass. I'm not interested in that. All the info I need is available in the JSON and it's easy to get to. https://8ch.net/qresearch/res/2352371.json
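Roughly, the whole archive pass is just two JSON fetches - catalog, then thread. A sketch, assuming 'sub'/'no' follow the usual vichan catalog layout and with alreadyArchived() and saveBread() as placeholders for whatever storage you use:

// Sketch of the catalog -> bread archiving pass described above.
// alreadyArchived() and saveBread() stand in for your own storage.
async function archiveQResearch() {
  const catalog = await (await fetch('https://8ch.net/qresearch/catalog.json')).json();
  for (const page of catalog) {
    for (const t of page.threads) {
      if (!/Q Research/i.test(t.sub || '')) continue;   // only the breads I want
      if (alreadyArchived(t.no) >= 751) continue;        // bread is full and saved
      const bread = await (await fetch('https://8ch.net/qresearch/res/' + t.no + '.json')).json();
      saveBread(t.no, bread);
    }
  }
}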
I agree. Go for biggest bang for the buck!
Here's how qanon.news does it…
When I've got a bread I want to search for notables, I get the ID of the baker and then a list of the first 5 or 10 posts. Then I search those for the 'notable' keyword and figure that's the baker's notables post I'm interested in. Next bread.
My current problem is that the notables post from the baker has notables from the last 4 or 5 breads, plus the previously collected lists. There's a lot of repetition, i.e. notables from #5460 are in #5461, #5462, #5463… So am I trying to find just the previous bread's notables? I can probably parse that out with the '#'.
I probably need to break it up into smaller, more manageable chunks. Something like monthly. The last notables crawl I did ended up with a massive file of results - a 77MB txt file.
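For what it's worth, here's roughly how I'd attack the duplication: split the baker's post on the '#NNNN' headers and keep only the block for the current bread. A sketch, assuming vichan-style fields (id, com) and that the OP is the baker:

// Sketch only: find the baker's notables post in the first N posts, then keep
// just the current bread's section.
function findNotablesPost(posts, n = 10) {
  const bakerId = posts[0].id;   // poster ID of the OP (assumed to be the baker)
  return posts.slice(0, n).find(p => p.id === bakerId && /notable/i.test(p.com || ''));
}
function currentBreadSection(notablesText, breadNumber) {
  const blocks = notablesText.split(/(?=#\d{4,})/);   // split at each '#NNNN' header
  return blocks.find(b => b.startsWith('#' + breadNumber)) || '';
}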
I've got some time today. I'll see if I can rejigger this notable crawler into a new API/RSS
>What was in your text file?
I ran it again today and came up with a single 98MB file. So I split it out into monthly dumps named by year and month (YYYYMM): 201802, 201803, 201804…
JSON lists of https://qanon.news/Help/ResourceModel?modelName=Chan8Post
It's a list of 5000+ Chan8Posts. Each of those looks like this: https://qanon.news/api/bread/4302144/4302146
I'm not worried about shill images and whatnot because I'm only looking at the first 5-10 posts from the baker.
It's set up to find the bread/post for each notable reference and format the HTML post from 8ch to straight txt into the 'text' field. I just need to dial in the targeting a bit so it's smarter about what to crawl, but it's pretty close.
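The HTML-to-text bit and the monthly bucketing are both pretty small. In browser Javascript the quick-and-dirty versions look something like this (the real scraper is .NET, so treat this as the idea, not the implementation):

// Flatten an 8ch 'com' HTML body into plain text for the 'text' field.
function htmlToText(comHtml) {
  const withBreaks = (comHtml || '').replace(/<br\s*\/?>/gi, '\n');
  const div = document.createElement('div');
  div.innerHTML = withBreaks;            // the browser decodes entities for us
  return div.textContent.trim();
}
// Bucket a post into its YYYYMM dump, matching the 201802, 201803, ... naming.
function monthKey(post) {
  const d = new Date(post.time * 1000);  // vichan 'time' is a unix timestamp
  return '' + d.getUTCFullYear() + String(d.getUTCMonth() + 1).padStart(2, '0');
}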
Is there a NoSQL anon in the house? I've got an idea.
Let's see your script. It's probably just a misspelling.
Anybody working on anything new?
I'm looking at http://ogp.me/
I'll add your URL so you can access it directly.
Try using a JSON content importer and this code in a post:
[jsoncontentimporter url="https://qanon.news/api/smash/2744"]
Q#2744
{NAME} {TRIP} {POSTDATE}
{text}
[/jsoncontentimporter]
That should work for the straight text posts. You'd have to do some other magic to get the images. I'll see if I can't work that out later
https://json-content-importer.com/
Codefags - had to disable the Q Research API. It was being attacked and causing the site to crash. So, if you or your app was using the API it's not going to work until I can come up with a solution.
Yeah I've been thinking about that too.
BIG.
The test DB I have on my workstation was over 10 GB. A local index of Sphinx data is an interesting idea.
https://qanon.news/archives/x/2352371
https://qanon.news/archives/clone/2352371
^^ still working on this 8ch clone
Probably most everybody here. Whatcha got?
With all this twitterbanning and facepurging going on recently I've been looking at Mastodon.
https://joinmastodon.org/
Does anybody know anything about this?