Anonymous ID: 70e498 March 9, 2018, 10:31 a.m. No.602595   >>3402

>>598094

I hear you anon.

The key is the content. We have the ability to archive threads/qposts. Posts that Q references. Tweets. Known tripcodes/twitter accounts.

 

What is the source of all the evidence? The dedicated research threads? Notables? In order for it to be automagic, there needs to be a reliable single source here on 8ch. None of the codefag work I've seen reaches a level of what could be called AI - or the ability to discern which anon has posted a certifiable answer/evidence.

 

Non-automated means anon-operated, but that causes its own set of issues.

 

I agree a wikipedia-style thing would be good because it's familiar, but populating it with data may be an issue. Some of it's going to have to be entered manually.

 

If all you are looking for is a location for an anon wiki, I think that's pretty easy.

Anonymous ID: 70e498 March 10, 2018, 9:47 a.m. No.612945   >>3236

>>603568

Ya that's fine. I'm going to update that today to cover the latest.

 

I've been working on a new local viewer that uses the twitter smashed data. It shows the delta + alt text of the tweet + a link to the tweet. I've noticed that a lot of the image links I have are currently broken. I was thinking I'd just update those to point to one of the other QCodeFag branch archives rather than try to archive all the images as well.

 

Expect an update on GitHub later

Anonymous ID: 70e498 March 10, 2018, 10:49 a.m. No.613892   >>8146

>>613641

When you get that worked out make sure to let us know. I've been wondering about that myself. The early halfchan post numbers are pretty big. I've found some bugs in my code around there being multiple references per Q post. It does happen on occasion and my scraper isn't catching them all.
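A likely culprit is matching only the first reference; a findall grabs every one. A minimal sketch (the post body here is just an example):

import re

post_text = "check >>598094 and also >>603568"  # example post body
refs = re.findall(r">>(\d+)", post_text)
print(refs)  # ['598094', '603568'] - every reference, not just the first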

 

I've just uploaded a bunch of json data to the github at https:// github.com/QCodeFagNet/SFW.ChanScraper/tree/master/JSON. The json folder is what's generated when you run the ChanScraper, the smash folder when you run the TwitterSmash. Each of those folders has a Viewer.html file that can be used with just the _allQPosts.json or _allSmashPosts.json.

 

Like I said I need to clean up some dead image links for everything to be working right.

Anonymous ID: 70e498 March 13, 2018, 6:27 a.m. No.650810   >>2644

>>648594

HOLY FUCK YES.

 

This crosses all breads? If so then this is exactly what we need. I can help you with the SQL if you need it.

SELECT * FROM tbl LIMIT 5,10; # retrieves rows 6-15; you should also specify an ORDER BY
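For example, with the ORDER BY in place (assuming an auto-increment id column; adjust to your schema):

SELECT * FROM tbl ORDER BY id LIMIT 5,10; # rows 6-15 in a stable, repeatable order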

 

>>649300

How are you getting the breads? Maybe I can work out a way to get you those and we can combine up somehow.

Anonymous ID: 70e498 March 13, 2018, 7:58 a.m. No.651528   >>3255

>>509646

I've been thinking about this. Preliminary research shows that elasticsearch and lucene would probably be the best match for what we've got. There are a lot of tools that plug into elasticsearch. Any hostfags here with the ability to set up an elasticsearch node?
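Once a node exists, piping the archived JSON in should be easy. A rough sketch with the Python elasticsearch client (index name and bread shape are assumptions):

import json
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # wherever the node lives

with open("651280_archive.json") as f:
    bread = json.load(f)

# index each post separately so full-text search hits single posts
for post in bread.get("posts", []):
    es.index(index="breads", id=post["no"], document=post)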

 

The data is big. Tons of images. A proper archive takes space. I'm holding ~546 complete breads and with no images it's 250MB+. That's for like a month. By the end of the year the bread collection alone is going to be over 1.5GB.

 

The images I've got so far are around 100MB, but that's just from the Q posts - and even then I know I'm missing some.

 

Econ Godaddy hosting is like $45 a year. I'm thinking about just putting the chanscraper/twittersmash online, then writing some simple APIs: get thread#, filteredThread, qpost#, that kind of thing. Useful or no?

Anonymous ID: 70e498 March 13, 2018, 1:07 p.m. No.654567   >>4852 >>4901

>>652644

Hmm… When I say bread I mean a full Q Research thread. Like this

https:// github.com/QCodeFagNet/SFW.ChanScraper/blob/master/JSON/json/8ch/archive/651280_archive.json

 

That's the straight bread/thread from 8ch. It includes all the responses whether the BV posted it or not.

 

I'm finding those by getting the full catalog from

https:// 8ch.net/qresearch/catalog.json, finding the breads/threads that have q research, q general etc in them, and then getting the json for that thread only from https:// 8ch.net/qresearch/res/651280.json
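In code it's basically this (a sketch, assuming the standard vichan-style catalog shape with sub and no fields):

import requests

def find_bread_numbers():
    # catalog.json is a list of pages, each holding a list of threads
    catalog = requests.get("https://8ch.net/qresearch/catalog.json").json()
    for page in catalog:
        for thread in page["threads"]:
            subject = (thread.get("sub") or "").lower()
            if "q research" in subject or "q general" in subject:
                yield thread["no"]

def fetch_bread(thread_no):
    return requests.get(f"https://8ch.net/qresearch/res/{thread_no}.json").json()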

 

I think I see what you are doing - going thru and trying to mark the relevant posts?

Anonymous ID: 70e498 March 14, 2018, 12:36 p.m. No.664801   >>4904

>>663255

I agree on shill proofing.

 

I've been playing around with a webAPI. I've got it working nice with all the q posts, looking for a specific post# like #929, and posts on a day. Returns json or xml. This is the Crumb Archive.

 

My plan is to expand that so that the archived breads can be accessed as well - each as a single json file. This is the Bread Archive.

 

I'm going to set it up where it's an autonomous machine. It will scrape and archive automagically moving forward from the current baseline. No delete. No put. No fuckery.

 

I'm pretty sure it would work with the QCodeFag scraper repos.

 

The bread archive is pretty big. I'm sure there's no way I can archive images for all the breads. An image archive isn't what I've been focused on. The focus of this is only making the json/xml available from the chanscraper.

 

Once I can get the breads all up and being served automagically my plan is to set up an elasticsearch node and suck all the breads in.

 

I figure a year of godaddy hosting is currently $12 with unmetered bandwidth. I'll throw in.

Anonymous ID: 70e498 March 14, 2018, 6:30 p.m. No.667886

>>665010

Yeah man hit it. I've got a github here you can browse around.

https:// github.com/QCodeFagNet/SFW.ChanScraper/tree/master/JSON

json/8ch has the filtered/unfiltered bread and archives in it. smash has the twittersmashed posts. I've been getting my twitter data from http:// www.trumptwitterarchive.com/data/realdonaldtrump/2017.json, 2018.json

 

I set up a test for the webAPI twittersmashed posts here https:// qcodefagnet.github.io/SmashViewer/index.html

 

I'm getting close on having the webAPI thing finished up. Just running some more tests and then I should be ready to go.

Anonymous ID: 70e498 March 14, 2018, 6:33 p.m. No.667927

>>666983

Yeah you could mebbe use the smashed json from me. I've already done the unix timestamp on the trump tweets. All 8ch posts and Twitter posts derive from the same Post base object with the unix timestamp built in.
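If you want to mirror the shape, it's roughly this (a sketch; my actual code is .NET, so these names are illustrative):

from dataclasses import dataclass

@dataclass
class Post:
    unix_time: int  # the shared key both sides sort/join on
    text: str

@dataclass
class QPost(Post):
    thread_id: int = 0

@dataclass
class Tweet(Post):
    tweet_id: str = ""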

Anonymous ID: 70e498 March 14, 2018, 6:37 p.m. No.667971

>>666995

I think that's because you can't really get them. There is an 8ch beta archive here, but all the Q Research threads disappeared shortly after we started archiving them. Even then, those archives are straight HTML. It's of no use to me. AFAIK, once it slides off the main catalog, it's pretty much gone. Some trial and error got me a few breads, but not many.

Anonymous ID: 70e498 March 14, 2018, 10:13 p.m. No.670279

>>670221

I would think the time is relative to the archive's home timezone. That is, unless archive.x has done some wizardry to change the time zone it's pulling at to be the time zone of the user requesting the original archive. That would be more problematic - but you could still deal. It should be marked with a time zone, and then you convert into the unix timestamp.
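The conversion is mechanical once the zone is known. A sketch (the zone and format here are assumptions):

from datetime import datetime
from zoneinfo import ZoneInfo

stamp = "2018-03-14 22:13:00"              # example archive timestamp
zone = ZoneInfo("America/New_York")        # whatever zone it's marked with
dt = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S").replace(tzinfo=zone)
unix = int(dt.timestamp())                 # normalized unix timestamp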

Anonymous ID: 70e498 March 14, 2018, 10:28 p.m. No.670421   >>0518 >>0571

>>670332

Hmm. Yeah just doing some easy math I can see how you would have more than 1mm records. We're at bread 815+ something here and with 751 posts each that's over 600k here on 8ch alone.

 

You may be onto something with that. Is there a limit? https:// stackoverflow.com/questions/2716232/maximum-number-of-records-in-a-mysql-database-table

 

Looks like the number of rows may be determined by the size of your rows.

Anonymous ID: 70e498 March 20, 2018, 10:24 a.m. No.733102

OK brother codefags. I've stood up a simple API. It serves json and XML for your consumption pleasure.

It's currently set up to:

1) Scrape the chan automagically and keep an archive of QResearch breads and GreatAwakening.

2) Filter each bread to search for Q posts and include anything in GreatAwakening into a single QPosts list

3) Serve up access to posts/bread by list, by id, and by date.

 

I'm going to incorporate the TwitterSmash delta output next. I figure I can do a simple search across all Q posts easily. Searching across the breads is harder.

 

You can check it out here: http:// qanon.news/

McAfee says secure https:// www.mcafeesecure.com/verify?host=qanon.news

 

There's a sample single page app that shows how to use it. http:// qanon.news/posts.html
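If you'd rather hit it from code, it's roughly this (a sketch; the random route shows up below, the posts route is my best guess and may differ):

import requests

BASE = "http://qanon.news/api"

posts = requests.get(BASE + "/posts/").json()              # full QPosts list (route assumed)
xml = requests.get(BASE + "/posts/random/?xml=true").text  # same data served as XML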

 

I still gotta set up my email account so if you spam me now, it's likely to get bounced. I'll check back in later.

My reason for doing this is twofold: I figured we could use it, and I'm looking at the job market in my area and thinking about changing it up. This is partially a learning project to open opportunities by using different tech. I'm claiming ignorance. My plan is to try out an elasticsearch node once I get this working as designed.

 

Let me know if you can think of a query/filter that you think would be useful. It's not proven too difficult to work new things in, other than the ugly local path issue I came across working on it this morning.

 

Try it out anons.

Anonymous ID: 70e498 March 21, 2018, 8:14 a.m. No.744374   >>6289

>>742213

>www.trumptwitterarchive.com/data/realdonaldtrump/2018.json

 

There was a 9 day gap at the beginning of the year. Otherwise it's been updated. Unfortunately I think there were 2 markers in that time. Delta anon knows about it.

Anonymous ID: 70e498 March 22, 2018, 5:26 p.m. No.760314

Feckin dates. I got it all sorted out. Discovered a bug due to the different time zones my dev server and the API webserver are on.

 

I've been sorting out small bugs and am about to wire in the TwitterSmash. The automation part seems to be working well now that I sorted the date bug. I've got it set up to do hourly scrapes. Last run at 8:03pm 3-21 EST. The scrapes themselves only take about 45 seconds - including the twittersmashing. There's a test smashpost page here to see the deltas in action. Not totally live Q post data online yet.

http:// qanon.news/smashposts.html

 

This is another test page using live data

http:// qanon.news/posts.html

 

I did this to test some code out. Get a random Q post.

http:// qanon.news/api/posts/random/?xml=true

 

I set up an elasticsearch node today to experiment. We'll see how that goes. Could be a huge pain in the ass to set up at a host. We'll see.

Anonymous ID: 70e498 March 23, 2018, 8:02 p.m. No.774681   >>4698 >>5587

>>773397

yeah that sounds like a good one.

 

I've done some more work on the http:// qanon.news api. I managed to work out a coupla small bugs and get the TwitterSmashed posts integrated. Everything seems to be working as designed.

 

Here's the smashposts.html demo page. Shows deltas to Q posts within the hour.

http:// qanon.news/smashposts.html

 

I'm going to add another result to the smashposts where everything is grouped by days. I'll probably put it in the posts API as well.

 

It's starting to look like this may be close to going on autopilot. Any interest in changes/additions before I move onto something else?

Anonymous ID: 70e498 March 24, 2018, 1:54 p.m. No.781191   >>2643

>>775587

Hmm. Yeah I'll look into it. I can see that archive getting really big really fast. This thing's only been running for a month and it's over 400MB in JSON alone. I'll have to check what kind of space I've got avail.

Anonymous ID: 70e498 March 25, 2018, 3:06 p.m. No.791554

>>782643

I never figured that another image archive was what we needed. Each of the QCodefag installs has its own local archive. My concern was in preserving the JSON data from QResearch before it slid off the main catalog.

 

I'm going to put up a simpler list to show what's been archived. I'm showing 716 total breads, but again that's only starting at 2-7-2018. Q Research General #358 is my earliest full archive - it's up to #982 now.

 

That's 624 breads in 47 days, or 13.2 breads per day. Est. 4846 breads in one year at ~800KB/bread ≈ 4GB/year in JSON bread alone. Mebbe different if I moved to a DB.

 

I may have enough storage, but it's so hard to say. Any image archive estimates anons?

Anonymous ID: 70e498 March 26, 2018, 7:36 p.m. No.805321

>>803461

Glad it was useful. The posts API numbering is a bit squirrelly till you get used to it. The post ID is the post count starting from 1 on Oct 28 2017.

So to find out it was post #692, I had to view all posts (on posts.html or any of the QCodeFag installs) to get the post#. The bread# is in the post as threadId.

Anonymous ID: 70e498 March 27, 2018, 6:39 a.m. No.809048   >>9084

>>809001

Fuck off nigger. I'm just trying to come up with other ideas. I've been in IT for over the last 2 decades. I know exactly whats going on.

 

My point was, hosting can be found on the cheap if you look around. Not sure you NEED SSD. What you need is storage space. I was thinking drop the SSD for cheaper storage.

 

Whatever, it's your problem. You seem to be capable of figuring it out.

Anonymous ID: 70e498 March 30, 2018, 3 p.m. No.843897

I think I finally managed to squash the date bug in the QPosts/DJTweets.

I took the 60min delta restriction off - and it's applying each day's tweets on each Q post to allow you to see all the deltas.

http:// qanon.news/smashposts.html

Anonymous ID: 70e498 April 3, 2018, 8:50 p.m. No.887653   >>0080

I've been thinking about a timeline for the past few days. I looked into different solutions and found timelineJS, which works pretty well.

 

I managed to wrangle the API data into a timeline. I'm planning on adding in the DJTwitter data and ideally news/notable events.

 

Once I can get the twitter data in I'll cut it loose. I was hoping to figure out an easy way to get other data into the timeline. News/notables. Any ideas? QTMergefag? You got good news/events?

 

Here's what it looks like:

Anonymous ID: 70e498 April 4, 2018, 8:59 a.m. No.892076   >>2089 >>2772

>>891871

Agree. I've been thinking about trying to work out a way of collab. I'm sure I could come up with a way to prove we're who we each say we are. Unless the clowns are here building community Q research tools…

 

Check it out. I got the twitter working.

 

What I can say about this timeline is that there's a lot of events on it. There's Q posts batched down to days across 98 days. Add in the Tweets and there's a lot going on. Each day/tweet == a slide. It's definitely more than it was probably designed to handle. It takes a minute to make sense of the somewhat sizable JSON data and then render the display.

Anonymous ID: 70e498 April 4, 2018, 11:13 a.m. No.892975   >>3062

>>892772

{"scale": "human","events": [{ "start_date":{"year":"2017","month":"10","day":"28","hour":"0","minute":"0","second":"0","millisecond":"0","display_date":"2017-10-28 00:00:00Z"}, "end_date":{"year":"2017","month":"10","day":"28","hour":"0","minute":"0","second":"0","millisecond":"0","display_date":"2017-10-28 00:00:00Z"}, "text":{ "headline":"HRC extradition...", "text":"The body text...<hr/>" }, "media":null,"group":"QAnon Posts", "display_date":"Saturday, October 28, 2017","background":null,"autolink":true,"unique_id":"1dba35d4-46ac-4c5f-94d7-1e6b0f53ad4d" }, { "start_date":{"year":"2017","month":"10","day":"28","hour":"21","minute":"9","second":"0","millisecond":"0","display_date":"2017-10-28 21:09:00Z"}, "end_date":{"year":"2017","month":"10","day":"28","hour":"21","minute":"9","second":"0","millisecond":"0", "display_date":"2017-10-28 21:09:00Z"}, "text":{"headline":"&Delta; 25","text":"2017-10-28 21:09:00Z<br/>@realDonaldTrump<br/>After strict consultation with General Kelly..."}, "media": {"url":"https:// twitter.com/realDonaldTrump/status/924382514613030912","caption":null,"credit":null,"thumbnail":null,"alt":null,"title":null,"link":null,"link_target":"_new"}, "group":"realDonaldTrump","display_date":null,"background":null,"autolink":true,"unique_id":null }]}

Anonymous ID: 70e498 April 8, 2018, 5:26 p.m. No.958418   >>5953

Qanon.news got bumped from the bread, anons.

 

Somebody said that the site was serving malware and it was taken out of the bread. I posted in the meta thread to have BV check it out and he gave it the OK. I spent an hr or so trying to get it back in. No luck.

 

I'm not interested in begging - but I do want people to use what I've been working on. I'll see what happens after dinner I guess.

Anonymous ID: 70e498 April 9, 2018, 8:06 a.m. No.965953

>>958418

Meh. I've been thinking about it. After reading all about codefags' problems, bandwidth issues, SSL certs, all the other qcodeClones… It may be better to just stay quiet and let people use it when needed. I'm a little disappointed that it was so easy to get something removed from the bread.

 

What I've been working on is really more backend style anyways. I have been thinking about a few different things though.

 

I saw one anon post something about there needing to be an RSS feed for QPosts. I think that should be pretty easy to provide. If I get some time I may whoop something out.

 

I've been playing around with the timelineJS. I worked it up where you can select a specific timeline. Qposts. DJTweets. Etc. Q has mentioned timelines a few times and I've been looking around trying to find threads that were timeline based. No real luck so far. Anyways, I was thinking about working on some different timelines.

 

I've been starting to wonder if moving to a database solution rather than file based json is going to be worthwhile. Better speed probably? Built in caching? Do I want that for an api? What does everybody else think?

Anonymous ID: 70e498 April 10, 2018, 5:56 a.m. No.981495   >>8865

I built a new API to get a specific post from a specific bread. Maybe I'll get it uploaded today.

Looks like ~/api/bread/981411/981444/

to get >>981444
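Consuming it should look about like this once it's live (a sketch, per the pattern above):

import requests

# fetch post 981444 out of bread 981411
post = requests.get("http://qanon.news/api/bread/981411/981444/").json()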

 

Researching an RSS/ATOM feed. That looks to be low hanging fruit.

Anonymous ID: 70e498 April 10, 2018, 10:52 a.m. No.984329

I was contacted by a guy that says he's from this site http:// we-go-all.com

 

Looks to have a Qcodefag repo installed on a page. He wanted to know if he could help at all and I asked him if he had posted anything in here.

 

He doesn't know anything about the codefags thread. He's interested in access to the api. I don't wanna dox the guy, but this name matches a guy that works for Representative Jared Polis (D-CO 2nd)

5th-term Democrat from Colorado.

http:// www.congress.org/congressorg/mlm/congressorg/bio/staff/?id=61715

 

Probably nothing. The QCodeFag stuff is open, 8ch is open. Nothing to worry about anons?

Anonymous ID: 70e498 April 10, 2018, 4:20 p.m. No.988865

>>981495

All updated

New Qanon ATOM feed:

 

I managed to throw together an ATOM feed here:

 

http:// qanon.news/feed

or

http:// qanon.news/feed?rss=true

 

It returns the last 50 Q posts. It's a work in progress. I can include referred posts, images etc.
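Consuming it from code should be trivial, e.g. with the feedparser library (a sketch):

import feedparser

feed = feedparser.parse("http://qanon.news/feed")
for entry in feed.entries:
    print(entry.get("published"), entry.get("title"))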

 

New Timeline API: a timeline API that shows Qposts and DJTweets. I also set up an Obama timeline that another anon pointed out. I'm planning on adding more to it and some other timelines I'm thinking about. You can see a few at http:// qanon.news/timeline.html

Anonymous ID: 70e498 April 13, 2018, 8:47 p.m. No.1034522   >>0483

>>1030259

Interesting that you should post that anon, I've been thinking the same thing. We need a crawler. Sounds like a great idea. A better way of visualizing the context thread would be great. Ya know I've been reading about Google. PageRank. How that was designed in the beginning. Links you come across that have a lot of responses can be either good or bad on 8ch.
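A PageRank-lite first pass could just count inbound >>refs inside a bread (a sketch; com as the post body field follows the vichan JSON):

import re
from collections import Counter

def rank_posts(posts):
    # count how many times each post number gets >>referenced by others
    refs = Counter()
    for p in posts:
        for target in re.findall(r">>(\d+)", p.get("com") or ""):
            refs[int(target)] += 1
    return refs.most_common()  # most-referenced posts first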

 

With the new breadID/postID feature I rolled out you could find anything you were missing for sure.

 

So you think your initial targets are just the baker posts and the other posts that are deemed notable?

 

I've been wondering if we could use a hashtag internally for our own benefit. #notable. That kind of thing.

 

It sounds like an interesting project. If I can help at all let me know.

Anonymous ID: 70e498 April 15, 2018, 8:25 a.m. No.1051320

>>1049462

Nice.

>>1040716

I bought hosting from Godaddy. Unlimited bandwidth and 100GB storage. Economy plan on sale was $12/year. I think I even got another domain with that deal for $1/year that I'm not even using.

Yet.

 

I hear ya on time. My shit got bumped from the bread because 1 anon got confused about a malware notification. I've got 2 pretty solid months of time in on what I've been doing and got taken out by a single post.

 

As we reach more and more of the masses, the information is going to appear on more sites that show ads/donations. It's a way of paying for the infrastructure needed to provide the service. I see nothing wrong with it.

Anonymous ID: 70e498 April 15, 2018, 8:03 p.m. No.1059896   >>0038

>>1059305

Wow anon. It's coming together. It will be great to see it once finished.

 

Interesting what you are doing with the links. I think some of my pages are linking like the qcodefag sites. I hooked the RSS up to link back into the API. Think I should change that?

Anonymous ID: 70e498 April 18, 2018, 6:13 a.m. No.1088682   >>0066

>>1087614

Nice.

Let me know if you want to hit the smash data. I'll set you up.

 

I rejiggered the links on some of my pages. It was set up like the qcodefag sites where each post contained a link back here. I changed that to a self referencing link instead. I decided to not be the cause of any more traffic back here.

 

Statistics show that people coming to my site are primarily interested in the presentation pages - not the API. I think what I've decided to do is remove all references to the API - but still provide it. Default to the posts page or something. I got a few ideas.

Anonymous ID: 70e498 April 18, 2018, 2:08 p.m. No.1092764   >>6204

>>1091428

The Smash API will give you more of the data you want.

 

You probably don't want the timeline stuff just yet, unless you want to just stick with the default Q/DJT timeline. Just do a GET on the timeline API. The timeline API filters out all the tweets to just show the 5,10,15… deltas.
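The filter itself is just modular arithmetic (a sketch):

def is_marker_delta(q_unix, tweet_unix):
    # keep only tweets landing an exact 5, 10, 15... minutes after the Q post
    minutes = (tweet_unix - q_unix) // 60
    return minutes > 0 and minutes % 5 == 0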

 

Yeah, gotta add the full path to the URL. If you are hitting it programmatically I gotta give you access. Domain you would be calling it from?

Anonymous ID: 70e498 April 18, 2018, 7:22 p.m. No.1096311   >>9789

>>1096204

Well you can get all those from the trumptwitterarchive. What I did is group them into days that Q posted, and then only calculated deltas for the ones that DJT tweeted after Q posted.
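The smash boils down to something like this (a sketch; field names assumed):

from collections import defaultdict
from datetime import datetime, timezone

def smash(qposts, tweets):
    # bucket by UTC day, then keep only tweets after the day's first Q post
    days = defaultdict(lambda: {"qposts": [], "tweets": []})
    for q in qposts:
        day = datetime.fromtimestamp(q["unixTime"], tz=timezone.utc).date()
        days[day]["qposts"].append(q)
    for t in tweets:
        day = datetime.fromtimestamp(t["unixTime"], tz=timezone.utc).date()
        if day in days and t["unixTime"] > min(p["unixTime"] for p in days[day]["qposts"]):
            days[day]["tweets"].append(t)
    return days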

 

If you check the API you can see the data, or look at http:// qanon.news/smashposts.html to see it more visually.

Anonymous ID: 70e498 April 19, 2018, 6:27 a.m. No.1100463   >>0644 >>0681

>>1099789

You are on it!

Pain having to get the 2017 and then the 2018 from TrumpTwitterArchive but… it's the only way.

 

I guess I could suck all that in and then offer it as an api… just raw twitter data.

 

The only thing I found with the twitter data is that there's a 9 day gap in January at the beginning of 2018. I've been fighting off a compulsion to archive those (manually) to make it complete.

 

>>1099789

css : You can just use the twitter magic.

https:// dev.twitter.com/web/overview

On the smash page I just make links and decorate with the bird and tweet. The timeline does it automagically.

 

Here's a question for you.

How hard would it be for you to remove all the inline style you have on q-questions.info/research-tool.php ?

 

Do you know about jqueryui themeroller?

Conjigger your jqueryUI website and then download the custom css like magic.

Anonymous ID: 70e498 April 20, 2018, 2:24 p.m. No.1119101   >>5149

>>1117987

Kinda wondering about that myself.

IMO, he was talking specifically about the NP/NK video. Many have archived that offline.

 

On one hand, I'm archiving online - but that makes it easier for others to archive.

On the other hand - I'm archiving at home too.

 

The online stuff I'm doing has no bearing on my archives. I put it online so others could use it.