I'm speaking from a Linux perspective (though much of this may apply to the equivalent Windows versions of these tools)
>>978477
The wget manual can be downloaded here:
>>>/pdfs/8640
(this board doesn't allow PDFs)
I would also recommend using the -U or --user-agent option to change how the website sees the wget program (wget can impersonate a browser when making connections). This can get around some sites that actually look for and filter wget connections.
(see p. 14 of the attached manual)
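For example (the browser string here is just one I've made up along the lines of what Firefox sends; use whatever your own browser reports):
wget -U "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0" http-URL-goes-here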
>>983930
It looks like you were in that poster thread anon ;^)
>it works but does not fetch the fullsize images, only the thumbnails, and I haven't figured out how to modify your wget mirror -the-whole-qresearch command to also fetch images and adjust references so the local HTML pages refer to locally-mirrored images
fetch fullsize: adjust your recursion depth. If I recall correctly, go from -l1 to -l2, because with -l1 you would only be grabbing that page's content, not anything it links to (the full-size images). I think there was also a way to only keep the items at depth level 2 (the full-size content).
adjust references: the pages should contain relative links from one page to another rather than absolute links (i.e. page1.html links to page2.html, not http:// somewebsite/fulladdress/page2.html). You may also wish to look at the -m option for site mirroring (see the example command below). WARNING: -m implies infinite recursion depth and can chew through disk space as it grabs everything linked, then everything those links point to, and so on ad infinitum! MAKE SURE an -l depth is set to stop it.
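Roughly, from memory (check the manual before trusting me on the exact flags), something like this should grab the full-size images and fix up the links in one go:
wget -r -l2 -k -p -np http-URL-goes-here
-r -l2 = recurse two levels deep (the thread page plus the full-size files it links to), -k (--convert-links) = rewrite the links in the saved HTML so they point at your local copies, -p (--page-requisites) = also grab the images/CSS each page needs to display, -np (--no-parent) = don't wander up into parent directories.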
See this webpage for more:
https:// stackoverflow.com/questions/4602153/how-do-i-use-wget-to-download-all-images-into-a-single-folder-from-a-url
>>984029
>>985562
>How would a clever anon go about saving a person of interest's entire twitter feed
Off the top of my head, you would be looking at a scraper script that uses curl (or a similar HTTP library) for the server requests, most likely written in Python, Perl, PHP, or similar. Search "twitter scraper" for lots of hits on the sort of thing you'd be using; there are lots of scripts on GitHub and similar sites.
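As a very crude starting point (the filename and the idea that the profile page is reachable without logging in are both assumptions on my part, and curl alone won't walk back through older tweets):
curl -A "Mozilla/5.0 (X11; Linux x86_64)" -o their-feed.html http-URL-goes-here
-A sets the user-agent (same trick as wget -U) and -o names the output file. That only saves one page of raw HTML; the actual scraping (pulling the tweet text out and paging back through the history, usually via the Twitter API) is what those Python/Perl scripts do for you.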
Downloading a list of URLs when you have one per line in a file "grab-these-URLs.txt"
wget -i grab-these-URLs.txt
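The file is just one address per line, nothing else, e.g. (placeholders):
http-URL-1-goes-here
http-URL-2-goes-here
and you can stack the user-agent trick from above on top of it:
wget -U "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0" -i grab-these-URLs.txt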
Downloading videos from YouTube, Twitter, basically anywhere
youtube-dl -F http-URL-goes-here
will give you a list of the available formats to download, each with a format CODE next to it (on sites like YouTube), e.g. code 22 is usually the 1280x720 MP4 and code 18 the 640x360 MP4.
youtube-dl -f CODE http-URL-goes-here
will download the version specified by that CODE
youtube-dl http-URL-goes-here
will download the best quality version of the video (usually the largest file size)
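For example, to grab the 1280x720 MP4 of a YouTube video and name the file after its title (the format code is just the one YouTube usually offers at that resolution, so check the -F list first; -o with an output template is a standard youtube-dl option):
youtube-dl -f 22 -o "%(title)s.%(ext)s" http-URL-goes-here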
Creating a folder for each day of the year on Linux (not sure if you'd need this, but it was on my mind for some reason)
mkdir -p {01,03,05,07,08,10,12}/{01..31} 02/{01..28} {04,06,09,11}/{01..30}
This will create a folder for each month, 01-12. Inside each month folder will be a folder for each day of that month: 31 or 30 of them, or 28 for February (change 02/{01..28} to 02/{01..29} for a leap year).
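If you've already run it and then realise you need a leap year, you can just add the missing day:
mkdir -p 02/29
and sanity-check the result with something like
ls 01 | wc -l
which should print 31 (28 or 29 for month 02).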