This should allow video downloads when logged in without having to
disable 'forward-cookies', as well as from protected tweets.
youtube-dl still gets used to download HLS playlists, but the data
extraction part, which doesn't work with youtube-dl at the moment,
now gets handled by gallery-dl itself.
If a user changed their username, requesting deviations with the old
name might cause problems (missing deviations, etc.).
The internal 'username' value therefore now gets updated to the
current username taken from the user profile.
... for individual tweets.
To get a Tweet page with the old Twitter layout, an Internet
Explorer User-Agent (e.g. Mozilla/5.0 (Windows NT 6.1; WOW64;
Trident/7.0; rv:11.0) like Gecko) as well as a Referer header
pointing to the page itself is required. The "app_shell_visited"
cookie appears to be optional at the moment, but that is what
a regular web browser would send.
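A minimal sketch of such a request, using the 'requests' library outside
of gallery-dl; the tweet URL is only an example:

    import requests

    TWEET_URL = "https://twitter.com/supernaturepics/status/604341487988576256"

    headers = {
        "User-Agent": ("Mozilla/5.0 (Windows NT 6.1; WOW64; "
                       "Trident/7.0; rv:11.0) like Gecko"),
        "Referer": TWEET_URL,  # points to the page itself
    }
    cookies = {"app_shell_visited": "1"}  # optional, but sent by browsers

    response = requests.get(TWEET_URL, headers=headers, cookies=cookies)
    print(response.status_code)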
Adds the functionality to download search results from twitter.com/search.
Since Twitter only allows access to a user's 3,200 most recent tweets, you
will be unable to download old images from users with a lot of tweets. To
work around this, you can use Twitter's search to get the tweets from the
period where you were stopped. An example search would be
"from:user since:2015-01-01 until:2016-01-01 filter:images". The URL you
would use looks something like this:
https://twitter.com/search?f=tweets&q=from%3Asupernaturepics%20since%3A2015-01-01%20until%3A2016-01-01%20filter%3Aimages&src=typd&lang=en
The _tweets_from_api function had to be changed because it would not get the next page of results using the last "data-tweet-id". It would return the same JSON but with a "min_position" string added. Using this string for the "max_position" param from the second page onwards correctly returned the next pages. This change does not interfere with how the other extractors work as far as I know. The 2 regex patterns in the extractors had to be changed to not match the search URL.
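A rough sketch of the pagination logic described above; fetch_search_page()
and extract_tweet_ids() are hypothetical helpers, and the 'items_html' /
'has_more_items' field names are assumptions about the returned JSON:

    def tweets_from_search(query, fetch_search_page, extract_tweet_ids):
        max_position = None
        while True:
            data = fetch_search_page(query, max_position)
            yield from extract_tweet_ids(data["items_html"])
            if not data.get("has_more_items"):
                break
            # use the returned "min_position" as the next "max_position"
            # instead of the last "data-tweet-id"
            max_position = data["min_position"]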
- let the GalleryExtractor class inherit directly from Extractor
- make ChapterExtractor a subclass of GalleryExtractor
- change enumeration field names of GalleryExtractors to 'num'
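A simplified sketch of the resulting inheritance hierarchy (class bodies
omitted; not the actual gallery_dl code):

    class Extractor:
        """Base class for all extractors"""

    class GalleryExtractor(Extractor):
        """Extractor for image galleries; enumerates files as 'num'"""

    class ChapterExtractor(GalleryExtractor):
        """Extractor for manga chapters, now a GalleryExtractor subclass"""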
The Patreon-provided URLs for the next set of posts aren't
always complete, i.e. they can be missing their scheme and
the subsequent double slash: "www.patreon.com/…"
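A minimal sketch of completing such scheme-less URLs before requesting
them (the exact handling in the extractor may differ):

    def complete_url(url):
        # prepend the missing scheme and double slash if necessary
        if url.startswith("www."):
            return "https://" + url
        return url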
Some galleries return a 404: Not Found error when trying to access
them through the main gallery URL, but their content is still
available on the respective /reader/ page.
... except for sta.sh content.
Instead of using the old '/api/v1/oauth2/deviation/download' endpoint,
which started delivering URLs to 404 pages a while ago,
it is also possible to get a download URL from the relatively new
'/_napi/da-browse/shared_api/deviation/extended_fetch' endpoint
used by DeviantArt's Eclipse interface.
The current strategy is therefore:
- Iterate over deviations using the OAuth2 API
- Fetch original download URLs with the new NAPI/Shared API
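A rough sketch of the second step; 'session' is assumed to be an
authenticated requests.Session, and the parameter names (deviationid,
username, type) as well as the response layout are assumptions based on
how the Eclipse interface appears to call this endpoint:

    def eclipse_download_url(session, deviation_id, username):
        url = ("https://www.deviantart.com/_napi/da-browse"
               "/shared_api/deviation/extended_fetch")
        params = {
            "deviationid": deviation_id,
            "username": username,
            "type": "art",
        }
        data = session.get(url, params=params).json()
        # the original-file URL is expected under deviation -> extended ->
        # download; adjust if the actual response layout differs
        return data["deviation"]["extended"]["download"]["url"]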
- provide a 'user_name' metadata field
- usually the same as 'artist_id', except for favorite downloads
- extract the whole description text and properly escape HTML entities
- fix an issue with titles or tags containing double quotes
- combine fetching an HTML page and extracting its 'shared_data'
- move 'shared_data' and field access info out of '_extract_page()'
- introduce a '_request_graphql()' method
- don't try to call '/deviation/metadata' with an empty list of
deviation ids
- print a warning when detecting private deviations without having
a 'refresh-token'
- consistent 'filename' entries, at least as far as possible
- GIFs and SWFs don't have a <title>_by_<artist>_<id> anywhere in
their metadata
- Generating <id> (from 'deviationid'?) might be something that needs
to be figured out, so we can build those filenames ourselves
- better code structure etc.
- tests for videos, archives, and flash animations
Downloading https://pbs.twimg.com/media/EB2cGUYX4AI2Vuu.jpg:orig (NSFW)
sometimes returns a 416 status code, even though no 'Range' header was
sent and no data had been downloaded beforehand.
This status code usually means a file has already been downloaded
completely and the download method indicates success, but in this case
it causes an exception further down the pipeline since no file was
ever created.
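A minimal sketch of the special case, using a plain 'requests' download
rather than gallery-dl's actual downloader code:

    import requests

    def download(url, path):
        response = requests.get(url, stream=True)
        if response.status_code == 416:
            # 416 normally means "already completely downloaded" (a Range
            # request past the end of an existing file); without a Range
            # header and without a partially downloaded file, treat it as
            # a failure instead of a success
            return False
        response.raise_for_status()
        with open(path, "wb") as fp:
            for chunk in response.iter_content(64 * 1024):
                fp.write(chunk)
        return True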
It still doesn't work for converted ugoira animations thanks to how
those files are handled, but everything else, including files with
unknown or changing file extension, now works as it should.
- use str.join() instead of os.path.join()
(less "features", but 10x as fast)
- cache directory formatters
- detect and optimize field access for 1-element format strings
- change 'has_extension' from a simple flag/bool to a field that
contains the original filename extension
- rename 'keywords' to 'kwdict' and some other stuff as well
- inline 'adjust_path()'
- put enumeration index before filename extension (#306)
- change 'num' to a simple enumerating integer
- change default filename format
- provide content of the old 'num' field as 'suffix'
- add 'filename' for ugoira
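A sketch of the new enumeration placement from #306: the index goes in
front of the filename extension instead of being appended after it (the
exact formatting is controlled by the filename format string):

    def enumerate_name(filename, num):
        name, sep, ext = filename.rpartition(".")
        if not sep:                       # no extension at all
            return "{}.{}".format(filename, num)
        return "{}.{:02}.{}".format(name, num, ext)

    # enumerate_name("12345_p0.jpg", 1) -> "12345_p0.01.jpg"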
* [instagram] Add support for stories
Add support for Instagram user's stories
(https://www.instagram.com/stories/<username>/).
First the shared_data in instagram.com/stories/<username> is fetched in
order to retrieve the user_id that is then passed to fetch the stories
via the corresponding graphql query.
Please note that fetching stories is supported only when authentication
is enabled and the corresponding <username> is followed.
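A rough sketch of those two steps; the query hash, the regular
expression, and the exact shared_data layout are assumptions, and the
code only works with an authenticated requests.Session:

    import json, re

    def story_media(session, username, query_hash="HYPOTHETICAL_HASH"):
        page = session.get(
            "https://www.instagram.com/stories/{}/".format(username))
        shared_data = json.loads(re.search(
            r"window\._sharedData\s*=\s*(.+?);</script>",
            page.text).group(1))
        user_id = shared_data["entry_data"]["StoriesPage"][0]["user"]["id"]

        variables = {"reel_ids": [user_id], "precomposed_overlay": False}
        return session.get(
            "https://www.instagram.com/graphql/query/",
            params={"query_hash": query_hash,
                    "variables": json.dumps(variables)},
        ).json()["data"]["reels_media"]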
* [instagram] Add an only-matching test for stories
* [instagram] Simplify InstagramExtractor.items() and _extract_stories()
Simplify handling of typename in InstagramExtractor.items() and multi-line
string in _extract_stories(). NFCI.
Use a 'gallery-dl' subdirectory in ~/.cache to adhere to how other
programs store their cached data, and call os.makedirs() so it also
works without an existing ~/.cache directory.
Use either $XDG_CACHE_HOME or ~/.cache (if the former isn't set)
and store potentially sensitive cookies and tokens in a user's
home directory and not in the world-readable /tmp.
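A sketch of the resulting path selection and directory creation (the
cache file name here is only illustrative):

    import os

    cachedir = os.path.join(
        os.environ.get("XDG_CACHE_HOME") or os.path.expanduser("~/.cache"),
        "gallery-dl")
    os.makedirs(cachedir, exist_ok=True)  # works without an existing ~/.cache
    cachefile = os.path.join(cachedir, "cache.sqlite3")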
This is only done on non-Windows systems, because:
1. On Windows the 'mode' argument for os.open() has no (visible) effect
on access permissions for new files.
2. The default location for 'cache.file' on Windows is in
%USERPROFILE%\AppData\Local\Temp which can only be accessed by the
owner himself (or an admin).
Previously, cache.file could be created world-readable, leading to
possible disclosure of sensitive information on multi-user systems.
Restrict permissions to the owner only by creating an empty file first.
Please note that a cache.file created before this commit may need a
'chmod 600' or similar!
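A minimal sketch of creating such an empty, owner-only cache file on
non-Windows systems:

    import os

    def create_owner_only(path):
        if not os.path.exists(path):
            fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
            os.close(fd)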
- check image limit before opening the first gallery or image page
- prevent any further exhentai extractors from running after the image
limit has been reached
Logging in now more closely follows the natural login flow that also
happens in a browser and collects more cookies than just ipb_member_id
and ipb_pass_hash.
Test URLs have been updated and now point to the e-hentai.org domain.
The maximum available image dimensions have been reduced from 5000px
to 4096px on the longest edge.
A few (unimportant) metadata fields are no longer available or have
been changed to 'null'.
The 'retries' option now specifies exactly that: the maximum number of
times a failed HTTP request is re-tried. For example, a value of 2 will
now correctly stop after 3 attempts: the initial one + 2 re-tries.
The maximum wait-time now also caps at 30min and increases exponentially
for both extractor.request() and downloader.http.download().
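A sketch of the combined behavior: 'retries' extra attempts after the
initial one, with exponentially growing wait times capped at 30 minutes
('do_request' is a hypothetical stand-in for the actual HTTP call):

    import time

    def request_with_retries(do_request, retries=2):
        tries = 0
        while True:
            try:
                return do_request()
            except Exception:
                if tries >= retries:
                    raise
            tries += 1
            time.sleep(min(2 ** tries, 1800))  # 2s, 4s, 8s, ... up to 30min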
The default value for both is 'false', i.e. duplicate URLs are NOT
ignored.
The previous behavior was to always ignore duplicate URLs to make
'--abort-on-skip' work properly when new images were added to the
beginning of a collection while gallery-dl is running.
The current Blogspot image URLs hosted on Kissmanga end with an
"invalid" query parameter (/000.png&upx=...), which doesn't get
recognized as such by 'spliturl()' and 'parseurl()' and is therefore
included in the 'extension' field from 'text.nameext_from_url()'.
Let's see how long this works ...
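A sketch of one possible workaround, simply cutting the URL at the first
'&' before extracting filename and extension (this mirrors the problem
description, not necessarily the actual fix):

    from gallery_dl import text

    url = "https://3.bp.blogspot.com/.../000.png&upx=..."
    data = text.nameext_from_url(url.partition("&")[0])
    # data["filename"] == "000", data["extension"] == "png"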
DeviantArt is rolling out a new version of their website, including a
new internal and potentially usable API (rewrite incoming, yay).
The issue with the new layout is that it doesn't include the "old"
UUIDs for single deviations, i.e. mapping a numeric deviation ID to its
UUID counterpart is impossible with the new layout.
Instead of replacing 'https' with 'http' for every URL in
'get_downloader()', this now only happens once during downloader
initialization. Unit tests for this have been added as well.
These metadata fields will only be filled in when using a top-level
URL, because that's the only place this information is available. Using
a Foolslide URL (1) will leave these fields empty.
(1) https://hentai.cafe/manga/read/.../en/0/1/
Some deviations (possibly only from sta.sh sources) are downloadable
(i.e. 'is_downloadable' is true and /deviation/download/ works), but
have no 'content' or similar in their JSON representation.
(fixes #307)
The login page currently doesn't provide or require a login token
(logging in works without one), so printing a warning during
each login is unnecessary.
- use API
- remove login support, add 'api-key' option
- remove support for "alpha" subdomain - alpha.wallhaven.cc used numeric
IDs that can't be translated to the new ID system
- support direct links to wallpapers
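A sketch of what fetching wallpaper metadata through the new API could
look like; passing the key as an 'apikey' parameter and the example
wallpaper ID are assumptions:

    import requests

    API_KEY = "your-api-key"  # value of the new 'api-key' option

    def wallpaper_info(wallpaper_id):
        url = "https://wallhaven.cc/api/v1/w/" + wallpaper_id
        return requests.get(url, params={"apikey": API_KEY}).json()["data"]

    info = wallpaper_info("94x38z")  # ID as found in direct wallpaper links
    print(info["path"])              # URL of the full-size image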