See openzim/zim-requests#272 (comment)
Since files are hosted on upload.wikimedia.org, we must comply with their User-Agent policy at https://meta.wikimedia.org/wiki/User-Agent_policy
I suggest we add a CLI option to pass a custom User-Agent to be used when downloading the nautilus files.
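A minimal sketch of what such a CLI option could look like, using only the standard library. The flag name `--user-agent`, the default UA string, and the `build_request` helper are illustrative assumptions, not the project's actual implementation; Wikimedia's policy asks for an identifying User-Agent with contact information.

```python
import argparse
import urllib.request

# Assumed default UA string; a real one should identify the scraper
# and include a contact URL or email, per the Wikimedia policy.
DEFAULT_UA = "nautilus/1.0 (+https://github.com/openzim/nautilus)"


def build_request(url: str, user_agent: str = DEFAULT_UA) -> urllib.request.Request:
    """Build a request that sends an explicit User-Agent header."""
    return urllib.request.Request(url, headers={"User-Agent": user_agent})


def parse_args(argv=None) -> argparse.Namespace:
    """Hypothetical CLI: lets users override the User-Agent for downloads."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--user-agent",
        default=DEFAULT_UA,
        help="custom User-Agent sent when downloading files",
    )
    return parser.parse_args(argv)
```

The same pattern would apply inside scraperlib's download helper: accept an optional `user_agent` argument and fall back to an identifying default.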
Indeed; shouldn't scraperlib do this by default for `stream_file`?
> shouldn't scraperlib do this by default for `stream_file`?
Yes for upload.wikimedia.org; a CLI argument would still help for less well-known websites.
opened openzim/python-scraperlib#214