The web scraper will be used to pull data from publisher pages and existing APIs automatically. Any resource listing journals where researchers can publish at a waived or subsidised APC will be useful in providing content for the website. We will be looking to automate this process with GitHub Actions or another CI service to extract the data and add it to the GitHub Pages site.
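A minimal sketch of what one scraping job could look like, assuming `requests` and `beautifulsoup4` are available; the publisher URL, the table layout, and the column meanings here are all hypothetical placeholders, since every real publisher page will need its own selectors:

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical publisher page listing APC waiver policies; the real
# URLs and HTML structure depend on the publishers we target.
WAIVER_PAGE = "https://example-publisher.org/apc-waivers"

def scrape_waivers(url: str) -> list[dict]:
    """Fetch a publisher page and extract journal/waiver rows.

    Assumes the page exposes the data in an HTML table; in practice
    each publisher site will need its own per-site parsing logic.
    """
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    waivers = []
    for row in soup.select("table tr"):
        cells = [c.get_text(strip=True) for c in row.find_all("td")]
        if len(cells) >= 2:  # e.g. journal name, waiver policy
            waivers.append({"journal": cells[0], "policy": cells[1]})
    return waivers

if __name__ == "__main__":
    for entry in scrape_waivers(WAIVER_PAGE):
        print(entry)
```

In a scheduled GitHub Actions workflow, a script like this could write its output to a CSV or JSON file that is committed to the branch serving GitHub Pages, so the site picks up new data on each run.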
kipkurui changed the title from "build a web scraper" to "Build a web scraper" on Sep 2, 2020
Owing to the unstructured nature of the information on various publishers' websites, I am not sure whether this is possible, but we'll keep exploring. The current solution is to crowd-source entries through a Google Form on the website, which feeds into the site directly after review and approval.
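As a rough sketch of the crowd-sourcing path: if the form responses live in a Google Sheet published as CSV, a small script could pull and filter them before they reach the site. The sheet ID and the "Approved" column name below are assumptions for illustration, not the project's actual schema:

```python
import csv
import io
import requests

# Hypothetical CSV export URL for the sheet backing the Google Form;
# the real sheet ID would come from the project's form.
SHEET_CSV_URL = (
    "https://docs.google.com/spreadsheets/d/SHEET_ID/export?format=csv"
)

def fetch_submissions(url: str) -> list[dict]:
    """Download the form-responses sheet as CSV and parse the rows."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return list(csv.DictReader(io.StringIO(response.text)))

if __name__ == "__main__":
    # Only rows marked approved during review would feed the website.
    approved = [r for r in fetch_submissions(SHEET_CSV_URL)
                if r.get("Approved", "").lower() == "yes"]
    print(f"{len(approved)} approved submissions")
```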