Currently, Heimdall does not have a built-in way to stop web crawlers or specify indexing rules. Adding support for a robots.txt file will allow administrators to:
Prevent unauthorised or unwanted web crawlers from accessing certain parts of the app.
Specify guidelines for search engine crawlers to improve SEO and reduce resource load.
While I understand this can be added manually to Heimdall's HTTP root, it would be useful to ship a built-in default robots.txt with example content like:
User-agent: *
Disallow: