When the crawl of a spec fails, the crawler records the error in an `error` property in `ed/index.json` and reuses previous extracts. In some cases, the failure is transient, e.g., due to a network hiccup. In other cases, though, the error is more permanent, e.g., because the extraction logic bumps into unexpected markup.
The more permanent errors may go unnoticed for some time, because nothing notifies us about the problem. Code should report these errors in an issue (and ideally close the issue if the problem disappears).
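A minimal sketch of what the detection step could look like, assuming `ed/index.json` exposes a top-level `results` array with per-spec entries (the `results`, `url`, and `shortname` field names are assumptions made for illustration; only the `error` property and the file path come from the description above). Creating or closing the GitHub issue itself would be a separate step in the workflow:

```ts
// Sketch: scan ed/index.json and list the specs whose crawl recorded an error.
// Assumed shape: { results: [ { url, shortname, error?, ... }, ... ] }.
import { readFile } from 'node:fs/promises';

interface CrawlResult {
  url?: string;
  shortname?: string;
  error?: string;        // present when the crawl of this spec failed
  [key: string]: unknown;
}

async function listCrawlErrors(indexPath = 'ed/index.json'): Promise<string[]> {
  const index = JSON.parse(await readFile(indexPath, 'utf8'));
  const results: CrawlResult[] = index.results ?? [];
  return results
    .filter(spec => spec.error)
    .map(spec => `- ${spec.shortname ?? spec.url}: ${spec.error}`);
}

// Example: assemble an issue body from the recorded errors.
listCrawlErrors().then(errors => {
  if (errors.length > 0) {
    console.log(['The following specs failed to crawl:', ...errors].join('\n'));
  }
});
```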
Side note: when the crawler crashes completely, the job fails; no need to handle that case, as GitHub already sends email notifications.
Via #1130.