diff --git a/sessions/forecast-evaluation-of-multiple-models.qmd b/sessions/forecast-evaluation-of-multiple-models.qmd
index d4b0361..1e5df30 100644
--- a/sessions/forecast-evaluation-of-multiple-models.qmd
+++ b/sessions/forecast-evaluation-of-multiple-models.qmd
@@ -345,7 +345,7 @@ As in the [forecasting evaluation](forecasting-evaluation) session, we will star
 ::: {.callout-tip}
 ## Reminder: Key things to note about the CRPS
   - Small values are better
-  - When scoring absolute values (e.g. number of cases) it can be difficult to use to compare forecasts across scales (i.e., when case numbers are different, for example between locations or at different times).
+  - When scoring absolute values (e.g. number of cases) it can be difficult to compare forecasts across scales (i.e., when case numbers are different, for example between locations or at different times).
 :::

 First by forecast horizon.
diff --git a/sessions/forecast-evaluation.qmd b/sessions/forecast-evaluation.qmd
index 622761c..8b05157 100644
--- a/sessions/forecast-evaluation.qmd
+++ b/sessions/forecast-evaluation.qmd
@@ -184,7 +184,7 @@ By balancing these two aspects, the CRPS provides a comprehensive measure of the
 ::: {.callout-tip}
 ## Key things to note about the CRPS
   - Small values are better
-  - When scoring absolute values (e.g. number of cases) it can be difficult to use to compare forecasts across scales (i.e., when case numbers are different, for example between locations or at different times).
+  - When scoring absolute values (e.g. number of cases) it can be difficult to compare forecasts across scales (i.e., when case numbers are different, for example between locations or at different times).
 :::
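
To make the scale point in the callouts above concrete, here is a minimal base-R sketch. It is not part of the patch or the course materials: the function name `crps_sample_approx`, the Poisson forecasts, and the observed values are all invented for illustration. It uses the standard sample-based CRPS approximation, CRPS(F, y) ≈ mean|X - y| - 0.5 * mean|X - X'|, to show that the CRPS is reported on the scale of the data, so forecasts that miss by the same relative amount get very different scores when case counts differ.

```r
# Illustrative sketch only (not from the patched sessions):
# sample-based CRPS approximation, CRPS(F, y) ~ E|X - y| - 0.5 * E|X - X'|
crps_sample_approx <- function(samples, observed) {
  mean(abs(samples - observed)) -
    0.5 * mean(abs(outer(samples, samples, "-")))
}

set.seed(1)
# Hypothetical forecasts for two locations: both miss the observation by
# roughly the same 20% relative error, but absolute case counts differ.
crps_sample_approx(rpois(1000, lambda = 10),   observed = 12)    # on the order of 1-2 cases
crps_sample_approx(rpois(1000, lambda = 1000), observed = 1200)  # on the order of hundreds of cases
```

This is why raw CRPS values are usually compared within the same location and time window rather than directly across scales.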