This repository has been archived by the owner on Jan 23, 2024. It is now read-only.
For rigor's sake, it would be effective to create a benchmark based on a real dataset that can be reused by several ABM implementations over time. This would be analogous to the NIST hash function competition, or to the shared ML datasets used in various fields, but for economic models. If such a benchmark existed, prediction accuracy and runtime performance could be improved iteratively over time.
I found a highly relevant paper here and its implementation here. I have been diving into https://github.com/S120/benchmark but wasn't able to find how a dataset is included. The most I could find so far (in order to proceed) is footnote 26 of the paper:
> Real time series are taken from the Federal Reserve Economic Data (FRED): they are quarterly data ranging from 1955-01-01 to 2013-10-01 for unemployment (not seasonally adjusted, FRED code: LRUN64TTUSQ156N) and ranging from 1947-01-01 to 2013-10-01 for investments, consumption and GDP (FRED codes: PCECC96, GPDIC96, and GDPC1 respectively)
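For reference, the series from that footnote can be pinned down in a small config. This is just a sketch of how the benchmark data could be encoded; the dictionary names are mine, and the actual fetch (e.g. via `pandas_datareader.data.DataReader(code, "fred", start, end)`) is an assumption, not something taken from the S120 repo:

```python
# Target series from footnote 26: FRED codes and their quarterly date ranges.
FRED_SERIES = {
    "unemployment": {"code": "LRUN64TTUSQ156N", "start": "1955-01-01", "end": "2013-10-01"},
    "consumption":  {"code": "PCECC96",         "start": "1947-01-01", "end": "2013-10-01"},
    "investment":   {"code": "GPDIC96",         "start": "1947-01-01", "end": "2013-10-01"},
    "gdp":          {"code": "GDPC1",           "start": "1947-01-01", "end": "2013-10-01"},
}

def n_quarters(start: str, end: str) -> int:
    """Number of quarterly observations between two inclusive quarter-start dates."""
    (y0, m0), (y1, m1) = (tuple(map(int, d.split("-")[:2])) for d in (start, end))
    return (y1 - y0) * 4 + (m1 - m0) // 3 + 1

# Sanity check on the footnote's ranges: unemployment spans 236 quarters,
# the other three series span 268 quarters each.
lengths = {name: n_quarters(s["start"], s["end"]) for name, s in FRED_SERIES.items()}
```

Knowing the expected series lengths up front makes it easy to validate whatever CSV or API download ends up in the benchmark repo.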
Even so, I can't figure out any micro-level parameters from the paper, e.g. how many agents (households + firms + banks) have to be spawned (and whether that number should grow organically over several decades), or the initial endowments/prices/wages. I did find the sequence of events for each round in section 2.1.
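Once the section 2.1 sequence of events is pinned down, it could be expressed framework-agnostically as an ordered list of phases executed each round. The sketch below is plain Python (not the ABCE API), and both the agent counts and the phase names are placeholders I made up, not values from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Economy:
    """Minimal round scheduler: placeholder agent counts and an event log.
    All numbers and phase names are hypothetical, not taken from the paper."""
    n_households: int = 500   # hypothetical
    n_firms: int = 80         # hypothetical
    n_banks: int = 10         # hypothetical
    log: list = field(default_factory=list)

    # Ordered phases standing in for the paper's section 2.1 sequence of events.
    PHASES = ("plan_production", "labor_market", "goods_market",
              "credit_and_deposits", "accounting")

    def run_round(self, t: int) -> None:
        # Each phase would dispatch the corresponding agent actions in order.
        for phase in self.PHASES:
            self.log.append((t, phase))

econ = Economy()
for t in range(2):
    econ.run_round(t)
# After two rounds the log holds 2 rounds x 5 phases = 10 events.
```

Keeping the phase list explicit like this would make it straightforward to map each step onto ABCE's round structure once the micro-parameters are known.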
Do you have any recommendations on how to write the model in ABCE, @DavoudTaghawiNejad ❓