diff --git a/README.md b/README.md
index 1d6d2074..f53026c6 100644
--- a/README.md
+++ b/README.md
@@ -3,6 +3,7 @@
![](https://img.shields.io/github/license/ccnmaastricht/angorapy)
![](https://img.shields.io/github/stars/ccnmaastricht/angorapy)
+
@@ -10,8 +11,8 @@
Anthropomorphic Goal-Oriented Robotic Control for Neuroscientific Modeling
-
-**AngoraPy** is an open source modeling library for [goal-oriented research](https://pubmed.ncbi.nlm.nih.gov/26906502/) in **neuroscience**. It provides a simple interface to train deep neural network models of the human brain on various, customizable, sensorimotor tasks, using reinforcement learning. It thereby empowers goal-driven modeling to surpass the sensory domain and enter that of sensori_motor_ control, closing the perception-action loop.
+
+**AngoraPy** is an open source modeling library for [goal-oriented research](https://pubmed.ncbi.nlm.nih.gov/26906502/) in **neuroscience**. It provides a simple interface for training deep neural network models of the human brain on various, customizable sensorimotor tasks using reinforcement learning. It thereby empowers goal-driven modeling to move beyond the sensory domain into that of sensori_motor_ control, closing the perception-action loop.
**AngoraPy** is designed to require no in-depth understanding of reinforcement learning. It employs state-of-the-art machine learning techniques, optimized for distributed computation that scales from local workstations to high-performance computing clusters. We aim to hide as much of this as possible under the hood of an intuitive, high-level API, while preserving the option to customize most aspects of the pipeline.
@@ -53,28 +54,45 @@ This library is developed as part of the [Human Brain Project](https://www.human
## 📥 Installation
-AngoraPy is available on PyPI. First, install requirements:
+### Prerequisites
+AngoraPy requires Python 3.6 or higher. We recommend installing AngoraPy and its dependencies into a virtual environment. Additionally, a few system-level prerequisites need to be installed.
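+
+For example, you can create and activate a virtual environment with Python's built-in `venv` module (the environment name `angorapy-env` below is arbitrary; any other environment manager works just as well):
+
+    python -m venv angorapy-env
+    source angorapy-env/bin/activate
+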
-```bash
-sudo apt install libopenmpi-dev
-pip install --extra-index-url https://pypi.nvidia.com tensorrt-bindings==8.6.1 tensorrt-libs==8.6.1
-```
+On Ubuntu, these prerequisites can be installed by running
-Then install AngoraPy from pip.
+ sudo apt-get install swig
-```bash
-pip install angorapy
-```
+Additionally, to run AngoraPy with its native support for distributed training, you need MPI installed. On Ubuntu, this can be done by running
-### From source
+ sudo apt-get install libopenmpi-dev
-Alternatively, you can download this repository or the source code of any previous release or branch and install from source, using pip.
+However, any other MPI implementation should work as well.
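+
+To check that an MPI runtime is available, you can, for example, print its version (the exact output depends on the implementation):
+
+    mpirun --version
+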
-```bash
-pip install -e .
-```
+### Installing AngoraPy
+
+#### Binaries
+AngoraPy is available as a binary package on PyPI. To install it, run
+
+ pip install angorapy
+
+in your terminal.
+
+If you would like to install a specific version, you can specify it by appending `==<version>` to the command above. For example, to install version 0.10.8, run
-This way, if you make changes to the source code, these will be recognized in the installation (without the need to reinstall).
+ pip install angorapy==0.10.8
+
+#### Source Installation
+To install AngoraPy from source, clone the repository and run `pip install -e .` in the root directory.
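+
+For example (assuming you want the current state of the main repository at `ccnmaastricht/angorapy`):
+
+    git clone https://github.com/ccnmaastricht/angorapy.git
+    cd angorapy
+    pip install -e .
+
+With an editable (`-e`) install, changes you make to the source code take effect without reinstalling.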
+
+#### Test Your Installation
+You can test your installation by running the following command in your terminal:
+
+ python -m angorapy.train CartPole-v1
+
+To test your MPI installation, run
+
    mpirun -np <n> --use-hwthread-cpus python -m angorapy.train LunarLanderContinuous-v2
+
+where `<n>` is the number of threads you want to (and can) use.
### Docker
@@ -87,6 +105,7 @@ sudo docker build -t angorapy:master https://github.com/ccnmaastricht/angorapy.g
To install different versions, replace `#master` in the source by the tag/branch of the respective version you want to install.
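+
+For example, to build from a hypothetical release tag `v0.10.8` (check the repository for the tags and branches that actually exist):
+
+```bash
+sudo docker build -t angorapy:v0.10.8 https://github.com/ccnmaastricht/angorapy.git#v0.10.8
+```
+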
## 🚀 Getting Started
+[ ➡️ Tutorial Section on Getting Started](https://github.com/weidler/angorapy-tutorials/tree/main/get-started)
The scripts `train.py`, `evaluate.py` and `observe.py` are ready-made entry points for training, evaluating and observing an agent in any environment. With `pretrain.py`, it is possible to pretrain the visual component. `benchmark.py` provides functionality for training a batch of agents, possibly with different configs, to compare strategies.
@@ -96,23 +115,28 @@ The `train.py` commandline interface provides a convenient entry-point for runni
Base usage of `train.py` is as follows:
-python train.py ENV --architecture MODEL
+ python -m angorapy.train ENV --architecture MODEL
+
For instance, training `LunarLanderContinuous-v2` using the `deeper` architecture is possible by running:
-python train.py LunarLanderContinuous-v2 --architecture deeper
+ python -m angorapy.train LunarLanderContinuous-v2 --architecture deeper
+
For more advanced options like custom hyperparameters, consult
-python train.py -h
+ python -m angorapy.train -h
+
### Evaluating and Observing an Agent
+[ ➡️ Tutorial Section on Agent Analysis](https://github.com/weidler/angorapy-tutorials/tree/main/analysis)
+
+There are two more entry points for evaluating and observing an agent: `evaluate` and `observe`. General usage is as follows:
-There are two more entry points for evaluating and observing an agent: `evaluate.py` and `observe.py`. General usage is as follows
+ python -m angorapy.evaluate ID
+ python -m angorapy.observe ID
-python evaluate.py ID
where `ID` is the agent's ID assigned when it is created (`train.py` prints this out; in custom scripts, get it with `agent.agent_id`).
### Writing a Training Script
-
To train agents with custom models, environments, etc., you can write your own training script. The following is a minimal example:
```python
@@ -129,6 +153,9 @@ agent.drill(100, 10, 512)
For more details, consult the [examples](examples).
+### Customizing the Models and Environments
+[ ➡️ Tutorial Section on Customization](https://github.com/weidler/angorapy-tutorials/tree/main/customization)
+
## 🎓 Documentation
Detailed documentation of AngoraPy is provided in the READMEs of most subpackages. Additionally, we provide [examples and tutorials](examples) that get you started with writing your own scripts using AngoraPy. For further reading on specific modules, consult the following READMEs:
@@ -137,7 +164,7 @@ Detailed documentation of AngoraPy is provided in the READMEs of most subpackage
- [Environments](angorapy/tasks)
- [Models](angorapy/models)
- [Analysis](angorapy/analysis)
-- [Monitoring](angorapy/monitoring)
+- [Monitoring](angorapy/monitor)
If you are missing a documentation for a specific part of AngoraPy, feel free to open an issue and we will do our best to add it.
@@ -156,7 +183,7 @@ To use MPI locally, you need to have a running MPI implementation, e.g. Open MPI
To execute `train.py` via MPI, run
```bash
-mpirun -np 12 --use-hwthread-cpus python3 train.py ...
+mpirun -np 12 --use-hwthread-cpus python -m angorapy.train ...
```
where, in this example, 12 is the number of locally available CPU threads and `--use-hwthread-cpus`
@@ -199,12 +226,19 @@ The number of parallel workers will equal the number of nodes times the number o
If you use AngoraPy for your research, please cite us as follows
-Weidler, T., & Senden, M. (2020). AngoraPy: Anthropomorphic Goal-Oriented Robotic Control for Neuroscientific Modeling [Computer software]
+ Weidler, Tonio, & Senden, Mario. (2023). AngoraPy - Anthropomorphic Goal-Oriented Robotic Control for Neuroscientific Modeling (0.9.0). Zenodo. https://doi.org/10.5281/zenodo.7770180
+
Or using bibtex
-@software{angorapy2020,
-author = {Weidler, Tonio and Senden, Mario},
-month = {3},
-title = {{AngoraPy: Anthropomorphic Goal-Oriented Robotic Control for Neuroscientific Modeling}},
-year = {2020}
+```bibtex
+@software{weidler_angorapy_2023,
+  author    = {Weidler, Tonio and Senden, Mario},
+  title     = {{AngoraPy - Anthropomorphic Goal-Oriented Robotic Control for Neuroscientific Modeling}},
+  year      = {2023},
+  publisher = {Zenodo},
+  version   = {0.9.0},
+  doi       = {10.5281/zenodo.6636482},
+  url       = {https://doi.org/10.5281/zenodo.6636482}
}
+```