This repository is used for the backend of the MindLogger application stack.
- MindLogger Admin - GitHub Repo
- MindLogger Backend - This Repo
- MindLogger Mobile App - GitHub Repo
- MindLogger Web App - GitHub Repo
- Features
- Technologies
- Application
- Installation
- Running the app
- Testing
- Scripts
- Arbitrary setup
- License
See MindLogger's Knowledge Base article to discover the MindLogger application stack's features.
- ✅ Python 3.11+
- ✅ Pipenv
- ✅ FastAPI
- ✅ PostgreSQL
- ✅ Redis
- ✅ Docker
- ✅ Pydantic
- ✅ SQLAlchemy
Code quality tools are also included; see the Scripts section for how to run them.

Prerequisites:
- Python 3.11
- Docker
Installing pyenv is recommended to automatically manage the Python version used in the virtual environment, as specified in the Pipfile.
Alternatively, on macOS you can use a tool like Homebrew to install multiple Python versions and specify one when creating the virtual environment:
pipenv --python /opt/homebrew/bin/python3.11
Key | Default value | Description |
---|---|---|
DATABASE__HOST | postgres | Database Host |
DATABASE__USER | postgres | Username for the PostgreSQL database user |
DATABASE__PASSWORD | postgres | Password for the PostgreSQL database user |
DATABASE__DB | mindlogger_backend | Database name |
CORS__ALLOW_ORIGINS | * | List of allowed origins; sets the Access-Control-Allow-Origin header. Example: https://dev.com,http://localhost:8000 |
CORS__ALLOW_ORIGINS_REGEX | - | Regex pattern of allowed origins |
CORS__ALLOW_CREDENTIALS | true | Sets the Access-Control-Allow-Credentials header |
CORS__ALLOW_METHODS | * | Sets the Access-Control-Allow-Methods header |
CORS__ALLOW_HEADERS | * | Sets the Access-Control-Allow-Headers header |
AUTHENTICATION__ACCESS_TOKEN__SECRET_KEY | secret1 | Access token salt |
AUTHENTICATION__REFRESH_TOKEN__SECRET_KEY | secret2 | Refresh token salt |
AUTHENTICATION__REFRESH_TOKEN__TRANSITION_KEY | transition secret | Transition refresh token salt. Used when rotating the refresh token key: generate a new key for AUTHENTICATION__REFRESH_TOKEN__SECRET_KEY and keep the previous value here so that previously issued refresh tokens are still accepted during the transition period (see AUTHENTICATION__REFRESH_TOKEN__TRANSITION_EXPIRE_DATE) |
AUTHENTICATION__REFRESH_TOKEN__TRANSITION_EXPIRE_DATE | transition expiration date | Transition expiration date. After this date the transition key is ignored |
AUTHENTICATION__ALGORITHM | HS256 | The JWT's algorithm |
AUTHENTICATION__ACCESS_TOKEN__EXPIRATION | 30 | Time in minutes after which the access token will stop working |
AUTHENTICATION__REFRESH_TOKEN__EXPIRATION | 30 | Time in minutes after which the refresh token will stop working |
ADMIN_DOMAIN | - | Admin panel domain |
RABBITMQ__URL | rabbitmq | RabbitMQ service URL |
RABBITMQ__USE_SSL | True | RabbitMQ SSL setting; set to False for local development |
MAILING__MAIL__USERNAME | mailhog | Mail service username |
MAILING__MAIL__PASSWORD | mailhog | Mail service password |
MAILING__MAIL__SERVER | mailhog | Mail service URL |
MULTI_INFORMANT__TEMP_RELATION_EXPIRY_SECS | 86400 | Expiry (in seconds) of the temporary multi-informant "take now" participant relation |
SECRETS__SECRET_KEY | - | Secret key for data encryption. Use this key only for local development |
You may notice that some environment variables use a double underscore (`__`) instead of `_`. Because `pydantic` supports nested settings models, this delimiter keeps the configuration code cleaner.
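As an illustrative sketch only (the repository's actual settings classes may be organized differently, and a pydantic-settings style configuration is assumed here), the mapping works roughly like this:

```python
# Minimal sketch of how `DATABASE__HOST` maps onto a nested settings model.
# Illustrative only; class and field names are assumptions, not the repo's real code.
from pydantic import BaseModel
from pydantic_settings import BaseSettings, SettingsConfigDict


class DatabaseSettings(BaseModel):
    host: str = "postgres"
    user: str = "postgres"
    password: str = "postgres"
    db: str = "mindlogger_backend"


class Settings(BaseSettings):
    # `__` in an environment variable name becomes a nested attribute
    model_config = SettingsConfigDict(env_nested_delimiter="__")

    database: DatabaseSettings = DatabaseSettings()


# With DATABASE__HOST=localhost in the environment, Settings().database.host == "localhost"
```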
Creating a `.env` file is highly recommended, as it is needed to set up the project with both the local and Docker approaches. Use `.env.default` to get started:
cp .env.default .env
👉 NOTE: Make sure to set `RABBITMQ__USE_SSL=False` for local development.
Generate secret keys with `openssl rand -hex 32` and update the following `.env` values:

- AUTHENTICATION__ACCESS_TOKEN__SECRET_KEY
- AUTHENTICATION__REFRESH_TOKEN__SECRET_KEY
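For example, run the command once per key and paste each output into `.env`:

```bash
openssl rand -hex 32   # use this value for AUTHENTICATION__ACCESS_TOKEN__SECRET_KEY
openssl rand -hex 32   # use this value for AUTHENTICATION__REFRESH_TOKEN__SECRET_KEY
```

The following services are required: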
- Postgres
- Redis
- RabbitMQ
- Mailhog - Only used for running mail services locally
Running required services using Docker is highly recommended even if you intend to run the app locally.
👉 NOTE: Make sure to update your environment variables to point to the correct hostname and port for each service.
- Run Postgres

  docker-compose up -d postgres

- Run Redis

  docker-compose up -d redis

- Run RabbitMQ

  docker-compose up -d rabbitmq

- Alternatively, you can run all required services:

  docker-compose up
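To confirm the service containers are up before continuing, you can list them (a quick sanity check):

```bash
docker-compose ps
```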
For manual installation, refer to each service's documentation.
Pipenv is used as the default dependency manager. Create your virtual environment:

👉 NOTE: When developing on the API, be sure to work from within an active shell. If using VS Code, open the terminal from within the active shell. Ideally, avoid using the integrated terminal during this process.
# Activate your environment
pipenv shell
If `pyenv` is installed, Python 3.11 should automatically be installed in the virtual environment. You can check that the correct version of Python is active by running:
python --version
If the active version is not 3.11, you can manually specify a version while creating your virtual environment:
pipenv --python /opt/homebrew/bin/python3.11
Install all dependencies
# Install all deps from Pipfile.lock
# to install venv to current directory use `export PIPENV_VENV_IN_PROJECT=1`
pipenv sync --dev --system
👉 NOTE: If you don't use pipenv for some reason, remember that variables from your `.env` file will not be exported automatically (see the Pipenv docs). In that case you have to export them manually:
# Manual exporting in Unix (like this)
export PYTHONPATH=src/
export BASIC_AUTH__PASSWORD=1234
...
...or export everything at once with a Bash one-liner:
set -o allexport; source .env; set +o allexport
👉 NOTE 2: Don't forget about environment variables! All environment variables for the Postgres database running in Docker are already passed to docker-compose.yaml from the `.env` file.
👉 NOTE 3: If you get an error running `pipenv sync --dev` related to the dependency `greenlet`, install it by running:
pipenv install greenlet
👉 NOTE 4: If the application can't find the RabbitMQ service even though it's running normally, change your `RABBITMQ__URL` to your local IP address instead of `localhost`.
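For example (the address below is purely illustrative; substitute your machine's LAN IP):

```bash
# .env
RABBITMQ__URL=192.168.1.50
```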
Apply the database migrations:

alembic upgrade head
👉 NOTE: If you run into role-based errors, e.g. `role "postgres" does not exist`, check whether that program is running anywhere else (e.g. installed via Homebrew): `ps -ef | grep {program-that-errored}`. You can attempt to kill the process with `kill -9 {PID-to-program-that-errored}`, then rerun the previous check to confirm that the program has stopped.
This option allows you to run the app for development purposes without having to manually build the Docker image (e.g. when developing on the Web or Admin project).
- Make sure all required services are properly set up.

- If you're running required services using Docker, disable the `app` service in `docker-compose` before running:

  docker-compose up -d
Alternatively, you may run these services using make (e.g. when developing the API):
- You'll need to edit /etc/hosts (using sudo) and append the following entries:

  # mindlogger
  127.0.0.1 postgres
  127.0.0.1 rabbitmq
  127.0.0.1 redis
  127.0.0.1 mailhog

- Then run the following command from within the active virtual environment shell:

  make run_local
👉 NOTE: Don't forget to set the PYTHONPATH environment variable, e.g. `export PYTHONPATH=src/`
- To test that the API is up and running, navigate to http://localhost:8000/docs in a browser (or check it from the command line, as shown below).
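A quick command-line check of the same endpoint (assumes the app is listening on port 8000 as above):

```bash
# Expect an HTTP 200 response from the interactive docs page
curl -I http://localhost:8000/docs
```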
In this project we use a simplified import style: `from apps.application_name import class_name, function_name, module_name`.
To make this work, the `src/` folder must be on the import path (PYTHONPATH).
P.S. You don't need this additional step if you run the application via a Docker container 🤫
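A quick way to verify the path is set (this only imports the top-level `apps` package mentioned above):

```bash
export PYTHONPATH=src/
python -c "import apps; print('PYTHONPATH is set correctly')"
```

Run the application with uvicorn: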
uvicorn src.main:app --proxy-headers --port {PORT} --reload
Alternatively, you may run the application using make:
make run
- Build the application (see the `docker-compose build` step below)
- Run the app using Docker:
docker-compose up
Additional `docker-compose up` flags that might be useful for development:

-d # Run docker containers as daemons (in the background)
--no-recreate # If containers already exist, don't recreate them
docker-compose down
Additional `docker-compose down` flags that might be useful for development:

-v # Remove all volumes as well
You can use the Makefile to work with the project (run the application, code quality tools, tests, ...).
For local usage:
# Run the application
make run
# Check the code quality
make cq
# Check tests passing
make test
# Check everything in one hop
make check
docker-compose build
✅ Make sure that you have completed the `.env` file. It is used by default in the `docker-compose.yaml` file for building.
✅ Check the build with the `docker images` command. You should see a record for `fastapi_service`.
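For example:

```bash
docker images | grep fastapi_service
```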
💡 If you would like to debug the application inside the Docker container, make sure that you set `COMPOSE_FILE=docker-compose.dev.yaml` in `.env`. It has stdin and tty open.
The `pytest` framework is used to write unit tests.
Currently PostgreSQL is used as the test database, with run configuration defined in `pyproject.toml`:
DATABASE__HOST=postgres
DATABASE__PORT=5432
DATABASE__PASSWORD=postgres
DATABASE__USER=postgres
DATABASE__DB=test
👉 NOTE: To run tests locally without changing DATABASE__HOST, add the line below to the /etc/hosts file (macOS, Linux). It will redirect postgres to localhost.
127.0.0.1 postgres
# Connect to the database with Docker
docker-compose exec postgres psql -U postgres postgres
# Or connect to the database locally
psql -U postgres postgres
# Create user's database
psql# create database test;
# Create arbitrary database
psql# create database test_arbitrary;
# Create user test
psql# create user test;
# Set password for the user
psql# alter user test with password 'test';
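With the test databases and user in place, the suite can be run directly (pytest picks up its configuration from `pyproject.toml`, as noted above):

```bash
pipenv run pytest
```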
To correctly calculate test coverage, you need to run coverage with the `--concurrency=thread,gevent` parameter:
coverage run --branch --concurrency=thread,gevent -m pytest
coverage report -m
(This is how tests are run on CI.)
# Check the code quality
make dcq
# Check tests passing
make dtest
# Check everything in one hop
make dcheck
It is good practice to use Git hooks for better commits.
For increased security during development, install `git-secrets` to scan code for AWS keys.
Please use this link for that: https://github.com/awslabs/git-secrets#installing-git-secrets
.pre-commit-config.yaml
is placed in the root of the repository.
👉 Once you have installed `git-secrets` and `pre-commit`, simply run the following command:

make aws-scan

👉 Then all your staged changes will be checked via git hooks on every git commit.
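If the hooks are not yet registered in your local clone, you may also need a one-time install (standard `pre-commit` usage; check the Makefile in case `make aws-scan` already covers this):

```bash
pre-commit install
```

Database migrations are managed with alembic: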
alembic revision --autogenerate -m "Add a new field"
alembic upgrade head
alembic downgrade 0e43c346b90d
(The hash above is taken from the generated file in the migrations folder.)

💡 Do not forget that alembic saves the migration version into the database. To reset it, run:
delete from alembic_version;
alembic -c alembic_arbitrary.ini upgrade head
```mermaid
erDiagram
User_applet_accesses ||--o{ Applets: ""
User_applet_accesses {
int id
datetime created_at
datetime updated_at
int user_id FK
int applet_id FK
string role
}
Users {
int id
datetime created_at
datetime updated_at
boolean is_deleted
string email
string full_name
string hashed_password
}
Users||--o{ Applets : ""
Applets {
int id
datetime created_at
datetime updated_at
boolean is_deleted
string display_name
jsonb description
jsonb about
string image
string watermark
int theme_id
string version
int creator_id FK
text report_server_id
text report_public_key
jsonb report_recipients
boolean report_include_user_id
boolean report_include_case_id
text report_email_body
}
Applet_histories }o--|| Users: ""
Applet_histories {
int id
datetime created_at
datetime updated_at
boolean is_deleted
jsonb description
jsonb about
string image
string watermark
int theme_id
string version
int account_id
text report_server_id
text report_public_key
jsonb report_recipients
boolean report_include_user_id
boolean report_include_case_id
text report_email_body
string id_version
string display_name
int creator_id FK
}
Answers_activity_items }o--|| Applets: ""
Answers_activity_items }o--|| Users: ""
Answers_activity_items }o--|| Activity_item_histories: ""
Answers_activity_items {
int id
datetime created_at
datetime updated_at
jsonb answer
int applet_id FK
int respondent_id FK
int activity_item_history_id_version FK
}
Answers_flow_items }o--|| Applets: ""
Answers_flow_items }o--|| Users: ""
Answers_flow_items ||--o{ Flow_item_histories: ""
Answers_flow_items {
int id
datetime created_at
datetime updated_at
jsonb answer
int applet_id FK
int respondent_id FK
int flow_item_history_id_version FK
}
Activities }o--|| Applets: ""
Activities {
int id
datetime created_at
datetime updated_at
boolean is_deleted
UUID guid
string name
jsonb description
text splash_screen
text image
boolean show_all_at_once
boolean is_skippable
boolean is_reviewable
boolean response_is_editable
int ordering
int applet_id FK
}
Activity_histories }o--|| Applets: ""
Activity_histories {
int id
datetime created_at
datetime updated_at
boolean is_deleted
UUID guid
string name
jsonb description
text splash_screen
text image
boolean show_all_at_once
boolean is_skippable
boolean is_reviewable
boolean response_is_editable
int ordering
int applet_id FK
}
Activity_item_histories }o--|| Activity_histories: ""
Activity_item_histories {
int id
datetime created_at
datetime updated_at
boolean is_deleted
jsonb question
string response_type
jsonb answers
text color_palette
int timer
boolean has_token_value
boolean is_skippable
boolean has_alert
boolean has_score
boolean is_random
boolean is_able_to_move_to_previous
boolean has_text_response
int ordering
string id_version
int activity_id FK
}
Activity_items }o--|| Activities: ""
Activity_items {
int id
datetime created_at
datetime updated_at
jsonb question
string response_type
jsonb answers
text color_palette
int timer
boolean has_token_value
boolean is_skippable
boolean has_alert
boolean has_score
boolean is_random
boolean is_able_to_move_to_previous
boolean has_text_response
int ordering
int activity_id FK
}
Flows }o--|| Applets: ""
Flows {
int id
datetime created_at
datetime updated_at
boolean is_deleted
string name
UUID guid
jsonb description
boolean is_single_report
boolean hide_badge
int ordering
int applet_id FK
}
Flow_items }o--|| Flows: ""
Flow_items }o--|| Activities: ""
Flow_items {
int id
datetime created_at
datetime updated_at
boolean is_deleted
int ordering
int activity_flow_id FK
int activity_id FK
}
Flow_item_histories }o--|| Flow_histories: ""
Flow_item_histories }o--|| Activity_histories: ""
Flow_item_histories {
int id
datetime created_at
datetime updated_at
boolean is_deleted
string id_version
int activity_flow_id FK
int activity_id FK
}
Flow_histories }o--|| Applet_histories: ""
Flow_histories {
int id
datetime created_at
datetime updated_at
boolean is_deleted
string name
UUID guid
jsonb description
boolean is_single_report
boolean hide_badge
int ordering
string id_version
int applet_id FK
}
```
You can connect an arbitrary file storage and database by filling in special fields in the `user_workspaces` table.
Add your database connection string to `database_uri` in the following format:

postgresql+asyncpg://<username>:<password>@<hostname>:<port>/<database>
For an AWS S3 bucket, the following fields are required: `storage_region`, `storage_bucket`, `storage_access_key`, `storage_secret_key`.
For Azure Blob storage, put your connection string into the `storage_secret_key` field.
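As an illustration only (the column names are those listed above; the workspace filter, connection string, and credentials are placeholders), the configuration could be applied with an UPDATE along these lines:

```sql
-- Hypothetical example: point one workspace at an arbitrary database and S3 bucket
UPDATE user_workspaces
SET database_uri       = 'postgresql+asyncpg://user:password@db.example.com:5432/arbitrary_db',
    storage_region     = 'us-east-1',
    storage_bucket     = 'my-arbitrary-bucket',
    storage_access_key = '<access-key>',
    storage_secret_key = '<secret-key>'  -- or an Azure Blob connection string
WHERE id = '<workspace-id>';  -- placeholder: identify the target workspace row
```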
Common Public Attribution License Version 1.0 (CPAL-1.0)
Refer to LICENSE.md
- Make sure that `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://opentelemetry:4317` is already set in `.env`. Run the opentelemetry Docker container:

  docker-compose up -d opentelemetry
- Make sure that `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4317` is exported in your environment:

  export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4317

  Or, if you use pipenv to autoload envs, make sure that `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://localhost:4317` is added to the `.env` file.
- The same as for the containerized app, bring up the opentelemetry container:

```bash
docker-compose up -d opentelemetry
```

- Start your app