Install Python dependencies (for both building and running) and generate nicobot/version.py with :
pip3 install -c constraints.txt -r requirements-build.txt -r requirements-runtime.txt
python3 setup.py build
To run unit tests :
python3 -m unittest discover -v -s tests
To run directly from source (without packaging) :
python3 -m nicobot.askbot [options...]
To build locally (more at pypi.org) :
rm -rf ./dist ; python3 setup.py build sdist bdist_wheel
To upload to test.pypi.org :
python3 -m twine upload --repository testpypi dist/*
To install the test package from test.pypi.org and check that it works :
# First create a virtual environment not to mess with the host system
python3 -m venv venv/pypi_test && source venv/pypi_test/bin/activate
# Then install dependencies using the regular pypi repo
pip3 install -c constraints.txt -r requirements-runtime.txt
# Finally install this package from the test repo
pip3 install -i https://test.pypi.org/simple/ --no-deps nicobot
# Do some tests
python -m nicobot.askbot -V
...
# Exit the virtual environment
deactivate
To upload to PROD pypi.org :
python3 -m twine upload dist/*
Both twine upload commands above will ask for a username and a password. To avoid this, you can set environment variables :
# Defines username and password (or '__token__' and API key)
export TWINE_USERNAME=__token__
# Example reading the token from a local 'passwordstore'
export TWINE_PASSWORD=`pass pypi/test.pypi.org/api_token`
Or store them in ~/.pypirc (see doc) :
[pypi]
username = __token__
password = <PyPI token>
[testpypi]
username = __token__
password = <TestPyPI token>
Or even use the CLI options -u and -p, or certificates... See python3 -m twine upload --help for details.
The above instructions allow building manually, but otherwise the project is automatically tested, built and uploaded to pypi.org using Travis CI on each push to GitHub (see .travis.yml).
There are several Dockerfiles, each made for specific use cases (see README.md). They all have multiple stages.
debian.Dockerfile is quite straightforward : it builds using pip in one stage and copies the resulting wheels into the final one.
signal-debian.Dockerfile is more complex because it needs to address :
- including both Python and Java while keeping the image size small
- compiling native dependencies (both for signal-cli and qr)
- circumventing a number of bugs in multiarch building
alpine.Dockerfile produces smaller images but may not be as portable as the debian ones and lacks Signal support for now.
Note that the signal-cli backend needs a Java runtime environment, as well as Rust dependencies to support Signal's group V2. This approximately doubles the size of the images and almost ruins the advantage of alpine over debian...
Those images are limited, for each OS (debian+glibc / alpine+musl), to the CPU architectures which :
- have base images (python, openjdk, rust)
- have wheels for the Python dependencies or are able to build them
- can build libzkgroup (native dependency for Signal)
- have the required packages to build
At the time of writing, support is dropped for :
- linux/s390x : lack of python:3 image (at least)
- linux/riscv64 : lack of python:3 image (at least)
- Signal backend on linux/arm* for Alpine variants : lack of JRE binaries
All images contain all the bots (as they would otherwise only differ from each other by one script).
The docker-entrypoint.sh script takes the name of the bot to invoke as its first argument, then its own options and finally the bot's arguments.
Sample build command (single architecture) :
docker build -t nicolabs/nicobot:debian -f debian.Dockerfile .
Sample buildx command (multi-arch) :
docker buildx build --platform linux/amd64,linux/arm64,linux/386,linux/arm/v7 -t nicolabs/nicobot:debian -f debian.Dockerfile .
Then run with the provided sample configuration :
docker run --rm -it -v "$(pwd)/tests:/etc/nicobot" nicolabs/nicobot:debian askbot -c /etc/nicobot/askbot-sample-conf/config.yml
Github Actions are currently used (see .github/workflows/dockerhub.yml) to automatically build and push the images to Docker Hub, so they are available whenever commits are pushed to the master branch :
- A Github Action is triggered on each push to the central repo
- Alpine images and Debian images are built in parallel to speed things up. Debian-signal is built after Debian. Caching is used for both. See .github/workflows/dockerhub.yml.
- Images are uploaded to Docker Hub
Since I could not find an easy way to generate exactly the tags I wanted, the setup.py script embeds a custom command to generate them from the git context (tag, commit) and the image variant (see the sketch after this list) :
- docker/github-actions' tagging strategy does not explicitly allow tagging an image of choice with latest (I might be able to force it by tagging the wanted image at the end but it does not look 100% reliable)
- crazy-max/ghaction-docker-meta is quite complex to understand and I could not figure out a way to implement my strategy
- See setup.py#DockerTagsCommand for the custom solution
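For illustration only, here is a rough sketch of how such a custom setuptools command could be written. The docker_tags command name, the --variant option and the tagging logic below are assumptions made for this example ; the actual code is in setup.py#DockerTagsCommand :

# Sketch of a custom setuptools command (hypothetical name and logic)
from setuptools import Command, setup
from setuptools_scm import get_version

class DockerTagsCommand(Command):
    """Print the Docker tags to publish, derived from the git context."""

    description = "generate Docker tags from git metadata"
    user_options = [("variant=", None, "image variant (e.g. debian, alpine)")]

    def initialize_options(self):
        self.variant = "debian"

    def finalize_options(self):
        pass

    def run(self):
        # e.g. "1.2.3" on a tagged commit, "1.2.4.dev5+gabc1234" in between
        version = get_version()
        tags = [self.variant, f"{version}-{self.variant}"]
        # Only tag clean releases (no dev / local suffix) as 'latest'
        if "dev" not in version and "+" not in version:
            tags.append("latest")
        print(" ".join(tags))

setup(name="nicobot", cmdclass={"docker_tags": DockerTagsCommand})

It could then be invoked with python3 setup.py docker_tags --variant=alpine (again, assuming that hypothetical command name).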
This diagram is the view from the master branch on this repository. It emphasizes FROM and COPY relations between the images (base and stages).
You may find the reason for a missing CPU architecture / combination within the open issues labelled with docker.
Here are the main application files and directories inside the images :
📦 /
 ┣ 📂 etc/nicobot/ - - - - - - - - - - - -> Default configuration files
 ┃ ┣ 📜 config.yml
 ┃ ┣ 📜 i18n.en.yml
 ┃ ┗ 📜 i18n.fr.yml
 ┣ 📂 root/
 ┃ ┗ 📂 .local/
 ┃   ┣ 📂 bin/ - - - - - - - - - - - - - -> Executable commands
 ┃   ┃ ┣ 📜 askbot
 ┃   ┃ ┣ 📜 docker-entrypoint.sh
 ┃   ┃ ┣ 📜 transbot
 ┃   ┃ ┗ 📜 ...
 ┃   ┗ 📂 lib/pythonX.X/site-packages/ - -> Python packages (nicobot & dependencies)
 ┗ 📂 var/nicobot/ - - - - - - - - - - - -> Working directory & custom configuration files & data (contains secret stuff !)
   ┣ 📂 .omemo/ - - - - - - - - - - - - -> OMEMO keys (XMPP)
   ┗ 📂 .signal-cli/ - - - - - - - - - - -> signal-cli configuration files
This chapter describes a very simple way to deploy the bots on Amazon Web Services. There are many other methods and cloud providers, but you can build on this example to implement your specific case.
Here is the process :
- Get an AWS account
- Install the latest Docker Desktop or Docker Compose CLI with ECS support (make sure to start a new shell if you've just installed it)
- Configure the AWS credentials (with AWS_* environment variables or ~/.aws/credentials)
- Create and switch your local docker to an 'ecs' context : docker context create ecs myecs && docker context use myecs
- Craft a docker-compose.yml file (see templates tests/transbot-jabber.docker-compose.yml and tests/transbot-signal.docker-compose.yml)
- Make sure you have the proper configuration files (only a config.yml is required in the given templates) and start the service : docker compose up
If you follow the given templates :
- this will deploy nicobot on AWS' Fargate
- the given config.yml file will be injected as a secret
- it will use the writable layer of the container to download translation files and generate temporary files like OMEMO keys
- if you use the Signal backend it should print the QR code to scan at startup ; you should also find the URI to generate it manually in the logs on the CloudWatch console
- once done, docker compose down will stop the bot by clearing everything from AWS
If you want to customize the image, you have the option to upload it to a private registry on AWS before deploying your stack :
- First make a copy of tests/transbot-sample-conf/sample.env and set the variables inside according to your needs. Let's say you've put it at tests/transbot-sample-conf/aws.env. Image-related variables should look like NICOBOT_IMAGE=123456789012.dkr.ecr.eu-west-1.amazonaws.com/nicobot and NICOBOT_BASE_IMAGE=123456789012.dkr.ecr.eu-west-1.amazonaws.com/nicobot:dev-signal-debian (see ECR docs)
- Make sure to authenticate against your private registry - tip : use Amazon ECR Docker Credential Helper for a seamless integration with the docker command line
- Build the image with docker-compose (docker compose on AWS doesn't support build or push) : cd tests/transbot-sample-conf && docker-compose build
- Push the image to your private AWS ECR[^1][^2] :
docker-compose --env-file aws.env push
- Finally, deploy as before :
docker context use myecs && docker compose --env-file aws.env up
As this method relies on a standard docker-compose file, it is very straightforward and also works on a developer workstation (simply replace docker compose with docker-compose).
However it cannot go beyond the mappings supported to CloudFormation templates (the native AWS deployment descriptor) and AWS's choice of services (Fargate, EFS, ...).
In addition, as seen above, you currently have to use different commands (docker-compose / docker compose) to build & push or deploy.
The --version command-line option that displays the bots' version relies on setuptools_scm, which extracts it from the underlying git metadata. This is convenient because the developer does not have to update the version manually (or forget to do it) ; however, it requires either the version to be fixed inside a Python module or the .git directory to be present.
There were several options, among which the following one has been retained (both points are sketched below) :
- Running setup.py creates / updates the version inside the version.py file
- The scripts then load this module at runtime
Since the version.py file is not saved into the project, setup.py build must be run before the version can be queried. In exchange :
- it requires neither setuptools nor git at runtime
- it frees us from having the .git directory around at runtime ; this is especially useful to make the docker images smaller
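At runtime, loading the generated module is then a simple import, for example (a sketch ; setuptools_scm's write_to generates a version attribute in the module) :

# Sketch : read the version written by `setup.py build` into nicobot/version.py
try:
    from nicobot.version import version as __version__
except ImportError:
    # version.py only exists after a build ; fall back gracefully otherwise
    __version__ = "unknown"

print(__version__)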
Tip : python3 setup.py --version will print the guessed version.
The signal backend (actually signal-cli) requires a Java runtime, which approximately doubles the image size. This led to building separate images (same repo but different tags), allowing smaller images to be used when only the XMPP backend is needed.
- Deploying Docker containers on ECS (Docker's doc.)
- Deploy applications on Amazon ECS using Docker Compose (Amazon's doc.)
- Amazon ECS on AWS Fargate
- Amazon ECR | Private registry authentication
- Amazon ECR | Pushing a Docker image
- Official XMPP libraries : https://xmpp.org/software/libraries.html
- OMEMO compatible clients : https://omemo.top/
- OMEMO official Python library : looks very immature
- Gajim, a Windows/MacOS/Linux XMPP client with OMEMO support : gajim.org | dev.gajim.org/gajim
- Conversations, an Android XMPP client with OMEMO support and paid hosting : https://conversations.im
- xmpppy : this library is very easy to use but it does not allow easy access to thread or timestamp, and no OMEMO...
- github.com/horazont/aioxmpp : officially referenced library from xmpp.org, seems the most complete but lacks a practical introduction and does not provide OMEMO out of the box.
- slixmpp : seems like a cool library too and claims to require minimal dependencies ; plus it supports OMEMO so it's the winner. API doc.
- Best practices for writing Dockerfiles
- Docker development best practices
- DEBIAN_FRONTEND=noninteractive trick
- Dockerfile reference
- Docker hub - python images
- docker-library/openjdk - ubuntu java package has broken cacerts
- Openjdk Dockerfiles @ github
- phusion/baseimage-docker @ github - not used in the end, because not so portable
- Azul JDK - not used in the end because not better than openjdk
- rappdw/docker-java-python image - not used because only for amd64
- Use OpenJDK builds provided by jdk.java.net?
- How to install tzdata on a ubuntu docker image?
- docker.com - Automatic platform ARGs in the global scope
- docker/buildx @ github
- Compiling 'cryptography' for Python
- signal-cli - Providing native lib for libsignal
- github.com/signalapp/zkgroup - Compiling on raspberry pi fails
- Multi-Platform Docker Builds (including cargo-specific cross-building)
- How to build ARMv6 and ARMv7 in the same manifest file. (Compatible tag for ARMv7, ARMv6, ARM64 and AMD64)
- The "dpkg-split: No such file or directory" bug
- The "Command '('lsb_release', '-a')' returned non-zero exit status 1" bug
- Binfmt / Installing emulators
- Cross-Compile for Raspberry Pi With Docker