API Deployment

Erik Roberts edited this page Feb 2, 2024 · 1 revision

When we talk about deployment in this article, there are really two major steps involved:

  1. Testing, building, and releasing our repository via CI/CD
  2. Running the API in a production environment

ℹ️ This article will in part discuss automatic deployments to remote servers, which primarily applies to RPI TV's infrastructure. If you're a third party and wish to take advantage of this, you may be able to do so by forking this repository or setting up triggers for your own CI/CD system.

Automated Actions

  • When you submit a pull request to the master branch:
    • the test suite will be run.
  • When a commit is pushed to the master branch:
    • the Docker container will be built.
    • the Docker container will be pulled and deployed to the staging environment.
  • When a commit is pushed to the release branch:
    • the test suite will be run.
    • the Docker container will be built.
    • the Docker container will be pulled and deployed to the production environment.

These workflows are defined within the /.github/workflows directory.
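As a rough sketch, the pull-request test workflow could be wired up along these lines (the file, job, and script names here are assumptions for illustration, not the repository's actual configuration):

```yaml
# Illustrative sketch of a test-on-PR workflow; the real files in
# /.github/workflows may differ.
name: Test
on:
  pull_request:
    branches: [master]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm test # assumed test script name
```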

❗ This is not all implemented yet. See issue #73

Building Locally

The project can be built locally via the npm run build command. This will build the Nest project and put the entry point at /dist/src/main.js. After building, you can run the project with npm run start:prod (or simply node dist/src/main).
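For reference, the relevant package.json scripts in a typical Nest project look roughly like the fragment below (the exact contents in this repository may differ):

```json
{
  "scripts": {
    "build": "nest build",
    "start:prod": "node dist/src/main"
  }
}
```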

Building Docker Container

The project's Docker container can be built locally via docker build .. Of course, you can tag the built image if you'd like. The built image will not contain any development dependencies or resources, including the CLI; it will only contain the dist and node_modules folders. The image is currently based on node:18 and defaults to running with NODE_ENV=production if not explicitly provided at runtime.
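Conceptually, a multi-stage Dockerfile matching that description could look like the sketch below. This is an assumption-laden illustration, not necessarily the repository's actual Dockerfile:

```dockerfile
# Illustrative multi-stage build sketch; the real Dockerfile may differ.
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
RUN npm prune --omit=dev   # strip development dependencies

FROM node:18
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/src/main"]
```

The two-stage structure is what keeps development dependencies and the CLI out of the final image: only dist and node_modules are copied across.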

You can also build the image via Docker Compose using the api profile at startup:

docker compose --profile api up --build

More information on profiles can be found in the next section.

Runtime

Dependencies

This project requires PostgreSQL, Redis, and RabbitMQ servers at runtime. Their connection URIs are set via environment variables, as outlined in the README. If you use the provided docker-compose.yml file, these will be set up for you.
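The connection URIs might be provided along these lines; the variable names below are placeholders, so consult the README for the actual names:

```
# .env — placeholder variable names; see the README for the real ones
DATABASE_URL=postgresql://user:password@localhost:5432/glimpse
REDIS_URL=redis://localhost:6379
RABBITMQ_URL=amqp://localhost:5672
```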

If you want to use the Stream module, you also need the glimpse-video and glimpse-video-control containers running on the same Docker network. The rpitv/glimpse-video repository has a docker-compose.yml file with this already set up for you. You may combine these Docker Compose files or run them independently.
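One way to run the two Compose stacks independently while keeping the containers reachable from each other is to attach both to a shared external Docker network. A hedged sketch (the network name here is an assumption):

```yaml
# Illustrative fragment: attach the API service to an external network
# shared with the glimpse-video containers. The network name is assumed.
services:
  api:
    networks:
      - glimpse

networks:
  glimpse:
    external: true
```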

Startup

The docker-compose.yml file provided by this repository is designed for development environments. Simply running docker compose up will only spin up the three dependencies: PostgreSQL, Redis, and RabbitMQ. To also spin up the API, you need to use the correct Docker Compose profile.

  • api - This profile will build the local Dockerfile for the API.
  • production - This profile will pull the latest Docker image for the API.

To use these profiles, you must have Docker Compose v1.28.0 or later and provide the profile name via docker compose --profile <name> up. For example, docker compose --profile production up will run the pulled Docker image. In an actual production environment, you may want to edit the provided docker-compose.yml file to not require this --profile flag.
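In Compose, profiles are declared per service, so the two profiles above could be wired up roughly as follows (the service names and image location are assumptions for illustration):

```yaml
# Illustrative fragment showing Compose profiles; names are assumptions.
services:
  api:
    build: .                 # builds the local Dockerfile
    profiles: ["api"]
  api-production:
    image: ghcr.io/rpitv/glimpse-api:latest   # assumed image location
    profiles: ["production"]
```

Services with a profiles key are skipped by a plain docker compose up, which is why the dependencies start unconditionally but the API does not.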

Images will only be built/pulled if they are not already available. To rebuild/pull the latest version, you need to explicitly tell Docker Compose to do so:

docker compose up --build --pull=always # Builds local Docker files and pulls the latest Docker images

ℹ️ The --detach (shorthand -d) flag will start the Docker containers in the background.

⚠️ New versions of the API should not be run without migrating the database. This should be handled automatically within CI/CD.

Running with the UI

This project is not particularly designed to run on its own. Instead, it should be run behind a reverse proxy which also serves the UI (rpitv/glimpse-ui). Therefore, in a properly set up production environment, you do not need to expose any Docker container ports besides those of your UI container.
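As an example of that reverse-proxy arrangement, an nginx server block might route API traffic to the API container and everything else to the UI container. The paths, hostnames, and ports below are assumptions, not this project's actual configuration:

```
# Illustrative sketch; paths, upstream hostnames, and ports are assumptions.
server {
    listen 443 ssl;

    location /api/ {
        proxy_pass http://glimpse-api:4000/;  # assumed API container name/port
    }

    location / {
        proxy_pass http://glimpse-ui:3000/;   # assumed UI container name/port
    }
}
```

With this layout, only the reverse proxy's ports need to be published; the API, database, and message broker stay reachable solely over the internal Docker network.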

⚠️ Your PostgreSQL, Redis, and RabbitMQ ports should never be exposed, as exposing them poses a security risk.