An (opinionated) collection of open-source, CNCF-based Docker services that assist in building 12-factor applications.
Future work - Contribute to awesome-compose.
- Service Discovery
- Consul
- Registrator (registers Docker containers with Consul as services)
- Traefik for load balancing & proxying against Consul services
- Logspout for log collection
- Metrics - The TIG Stack
- Telegraf (metrics aggregator and pipeline) w/ DataDog StatsD parser. Refer to telegraf.conf for customization options.
- InfluxDB 1.x (metrics storage)
- Grafana (visualization)
- TODO: Prometheus (PRs welcome). Telegraf + InfluxDB work fine, though, and Telegraf can be configured to scrape Prometheus metrics.
- Instrumentation
- Jaeger (distributed tracing)
All service passwords are instrument. For example, the Grafana credentials are admin:instrument, and InfluxDB can be queried with instrument as the password.
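As a quick sketch, you can verify the InfluxDB password from the command line. The admin username and host port 8086 are assumptions based on InfluxDB 1.x defaults, not something this stack guarantees:

```shell
# Query InfluxDB 1.x using the stack's default password.
# Assumptions: user "admin" and host port 8086 (InfluxDB 1.x defaults).
INFLUX_USER="admin"
INFLUX_PASS="instrument"
curl -sG "http://localhost:8086/query" \
  -u "${INFLUX_USER}:${INFLUX_PASS}" \
  --data-urlencode "q=SHOW DATABASES" \
  || echo "InfluxDB not reachable (is the stack running?)"
```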
- Consul - http://localhost:8500
- Logspout - http://localhost:8000/logs
- Traefik - http://localhost:8080/dashboard
- Grafana - http://localhost:3000
  - Docker dashboard - id:893
  - Telegraf dashboards - id:928, id:5955
- Jaeger - http://localhost:16686
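As a quick smoke test, you can poll each endpoint above from a shell. This is a sketch assuming the default ports listed; adjust the URLs if you have remapped anything:

```shell
# Print UP/DOWN for each UI in the stack (default ports assumed).
for url in \
    http://localhost:8500 \
    http://localhost:8000/logs \
    http://localhost:8080/dashboard \
    http://localhost:3000 \
    http://localhost:16686 ; do
  if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
    echo "UP   $url"
  else
    echo "DOWN $url"
  fi
done
```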
First, download this repo as a ZIP (use the clone button) and extract it into your project as a folder named .instrument.
Next, create a Docker network for the components:
docker network create instrument_web
Then, if you are using your own Makefile, add the .instrument make targets to it:
echo -e '\ninclude .instrument/targets.mk' >> Makefile
Otherwise, go ahead and create your own Makefile using the make targets we've provided. (Trust me, using one is nicer than memorizing a bunch of Docker commands.)
Here's a starting template. Note: clean and install should be updated to do things specific to your own code. Also important: tabs matter when editing a Makefile.
install:
@echo "installing!"
clean:
@echo "cleaning!"
include .instrument/targets.mk
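Since tabs matter, here's one way to spot space-indented recipe lines (the usual cause of make's "missing separator" error). This sketch demonstrates the check against a generated sample file; run the same grep against your own Makefile:

```shell
# Demonstrate the tab check on a known-good sample Makefile fragment.
SAMPLE=$(mktemp)
printf 'install:\n\t@echo "installing!"\n' > "$SAMPLE"
# Recipe lines must start with a tab; any line starting with a space is a bug.
if grep -q '^ ' "$SAMPLE"; then
  echo "space-indented lines found - fix them before running make"
else
  echo "tabs look fine"
fi
rm -f "$SAMPLE"
```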
Alright, alright, alright!
Having the infrastructure in place is great, but it doesn't help you, the application developer, ensure your app will run on these services.
To test your own app in this environment, create your own docker-compose.yml file. Here's a starting template:
version: '3'
networks:
instrument_web:
external: true
## Update here
services:
app:
image: containous/whoami # Replace with your own image
ports: # Update with your own ports
- "8081:80"
networks: ['instrument_web'] # This attaches to the underlying infrastructure network
environment: # Update with your environment
FOO: bar
Next, add additional services that specify:
- Any dependent services (such as databases, Kafka, etc.). Make sure to only copy the internal sections of any services block.
- (Optional) Any links to existing, external services. If you use a remote service over the network, it is up to you to ensure you have the appropriate network connectivity and firewall options from your machine to those. Best practices say to configure such connections via the environment block of Compose or via in-app config wiring.
- Externalized secrets
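Once you've added your services, it's worth validating the file before bringing anything up. A sketch, assuming the v1 docker-compose CLI (the v2 equivalent is docker compose config) and that you run it in the folder containing docker-compose.yml:

```shell
# Sanity-check docker-compose.yml before bringing anything up.
if command -v docker-compose >/dev/null 2>&1; then
  RESULT=$(docker-compose config --quiet >/dev/null 2>&1 \
    && echo "compose file OK" || echo "compose file has errors")
else
  RESULT="docker-compose not installed"
fi
echo "$RESULT"
```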
Note: For simplicity, Hashicorp Vault is excluded from this stack.
Docker Compose can reference a .env file should you need local credentials; otherwise, use dummy credentials for test databases and such. No one is responsible for leaking access credentials in Git repos but yourself.
# add to your gitignore
echo -e '\n.env' >> .gitignore
vim .env
YES!!!
With all that in place, write in your services (refer to Compose docs above as needed), then get ready to run your application(s)!
make ult-instrument
This will run until stopped via Ctrl + C.
Hopefully the services listed above are enough for your needs. Of course, feel free to mix and match as necessary.
Make sure you have the following file structure. Any extra files should include documentation and your local application code + build processes. As mentioned below, this has mostly been tested with Apache Maven, but NPM or similar tooling could be built around this process.
.instrument/
conf/
grafana/
telegraf/
telegraf.conf
targets.mk
docker-compose.yml
Makefile
docker-compose.yml
Are you stuck here?
version: '3'
services:
myapp:
image: ???
You can either pull an image directly off Docker Hub, or more commonly, you are in development mode and testing services locally. When an image is local, you can find it with docker images.
When using any Docker image, the full image name would look like
[docker-registry]/[git-org]/[image-name]:[image-version]
where the parts of a full Docker image reference are:
- (optional) Docker Registry
- (optional) Docker Org/User
- (required) Docker Image
- (preferred) Image version
Without a registry specified, the default is Docker Hub. Use docker images to see what images are already downloaded on your local machine. Creating a Docker account is free and lets you create your own Docker org/user where you can push images for others to use.
If you exclude the image version, it defaults to latest. Docker best practices say to always use a pinned version, preferably SemVer, which the maven-release-plugin can generate alongside the Fabric8 docker-maven-plugin.
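Putting the parts together, a full reference looks like this (all values below are illustrative placeholders, not a real registry or image):

```shell
# Assemble a full image reference from its parts (all values are examples).
REGISTRY="registry.example.com"   # optional; omit to default to Docker Hub
ORG="myorg"                       # optional Docker org/user
IMAGE="myapp"                     # required
VERSION="1.2.3"                   # preferred: a pinned SemVer tag, not "latest"
FULL_IMAGE="${REGISTRY}/${ORG}/${IMAGE}:${VERSION}"
echo "$FULL_IMAGE"   # registry.example.com/myorg/myapp:1.2.3
```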
Wait... Apache Maven?
Yes, you heard me right... Read on.
Maven is not only for Java apps! You will need Java installed, but Maven's feature set outweighs that burden.
As mentioned, the Fabric8 plugin works fine and has been tested with this project, so refer to its documentation for configuration options. In general, it works similarly to the maven-assembly-plugin in that it bundles the final build artifacts into a Docker image.
Other options for building Docker images from Maven include:
- (My favorite) jib-maven-plugin by Google. Note: this is being merged into the Fabric8 Maven plugins - see Eclipse JKube.
- Spotify Docker Maven Plugin (INACTIVE) - it is stable and functional, but Jib builds more optimized images.
If you find Gradle, SBT, or another build tool works better for you, feel free to let us know.
Relax. Breeeaathee.
If everything started okay, logs from stdout/stderr of all the services will be tail'd. They can also be followed in another tab (terminal or browser) via curl http://localhost:8000/logs. Refer to the Logspout documentation on performing filters. Of course, grep works great here too.
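For example, to watch only one service's lines, pipe the Logspout stream through grep. A sketch: the service name "app" is an assumption matching the sample compose file earlier, and --max-time is only there so the example doesn't run forever:

```shell
# Stream Logspout output and keep only lines mentioning one service.
LOGS_URL="http://localhost:8000/logs"
SERVICE="app"   # assumption: the service name from the sample compose file
# -N disables curl's buffering so lines appear as they arrive.
curl -sN --max-time 5 "$LOGS_URL" | grep --line-buffered "$SERVICE" \
  || echo "no matching log lines (is the stack running?)"
```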
Should a container die, you'll need to debug it.
Useful commands:
- docker-compose ps - see what's running (must be run in the same folder as the compose file)
- docker-compose logs <name> - dump the logs of that service. Use logs -f <name> to follow them.
- docker-compose exec <name> bash - shell into a container to inspect files and processes like any other terminal session.
Kubernetes manifests are nice, sure, but Kube YAML is needlessly verbose for a local environment.