
Intentionally optimize Docker image #18

Open
pihart opened this issue Jul 17, 2022 · 0 comments

pihart (Contributor) commented Jul 17, 2022

The current Docker image is huge—even with recent optimizations (#13, #16), it is around 3.9 GB!

Part of the reason is simply that the image includes so many packages. But another part is that it is built around the existing Puppet provisioner, which isn't amenable to the kinds of optimizations that container images are.

For example, in a Dockerfile it is much easier to install some piece of software in one stage and then copy over only the relevant portions, leaving behind caches and even parts of the program that will never be used. You can download each dependency in a parallel stage and selectively copy it into the final image, dramatically improving the performance of builds and rebuilds. Better yet, you can copy software directly from prebuilt official images; the software providers have done the hard work of isolating the necessary components, and the software is already built.
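To make the multi-stage idea concrete, here is a minimal sketch of what such a Dockerfile could look like. The stage names, the choice of Node.js as an example dependency, and the package and paths involved are hypothetical placeholders, not the actual contents of this image:

```dockerfile
# syntax=docker/dockerfile:1

# Independent stages like these are built in parallel under BuildKit.
FROM debian:bookworm-slim AS tools
RUN apt-get update \
    && apt-get install -y --no-install-recommends some-tool \
    # Drop the apt cache so it never reaches the final image.
    && rm -rf /var/lib/apt/lists/*

# Reuse a prebuilt official image rather than building Node.js ourselves.
FROM node:20-slim AS node

FROM debian:bookworm-slim
# Copy only the relevant portions of each stage; caches and build
# artifacts stay behind. (A real tool would also need any shared
# libraries it links against.)
COPY --from=tools /usr/bin/some-tool /usr/bin/some-tool
COPY --from=node /usr/local/bin/node /usr/local/bin/node
COPY --from=node /usr/local/lib/node_modules /usr/local/lib/node_modules
```

Because the tools and node stages don't depend on each other, changing one doesn't invalidate the other's cache, and only the files named in the COPY instructions end up in the final image.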

Intentionally optimizing for containerized workloads might be worthwhile. This is best done together with #17; see that issue for the general strategy, as well as other suggestions such as slim image variants.¹

Footnotes

  1. But with a lot more work, it is also possible to rewrite the Dockerfile so that it completely bypasses Puppet, while still having students use Puppet in a VM.
