The current Docker image is huge—even with recent optimizations (#13, #16), it is around 3.9 GB!
Part of why is just that it has so many packages. But part of why is that it is organized around the existing Puppet provisioner, which isn't amenable to the kinds of optimizations that container image builds allow.
For example, it is a lot easier in a Dockerfile to initiate an install of some software, and then copy over only the relevant portions (leaving behind caches and even parts of the program that will never be used). You can download each dependency in a parallel stage and selectively copy it into the final image, dramatically improving the performance of builds and rebuilds. Better yet, you can copy the software directly from prebuilt official images; the software providers have done the hard work of isolating the necessary components, and the software is already built.
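A multi-stage build along these lines might look something like the minimal sketch below. The stage names, base images, URL, and paths are illustrative assumptions, not anything this repo actually installs:

```dockerfile
# Hypothetical sketch only: stages, images, and paths are assumptions.

# Copy a prebuilt tool straight from its official image; the provider
# has already isolated the necessary components under /usr/local/go.
FROM golang:1.21 AS go

# Fetch another dependency in its own stage; its download artifacts
# and caches never reach the final image.
FROM debian:bookworm AS fetch
ADD https://example.com/some-tool.tar.gz /tmp/some-tool.tar.gz
RUN tar -xzf /tmp/some-tool.tar.gz -C /opt

# Final image: selectively copy only the relevant portions, leaving
# everything else behind in the earlier stages.
FROM debian:bookworm-slim
COPY --from=go /usr/local/go /usr/local/go
COPY --from=fetch /opt/some-tool /opt/some-tool
ENV PATH="/usr/local/go/bin:/opt/some-tool/bin:${PATH}"
```

With BuildKit, independent stages are built in parallel, and editing one stage invalidates only that stage's cache, so rebuilds stay fast.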
Intentionally optimizing for containerized workloads might be worthwhile. This is best done together with #17; see that issue for the general strategy, as well as other suggestions such as slim versions.¹
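As a small illustration of the slim-versions idea (the base image here is an arbitrary example, not necessarily one this image would use): many official images publish a `-slim` tag that drops compilers, documentation, and other extras, and switching the base alone can shave off a large fraction of the size.

```dockerfile
# Illustrative assumption: the -slim variant of an official image is
# several times smaller than the corresponding full base image.
FROM python:3.12-slim
```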
Footnotes
1. But with a lot more work, it is also possible to rewrite the Dockerfile so that it completely bypasses Puppet, while still having students use Puppet in a VM.