Build the Container Images#
This guide explains how to build the Dioptra container images from the source repository. After completing these steps, you will have local container images ready for deployment.
Note
Most users do not need to build images manually. Pre-built images (including GPU workers) are available for download from the GitHub Container Registry. See Download the Container Images for the faster approach.
Build images locally only if you need to:
Make custom modifications to the container images
Work in an environment without access to the GitHub Container Registry
Prerequisites#
Docker Engine installed and running
Git installed
A terminal with access to Docker commands
Building the Images#
Step 1: Clone the Repository#
Clone the Dioptra GitHub repository if you have not already done so. Use either HTTPS:
git clone https://github.com/usnistgov/dioptra.git
or SSH:
git clone [email protected]:usnistgov/dioptra.git
Change into the cloned repository directory:
cd dioptra
Note
To build the latest development versions of the containers, switch to the dev branch:
git checkout -b dev origin/dev
Step 2: Add CA Certificates (Optional)#
If you are building the containers in a corporate environment that uses its own certificate authority, copy your CA certificates into the docker/ca-certificates/ folder before building.
cp /path/to/your/ca-certificate.crt docker/ca-certificates/
See the docker/ca-certificates/README.md file for additional information.
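Before building, you can optionally sanity-check that the copied file is a parseable certificate. This assumes openssl is available on your machine; the filename below is illustrative:

```shell
# Print the certificate's subject line; the command fails if the file
# is not a valid PEM-encoded X.509 certificate.
openssl x509 -in docker/ca-certificates/ca-certificate.crt -noout -subject
```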
Step 3: Build the Container Images#
Use the Makefile to build the container images:
make build-nginx build-mlflow-tracking build-restapi build-pytorch-cpu build-tensorflow-cpu
Note
The PyTorch and TensorFlow images are large (several gigabytes each) and may take a long time to build.
Tip
If make cannot find your Python executable, specify it manually by prepending PY=/path/to/python3 to the command.
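For example, if the interpreter you want lives at a non-default path (the path below is only illustrative, and a single target is shown for brevity):

```shell
# Tell the Makefile which Python interpreter to use for this build.
PY=/usr/local/bin/python3 make build-restapi
```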
Step 4: Build GPU Images (Optional)#
If you plan to run GPU-accelerated workers, build the GPU-enabled images. The images can be built on any machine, but running them requires a host with CUDA-compatible GPUs and the NVIDIA Container Toolkit installed.
make build-pytorch-gpu build-tensorflow-gpu
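Before relying on the GPU images, you may want to confirm that the host can run GPU containers at all. A common smoke test is to run nvidia-smi inside a minimal CUDA container (the image tag below is illustrative and is not something Dioptra ships):

```shell
# Succeeds only if the NVIDIA Container Toolkit is installed and at
# least one GPU is visible to Docker.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```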
Step 5: Verify the Images#
Run docker images to verify that the container images are available with the dev tag:
docker images | grep dioptra
You should see output similar to the following:
REPOSITORY                TAG   IMAGE ID       CREATED       SIZE
dioptra/nginx             dev   17235f76d81c   3 weeks ago   243MB
dioptra/restapi           dev   f7e59af397ae   3 weeks ago   1.16GB
dioptra/mlflow-tracking   dev   56c574822dad   3 weeks ago   1.04GB
dioptra/pytorch-cpu       dev   5309d66defd5   3 weeks ago   3.74GB
dioptra/tensorflow2-cpu   dev   13c4784dd4f0   3 weeks ago   3.73GB
Note
The IMAGE ID, CREATED, and SIZE fields will vary. Verify that the REPOSITORY and TAG columns match.
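If you prefer a scripted check to eyeballing the table, a small helper along these lines can flag missing images. This function is written for this guide and is not part of the Dioptra repository:

```shell
# Scan `docker images` output (read from stdin) for the expected
# Dioptra CPU images tagged "dev" and report any that are missing.
check_dioptra_images() {
  awk '
    BEGIN { split("dioptra/nginx dioptra/restapi dioptra/mlflow-tracking dioptra/pytorch-cpu dioptra/tensorflow2-cpu", names, " ") }
    $2 == "dev" { seen[$1] = 1 }
    END {
      status = 0
      for (i = 1; i <= 5; i++)
        if (!(names[i] in seen)) { print "MISSING: " names[i]; status = 1 }
      exit status
    }'
}
# Usage: docker images | check_dioptra_images
```

The function exits nonzero when anything is missing, so it can also gate a deployment script.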
Warning
Locally built images have a different registry prefix than downloaded images. See Understanding Container Registry Prefixes for implications when configuring your deployment.
Next Steps#
Once you have finished building the container images, move on to the next step: Prepare Your Deployment
See Also#
Prepare Your Deployment - Set up a deployment using your built images
Add Custom CA Certificates - Add CA certificates to a running deployment
Configure GPU Workers - Configure GPU workers in your deployment
Understanding Container Registry Prefixes - How registry prefixes differ between locally built and downloaded images