# Integrate Custom Containers
This guide explains how to integrate custom Docker containers into your Dioptra deployment, enabling the use of specialized worker images or additional services.
## Prerequisites
- A configured Dioptra deployment (see Prepare Your Deployment)
- A Docker Compose override file (see Use Docker Compose Override Files)
- A custom container image, built and available locally or in a registry
- An understanding of the Docker Compose file structure
## Integration Steps

### Step 1: Prepare Your Custom Container
Ensure your custom container image is built and available. You can verify it with:

```shell
docker images | grep custom-container
```
If your container is in a remote registry, ensure Docker has access to pull it.
### Step 2: Add the Container Service Definition
Open docker-compose.override.yml and add your custom container definition in the services: section.
Below is a template for a custom worker container that integrates with Dioptra’s services:
```yaml
services:
  custom-container:
    # Container image name and tag
    image: custom-container:dev
    # Restart policy
    restart: always
    # Hostname for internal DNS resolution
    hostname: custom-container
    # Health check configuration
    healthcheck:
      test:
        - CMD
        - /usr/local/bin/healthcheck.sh
      interval: 30s
      timeout: 60s
      retries: 5
      start_period: 80s
    # Environment variables for Dioptra integration
    environment:
      AWS_ACCESS_KEY_ID: ${WORKER_AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${WORKER_AWS_SECRET_ACCESS_KEY}
      DIOPTRA_WORKER_USERNAME: ${DIOPTRA_WORKER_USERNAME}
      DIOPTRA_WORKER_PASSWORD: ${DIOPTRA_WORKER_PASSWORD}
    # Wait for dependent services before starting
    command:
      - --wait-for
      - <deployment-name>-redis:6379
      - --wait-for
      - <deployment-name>-minio:9001
      - --wait-for
      - <deployment-name>-db:5432
      - --wait-for
      - <deployment-name>-mlflow-tracking:5000
      - --wait-for
      - <deployment-name>-restapi:5000
      - tensorflow-cpu
    # Environment files to load
    env_file:
      - ./envs/ca-certificates.env
      - ./envs/<deployment-name>-worker.env
      - ./envs/<deployment-name>-worker-cpu.env
    # Network to join
    networks:
      - dioptra
    # Volume mounts
    volumes:
      - "worker-ca-certificates:/usr/local/share/ca-certificates:rw"
      - "worker-etc-ssl:/etc/ssl:rw"
      - "<host-data-path>:/dioptra/data:ro"
```
> **Note:** Replace `<deployment-name>` with your deployment's slugified name (default: `dioptra-deployment`) and `<host-data-path>` with the absolute path to your data directory.
> **See also:** Worker Container Requirements Reference lists the required environment variables, process invocation, and network configuration for custom worker containers.
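The `--wait-for` arguments in the template above make the container block until each dependency accepts TCP connections before running its main command. A minimal sketch of that behavior is below; the function name `wait_for` and the `dependencies` list are illustrative, not Dioptra's actual entrypoint code:

```python
# Sketch of "--wait-for": block until a host:port pair accepts TCP
# connections, retrying until an overall deadline passes. Illustrative
# only; Dioptra's worker entrypoint implements this internally.
import socket
import time


def wait_for(host: str, port: int, timeout: float = 60.0, interval: float = 1.0) -> None:
    """Block until host:port accepts a TCP connection, or raise TimeoutError."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # A successful TCP connection means the service is accepting traffic.
            with socket.create_connection((host, port), timeout=interval):
                return
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not reachable after {timeout}s")
            time.sleep(interval)


# Dependencies from the compose template would be checked in order, e.g.:
dependencies = [
    ("dioptra-deployment-redis", 6379),
    ("dioptra-deployment-restapi", 5000),
]
# for host, port in dependencies:
#     wait_for(host, port)
```

Each dependency is checked in sequence, so the worker's main process only starts once every listed service is reachable.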
### Step 3: Configure Environment and Volumes
Customize the container definition for your needs:
- Environment Variables: Add any additional environment variables your container needs in the environment: section.
- Volume Mounts: Add volume mounts to provide access to data or configuration files. See Mount Data Volumes for more details.
- GPU Support: If your container needs GPU access, add the GPU configuration. See Configure GPU Workers for details.
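As an illustration, the fragment below extends the Step 2 template with a hypothetical extra environment variable, an additional read-only mount, and the Compose specification's GPU device-reservation syntax. The variable name and host path are placeholders; consult the linked guides for the settings Dioptra actually requires:

```yaml
services:
  custom-container:
    environment:
      # hypothetical variable consumed by your container's own code
      CUSTOM_PLUGIN_DIR: /dioptra/plugins
    volumes:
      # additional read-only mount for configuration files (placeholder path)
      - "<host-config-path>:/dioptra/config:ro"
    # GPU access via the Compose specification's device reservations
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```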
### Step 4: Start the Deployment
Apply your changes by starting or restarting the deployment:

```shell
docker compose up -d
```

Verify your custom container is running:

```shell
docker compose ps
```

Check the logs if needed:

```shell
docker compose logs custom-container
```
## See Also
- Worker Container Requirements Reference - Worker container environment variables, process invocation, and network requirements
- Use Docker Compose Override Files - Docker Compose override file basics
- Prepare Your Deployment - Full deployment customization
- Mount Data Volumes - Mount data volumes into containers
- Configure GPU Workers - Configure GPU support
- Docker Compose specification - Full reference for compose files