Running Dioptra#

Note

See the Glossary for the meaning of the acronyms used in this guide.

The Dioptra GitHub repository provides a Cookiecutter template in the cookiecutter-templates/cookiecutter-dioptra-deployment/ folder that you can use to generate the scripts, configuration files, and Docker Compose files needed to run Dioptra on a single machine. This guide will show you how to apply the template, run the initialization script, and start the application services for the first time.

Prerequisites#

The steps in this guide assume that Docker (with Docker Compose) and the cookiecutter Python package are installed and available on the host machine.

Quickstart#

The minimal terminal commands needed to configure and run a fresh installation of Dioptra are provided below. They generate a setup that is appropriate for testing Dioptra on your personal computer or laptop.

# Move to the base directory where you plan to store your Dioptra
# configuration folder
mkdir -p /path/to/deployments/folder  # Create it if it doesn't exist
cd /path/to/deployments/folder

# Run cookiecutter and set the template's variables
cookiecutter gh:usnistgov/dioptra --checkout main \
  --directory cookiecutter-templates/cookiecutter-dioptra-deployment

# Move into the new folder created by the template. The new folder's name
# is based on the deployment_name variable. The default name for the folder
# is dioptra-deployment.
cd dioptra-deployment

# Initialize Dioptra using the init-deployment.sh script
./init-deployment.sh

# Start Dioptra
docker compose up -d

When you are done using Dioptra, navigate back to the configuration folder /path/to/deployments/folder/dioptra-deployment in your terminal and run:

# Stop Dioptra
docker compose down

The rest of this page is a detailed walk-through of the above commands and covers more advanced topics, such as enabling SSL/TLS for the NGINX and Postgres services and mounting additional folders in the worker containers.

Applying the template#

Create the folder where you plan to keep the deployment folder and change into it so that it becomes your working directory. Next, run cookiecutter and apply the Dioptra Deployment template.

# To use the template published on the dev branch, use '--checkout dev' instead
cookiecutter gh:usnistgov/dioptra --checkout main --directory cookiecutter-templates/cookiecutter-dioptra-deployment

You will now be asked to set the variables needed to customize the generated configuration files. In most cases you can just use the default value, but there are a few that you may need to customize. Below is a full list of the variables, their default values, and explanations of what they mean.

  • deployment_name: A name to associate with the deployment. The template will “slugify” the name you provide and use it in the names of generated files, folders, and configuration settings. For example, it determines the name of the main folder created by the template. (default: Dioptra deployment)

  • container_registry: This is only needed if you are building and pushing Dioptra container images to your own Docker registry. As an example, if you are using a Docker registry hosted on a hypothetical JFrog Artifactory instance at the URL acme-dockerv2-virtual.jfrog.io, then you would answer acme-dockerv2-virtual.jfrog.io. Otherwise, leave this blank. (default: empty string)

  • container_tag: This should match the tags of the Dioptra container images you will be using (if you are using Docker, you can see the available tags by running docker images). If you built the images manually, the images will have the dev tag. (default: dev)

  • docker_compose_path: This should match the way you invoke Docker Compose on the host machine. If you are running an up-to-date version of Docker and have Docker Compose v2 or higher installed, then you should answer docker compose. If instead you are using the Python-based Docker Compose v1, or you installed Docker Compose v2 in a standalone manner, then this should be the absolute path to the docker-compose binary, for example /usr/local/bin/docker-compose. (default: /usr/local/bin/docker-compose)

  • systemd_required_mounts: This applies only if you are deploying Dioptra on a Linux distribution that uses systemd, such as Ubuntu, Fedora, or Red Hat Enterprise Linux; otherwise, leave this blank. The “required mounts” are the storage devices mounted on the host machine, which may include NFS volumes or block storage volumes, that must be accessible before the Dioptra application containers can start. This list should include all the devices that store the container images, runtime files, and volume data for Dioptra. Run sudo systemctl list-units -t mount to print the full list of mounted devices on the host machine. The names you should use are in the first column, labeled UNIT. Separate multiple mounts with spaces, e.g. data.mount var.mount. (default: empty string)

  • nginx_server_name: If you are hosting Dioptra on a remote machine, such as a virtual private server, then this will either be a domain name like dioptra.example.org or the IP address you use to access the machine. If you are running Dioptra on your personal computer or laptop, then answer _. (default: dioptra.example.org)

  • nginx_expose_ports_on_localhost_only: Choosing True will generate a docker-compose.yml file that binds the NGINX ports to the loopback interface (localhost, 127.0.0.1). In this configuration, you will need to use port forwarding via SSH or a firewall in order to access the services (see the SSH tunnel sketch after this list). If False, the NGINX ports will be exposed on all available network interfaces. Dioptra currently does not have an access control system, so for now it is recommended that you choose True on remote machines and use SSH or firewall rules to manage access. If you are running Dioptra on your personal computer or laptop, also choose True. (default: True)

  • postgres_container_tag: Sets the version of postgres to be used in the deployment. For a full list of available versions, visit the Tags tab on the postgres image page on Docker Hub. (default: 15)

  • pgadmin_default_email: This email address will be used to create the default account on the pgadmin4 app. (default: dioptra@example.com)

  • num_tensorflow_cpu_workers: Sets the number of TensorFlow CPU workers to use in your Dioptra deployment. Each worker is dedicated to watching a named queue and running one job at a time. (default: 1)

  • num_tensorflow_gpu_workers: Sets the number of TensorFlow GPU workers to use in your Dioptra deployment. Each worker is assigned a dedicated GPU, watches a named queue, and runs one job at a time. The combined number of TensorFlow and PyTorch GPU workers cannot be greater than the number of GPUs available on the host machine. (default: 0)

  • num_pytorch_cpu_workers: Sets the number of PyTorch CPU workers to use in your Dioptra deployment. Each worker is dedicated to watching a named queue and running one job at a time. (default: 1)

  • num_pytorch_gpu_workers: Sets the number of PyTorch GPU workers to use in your Dioptra deployment. Each worker is assigned a dedicated GPU, watches a named queue, and runs one job at a time. The combined number of TensorFlow and PyTorch GPU workers cannot be greater than the number of GPUs available on the host machine. (default: 0)
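
If you chose True for nginx_expose_ports_on_localhost_only on a remote machine, you can reach the published services through an SSH tunnel. Below is a minimal sketch; the remote hostname dioptra.example.org, the username, and the local port 20080 are illustrative placeholders.

# Forward local port 20080 to port 80 on the remote machine's loopback
# interface, where NGINX publishes the deployment's main endpoint
ssh -N -L 20080:127.0.0.1:80 username@dioptra.example.org

# While the tunnel is open, browse to http://localhost:20080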

Example#

Below is an example of what it looks like to customize some of the template variables after invoking the template using cookiecutter.

deployment_name [Dioptra deployment]: Dioptra deployment
container_registry []:
container_tag [dev]: dev
docker_compose_path [/usr/local/bin/docker-compose]: docker compose
systemd_required_mounts []:
nginx_server_name [dioptra.example.org]: _
Select nginx_expose_ports_on_localhost_only:
1 - True
2 - False
Choose from 1, 2 [1]: 1
postgres_container_tag [15]: 15
pgadmin_default_email [dioptra@example.com]: dioptra@example.com
num_tensorflow_cpu_workers [1]: 1
num_tensorflow_gpu_workers [0]: 0
num_pytorch_cpu_workers [1]: 1
num_pytorch_gpu_workers [0]: 0

Folder organization#

The following tree-like diagram shows the files created by the cookiecutter template. The annotations explain the purpose of the configuration files. Note that this diagram includes server certificates and private keys for the NGINX and Postgres services, which may or may not apply to your use case.

.
├── config
│   ├── db
│   │   └── init-db.sh                                       <- Postgres initialization script that creates the accounts and databases used in the Dioptra app.
│   ├── minio
│   │   ├── builtin-plugins-readonly-policy.json             <- A Minio access policy. Configures a readonly role for the dioptra_builtins/ folder in the plugins bucket.
│   │   ├── builtin-plugins-readwrite-policy.json            <- A Minio access policy. Configures a readwrite role for the dioptra_builtins/ folder in the plugins bucket.
│   │   ├── custom-plugins-readonly-policy.json              <- A Minio access policy. Configures a readonly role for the dioptra_custom/ folder in the plugins bucket.
│   │   ├── custom-plugins-readwrite-policy.json             <- A Minio access policy. Configures a readwrite role for the dioptra_custom/ folder in the plugins bucket.
│   │   ├── dioptra-readonly-policy.json                     <- A Minio access policy. Configures a readonly role for all folders and buckets created and used by Dioptra.
│   │   ├── mlflow-tracking-readwrite-policy.json            <- A Minio access policy. Configures a readwrite role for the artifacts/ folder in the mlflow-tracking bucket.
│   │   ├── plugins-readonly-policy.json                     <- A Minio access policy. Configures a readonly role for the dioptra_builtins/ and dioptra_custom/ folders in the plugins bucket.
│   │   ├── workflow-downloadonly-policy.json                <- A Minio access policy. Configures a downloadonly role for the workflow bucket.
│   │   └── workflow-uploadonly-policy.json                  <- A Minio access policy. Configures an uploadonly role for the workflow bucket.
│   └── nginx
│       ├── http_dbadmin.conf                                <- (HTTP) Configures Nginx to serve the pgadmin4 dashboard over http connections.
│       ├── http_default.conf                                <- (HTTP) Configures the basic defaults for Nginx to use when serving content over http connections.
│       ├── http_minio.conf                                  <- (HTTP) Configures Nginx to serve the Minio service over http connections.
│       ├── http_mlflow.conf                                 <- (HTTP) Configures Nginx to serve the MLflow Tracking service over http connections.
│       ├── http_proxy.conf                                  <- (HTTP) Configures the general reverse proxy settings for Nginx when serving content over http connections, including how Nginx modifies the request header fields.
│       ├── http_restapi.conf                                <- (HTTP) Configures Nginx to serve the Dioptra REST API service over http connections.
│       ├── https_dbadmin.conf                               <- (HTTPS) Configures Nginx to serve the pgadmin4 dashboard over https connections.
│       ├── https_default.conf                               <- (HTTPS) Configures the basic defaults for Nginx to use when serving content over https connections, which includes setting the path to the server certificate and private key.
│       ├── https_minio.conf                                 <- (HTTPS) Configures Nginx to serve the Minio service over https connections.
│       ├── https_mlflow.conf                                <- (HTTPS) Configures Nginx to serve the MLflow Tracking service over https connections.
│       ├── https_proxy.conf                                 <- (HTTPS) Configures the general reverse proxy settings for Nginx when serving content over https connections, including how Nginx modifies the request header fields.
│       ├── https_restapi.conf                               <- (HTTPS) Configures Nginx to serve the Dioptra REST API service over https connections.
│       └── stream_db.conf                                   <- Configures Nginx to proxy tcp requests to the Postgres database. This is a simple passthrough, so any SSL encryption must be enabled in the Postgres service itself, see ssl/db/README.md for more info.
├── envs
│   ├── ca-certificates.env                                  <- Sets common environment variables that applications use to find extra CA certificates. Safe to commit to a git repo.
│   ├── dioptra-deployment-db.env                            <- Sets environment variables that customize the Postgres database. Safe to commit to a git repo.
│   ├── dioptra-deployment-dbadmin.env                       <- Sets environment variables that customize the pgadmin4 dashboard. Safe to commit to a git repo.
│   ├── dioptra-deployment-mlflow-tracking.env               <- Sets environment variables that customize the MLflow Tracking service. Safe to commit to a git repo.
│   ├── dioptra-deployment-restapi.env                       <- Sets environment variables that customize the Dioptra REST API service. Safe to commit to a git repo.
│   ├── dioptra-deployment-worker-cpu.env                    <- Sets environment variables that customize the CPU-based Dioptra workers. Safe to commit to a git repo.
│   └── dioptra-deployment-worker.env                        <- Sets environment variables that customize the Dioptra workers. Safe to commit to a git repo.
├── scripts
│   ├── copy-extra-ca-certificates.m4                        <- Used in the init-deployment.sh and init-named-volumes.m4 scripts to inject the extra CA certificates in the ssl/ca-certificates folder into the services.
│   ├── file-copy.m4                                         <- Used in the init-named-volumes.m4 script to handle file copying. Emits logging information and sets appropriate access and ownership permissions.
│   ├── git-clone.m4                                         <- Used in the init-named-volumes.m4 script to handle cloning git repositories. Emits logging information and sets appropriate access and ownership permissions.
│   ├── globbed-copy.m4                                      <- Used in the init-named-volumes.m4 script to handle globbed file copying. Emits logging information and sets appropriate access and ownership permissions.
│   ├── init-minio.sh                                        <- Used in the init-deployment.sh script to set the Minio policies in config/minio.
│   ├── init-named-volumes.m4                                <- Used in the init-deployment.sh script to prepare the named storage volumes used by each container. Actions include copying in configuration files and setting file access and ownership permissions.
│   ├── init-scripts.sh                                      <- Used in the init-deployment.sh script to translate this folder's *.m4 files into executable bash scripts using argbash.
│   ├── manage-postgres-ssl.m4                               <- Used in the init-deployment.sh script to enable and disable SSL in Postgres.
│   └── set-permissions.m4                                   <- Used in the init-named-volumes.m4 script to set appropriate access and ownership permissions.
├── secrets
│   ├── dioptra-deployment-db.env                            <- Sets environment variables containing sensitive passwords. NOT SAFE TO SHARE OR COMMIT TO A GIT REPO.
│   ├── dioptra-deployment-dbadmin.env                       <- Sets environment variables containing sensitive passwords. NOT SAFE TO SHARE OR COMMIT TO A GIT REPO.
│   ├── dioptra-deployment-minio-accounts.env                <- Sets environment variables containing sensitive passwords. NOT SAFE TO SHARE OR COMMIT TO A GIT REPO.
│   ├── dioptra-deployment-minio.env                         <- Sets environment variables containing sensitive passwords. NOT SAFE TO SHARE OR COMMIT TO A GIT REPO.
│   ├── dioptra-deployment-mlflow-tracking-database-uri.env  <- Sets environment variables containing sensitive passwords. NOT SAFE TO SHARE OR COMMIT TO A GIT REPO.
│   ├── dioptra-deployment-mlflow-tracking.env               <- Sets environment variables containing sensitive passwords. NOT SAFE TO SHARE OR COMMIT TO A GIT REPO.
│   ├── dioptra-deployment-restapi-database-uri.env          <- Sets environment variables containing sensitive passwords. NOT SAFE TO SHARE OR COMMIT TO A GIT REPO.
│   ├── dioptra-deployment-restapi.env                       <- Sets environment variables containing sensitive passwords. NOT SAFE TO SHARE OR COMMIT TO A GIT REPO.
│   ├── dioptra-deployment-worker.env                        <- Sets environment variables containing sensitive passwords. NOT SAFE TO SHARE OR COMMIT TO A GIT REPO.
│   └── postgres-passwd.env                                  <- Sets environment variables containing sensitive passwords. NOT SAFE TO SHARE OR COMMIT TO A GIT REPO.
├── ssl
│   ├── ca-certificates
│   │   └── README.md            <- README file explaining the folder's purpose and which files need to be copied here.
│   ├── db
│   │   ├── server.crt           <- MUST BE COPIED HERE MANUALLY. The server certificate for enabling encrypted traffic to the Postgres database.
│   │   ├── server.key           <- MUST BE COPIED HERE MANUALLY. The private key for enabling encrypted traffic to the Postgres database. NOT SAFE TO SHARE OR COMMIT TO A GIT REPO.
│   │   └── README.md            <- README file explaining the folder's purpose and which files need to be copied here.
│   └── nginx
│       ├── server.crt           <- MUST BE COPIED HERE MANUALLY. The server certificate for enabling HTTPS in the Nginx webserver.
│       ├── server.key           <- MUST BE COPIED HERE MANUALLY. The private key for enabling HTTPS in the Nginx webserver. NOT SAFE TO SHARE OR COMMIT TO A GIT REPO.
│       └── README.md            <- README file explaining the folder's purpose and which files need to be copied here.
├── systemd
│   └── dioptra.service          <- A systemd service that can be used to manage the full Dioptra application.
├── .gitignore                   <- A list of patterns that configures the files and directories that git should ignore. Used if the deployment configuration is placed under version control with git.
├── docker-compose.init.yml      <- Used in the init-deployment.sh script to initialize the deployment.
├── docker-compose.yml           <- Orchestrates how to start the containers, configure the network, set the environment variables, attach the storage volumes, and publish the NGINX web server ports.
├── init-deployment.sh           <- The deployment initialization script. Used to copy configuration files into named volumes, configure Minio policies, and enable/disable SSL in the NGINX and Postgres services.
└── README.md                    <- README file that explains how to initialize and run Dioptra using the provided scripts and configuration files.
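
If you plan to place the deployment configuration under version control (the generated .gitignore is intended for this use case), the sketch below shows the basic steps. Before committing, verify that the secrets/ folder and the server.key files are excluded by the generated ignore patterns.

# Run these commands inside the generated deployment folder
git init
git status    # confirm that secrets/ and *.key files are not listed as untracked
git add .
git commit -m "Initial Dioptra deployment configuration"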

Additional configuration#

The generated template takes care of most of the configuration work for you. However, depending on your use case, you may need to complete some additional steps before you initialize and boot up the deployment for the first time. The following subsections explain how to:

  • Copy extra CA certificates into the containers

  • Copy the certificate-key pairs that enable SSL/TLS encrypted connections in the NGINX and Postgres services

  • Mount additional folders in the worker containers, for example a folder that contains your datasets

In addition to the above, you may want to further customize the docker-compose.yml file to suit your needs, such as limiting the CPU resources each container can use. See the Compose specification documentation for the full list of available options.
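
For example, the following sketch caps a hypothetical service at two CPUs' worth of processing time using the Compose specification's deploy.resources.limits settings; the service name and the limit value are illustrative, not part of the generated file.

dioptra-deployment-restapi:
  deploy:
    resources:
      limits:
        cpus: "2.0"  # At most two CPUs' worth of CPU time for this container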

Adding extra CA certificates#

The deployment initialization scripts look for extra CA certificates in the ssl/ca-certificates/ folder and copy and bundle them into named volumes so they are available at runtime. Only CA certificate files that meet the following criteria will be bundled:

  • Each CA certificate file must be in the PEM format. The PEM format encodes the certificate using base64 and stores it in a plain text file between two lines, -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----.

  • Each file should include one, and only one, CA certificate. Do not bundle multiple CA certificates together.

  • Each PEM-formatted CA certificate file must have the crt file extension, for example ca-root.crt. If your CA certificate has a different file extension (such as pem), rename it to crt after copying it into this folder.
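
For example, if you have a PEM-formatted CA certificate at the hypothetical path /home/username/certs/ca-root.pem, you could copy it into place and check it as follows. The openssl command simply confirms that the file parses as a PEM certificate; it is optional.

# Copy and rename the certificate so it has the required crt extension
cp /home/username/certs/ca-root.pem ./ssl/ca-certificates/ca-root.crt

# Optional sanity check: print the certificate's subject
openssl x509 -in ./ssl/ca-certificates/ca-root.crt -noout -subject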

For further information about including extra CA certificates, please see the README.md file in the ssl/ca-certificates/ folder.

Enabling SSL/TLS in NGINX and Postgres#

When the deployment initialization scripts are directed to enable SSL/TLS in the NGINX and/or Postgres services, they will scan the ssl/db/ and ssl/nginx/ folders for server certificates and private keys. The server certificate must be named server.crt and the private key must be named server.key when you copy them into the ssl/db/ and ssl/nginx/ folders. This applies even when you are using a different certificate-key pair for each service. As an example, if the certificate and key files start in the folder /home/username/certs with different names, you would copy the files as follows:

# These commands assume you are in the folder you just created using cookiecutter.
cp /home/username/certs/db.crt ./ssl/db/server.crt
cp /home/username/certs/db.key ./ssl/db/server.key
cp /home/username/certs/nginx.crt ./ssl/nginx/server.crt
cp /home/username/certs/nginx.key ./ssl/nginx/server.key
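
Optionally, you can verify that each certificate matches its private key before initializing the deployment. The sketch below assumes RSA keys; the two printed digests should be identical.

# Compare the modulus digests of the certificate and private key
openssl x509 -noout -modulus -in ./ssl/nginx/server.crt | openssl md5
openssl rsa -noout -modulus -in ./ssl/nginx/server.key | openssl md5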

If you are enabling SSL/TLS in the NGINX service, you will also need to comment and uncomment a few lines in the docker-compose.yml file that configure the NGINX service’s published ports and health check test. Open the file in a text editor, find the block for the NGINX service (the block starts with dioptra-deployment-nginx: if you used the default value for deployment_name), and edit its ports and health check test to match the second (HTTPS-enabled) snippet below. The first snippet shows the HTTP-only default for comparison.

# Ports and health check test with SSL/TLS disabled (HTTP only)
dioptra-deployment-nginx:
  healthcheck:
    test:
      [
        "CMD",
        "/usr/local/bin/healthcheck.sh",
        "http://localhost:30080",
        "http://localhost:35000",
        "http://localhost:35050/login",
        "http://localhost:39000",
        "http://localhost:39001",
        # "https://localhost:30443",
        # "https://localhost:35000",
        # "https://localhost:35050/login",
        # "https://localhost:39000",
        # "https://localhost:39001",
      ]
  ports:
    - 127.0.0.1:80:30080/tcp
    # - 127.0.0.1:443:30443/tcp
    - 127.0.0.1:35432:5432/tcp
    - 127.0.0.1:35000:35000/tcp
    - 127.0.0.1:35050:35050/tcp
    - 127.0.0.1:39000:39000/tcp
    - 127.0.0.1:39001:39001/tcp
# Ports and health check test with SSL/TLS enabled (HTTPS)
dioptra-deployment-nginx:
  healthcheck:
    test:
      [
        "CMD",
        "/usr/local/bin/healthcheck.sh",
        "http://localhost:30080",
        # "http://localhost:35000",
        # "http://localhost:35050/login",
        # "http://localhost:39000",
        # "http://localhost:39001",
        "https://localhost:30443",
        "https://localhost:35000",
        "https://localhost:35050/login",
        "https://localhost:39000",
        "https://localhost:39001",
      ]
  ports:
    - 127.0.0.1:80:30080/tcp
    - 127.0.0.1:443:30443/tcp
    - 127.0.0.1:35432:5432/tcp
    - 127.0.0.1:35000:35000/tcp
    - 127.0.0.1:35050:35050/tcp
    - 127.0.0.1:39000:39000/tcp
    - 127.0.0.1:39001:39001/tcp
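
After editing the file, you can check that it is still valid YAML that Docker Compose can parse. The --quiet flag suppresses the resolved output so that only errors, if any, are printed.

docker compose config --quiet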

For further information about adding the certificate-key pairs, please see the README.md files in the ssl/db/ and ssl/nginx/ folders.

Mounting folders in the worker containers#

The docker-compose.yml file generated by the cookiecutter template does not mount any folders from the host machine into the worker containers. Common reasons for mounting additional folders in the worker containers include:

  1. Your datasets are stored in a folder on your host machine or in an NFS share

  2. You want to make other artifacts available to the worker containers, such as pre-trained models

The subsections below show examples of how to mount a folder on the host machine and a folder on an NFS share.

Mounting a folder on the host machine#

For this example, let’s assume the datasets are stored on the host machine under the absolute path /home/datasets and that we want to make them available in the worker containers under the absolute path /datasets. We also want to mount the folder read-only to prevent a job from accidentally modifying or deleting the data. To do this, first verify that the folder /home/datasets and all of its files are world-readable. If they are not, you can fix this as follows:

find /home/datasets -type d -print0 | xargs -0 chmod o=rx
find /home/datasets -type f -print0 | xargs -0 chmod o=r

Next, open docker-compose.yml with a text editor and find the blocks for the worker containers. The worker container blocks will have tfcpu, tfgpu, pytorch-cpu, or pytorch-gpu in their names. Append the line - /home/datasets:/datasets:ro to each worker's volumes: subsection.

dioptra-deployment-tfcpu-01:
  volumes:
    - worker-ca-certificates:/usr/local/share/ca-certificates:rw
    - worker-etc-ssl:/etc/ssl:rw
    - /home/datasets:/datasets:ro
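
If the deployment is already running when you make this change, the new mount takes effect only after the worker containers are recreated, which Docker Compose does automatically for services whose configuration changed.

docker compose up -d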

Mounting a folder on an NFS share#

For this example, let’s assume the datasets are stored on an NFS version 4 server with IP address 10.0.0.10 in the exported directory /dioptra/datasets, and that we want to make them available in the worker containers under the absolute path /datasets. First, we need to configure a named NFS volume in the docker-compose.yml file so that Docker knows where the NFS server is and which exported directory to use. Open docker-compose.yml with a text editor, find the top-level volumes: section near the bottom of the file (it should not be nested under the services: section), and add the following:

volumes:
  dioptra-datasets:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,auto,rw,bg,nfsvers=4,intr,actimeo=1800"
      device: ":/dioptra/datasets"

Next, verify that the exported directory /dioptra/datasets and all of its files are world-readable. If they are not, access the NFS share and update the file access permissions. For example, if the NFS share is also mounted on the host machine at /dioptra/datasets, then you would run:

find /dioptra/datasets -type d -print0 | xargs -0 chmod o=rx
find /dioptra/datasets -type f -print0 | xargs -0 chmod o=r

Next, find the blocks for the worker containers in docker-compose.yml. The worker container blocks will have tfcpu, tfgpu, pytorch-cpu, or pytorch-gpu in their names. Append the line - dioptra-datasets:/datasets:ro to each worker's volumes: subsection. The :ro at the end mounts the NFS share as read-only within the worker container to prevent a job from accidentally modifying or deleting the data.

dioptra-deployment-tfcpu-01:
  volumes:
    - worker-ca-certificates:/usr/local/share/ca-certificates:rw
    - worker-etc-ssl:/etc/ssl:rw
    - dioptra-datasets:/datasets:ro
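
After restarting the deployment, you can confirm that the share is visible inside a worker. The sketch below assumes the default deployment_name, which yields a worker service named dioptra-deployment-tfcpu-01.

# List the mounted datasets from inside a running worker container
docker compose exec dioptra-deployment-tfcpu-01 ls /datasets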

Assigning multiple GPUs per worker#

To assign multiple GPUs to a worker, modify the NVIDIA_VISIBLE_DEVICES environment variable that is set in the tfgpu and pytorch-gpu container blocks:

environment:
  NVIDIA_VISIBLE_DEVICES: 0,1

To allow a worker to use all available GPUs, set NVIDIA_VISIBLE_DEVICES to all:

environment:
  NVIDIA_VISIBLE_DEVICES: all
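
You can confirm which GPUs a running worker can actually see with nvidia-smi. The sketch below assumes the default deployment_name and a GPU worker service named dioptra-deployment-tfgpu-01.

docker compose exec dioptra-deployment-tfgpu-01 nvidia-smi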

Initializing the deployment#

The init-deployment.sh script is the main tool for initializing the deployment and automates the following steps:

  • Copies and bundles the extra CA certificates for the containers

  • Copies the configuration files in the config/ folder and the server certificates and private keys into named volumes

  • Sets the appropriate file and folder access permissions in the named volumes

  • Creates the Minio S3 accounts and configures their access permissions

  • Syncs the built-in task plugins from the Dioptra GitHub repository with the appropriate Minio S3 bucket

  • Enables SSL/TLS in the Postgres service (if applicable)

This script should be executed the first time you start the deployment and again whenever you change at least one of the files in the config/ or ssl/ directories. Running ./init-deployment.sh --help prints the script’s help message:

Utility that prepares the deployment initialization scripts.

Usage: init-deployment.sh [--enable-nginx-ssl] [--enable-postgres-ssl]
                          [--branch <arg>]
                          [--worker-ssl-service [tfcpu|pytorchcpu]] [-h|--help]
        --enable-nginx-ssl: Enable the SSL-enabled configuration settings for nginx image
        --enable-postgres-ssl: Enable the SSL-enabled configuration settings for postgres
                               image
        --branch: The Dioptra GitHub branch to use when syncing the built-in task plugins
                  and the frontend files (default: 'main')
        --worker-ssl-service: Image to use when bootstrapping the SSL named volumes for
                              the worker containers, must be 'tfcpu' or 'pytorchcpu'
                              (default: 'tfcpu')
        -h, --help: Prints help

As a rule of thumb, use the --enable-nginx-ssl option if you copied a server certificate and private key into the ssl/nginx/ folder, and use the --enable-postgres-ssl option if you copied a server certificate and private key into the ssl/db/ folder. In addition, if you wish to sync the plugins from a different Dioptra GitHub branch, such as the dev branch, use the --branch <arg> option. Otherwise, you do not need to specify any other options when running the script.

As an example, if you copied a server certificate and private key into both the ssl/nginx/ and ssl/db/ folders, then you would run the following to initialize the deployment:

./init-deployment.sh --enable-nginx-ssl --enable-postgres-ssl

You should see a series of log messages as the initialization scripts run. If you do not see any error messages while the script runs, then initialization is complete and you are ready to start the deployment.

Please note that you need to specify the --enable-nginx-ssl and --enable-postgres-ssl options each time you run the init-deployment.sh script. If you omit them on a subsequent run, then SSL/TLS will be disabled for the services. To enable SSL/TLS again, simply re-run the script with the options included.

Starting the deployment#

There are two options for starting the Dioptra deployment: using Docker Compose directly or using systemd.

Using Docker Compose#

Run the following in the generated folder to start the deployment using Docker Compose.

# Using Docker Compose v1
docker-compose up -d

# Using Docker Compose v2
docker compose up -d

Using systemd#

Note

This will only work if you are deploying Dioptra on a Linux distribution that uses systemd, such as Ubuntu, Fedora, or Red Hat Enterprise Linux.

Run the following to install the Dioptra systemd service generated by the template.

sudo cp ./systemd/dioptra.service /etc/systemd/system
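
Then, reload the systemd manager configuration so that it picks up the newly installed unit file.

sudo systemctl daemon-reload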

Next, use systemd to start the deployment.

sudo systemctl start dioptra

Finally, enable the Dioptra systemd service so that the deployment will survive reboots.

sudo systemctl enable dioptra

Managing the deployment#

Run the following in the generated folder to check the status of the deployment.

# Using Docker Compose v1
docker-compose ps

# Using Docker Compose v2
docker compose ps
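
If you installed the Dioptra systemd service, you can also check the deployment’s status through systemd.

sudo systemctl status dioptra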

Run the following in the generated folder to follow the application logs.

# Using Docker Compose v1
docker-compose logs -f

# Using Docker Compose v2
docker compose logs -f

Use Ctrl + C to stop following the logs.

Run the following in the generated folder to restart the deployment.

# Using Docker Compose v1
docker-compose restart

# Using Docker Compose v2
docker compose restart

# Using systemd (if you installed the Dioptra systemd service)
sudo systemctl restart dioptra

Run the following in the generated folder to stop the deployment.

# Using Docker Compose v1
docker-compose down

# Using Docker Compose v2
docker compose down

# Using systemd (if you installed the Dioptra systemd service)
sudo systemctl stop dioptra

Run the following to prevent the deployment from restarting on reboot (systemd only).

sudo systemctl disable dioptra