.. This Software (Dioptra) is being made available as a public service by the
.. National Institute of Standards and Technology (NIST), an Agency of the United
.. States Department of Commerce. This software was developed in part by employees of
.. NIST and in part by NIST contractors. Copyright in portions of this software that
.. were developed by NIST contractors has been licensed or assigned to NIST. Pursuant
.. to Title 17 United States Code Section 105, works of NIST employees are not
.. subject to copyright protection in the United States. However, NIST may hold
.. international copyright in software created by its employees and domestic
.. copyright (or licensing rights) in portions of software that were assigned or
.. licensed to NIST. To the extent that NIST holds copyright in this software, it is
.. being made available under the Creative Commons Attribution 4.0 International
.. license (CC BY 4.0). The disclaimers of the CC BY 4.0 license apply to all parts
.. of the software developed or licensed by NIST.
..
.. ACCESS THE FULL CC BY 4.0 LICENSE HERE:
.. https://creativecommons.org/licenses/by/4.0/legalcode

.. _how-to-data-mounts:

Mount Data Volumes
==================

This guide explains how to mount data volumes (host directories or NFS shares)
into Dioptra worker containers for accessing datasets and other artifacts.

Prerequisites
-------------

* :ref:`how-to-prepare-deployment` - A configured Dioptra deployment
* :ref:`how-to-using-docker-compose-overrides` - Override file created
* (For NFS) Network access to the NFS server

Overview
--------

The ``docker-compose.yml`` file generated by the cookiecutter template supports
mounting a single datasets directory from the host machine into worker containers
via the ``datasets_directory`` variable. For more advanced configurations, use
the ``docker-compose.override.yml`` file.

Common reasons for mounting additional folders:

1. Your datasets are stored in a folder on your host machine or in an NFS share
2.
   You want to make other artifacts available to the worker containers, such as
   pre-trained models

Option A: Mount a Host Directory
--------------------------------

.. rst-class:: header-on-a-card header-steps

Step A1: Verify Directory Permissions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Ensure the folder and all of its files are world-readable:

.. code:: sh

   find <path> -type d -print0 | xargs -0 chmod o=rx
   find <path> -type f -print0 | xargs -0 chmod o=r

.. note::

   Replace ``<path>`` with the absolute path to your data directory on the host
   machine (e.g., ``/home/data``, ``/mnt/datasets``).

.. rst-class:: header-on-a-card header-steps

Step A2: Add Volume Mount to Workers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Open ``docker-compose.override.yml`` in a text editor and add a block for each
worker container that needs access to the data. Worker container names include
**tfcpu**, **tfgpu**, **pytorchcpu**, or **pytorchgpu**. This example mounts the
host data directory to ``/dioptra/data`` in the container as read-only:

.. code:: yaml

   services:
     <deployment-name>-tfcpu-01:
       volumes:
         - "<path>:/dioptra/data:ro"

.. note::

   Replace ``<deployment-name>`` with your deployment's slugified name (default:
   ``dioptra-deployment``) and ``<path>`` with the absolute path to your data
   directory.

Repeat for each worker container that needs access to the data.

Option B: Mount an NFS Share
----------------------------

.. rst-class:: header-on-a-card header-steps

Step B1: Define the NFS Volume
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Open ``docker-compose.override.yml`` and add a **top-level** ``volumes:``
section (not nested under ``services:``) with a named NFS volume definition:

.. code:: yaml

   volumes:
     dioptra-data:
       driver: local
       driver_opts:
         type: nfs
         o: "addr=<nfs-server-ip>,auto,rw,bg,nfsvers=4,intr,actimeo=1800"
         device: ":<exported-path>"

.. note::

   Replace ``<nfs-server-ip>`` with your NFS server's IP address and
   ``<exported-path>`` with the path to the exported directory on the NFS
   server.
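Before continuing, it can help to confirm that the export is actually reachable from the Docker host, since Docker performs the NFS mount only when a container using the volume starts, so mistakes in this definition typically surface at ``docker compose up`` time. A quick sanity check, assuming the NFS client utilities (``showmount`` and ``mount``) are installed on the host:

.. code:: sh

   # List the directories exported by the server; the exported path should appear
   showmount -e <nfs-server-ip>

   # Optionally, try a temporary manual mount to rule out network or export issues
   sudo mount -t nfs -o nfsvers=4 <nfs-server-ip>:<exported-path> /mnt/test
   ls /mnt/test
   sudo umount /mnt/test

If ``showmount`` fails or the manual mount hangs, resolve the network or export configuration before editing the Compose files.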
.. rst-class:: header-on-a-card header-steps

Step B2: Ensure Files Are World-Readable
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Worker containers run as a non-root user and require read access to the data
files. The files and directories on the NFS share must be world-readable
(``o=r`` for files, ``o=rx`` for directories). How you set these permissions
depends on your access level:

**If you have shell access to the NFS server:**

Run the chmod commands directly on the server where the files are stored:

.. code:: sh

   find <exported-path> -type d -print0 | xargs -0 chmod o=rx
   find <exported-path> -type f -print0 | xargs -0 chmod o=r

.. note::

   Replace ``<exported-path>`` with the path to the exported directory on the
   NFS server (e.g., ``/srv/nfs/dioptra-data``).

**If the NFS share is mounted on a system where you have write access:**

Run the chmod commands on the mounted path:

.. code:: sh

   find <mount-point> -type d -print0 | xargs -0 chmod o=rx
   find <mount-point> -type f -print0 | xargs -0 chmod o=r

.. note::

   Replace ``<mount-point>`` with the local mount point of the NFS share (e.g.,
   ``/mnt/nfs-data``).

**If you do not have write access to the files:**

Coordinate with your system administrator to set the appropriate permissions on
the NFS share.

.. rst-class:: header-on-a-card header-steps

Step B3: Add Volume Mount to Workers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Add service blocks for each worker container that needs access to the data,
using the named volume:

.. code:: yaml

   services:
     <deployment-name>-tfcpu-01:
       volumes:
         - "dioptra-data:/dioptra/data:ro"

.. note::

   Replace ``<deployment-name>`` with your deployment's slugified name (default:
   ``dioptra-deployment``).

The ``:ro`` suffix mounts the NFS share as read-only to prevent jobs from
accidentally modifying or deleting data.

Repeat for each worker container that needs access to the data.
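Once the override file is saved, the setup can be verified end to end by recreating the workers and inspecting the mount from inside one of them. A sketch of the check, using the example ``<deployment-name>-tfcpu-01`` worker service from this guide:

.. code:: sh

   # Recreate the containers so Docker Compose picks up the override file
   docker compose up -d

   # The data files should be listed from inside the worker container
   docker compose exec <deployment-name>-tfcpu-01 ls -l /dioptra/data

   # A write attempt should fail, confirming the read-only (:ro) mount
   docker compose exec <deployment-name>-tfcpu-01 touch /dioptra/data/write-test

If the ``ls`` shows an empty directory or permission errors, revisit the world-readable permissions in Step B2.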
.. rst-class:: header-on-a-card header-seealso

See Also
--------

* :ref:`how-to-using-docker-compose-overrides` - Docker Compose override file basics
* :ref:`how-to-download-data` - Download example datasets
* :ref:`how-to-prepare-deployment` - Full deployment customization
* :ref:`how-to-integrating-custom-containers` - Add custom containers to your deployment