Design Principles

Dioptra was designed with the following use cases and key properties in mind.

Use Cases

We envision the following primary use cases for Dioptra:

  • Model Testing:

    • 1st party - Assess AI models throughout the development lifecycle

    • 2nd party - Assess AI models during acquisition or in an evaluation lab environment

    • 3rd party - Assess AI models during auditing or compliance activities

  • Research: Aid trustworthy AI researchers in tracking experiments

  • Evaluations and Challenges: Provide a common platform and resources for participants

  • Red-Teaming: Expose models and resources to a red team in a controlled environment

Key Properties

Dioptra strives for the following key properties:

  • Reproducible: Dioptra automatically creates snapshots of resources so experiments can be reproduced and validated

  • Traceable: The full history of experiments and their inputs is tracked

  • Extensible: Support for expanding functionality and importing existing Python packages via a plugin system

  • Interoperable: A type system promotes interoperability between plugins

  • Modular: New experiments can be composed from modular components in a simple YAML file

  • Secure: Dioptra provides user authentication, with access controls coming soon

  • Interactive: Users can interact with Dioptra via an intuitive web interface

  • Shareable and Reusable: Dioptra can be deployed in a multi-tenant environment so users can share and reuse components
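
To make the Modular property above concrete, the following is a hypothetical sketch of what a declarative experiment file could look like. The keys, plugin names, and reference syntax shown here are illustrative assumptions, not Dioptra's actual schema; consult the project documentation for the real format.

```yaml
# Hypothetical experiment definition -- illustrative only, not Dioptra's actual schema.
# Each step names a reusable plugin task; a later step consumes an earlier step's outputs.
graph:
  load_data:
    plugin: data_loader.load_mnist      # hypothetical plugin task
  train_model:
    plugin: trainer.fit_classifier      # hypothetical plugin task
    inputs:
      dataset: $load_data.dataset       # hypothetical output-reference syntax
  attack:
    plugin: attacks.fgsm                # hypothetical plugin task
    inputs:
      model: $train_model.model
      dataset: $load_data.dataset
```

Under this kind of scheme, because each step is a named plugin with typed inputs and outputs, swapping the attack or the dataset means editing a few lines of the file rather than rewriting the experiment.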