TrainingConfig

class nfflr.train.TrainingConfig(
    experiment_dir: Path = PosixPath('.'),
    output_dir: Path | None = None,
    progress: bool = True,
    random_seed: int = 42,
    dataloader_workers: int = 0,
    pin_memory: bool = False,
    diskcache: Path | None = None,
    checkpoint: bool = True,
    optimizer: Literal['sgd', 'adamw'] = 'adamw',
    criterion: Module | Callable = MSELoss(),
    scheduler: Literal['onecycle'] | None = 'onecycle',
    warmup_steps: float | int = 0.3,
    per_device_batch_size: int = 256,
    batch_size: int | None = None,
    gradient_accumulation_steps: int = 1,
    learning_rate: float = 0.01,
    weight_decay: float = 1e-05,
    epochs: int = 30,
    swag: bool = False,
    initialize_bias: bool = False,
    initialize_estimated_reference_energies: bool = False,
    resume_checkpoint: Path | None = None,
    train_eval_fraction: float = 0.1,
)

NFFLr configuration for the optimization process.

Parameters:

experiment_dir : Path
    directory to load model configuration and artifacts

output_dir : Path, optional
    directory to save model artifacts (checkpoints, metrics)

progress : bool
    enable console progress bar and metric logging

diskcache : Path, optional
    directory to cache transformed Atoms data during training
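
A minimal construction sketch using the directory and caching options documented above; the paths are hypothetical and should be adapted to the local experiment layout:

from pathlib import Path

from nfflr.train import TrainingConfig

# Hypothetical paths; adjust to your experiment layout.
config = TrainingConfig(
    experiment_dir=Path("experiments/my-model"),     # model configuration and artifacts loaded from here
    output_dir=Path("experiments/my-model/run-01"),  # checkpoints and metrics written here
    progress=True,                                   # console progress bar and metric logging
    diskcache=Path("/tmp/nfflr-diskcache"),          # cache transformed Atoms data during training
)

Unspecified fields keep the defaults shown in the signature above.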

Attributes:
batch_size
diskcache
output_dir
resume_checkpoint

Methods

criterion
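
As a further sketch, the optimization-related fields can also be overridden at construction. Per the signature, criterion accepts any torch.nn.Module or callable; torch.nn.L1Loss here is an arbitrary illustration, not an NFFLr recommendation, and the numeric values are likewise illustrative:

import torch

from nfflr.train import TrainingConfig

# Illustrative overrides of the optimization settings.
config = TrainingConfig(
    optimizer="sgd",              # one of Literal['sgd', 'adamw']
    criterion=torch.nn.L1Loss(),  # any torch.nn.Module or callable, per the signature
    learning_rate=1e-3,
    weight_decay=1e-5,
    epochs=100,
)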