metrics#
Note
See the Glossary for the meaning of the acronyms used in this guide.
distance.py#
A task plugin module for getting functions from a distance metric registry.
- get_distance_metric_list(request: List[Dict[str, str]]) → List[Tuple[str, Callable[[...], numpy.ndarray]]] [source]#
Gets multiple distance metric functions from the registry.
The following metrics are available in the registry:
- l_inf_norm
- l_1_norm
- l_2_norm
- paired_cosine_similarities
- paired_euclidean_distances
- paired_manhattan_distances
- paired_wasserstein_distances
- Parameters
request – A list of dictionaries with the keys name and func. The func key is used to look up the metric function in the registry and must match one of the metric names listed above. The name key is a human-readable label for the metric function.
- Returns
A list of tuples with two elements. The first element of each tuple is the label from the name key of request, and the second element is the callable metric function.
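For illustration, a minimal usage sketch. The import path distance is a placeholder for wherever this task plugin module is deployed; adjust it to your environment:

```python
import numpy as np

from distance import get_distance_metric_list  # placeholder import path

y_true = np.random.rand(8, 28, 28)                  # batch of original matrices
y_pred = y_true + 0.01 * np.random.rand(8, 28, 28)  # perturbed copies

request = [
    {"name": "Linf", "func": "l_inf_norm"},
    {"name": "L2", "func": "l_2_norm"},
]

for label, metric_fn in get_distance_metric_list(request):
    print(label, metric_fn(y_true, y_pred))  # one value per sample in the batch
```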
- get_distance_metric(func: str) → Callable[[...], numpy.ndarray] [source]#
Gets a distance metric function from the registry.
The following metrics are available in the registry:
- l_inf_norm
- l_1_norm
- l_2_norm
- paired_cosine_similarities
- paired_euclidean_distances
- paired_manhattan_distances
- paired_wasserstein_distances
- Parameters
func – A string that identifies the distance metric to return from the registry. The string must match one of the names of the metrics in the registry.
- Returns
A callable distance metric function.
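Likewise, a short sketch of fetching and applying a single metric (same placeholder import path):

```python
import numpy as np

from distance import get_distance_metric  # placeholder import path

y_true = np.zeros((4, 3))
y_pred = np.ones((4, 3))

l2 = get_distance_metric("l_2_norm")
print(l2(y_true, y_pred))  # four values, each sqrt(3) ~= 1.732
```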
- l_inf_norm(y_true, y_pred) → numpy.ndarray [source]#
Calculates the L∞ norm of the difference between each pair of matrices in the two batches.
- Parameters
y_true – A batch of matrices containing the original or target values.
y_pred – A batch of matrices containing the perturbed or predicted values.
- Returns
A numpy.ndarray containing a batch of L∞ norms.
- l_1_norm(y_true, y_pred) → numpy.ndarray [source]#
Calculates the L1 norm of the difference between each pair of matrices in the two batches.
- Parameters
y_true – A batch of matrices containing the original or target values.
y_pred – A batch of matrices containing the perturbed or predicted values.
- Returns
A numpy.ndarray containing a batch of L1 norms.
- l_2_norm(y_true, y_pred) → numpy.ndarray [source]#
Calculates the L2 norm of the difference between each pair of matrices in the two batches.
- Parameters
y_true – A batch of matrices containing the original or target values.
y_pred – A batch of matrices containing the perturbed or predicted values.
- Returns
A numpy.ndarray containing a batch of L2 norms.
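The three norm metrics above plausibly flatten each sample and take a vector norm of the elementwise difference. The following is a minimal sketch under that assumption, not the plugin's verbatim implementation:

```python
import numpy as np

def norms_sketch(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    # Collapse each sample to a 1-d vector, (n, ...) -> (n, d), then take
    # the corresponding vector norm of the elementwise difference.
    diff = y_true.reshape(y_true.shape[0], -1) - y_pred.reshape(y_pred.shape[0], -1)
    return {
        "l_inf_norm": np.linalg.norm(diff, ord=np.inf, axis=1),
        "l_1_norm": np.linalg.norm(diff, ord=1, axis=1),
        "l_2_norm": np.linalg.norm(diff, ord=2, axis=1),
    }

print(norms_sketch(np.zeros((2, 2, 2)), np.ones((2, 2, 2))))
# {'l_inf_norm': array([1., 1.]), 'l_1_norm': array([4., 4.]), 'l_2_norm': array([2., 2.])}
```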
- paired_cosine_similarities(y_true, y_pred) → numpy.ndarray [source]#
Calculates the cosine similarity between each pair of matrices in the two batches.
- Parameters
y_true – A batch of matrices containing the original or target values.
y_pred – A batch of matrices containing the perturbed or predicted values.
- Returns
A numpy.ndarray containing a batch of cosine similarities.
- paired_euclidean_distances(y_true, y_pred) → numpy.ndarray [source]#
Calculates the Euclidean distance between each pair of matrices in the two batches.
The Euclidean distance is equivalent to the L2 norm.
- Parameters
y_true – A batch of matrices containing the original or target values.
y_pred – A batch of matrices containing the perturbed or predicted values.
- Returns
A numpy.ndarray containing a batch of Euclidean distances.
- paired_manhattan_distances(y_true, y_pred) → numpy.ndarray [source]#
Calculates the Manhattan distance between each pair of matrices in the two batches.
The Manhattan distance is equivalent to the L1 norm.
- Parameters
y_true – A batch of matrices containing the original or target values.
y_pred – A batch of matrices containing the perturbed or predicted values.
- Returns
A numpy.ndarray containing a batch of Manhattan distances.
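The three paired metrics above line up with scikit-learn's paired distance helpers. A sketch under that assumption, flattening each matrix to a row vector first; not the plugin's verbatim implementation:

```python
import numpy as np
from sklearn.metrics.pairwise import (
    paired_cosine_distances,
    paired_euclidean_distances,
    paired_manhattan_distances,
)

y_true = np.random.rand(8, 28, 28).reshape(8, -1)  # (n_samples, n_features)
y_pred = np.random.rand(8, 28, 28).reshape(8, -1)

cosine_sim = 1.0 - paired_cosine_distances(y_true, y_pred)  # similarity = 1 - distance
euclidean = paired_euclidean_distances(y_true, y_pred)      # matches the l_2_norm metric
manhattan = paired_manhattan_distances(y_true, y_pred)      # matches the l_1_norm metric
```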
- paired_wasserstein_distances(y_true, y_pred, **kwargs) → numpy.ndarray [source]#
Calculates the Wasserstein distance between each pair of matrices in the two batches.
- Parameters
y_true – A batch of matrices containing the original or target values.
y_pred – A batch of matrices containing the perturbed or predicted values.
- Returns
A numpy.ndarray containing a batch of Wasserstein distances.
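scipy.stats.wasserstein_distance compares two 1-d empirical distributions, so one plausible reading of this metric is a row-wise application over flattened samples. A sketch under that assumption:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def paired_wasserstein_sketch(y_true: np.ndarray, y_pred: np.ndarray) -> np.ndarray:
    # Flatten each sample and treat it as an empirical 1-d distribution.
    a = y_true.reshape(y_true.shape[0], -1)
    b = y_pred.reshape(y_pred.shape[0], -1)
    return np.array([wasserstein_distance(u, v) for u, v in zip(a, b)])

print(paired_wasserstein_sketch(np.zeros((2, 4)), np.ones((2, 4))))  # [1. 1.]
```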
performance.py#
A task plugin module for getting functions from a performance metric registry.
- get_performance_metric_list(request: List[Dict[str, str]]) → List[Tuple[str, Callable[[...], float]]] [source]#
Gets multiple performance metric functions from the registry.
The following metrics are available in the registry:
- accuracy
- roc_auc
- categorical_accuracy
- mcc
- f1
- precision
- recall
- Parameters
request – A list of dictionaries with the keys name and func. The func key is used to look up the metric function in the registry and must match one of the metric names listed above. The name key is a human-readable label for the metric function.
- Returns
A list of tuples with two elements. The first element of each tuple is the label from the name key of request, and the second element is the callable metric function.
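As with the distance registry, a short sketch of building the request list (the import path performance is a placeholder):

```python
from performance import get_performance_metric_list  # placeholder import path

request = [
    {"name": "Accuracy", "func": "accuracy"},
    {"name": "F1", "func": "f1"},
]

metric_list = get_performance_metric_list(request)
# -> [("Accuracy", <callable>), ("F1", <callable>)]
```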
- get_performance_metric(func: str) → Callable[[...], float] [source]#
Gets a performance metric function from the registry.
The following metrics are available in the registry:
- accuracy
- roc_auc
- categorical_accuracy
- mcc
- f1
- precision
- recall
- Parameters
func – A string that identifies the performance metric to return from the registry. The string must match one of the names of the metrics in the registry.
- Returns
A callable performance metric function.
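A minimal sketch of fetching and applying a single performance metric (same placeholder import path):

```python
import numpy as np

from performance import get_performance_metric  # placeholder import path

y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

accuracy_fn = get_performance_metric("accuracy")
print(accuracy_fn(y_true, y_pred))  # 0.8 (4 of 5 labels match)
```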
- accuracy(y_true, y_pred, **kwargs) → float [source]#
Calculates the accuracy score.
- Parameters
y_true – A 1d array-like, or label indicator array containing the ground truth labels.
y_pred – A 1d array-like, or label indicator array containing the predicted labels, as returned by a classifier.
- Returns
The fraction of correctly classified samples.
- roc_auc(y_true, y_pred, **kwargs) → float [source]#
Calculates the Area Under the Receiver Operating Characteristic Curve (ROC AUC).
- Parameters
y_true – An array-like of shape (n_samples,) or (n_samples, n_classes) containing the ground truth labels.
y_pred – An array-like of shape (n_samples,) or (n_samples, n_classes) containing the predicted labels, as returned by a classifier.
- Returns
The ROC AUC.
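The description mirrors scikit-learn's roc_auc_score, which consumes scores or probabilities rather than hard labels; a sketch assuming that correspondence:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])  # classifier scores, not hard labels

print(roc_auc_score(y_true, y_score))  # 0.75
```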
- categorical_accuracy(y_true, y_pred) → float [source]#
Calculates the categorical accuracy.
This function is a port of the Keras metric CategoricalAccuracy.
- Parameters
y_true – A 1d array-like, or label indicator array containing the ground truth labels.
y_pred – A 1d array-like, or label indicator array containing the predicted labels, as returned by a classifier.
- Returns
The fraction of correctly classified samples.
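In the spirit of Keras' CategoricalAccuracy, the metric presumably compares argmax class indices; a minimal sketch, not the exact port:

```python
import numpy as np

def categorical_accuracy_sketch(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    # Reduce one-hot labels / probability arrays to class indices before comparing.
    if y_true.ndim > 1:
        y_true = np.argmax(y_true, axis=-1)
    if y_pred.ndim > 1:
        y_pred = np.argmax(y_pred, axis=-1)
    return float(np.mean(y_true == y_pred))

y_true = np.array([[1, 0], [0, 1], [1, 0]])              # one-hot labels
y_pred = np.array([[0.7, 0.3], [0.2, 0.8], [0.4, 0.6]])  # predicted probabilities
print(categorical_accuracy_sketch(y_true, y_pred))       # 0.666...
```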
- mcc(y_true, y_pred, **kwargs) → float [source]#
Calculates the Matthews correlation coefficient.
- Parameters
y_true – A 1d array containing the ground truth labels.
y_pred – A 1d array containing the predicted labels, as returned by a classifier.
- Returns
The Matthews correlation coefficient (+1 represents a perfect prediction, 0 an average random prediction, and -1 an inverse prediction).
- f1(y_true, y_pred, **kwargs) → float [source]#
Calculates the F1 score.
- Parameters
y_true – A 1d array-like, or label indicator array containing the ground truth labels.
y_pred – A 1d array-like, or label indicator array containing the predicted labels, as returned by a classifier.
- Returns
The F1 score of the positive class in binary classification or the weighted average of the F1 scores of each class for the multiclass task.
- precision(y_true, y_pred, **kwargs) → float [source]#
Calculates the precision score.
- Parameters
y_true – A 1d array-like, or label indicator array containing the ground truth labels.
y_pred – A 1d array-like, or label indicator array containing the predicted labels, as returned by a classifier.
- Returns
The precision of the positive class in binary classification or the weighted average of the precision of each class for the multiclass task.
- recall(y_true, y_pred, **kwargs) → float [source]#
Calculates the recall score.
- Parameters
y_true – A 1d array-like, or label indicator array containing the ground truth labels.
y_pred – A 1d array-like, or label indicator array containing the predicted labels, as returned by a classifier.
- Returns
The recall of the positive class in binary classification or the weighted average of the recall of each class for the multiclass task.
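The parameter and return descriptions for accuracy, mcc, f1, precision, and recall mirror scikit-learn's metric functions, so the registry entries plausibly wrap them. The sketch below assumes that correspondence, including average="weighted" for the multiclass case; it is not the actual implementation:

```python
import numpy as np
from sklearn import metrics

y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 1, 2, 1, 1, 0])

print(metrics.accuracy_score(y_true, y_pred))                       # fraction correct
print(metrics.matthews_corrcoef(y_true, y_pred))                    # mcc
print(metrics.f1_score(y_true, y_pred, average="weighted"))         # f1
print(metrics.precision_score(y_true, y_pred, average="weighted"))  # precision
print(metrics.recall_score(y_true, y_pred, average="weighted"))     # recall
```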
exceptions.py#
A task plugin module of exceptions for the metrics plugins collection.