The inconsistencies you are experiencing likely stem from passing raw softmax outputs (probabilities) directly to the Accuracy metric. As per the TorchMetrics Accuracy docs, the metric expects integer class labels or binary labels for comparison, not probability tensors.
For binary classification, you should convert your model outputs from softmax or sigmoid probabilities to binary predictions by applying a threshold such as 0.5. Here’s a simple example of how to do this:

import torch
from torchmetrics.classification import BinaryAccuracy

# Initialize the metric
accuracy = BinaryAccuracy()

# Model outputs as probabilities (2D, softmax over dim=1)
probs = torch.tensor([[0.3, 0.7], [0.6, 0.4]])

# Convert probabilities to hard class predictions (0 or 1)
preds = probs.argmax(dim=1)

# Ground-truth labels
target = torch.tensor([1, 0])

print(accuracy(preds, target))  # tensor(1.)

Answer selected by Borda