# Loss Functions
```python
simplegrad.functions.losses.ce_loss(z: Tensor, y: Tensor, dim: int = -1, reduction: str = 'mean') -> Tensor
```
Compute cross-entropy loss with built-in softmax.
Numerically stable: uses the log-sum-exp trick internally.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `z` | `Tensor` | Logits (raw unnormalized scores); the class dimension is `dim`. | *required* |
| `y` | `Tensor` | Target probability distribution, same shape as `z`. | *required* |
| `dim` | `int` | Class dimension to apply softmax over. Defaults to -1 (last dim). | `-1` |
| `reduction` | `str` | How to reduce the per-sample losses. One of `'mean'`, `'sum'`, or `'none'`. | `'mean'` |
Returns:

| Type | Description |
|---|---|
| `Tensor` | Scalar loss tensor (or per-sample if `reduction='none'`). |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If `reduction` is not a recognized option. |
Source code in simplegrad/functions/losses.py
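The numerical stability claim above can be illustrated with a plain NumPy sketch of the log-sum-exp trick (NumPy arrays stand in for `Tensor` here; `stable_ce` is an illustrative helper, not the library's implementation):

```python
import numpy as np

def stable_ce(z, y, dim=-1):
    # Log-softmax via the log-sum-exp trick: subtract the max logit
    # so np.exp never overflows, then normalize in log space.
    m = np.max(z, axis=dim, keepdims=True)
    log_softmax = (z - m) - np.log(np.sum(np.exp(z - m), axis=dim, keepdims=True))
    # Cross-entropy against a full target distribution y (per-sample, no reduction).
    return -np.sum(y * log_softmax, axis=dim)

# Logits this large would overflow a naive softmax (np.exp(1001.0) is inf),
# but the max-shifted version stays finite.
z = np.array([[1000.0, 1001.0, 999.0]])
y = np.array([[0.0, 1.0, 0.0]])
print(stable_ce(z, y))
```

Subtracting the per-row maximum changes nothing mathematically (it cancels between numerator and denominator of the softmax) but keeps every exponent at or below zero.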
```python
simplegrad.functions.losses.mse_loss(p: Tensor, y: Tensor, reduction: str = 'mean') -> Tensor
```
Compute mean squared error loss: mean((p - y)^2).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `p` | `Tensor` | Predictions tensor. | *required* |
| `y` | `Tensor` | Targets tensor, same shape as `p`. | *required* |
| `reduction` | `str` | One of `'mean'`, `'sum'`, or `'none'`. | `'mean'` |
Returns:

| Type | Description |
|---|---|
| `Tensor` | Scalar loss tensor (or element-wise if `reduction='none'`). |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If `reduction` is not a recognized option. |
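The reduction behaviour described in the tables above can be sketched in NumPy (illustrative only; `mse` is a hypothetical stand-in for `mse_loss`, and the `'mean'`/`'sum'`/`'none'` semantics are assumed from the documentation):

```python
import numpy as np

def mse(p, y, reduction="mean"):
    # Element-wise squared error: (p - y)^2.
    sq = (p - y) ** 2
    if reduction == "mean":
        return sq.mean()   # scalar: average over all elements
    if reduction == "sum":
        return sq.sum()    # scalar: sum over all elements
    if reduction == "none":
        return sq          # element-wise, same shape as p
    raise ValueError(f"unknown reduction: {reduction!r}")

p = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 5.0])
print(mse(p, y))  # mean of [0, 0, 4]
```

Raising `ValueError` on an unrecognized `reduction` string mirrors the Raises table above.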