Function & Context
simplegrad.core.autograd.Function
Base class for differentiable operations.
Subclass this and implement forward and backward as static methods.
Call cls.apply(*inputs) to run the op — it handles creating the output
tensor, wiring the computation graph, and setting up gradient accumulation.
forward computes and returns the numpy result (and saves anything needed
for backward into ctx). backward receives the upstream gradient and
returns one gradient array per Tensor input — pure computation, no
accumulation. The apply method handles accumulating those gradients into
.grad via +=, including broadcast dimension reduction.
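As a minimal sketch of the pattern (a hypothetical Mul op; it assumes, as in the Context example further down, that forward receives the Tensor inputs and reads their numpy data through .values):

```python
import numpy as np

class Mul(Function):
    @staticmethod
    def forward(ctx, a, b):
        # Save the operand values that backward will need.
        ctx.a, ctx.b = a.values, b.values
        return a.values * b.values

    @staticmethod
    def backward(ctx, grad_output):
        # d(a*b)/da = b and d(a*b)/db = a: one array per Tensor input,
        # in input order. apply() takes care of += accumulation into .grad.
        return grad_output * ctx.b, grad_output * ctx.a

out = Mul.apply(x, y)  # x, y: Tensors; builds and wires the graph node
```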
Class attributes
oper: Short label shown on graph nodes. Defaults to the class name.
differentiable: Set to False for ops like argmax that have no gradient.
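A non-differentiable op only needs forward; a short sketch (hypothetical ArgMax):

```python
class ArgMax(Function):
    oper = "argmax"         # node label instead of the class name "ArgMax"
    differentiable = False  # no gradient flows through this op

    @staticmethod
    def forward(ctx, x, axis=None):
        return np.argmax(x.values, axis=axis)
```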
apply(*inputs: object, oper: str | None = None) -> 'Tensor'
classmethod
Run the op, build the graph node, and wire up the backward step.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `*inputs` | `object` | Tensor and non-Tensor arguments forwarded to `forward`. | `()` |
| `oper` | `str \| None` | Optional label override for the graph node. Falls back to the class `oper` attribute. | `None` |

Returns:

| Type | Description |
|---|---|
| `'Tensor'` | Output tensor wired into the computation graph. |
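Usage with the hypothetical Mul op sketched above:

```python
out = Mul.apply(x, y)              # node labeled via Mul.oper / class name
out = Mul.apply(x, y, oper="x*y")  # per-call label override
```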
backward(ctx, grad_output: np.ndarray) -> np.ndarray | tuple
staticmethod
Compute gradients. Return one array per Tensor input (None = no grad).
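A sketch showing the None slot (hypothetical Gather op, assuming the integer indices are passed as a Tensor):

```python
class Gather(Function):
    @staticmethod
    def forward(ctx, table, idx):
        ctx.shape, ctx.idx = table.values.shape, idx.values
        return table.values[ctx.idx]

    @staticmethod
    def backward(ctx, grad_output):
        grad_table = np.zeros(ctx.shape)
        np.add.at(grad_table, ctx.idx, grad_output)  # scatter-add rows back
        return grad_table, None  # None: integer indices get no gradient
```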
forward(ctx, *inputs) -> np.ndarray
staticmethod
Compute and return the numpy result; save anything backward needs into ctx.
output_shape(*inputs) -> tuple
staticmethod
Infer the output shape from inputs without executing the op.
The default returns the shape of the first Tensor input (correct for element-wise ops). Override for ops where the output shape differs.
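An override sketch (hypothetical MatMul, assuming Tensor inputs expose .shape):

```python
class MatMul(Function):
    @staticmethod
    def output_shape(a, b):
        # (..., n, k) @ (k, m) -> (..., n, m). Computable from input
        # shapes alone, so lazy mode can size the output without
        # running the op.
        return (*a.shape[:-1], b.shape[-1])
```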
simplegrad.core.autograd.Context
Stores intermediate values computed during a forward pass for reuse in backward.
Every op that needs to carry state from forward to backward should create a
Context, write to it inside the forward lambda, and read from it inside
the backward function. This pattern works in both eager and lazy mode: in
eager mode the forward lambda runs immediately; in lazy mode it runs at
.realize() time — either way, the backward always runs after the forward,
so ctx attributes are always populated by the time they are read.
Attributes are set freely with dot notation — use whatever names are meaningful for the op.
Example

```python
ctx = Context()
ctx.mask = np.random.rand(*x.values.shape) >= p
ctx.mask  # available in backward
```
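The same pattern inside a full op (hypothetical Dropout, using the ctx that apply passes to forward per the signatures above; the scale is saved on ctx so backward does not need p):

```python
class Dropout(Function):
    @staticmethod
    def forward(ctx, x, p):
        # Written during forward: eagerly, or at .realize() time in lazy mode.
        ctx.mask = np.random.rand(*x.values.shape) >= p
        ctx.scale = 1.0 / (1.0 - p)
        return x.values * ctx.mask * ctx.scale

    @staticmethod
    def backward(ctx, grad_output):
        # Read here: backward always runs after forward, so both
        # attributes are populated by now.
        return grad_output * ctx.mask * ctx.scale
```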