Function & Context

simplegrad.core.autograd.Function

Base class for differentiable operations.

Subclass this and implement forward and backward as static methods. Call cls.apply(*inputs) to run the op — it handles creating the output tensor, wiring the computation graph, and setting up gradient accumulation.

forward computes and returns the numpy result (and saves anything needed for backward into ctx). backward receives the upstream gradient and returns one gradient array per Tensor input — pure computation, no accumulation. The apply method handles accumulating those gradients into .grad via +=, including broadcast dimension reduction.

Class attributes

oper: Short label shown on graph nodes. Defaults to the class name.
differentiable: Set to False for ops like argmax that have no gradient.
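
A minimal sketch of the subclass pattern (a hypothetical Mul op, not part of the library; it assumes Tensors expose their numpy data as .values, as the Context example further down does):

class Mul(Function):
    """Element-wise multiply; saves both operands for the product rule."""

    oper = "*"

    @staticmethod
    def forward(ctx, a, b):
        # Save the raw operand values for backward.
        ctx.a, ctx.b = a.values, b.values
        return ctx.a * ctx.b

    @staticmethod
    def backward(ctx, grad_output):
        # d(a*b)/da = b and d(a*b)/db = a: one gradient per Tensor input,
        # in input order. apply() performs the += accumulation.
        return grad_output * ctx.b, grad_output * ctx.a

Calling Mul.apply(x, y) then runs forward, records x and y as graph parents, and registers a backward step that feeds Mul.backward's results into x.grad and y.grad.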

Source code in simplegrad/core/autograd.py
class Function:
    """Base class for differentiable operations.

    Subclass this and implement ``forward`` and ``backward`` as static methods.
    Call ``cls.apply(*inputs)`` to run the op — it handles creating the output
    tensor, wiring the computation graph, and setting up gradient accumulation.

    ``forward`` computes and returns the numpy result (and saves anything needed
    for backward into ``ctx``). ``backward`` receives the upstream gradient and
    returns one gradient array per Tensor input — pure computation, no
    accumulation. The ``apply`` method handles accumulating those gradients into
    ``.grad`` via ``+=``, including broadcast dimension reduction.

    Class attributes:
        oper: Short label shown on graph nodes. Defaults to the class name.
        differentiable: Set to False for ops like argmax that have no gradient.
    """

    oper: str = ""
    differentiable: bool = True

    @classmethod
    def apply(cls, *inputs: object, oper: str | None = None) -> "Tensor":
        """Run the op, build the graph node, and wire up the backward step.

        Args:
            *inputs: Tensor and non-Tensor arguments forwarded to ``forward``
                and ``output_shape``. Non-Tensor inputs are ignored during
                gradient accumulation.
            oper: Optional label override for the graph node. Falls back to
                ``cls.oper`` then ``cls.__name__``.

        Returns:
            Output tensor wired into the computation graph.
        """

        ctx = Context()
        tensor_inputs = [t for t in inputs if isinstance(t, Tensor)]
        out = _create_op_result(
            lambda: cls.forward(ctx, *inputs),
            shape=cls.output_shape(*inputs),
            dtype=tensor_inputs[0].dtype,
        )
        out.prev = set(tensor_inputs)
        out.comp_grad = _should_compute_grad(*tensor_inputs) and cls.differentiable
        out.is_leaf = False
        out.oper = oper if oper is not None else (cls.oper or cls.__name__)
        if out.comp_grad:
            out.backward_step = lambda: cls._accumulate(ctx, out, tensor_inputs)
        return out

    @classmethod
    def _accumulate(cls, ctx, out, tensor_inputs: list) -> None:
        """Call backward and accumulate the returned gradients into each input."""
        grads = cls.backward(ctx, out.grad)
        if not isinstance(grads, tuple):
            grads = (grads,)
        for inp, grad in zip(tensor_inputs, grads):
            if inp.comp_grad and grad is not None:
                inp._init_grad_if_needed()
                inp.grad += inp._reduce_broadcasted_dims(grad)

    @staticmethod
    def output_shape(*inputs) -> tuple:
        """Infer the output shape from inputs without executing the op.

        The default returns the shape of the first Tensor input (correct for
        element-wise ops). Override for ops where the output shape differs.
        """

        for inp in inputs:
            if isinstance(inp, Tensor):
                return inp.shape
        raise ValueError("No Tensor input found")

    @staticmethod
    def forward(ctx, *inputs) -> np.ndarray:
        """Compute the forward pass. Save anything needed for backward into ctx."""
        raise NotImplementedError

    @staticmethod
    def backward(ctx, grad_output: np.ndarray) -> np.ndarray | tuple:
        """Compute gradients. Return one array per Tensor input (None = no grad)."""
        raise NotImplementedError

apply(*inputs: object, oper: str | None = None) -> 'Tensor' classmethod

Run the op, build the graph node, and wire up the backward step.

Parameters:

*inputs (object, default ()): Tensor and non-Tensor arguments forwarded to
    forward and output_shape. Non-Tensor inputs are ignored during gradient
    accumulation.
oper (str | None, default None): Optional label override for the graph node.
    Falls back to cls.oper then cls.__name__.

Returns:

Tensor: Output tensor wired into the computation graph.
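
A usage sketch (assuming the hypothetical Mul op from the example above and two Tensors x and y):

z = Mul.apply(x, y)  # runs forward and returns the wired output Tensor
z.prev               # {x, y} -- only the Tensor inputs become graph parents
z.oper               # "*", from Mul.oper since no oper override was passed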

Source code in simplegrad/core/autograd.py
@classmethod
def apply(cls, *inputs: object, oper: str | None = None) -> "Tensor":
    """Run the op, build the graph node, and wire up the backward step.

    Args:
        *inputs: Tensor and non-Tensor arguments forwarded to ``forward``
            and ``output_shape``. Non-Tensor inputs are ignored during
            gradient accumulation.
        oper: Optional label override for the graph node. Falls back to
            ``cls.oper`` then ``cls.__name__``.

    Returns:
        Output tensor wired into the computation graph.
    """

    ctx = Context()
    tensor_inputs = [t for t in inputs if isinstance(t, Tensor)]
    out = _create_op_result(
        lambda: cls.forward(ctx, *inputs),
        shape=cls.output_shape(*inputs),
        dtype=tensor_inputs[0].dtype,
    )
    out.prev = set(tensor_inputs)
    out.comp_grad = _should_compute_grad(*tensor_inputs) and cls.differentiable
    out.is_leaf = False
    out.oper = oper if oper is not None else (cls.oper or cls.__name__)
    if out.comp_grad:
        out.backward_step = lambda: cls._accumulate(ctx, out, tensor_inputs)
    return out

backward(ctx, grad_output: np.ndarray) -> np.ndarray | tuple staticmethod

Compute gradients. Return one array per Tensor input (None = no grad).
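
A sketch of the None convention (a hypothetical Where op, not in the library; cond is a Tensor input that gets no gradient):

import numpy as np

class Where(Function):
    @staticmethod
    def forward(ctx, cond, a, b):
        ctx.cond = cond.values.astype(bool)
        return np.where(ctx.cond, a.values, b.values)

    @staticmethod
    def backward(ctx, grad_output):
        # Three Tensor inputs, three slots: None in cond's slot tells
        # apply() to skip gradient accumulation for it.
        return None, grad_output * ctx.cond, grad_output * ~ctx.cond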

Source code in simplegrad/core/autograd.py
@staticmethod
def backward(ctx, grad_output: np.ndarray) -> np.ndarray | tuple:
    """Compute gradients. Return one array per Tensor input (None = no grad)."""
    raise NotImplementedError

forward(ctx, *inputs) -> np.ndarray staticmethod

Compute the forward pass. Save anything needed for backward into ctx.

Source code in simplegrad/core/autograd.py
@staticmethod
def forward(ctx, *inputs) -> np.ndarray:
    """Compute the forward pass. Save anything needed for backward into ctx."""
    raise NotImplementedError

output_shape(*inputs) -> tuple staticmethod

Infer the output shape from inputs without executing the op.

The default returns the shape of the first Tensor input (correct for element-wise ops). Override for ops where the output shape differs.
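
An override sketch (a hypothetical MatMul op; batched-matmul shapes assumed):

class MatMul(Function):
    @staticmethod
    def output_shape(a, b):
        # (..., n, k) @ (k, m) -> (..., n, m): the shape is known from the
        # inputs alone, so the graph can be built without running the op.
        return (*a.shape[:-1], b.shape[-1])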

Source code in simplegrad/core/autograd.py
@staticmethod
def output_shape(*inputs) -> tuple:
    """Infer the output shape from inputs without executing the op.

    The default returns the shape of the first Tensor input (correct for
    element-wise ops). Override for ops where the output shape differs.
    """

    for inp in inputs:
        if isinstance(inp, Tensor):
            return inp.shape
    raise ValueError("No Tensor input found")

simplegrad.core.autograd.Context

Stores intermediate values computed during a forward pass for reuse in backward.

Every op that needs to carry state from forward to backward should create a Context, write to it inside the forward lambda, and read from it inside the backward function. This pattern works in both eager and lazy mode: in eager mode the forward lambda runs immediately; in lazy mode it runs at .realize() time — either way, the backward always runs after the forward, so ctx attributes are always populated by the time they are read.

Attributes are set freely with dot notation — use whatever names are meaningful for the op.

Example

ctx = Context()
ctx.mask = np.random.rand(*x.values.shape) >= p
ctx.mask  # available in backward
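
A fuller round trip of the same idea (a hypothetical Dropout op; only the mask line above comes from the library's own example):

import numpy as np

class Dropout(Function):
    @staticmethod
    def forward(ctx, x, p):
        # p is a plain float: forwarded to forward, ignored for gradients.
        ctx.mask = np.random.rand(*x.values.shape) >= p
        ctx.scale = 1.0 / (1.0 - p)
        return x.values * ctx.mask * ctx.scale

    @staticmethod
    def backward(ctx, grad_output):
        # Whether forward ran eagerly or at .realize() time, it has run
        # by now, so ctx.mask and ctx.scale are populated.
        return grad_output * ctx.mask * ctx.scale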

Source code in simplegrad/core/autograd.py
class Context:
    """Stores intermediate values computed during a forward pass for reuse in backward.

    Every op that needs to carry state from forward to backward should create a
    ``Context``, write to it inside the forward lambda, and read from it inside
    the backward function. This pattern works in both eager and lazy mode: in
    eager mode the forward lambda runs immediately; in lazy mode it runs at
    ``.realize()`` time — either way, the backward always runs after the forward,
    so ``ctx`` attributes are always populated by the time they are read.

    Attributes are set freely with dot notation — use whatever names are
    meaningful for the op.

    Example:
        >>> ctx = Context()
        >>> ctx.mask = np.random.rand(*x.values.shape) >= p
        >>> ctx.mask  # available in backward
    """

    pass