Linear
Linear layer implementations.
-
nabla.nn.layers.linear.linear_forward(x, weight, bias=None)[source]
Forward pass through a linear layer.
Computes: output = x @ weight + bias
- Parameters:
x (Array) – Input tensor of shape (batch_size, in_features)
weight (Array) – Weight tensor of shape (in_features, out_features)
bias (Array | None) – Optional bias tensor of shape (1, out_features) or (out_features,)
- Returns:
Output tensor of shape (batch_size, out_features)
- Return type:
Array
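The computation is equivalent to the following NumPy sketch (NumPy is used purely as a stand-in for nabla's Array type, and the `_sketch` name is illustrative, not part of nabla's API):

```python
import numpy as np

def linear_forward_sketch(x, weight, bias=None):
    """NumPy stand-in for linear_forward: output = x @ weight (+ bias)."""
    out = x @ weight
    if bias is not None:
        out = out + bias  # broadcasts (out_features,) or (1, out_features)
    return out

x = np.ones((4, 3))        # (batch_size=4, in_features=3)
w = np.full((3, 2), 0.5)   # (in_features=3, out_features=2)
b = np.array([0.0, 1.0])   # (out_features,)
print(linear_forward_sketch(x, w, b).shape)  # (4, 2)
```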
-
nabla.nn.layers.linear.mlp_forward(x, params)[source]
MLP forward pass through all layers.
This is the original MLP forward function from mlp_train_jit.py.
Applies a ReLU activation after every layer except the last.
- Parameters:
x (Array) – Input tensor of shape (batch_size, input_dim)
params (list[Array]) – Flat list of parameters, alternating weights and biases: [W1, b1, W2, b2, …, Wn, bn]
- Returns:
Output tensor of shape (batch_size, output_dim)
- Return type:
Array
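To make the params layout and control flow concrete, here is a hedged NumPy sketch of the same computation (a stand-in for nabla's Array operations, not the actual implementation):

```python
import numpy as np

def mlp_forward_sketch(x, params):
    """NumPy stand-in for mlp_forward: ReLU after every layer except the last."""
    n_layers = len(params) // 2
    for i in range(n_layers):
        w, b = params[2 * i], params[2 * i + 1]
        x = x @ w + b
        if i < n_layers - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers only
    return x

# Two-layer MLP: 3 -> 5 -> 2
params = [np.ones((3, 5)), np.zeros(5), np.ones((5, 2)), np.zeros(2)]
print(mlp_forward_sketch(np.ones((4, 3)), params).shape)  # (4, 2)
```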
-
nabla.nn.layers.linear.mlp_forward_with_activations(x, params, activation='relu', final_activation=None)[source]
MLP forward pass with configurable activations.
- Parameters:
x (Array) – Input tensor of shape (batch_size, input_dim)
params (list[Array]) – Flat list of parameters, alternating weights and biases: [W1, b1, W2, b2, …, Wn, bn]
activation (str) – Activation function for hidden layers ("relu", "tanh", "sigmoid")
final_activation (str | None) – Optional activation applied after the final layer
- Returns:
Output tensor of shape (batch_size, output_dim)
- Return type:
Array
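A NumPy sketch of the activation dispatch this function describes (the `_ACTIVATIONS` table and the `_sketch` name are illustrative assumptions, not nabla API):

```python
import numpy as np

_ACTIVATIONS = {
    "relu": lambda z: np.maximum(z, 0.0),
    "tanh": np.tanh,
    "sigmoid": lambda z: 1.0 / (1.0 + np.exp(-z)),
}

def mlp_forward_with_activations_sketch(x, params, activation="relu",
                                        final_activation=None):
    """Hidden layers use `activation`; the last layer uses `final_activation`
    if given, otherwise no activation."""
    n_layers = len(params) // 2
    for i in range(n_layers):
        x = x @ params[2 * i] + params[2 * i + 1]
        if i < n_layers - 1:
            x = _ACTIVATIONS[activation](x)
        elif final_activation is not None:
            x = _ACTIVATIONS[final_activation](x)
    return x

# Example: tanh hidden layers, sigmoid output (e.g. binary classification)
params = [np.ones((3, 5)), np.zeros(5), np.ones((5, 1)), np.zeros(1)]
out = mlp_forward_with_activations_sketch(
    np.ones((4, 3)), params, activation="tanh", final_activation="sigmoid")
print(out.shape)  # (4, 1)
```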