Fine-Tuning
LoRA
QLoRA
Checkpointing
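
The pages above document Nabla's fine-tuning utilities. As a rough orientation: LoRA freezes the pretrained weights and trains only a small low-rank update added on top of them, while QLoRA applies the same idea with the frozen base weights kept in a quantized (e.g. 4-bit) format. The sketch below illustrates the LoRA idea in plain Python/NumPy only; the class name LoRALinear and the rank/alpha parameters are illustrative assumptions, not the nabla.nn API (see the LoRA and QLoRA pages for the actual interfaces).

    # Conceptual LoRA sketch (illustration only, not the nabla.nn API).
    # `LoRALinear`, `rank`, and `alpha` are assumed names for this example.
    import numpy as np

    class LoRALinear:
        def __init__(self, weight: np.ndarray, rank: int = 8, alpha: float = 16.0):
            out_dim, in_dim = weight.shape
            self.weight = weight          # frozen pretrained weight, shape (out, in)
            self.scale = alpha / rank     # scaling applied to the low-rank update
            # Trainable low-rank factors; B starts at zero so the adapted
            # layer initially matches the pretrained one exactly.
            self.A = 0.01 * np.random.randn(rank, in_dim)
            self.B = np.zeros((out_dim, rank))

        def __call__(self, x: np.ndarray) -> np.ndarray:
            # y = x W^T + (alpha / rank) * x A^T B^T; only A and B are trained.
            return x @ self.weight.T + self.scale * (x @ self.A.T @ self.B.T)

QLoRA follows the same structure, but the frozen base weight is stored quantized and dequantized on the fly during the forward pass, which shrinks the memory footprint of the base model while the small A/B factors remain in full precision.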