Examples
Hands-on Jupyter notebooks for learning Nabla, from the basics to distributed training and custom kernels.
Example Notebooks
- Example 1: Tensors and Operations
- Example 2: Automatic Differentiation
- Example 3: Graph Tracing — Under the Hood
- Example 4a: MLP Training — PyTorch-Style (Imperative)
- Example 4b: MLP Training (JAX-Style / Functional)
- Example 5: Transforms and @nb.compile
- Example 6a: Transformer Training — PyTorch-Style (Imperative)
- Example 6b: Transformer Training (JAX-Style / Functional)
- Example 7: Pipeline Parallelism (GPipe)
- Example 8: 2D Parallel Training (Pipeline + Data Parallelism)
- Example 9: Pipeline Parallel Inference
- Example 10: Compiled vs Eager vs JAX
- Example 11: LoRA & QLoRA Fine-Tuning
- Example 12: Custom Mojo Kernels
- Example 13: CNN Training