Schedules#
Learning rate schedules.
- nabla.nn.optim.schedules.constant_schedule(initial_lr=0.001)[source]#
Constant learning rate schedule.
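A minimal usage sketch, assuming (as documented for warmup_cosine_schedule below) that the factory returns a callable mapping an epoch to a learning rate:

```python
from nabla.nn.optim.schedules import constant_schedule

schedule = constant_schedule(initial_lr=0.01)

# A constant schedule is expected to return the same rate at every epoch.
for epoch in (0, 100, 10_000):
    print(schedule(epoch))  # 0.01 each time, under the assumption above
```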
- nabla.nn.optim.schedules.exponential_decay_schedule(initial_lr=0.001, decay_factor=0.95, decay_every=1000)[source]#
Exponential decay learning rate schedule.
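A sketch of how this schedule is typically used; the exact decay rule (a drop applied every `decay_every` epochs) is the conventional definition and an assumption here, not confirmed by this page:

```python
from nabla.nn.optim.schedules import exponential_decay_schedule

schedule = exponential_decay_schedule(
    initial_lr=0.001, decay_factor=0.95, decay_every=1000
)

# Conventional exponential decay: lr ≈ initial_lr * decay_factor ** (epoch / decay_every)
print(schedule(0))     # ≈ 0.001
print(schedule(5000))  # ≈ 0.001 * 0.95**5 if the factor is applied every 1000 epochs
```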
- nabla.nn.optim.schedules.step_decay_schedule(initial_lr=0.001, decay_factor=0.1, step_size=30)[source]#
Step decay learning rate schedule.
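A usage sketch assuming the common step-decay rule, lr = initial_lr * decay_factor ** (epoch // step_size); the exact rule used by nabla is not stated on this page:

```python
from nabla.nn.optim.schedules import step_decay_schedule

schedule = step_decay_schedule(initial_lr=0.001, decay_factor=0.1, step_size=30)

# Under the assumed rule, the rate drops by a factor of 10 every 30 epochs.
print(schedule(0))   # ~0.001
print(schedule(30))  # ~0.0001
print(schedule(60))  # ~0.00001
```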
- nabla.nn.optim.schedules.cosine_annealing_schedule(initial_lr=0.001, min_lr=1e-06, period=1000)[source]#
Cosine annealing learning rate schedule.
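For reference, the standard cosine-annealing formula this schedule likely follows, written as a standalone sketch; whether nabla restarts the cycle every `period` epochs is an assumption:

```python
import math

def cosine_annealing(epoch, initial_lr=0.001, min_lr=1e-6, period=1000):
    # Position within the current cycle, in [0, 1).
    t = (epoch % period) / period
    # Smoothly anneal from initial_lr (t = 0) down to min_lr (t -> 1).
    return min_lr + 0.5 * (initial_lr - min_lr) * (1 + math.cos(math.pi * t))
```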
- nabla.nn.optim.schedules.warmup_cosine_schedule(initial_lr=0.001, warmup_epochs=100, total_epochs=1000, min_lr=1e-06)[source]#
Warmup followed by cosine annealing schedule.
- Returns:
Function that takes an epoch and returns the learning rate
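A hypothetical reference implementation of the warmup-then-cosine behavior, useful for reading off the shape of the schedule; nabla's exact boundary handling may differ:

```python
import math

def warmup_cosine(epoch, initial_lr=0.001, warmup_epochs=100, total_epochs=1000, min_lr=1e-6):
    if epoch < warmup_epochs:
        # Linear warmup from near zero up to initial_lr.
        return initial_lr * (epoch + 1) / warmup_epochs
    # Cosine decay from initial_lr down to min_lr over the remaining epochs.
    t = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return min_lr + 0.5 * (initial_lr - min_lr) * (1 + math.cos(math.pi * t))
```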
- nabla.nn.optim.schedules.learning_rate_schedule(epoch, initial_lr=0.001, decay_factor=0.95, decay_every=1000)[source]#
Learning rate schedule for complex function learning.
This is the original function from mlp_train_jit.py, retained for backward compatibility. Prefer exponential_decay_schedule for new code; a comparison of the two call styles follows.
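A sketch contrasting the legacy call (epoch passed directly, learning rate returned) with the factory style, assuming the two produce equivalent values for matching parameters:

```python
from nabla.nn.optim.schedules import exponential_decay_schedule, learning_rate_schedule

# Legacy form: call the function with the current epoch every time.
lr_legacy = learning_rate_schedule(
    epoch=2500, initial_lr=0.001, decay_factor=0.95, decay_every=1000
)

# Preferred form for new code: build the schedule once, then query it per epoch.
schedule = exponential_decay_schedule(initial_lr=0.001, decay_factor=0.95, decay_every=1000)
lr_new = schedule(2500)
```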