ride.utils.discriminative_lr¶
Module Contents¶
Classes¶
- PrePostInitMeta: A metaclass that calls optional __pre_init__ and __post_init__ methods.
- Module: Same as nn.Module, but no need for subclasses to call super().__init__.
- ParameterModule: Register a lone parameter p in a module.
Functions¶
- children: Get children of m.
- num_children: Get the number of child modules in m.
- children_and_parameters: Return the children of m and its direct parameters not registered in modules.
- even_mults: Build a log-stepped array from start to stop in n steps.
- lr_range: Build differential learning rates from lr.
- unfreeze_layers: Unfreeze or freeze all layers.
- build_param_dicts: Either return the number of layers with requires_grad set to True, or return a list of dictionaries pairing each layer with its associated learning rate.
- discriminative_lr: Flatten the model and generate a list of dictionaries to be passed to the optimizer.
Attributes¶
- logger: Developed by the Fastai team for the Fastai library.
- flatten_model: Modified version of lr_range from fastai.
- ride.utils.discriminative_lr.logger[source]¶
Developed by the Fastai team for the Fastai library. From the fastai library: https://www.fast.ai and https://github.com/fastai/fastai
- class ride.utils.discriminative_lr.PrePostInitMeta[source]¶
Bases:
type
A metaclass that calls optional __pre_init__ and __post_init__ methods
- class ride.utils.discriminative_lr.Module[source]¶
Bases:
torch.nn.Module
Same as nn.Module, but no need for subclasses to call super().__init__
- class ride.utils.discriminative_lr.ParameterModule(p: torch.nn.Parameter)[source]¶
Bases:
Module
Register a lone parameter p in a module.
- ride.utils.discriminative_lr.children(m: torch.nn.Module)[source]¶
Get children of m.
- ride.utils.discriminative_lr.num_children(m: torch.nn.Module)[source]¶
Get number of children modules in m.
- ride.utils.discriminative_lr.children_and_parameters(m: torch.nn.Module)[source]¶
Return the children of m and its direct parameters not registered in modules.
- ride.utils.discriminative_lr.even_mults(start: float, stop: float, n: int) → numpy.ndarray[source]¶
Build log-stepped array from start to stop in n steps.
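A log-stepped array places values at a constant ratio rather than a constant difference. The geometric stepping can be sketched in plain Python (an illustrative stand-in, not the library's implementation, which returns a numpy.ndarray):

```python
def even_mults(start: float, stop: float, n: int) -> list:
    """Build a log-stepped (geometric) sequence from start to stop in n steps."""
    if n == 1:
        return [stop]
    # Constant multiplicative step between consecutive values.
    step = (stop / start) ** (1 / (n - 1))
    return [start * step ** i for i in range(n)]

# The geometric midpoint of 1e-5 and 1e-3 is 1e-4.
print(even_mults(1e-5, 1e-3, 3))
```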
- ride.utils.discriminative_lr.flatten_model[source]¶
Modified version of lr_range from fastai https://github.com/fastai/fastai/blob/master/fastai/basic_train.py#L185
- ride.utils.discriminative_lr.lr_range(net: torch.nn.Module, lr: slice, model_len: int) → numpy.ndarray[source]¶
Build differential learning rates from lr.
- ride.utils.discriminative_lr.unfreeze_layers(model: torch.nn.Sequential, unfreeze: bool = True) → None[source]¶
Unfreeze or freeze all layers.
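Freezing a layer means excluding its parameters from gradient updates by setting requires_grad to False. A minimal torch-free sketch of the idea, using a hypothetical FakeParam stand-in for torch.nn.Parameter and operating on a flat parameter list rather than an nn.Sequential:

```python
class FakeParam:
    """Hypothetical stand-in for torch.nn.Parameter (illustration only)."""
    def __init__(self):
        self.requires_grad = False

def unfreeze_layers(params, unfreeze: bool = True) -> None:
    # Setting requires_grad on every parameter freezes/unfreezes the whole model.
    for p in params:
        p.requires_grad = unfreeze

params = [FakeParam() for _ in range(3)]
unfreeze_layers(params, unfreeze=True)
print(all(p.requires_grad for p in params))  # True
```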
- ride.utils.discriminative_lr.build_param_dicts(layers: torch.nn.Sequential, lr: list = [0], return_len: bool = False) → Union[int, list][source]¶
Either return the number of layers with requires_grad set to True, or return a list of dictionaries pairing each layer with its associated learning rate. Both weight and bias are checked for requires_grad set to True.
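The returned list presumably follows PyTorch's per-parameter optimizer options format: one {'params': ..., 'lr': ...} dictionary per trainable layer. A pure-Python sketch of both return modes, with layers modeled as plain dicts (all names here are illustrative, not the library's internals):

```python
def build_param_dicts(layers, lrs, return_len=False):
    """Pair each trainable layer with its learning rate, or count trainable layers."""
    trainable = [(layer, lr) for layer, lr in zip(layers, lrs)
                 if layer.get("requires_grad", False)]
    if return_len:
        return len(trainable)
    # PyTorch optimizers accept a list of such per-group dicts.
    return [{"params": layer["params"], "lr": lr} for layer, lr in trainable]

layers = [{"params": "conv1.weight", "requires_grad": True},
          {"params": "fc.weight", "requires_grad": True},
          {"params": "frozen.weight", "requires_grad": False}]
print(build_param_dicts(layers, [1e-4, 1e-3, 1e-2], return_len=True))  # 2
```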
- ride.utils.discriminative_lr.discriminative_lr(net: torch.nn.Module, lr: slice, unfreeze: bool = False) → Union[list, numpy.ndarray, torch.nn.Sequential][source]¶
Flatten our model and generate a list of dictionaries to be passed to the optimizer.
- If only one learning rate is passed as a slice, the last layer will have the corresponding learning rate and all other layers will have lr/10.
- If two learning rates are passed, such as slice(min_lr, max_lr), the last layer will have max_lr as its learning rate and the first layer will have min_lr. All middle layers will have learning rates logarithmically interpolated between min_lr and max_lr.