mindpose.optim

mindpose.optim.create_optimizer(params, name='adam', learning_rate=0.001, weight_decay=0.0, filter_bias_and_bn=True, loss_scale=1.0, **kwargs)[source]

Create an optimizer.

Parameters:
  • params (List[Any]) – Network parameters

  • name (str) – Name of the optimizer. Default: adam

  • learning_rate (Union[float, LearningRateSchedule]) – Learning rate. Accepts either a constant learning rate or a learning rate scheduler. Default: 0.001

  • weight_decay (float) – L2 weight decay. Default: 0.0

  • filter_bias_and_bn (bool) – Whether to exclude batch norm parameters and bias from weight decay. If True, weight decay is not applied to BN parameters or to the bias in Conv or Dense layers. Default: True

  • loss_scale (float) – Loss scale for mixed-precision training. Default: 1.0

  • **kwargs (Any) – Additional arguments passed to the optimizer

Return type:

Optimizer

Returns:

The created optimizer
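
A minimal usage sketch (not part of the original docs): the Dense layer below is a hypothetical stand-in for a real pose-estimation network. Only create_optimizer and its documented arguments come from mindpose; mindspore.nn.Dense and Cell.trainable_params() are standard MindSpore APIs.

    import mindspore.nn as nn
    from mindpose.optim import create_optimizer

    # Toy network standing in for an actual pose-estimation model (assumption).
    net = nn.Dense(10, 2)

    # Build an Adam optimizer over the network's trainable parameters.
    # With the default filter_bias_and_bn=True, weight decay is not applied
    # to BN parameters or to bias terms in Conv/Dense layers.
    optimizer = create_optimizer(
        net.trainable_params(),
        name="adam",
        learning_rate=1e-3,
        weight_decay=1e-4,
    )

The returned Optimizer can then be passed wherever MindSpore expects one, e.g. to mindspore.nn.TrainOneStepCell together with a loss-wrapped network.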