pymc.variational.KLqp.fit
- KLqp.fit(n=10000, score=None, callbacks=None, progressbar=True, progressbar_theme=<rich.theme.Theme object>, *, backend=None, **kwargs)
Perform Operator Variational Inference.
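For orientation, a minimal sketch (not part of the original reference): `KLqp` is normally reached through a subclass such as `pm.ADVI`, whose `fit` is the method documented here. The toy model and iteration count are illustrative assumptions.

```python
import numpy as np
import pymc as pm

with pm.Model():
    mu = pm.Normal("mu", 0.0, 10.0)
    pm.Normal("obs", mu, 1.0, observed=np.random.default_rng(0).normal(size=100))

    # ADVI subclasses KLqp, so this call runs the fit method documented here
    approx = pm.ADVI().fit(n=10_000)

print(approx)  # a fitted mean-field Approximation
```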
- Parameters:
- n: `int`
Number of iterations
- score: `bool`
Evaluate loss on each iteration or not
- callbacks: `list[function: (Approximation, losses, i) -> None]`
Calls the provided functions after each iteration step (see the callback sketch after this list)
- progressbar: `bool`
Whether to show the progress bar or not
- progressbar_theme: `Theme`
Custom theme for the progress bar
- backend: `str`, optional
Which computational backend to use; recommended to be one of "numba", "c", or "jax"
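As noted in the callbacks entry above, here is a sketch of a custom callback; the model and print interval are assumptions for illustration, while `CheckParametersConvergence` is PyMC's built-in convergence callback.

```python
import numpy as np
import pymc as pm
from pymc.variational.callbacks import CheckParametersConvergence

with pm.Model():
    mu = pm.Normal("mu", 0.0, 10.0)
    pm.Normal("obs", mu, 1.0, observed=np.random.default_rng(0).normal(size=100))

    # Callback signature: (Approximation, losses, i) -> None
    def log_progress(approx, losses, i):
        if len(losses) and i % 1000 == 0:
            print(f"iteration {i}: loss = {losses[-1]:.2f}")

    approx = pm.ADVI().fit(
        n=10_000,
        score=True,  # record the loss so callbacks receive a non-empty history
        callbacks=[log_progress, CheckParametersConvergence()],
    )
```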
- Returns:
`Approximation`
- Other Parameters:
- obj_n_mc: `int`
Number of Monte Carlo samples used for approximation of objective gradients
- tf_n_mc: `int`
Number of Monte Carlo samples used for approximation of test function gradients
- obj_optimizer: function `(grads, params) -> updates`
Optimizer that is used for objective params
- test_optimizer: function `(grads, params) -> updates`
Optimizer that is used for test function params
- more_obj_params: `list`
Add custom params for objective optimizer
- more_tf_params: `list`
Add custom params for test function optimizer
- more_updates: `dict`
Add custom updates to resulting updates
- total_grad_norm_constraint: `float`
Bounds gradient norm, prevents exploding gradient problem
- compile_kwargs: `dict`
Add kwargs to pytensor.function (e.g. {'profile': True}). `compile_kwargs["mode"]` cannot be combined with `backend`.
- more_replacements: `dict`
Apply custom replacements before calculating gradients
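To illustrate the keyword-only extras above, a sketch under the same toy-model assumption; `adam` from `pymc.variational.updates` returns an update-producing function when called with hyperparameters only, and the numeric values here are arbitrary choices, not recommendations.

```python
import numpy as np
import pymc as pm
from pymc.variational.updates import adam

with pm.Model():
    mu = pm.Normal("mu", 0.0, 10.0)
    pm.Normal("obs", mu, 1.0, observed=np.random.default_rng(0).normal(size=100))

    approx = pm.ADVI().fit(
        n=20_000,
        obj_n_mc=10,                        # average 10 MC samples per gradient step
        obj_optimizer=adam(learning_rate=0.01),
        total_grad_norm_constraint=10.0,    # clip the global gradient norm
        compile_kwargs={"profile": True},   # forwarded to pytensor.function;
                                            # "mode" here cannot be combined with backend=
    )
```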