pyhgf.updates.learning.learning_weights#

pyhgf.updates.learning.learning_weights(attributes, node_idx, edges, lr=None, adam_beta1=None, adam_beta2=None, adam_epsilon=None)[source]#

Unified weights update.

Branches on the lr and adam_beta1 parameters:

  • Adam (adam_beta1 is a float): uses the Adam optimiser.

  • Fixed (lr is a float, no Adam): uses a fixed learning rate. \(\Delta w_i = \text{lr} \cdot (\text{PE} \cdot \pi_\text{child}) \cdot g(\text{parent}_i)\)

  • Dynamic (lr is None, no Adam): uses a precision-based learning rate (Kalman gain). \(K_i = \pi_{\text{parent}_i} / (\pi_{\text{parent}_i} + \pi_\text{child})\), so that \(\Delta w_i = K_i \cdot \text{PE} \cdot g(\text{parent}_i)\)
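The branching logic above can be sketched as a small standalone function. This is an illustrative reimplementation, not the pyhgf source: the helper name `update_weights`, the `adam_state` tuple, and the assumption that the dynamic branch applies \(\Delta w_i = K_i \cdot \text{PE} \cdot g(\text{parent}_i)\) are all choices made here for clarity.

```python
import numpy as np

def update_weights(weights, pe, pi_child, pi_parents, g_parents,
                   lr=None, adam_beta1=None, adam_beta2=None,
                   adam_epsilon=None, adam_state=None):
    """Sketch of the three update branches (illustrative, not pyhgf internals)."""
    # Precision-weighted gradient per parent: (PE * pi_child) * g(parent_i)
    grad = pe * pi_child * g_parents
    if adam_beta1 is not None:
        # Adam branch: lr acts as the Adam step size.
        m, v, t = adam_state
        t += 1
        m = adam_beta1 * m + (1.0 - adam_beta1) * grad
        v = adam_beta2 * v + (1.0 - adam_beta2) * grad**2
        m_hat = m / (1.0 - adam_beta1**t)  # bias-corrected first moment
        v_hat = v / (1.0 - adam_beta2**t)  # bias-corrected second moment
        new_w = weights + lr * m_hat / (np.sqrt(v_hat) + adam_epsilon)
        return new_w, (m, v, t)
    if lr is not None:
        # Fixed branch: Delta w_i = lr * (PE * pi_child) * g(parent_i)
        return weights + lr * grad, adam_state
    # Dynamic branch: per-parent Kalman gain replaces the fixed rate.
    k = pi_parents / (pi_parents + pi_child)
    return weights + k * pe * g_parents, adam_state
```

For example, with a prediction error of 1.0, a child precision of 2.0, and parent activations `[1.0, 0.5]`, the fixed branch with `lr=0.1` yields the step `[0.2, 0.1]`, while the dynamic branch scales each step by that parent's relative precision instead.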

Parameters:
  • attributes (dict[int | str, dict]) – The attributes of the probabilistic network.

  • node_idx (int) – Pointer to the input node.

  • edges (tuple[AdjacencyLists, ...]) – The edges of the probabilistic nodes as a tuple of pyhgf.typing.Indexes. The tuple has the same length as the number of nodes. For each node, the index lists its value and volatility parents and children.

  • lr (float | None) – Fixed learning rate. When None (default) the dynamic precision-weighted rule is used instead. When Adam is active, this is the Adam step size.

  • adam_beta1 (float | None) – Adam first moment decay rate. When None (default) Adam is not used.

  • adam_beta2 (float | None) – Adam second moment decay rate.

  • adam_epsilon (float | None) – Adam numerical stability constant.

Returns:

The attributes of the probabilistic network.

Return type:

attributes