pyhgf.distribution.logp#

pyhgf.distribution.logp(mean_1, mean_2, mean_3, precision_1, precision_2, precision_3, tonic_volatility_1, tonic_volatility_2, tonic_volatility_3, tonic_drift_1, tonic_drift_2, tonic_drift_3, volatility_coupling_1, volatility_coupling_2, input_precision, response_function_parameters, input_data, time_steps, hgf, response_function_inputs, response_function=<function <lambda>>)[source]#

Compute the log-probability of a decision model under belief trajectories.

This function returns the evidence of a single Hierarchical Gaussian Filter given the network parameters, the input data, and the observed behaviours under a decision model.

Parameters:
  • mean_1 (float) – The mean at the first level of the HGF. For the continuous HGF, this is the mean of the first value parent (x_1). For the binary HGF this is the mean of the binary state node (x_0).

  • mean_2 (float) – The mean at the second level of the HGF. For the continuous HGF, this is the mean of the first volatility parent (x_2). For the binary HGF this is the mean of the first continuous state node (x_1).

  • mean_3 (float) – The mean at the third level of the HGF. The value of this parameter will be ignored when using a two-level HGF (n_levels=2). For the continuous HGF, this is the mean of the second volatility parent (x_3). For the binary HGF this is the mean of the first volatility parent (x_2).

  • precision_1 (float) – The precision at the first level of the HGF. For the continuous HGF, this is the precision of the first value parent (x_1). For the binary HGF this is the precision of the binary state node (x_0).

  • precision_2 (float) – The precision at the second level of the HGF. For the continuous HGF, this is the precision of the first volatility parent (x_2). For the binary HGF this is the precision of the first continuous state node (x_1).

  • precision_3 (float) – The precision at the third level of the HGF. The value of this parameter will be ignored when using a two-level HGF (n_levels=2). For the continuous HGF, this is the precision of the second volatility parent (x_3). For the binary HGF this is the precision of the first volatility parent (x_2).

  • tonic_volatility_1 (float) – The tonic volatility at the first level (x_1 for the continuous HGF, x_2 for the binary HGF). This parameter represents the tonic part of the variance (the part that is not inherited from parent nodes).

  • tonic_volatility_2 (float) – The tonic volatility at the second level (x_2 for the continuous HGF, x_3 for the binary HGF). This parameter represents the tonic part of the variance (the part that is not inherited from parent nodes).

  • tonic_volatility_3 (float) – The tonic volatility at the third level of the HGF. This parameter represents the tonic part of the variance (the part that is not inherited from parent nodes). This parameter is only used for a three-level continuous HGF.

  • tonic_drift_1 (float) – The tonic drift at the first level of the HGF (x_1 for the continuous HGF, x_2 for the binary HGF). This parameter represents the drift of the random walk.

  • tonic_drift_2 (float) – The tonic drift at the second level of the HGF (x_2 for the continuous HGF, x_3 for the binary HGF). This parameter represents the drift of the random walk.

  • tonic_drift_3 (float) – The tonic drift at the third level of the HGF. This parameter represents the drift of the random walk. This parameter is only used for a three-level continuous HGF.

  • volatility_coupling_1 (float) – The volatility coupling between the first and second levels of the HGF (between x_1 and x_2 for a continuous HGF, and between x_2 and x_3 for a binary HGF). This represents the phasic part of the variance (the part affected by the parent nodes). Defaults to 1.0 (full connectivity).

  • volatility_coupling_2 (float) – The volatility coupling between the second and third levels of the HGF (x_2 and x_3 for a continuous HGF, not applicable to a binary HGF). This represents the phasic part of the variance (the part affected by the parent nodes). Defaults to 1.0 (full connectivity). The value of this parameter will be ignored when using a two-level HGF (n_levels=2).

  • input_precision (float) – The expected precision associated with the continuous input.

  • response_function_parameters (Array | ndarray | bool_ | number | bool | int | float | complex) – An array of additional parameters that will be passed to the response function to compute the surprise. This can include values over which inference is performed in a PyMC model (e.g. the inverse temperature of a binary softmax).

  • input_data (Array | ndarray | bool_ | number | bool | int | float | complex) – An array of input time series. The first dimension is the number of time steps and the second dimension is the number of features. The number of features is the number of input nodes times the input dimensions.

  • time_steps (Array | ndarray | bool_ | number | bool | int | float | complex) – An array of input time steps where the first dimension is the number of models to fit in parallel.

  • hgf (HGF) – An instance of a two or three-level Hierarchical Gaussian Filter.

  • response_function_inputs (tuple | Array | ndarray | bool_ | number | bool | int | float | complex | None) – An array of behavioural inputs passed to the response function where the first dimension is the number of models to fit in parallel.

  • response_function (Callable) – The response function that is used by the decision model.

Returns:

logp – The log-probability (negative surprise).

Return type:

float
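
The sketch below is a minimal, illustrative call to this function; it is not taken from the pyhgf documentation. It assumes a two-level continuous HGF, a single Gaussian input time series with unit time steps, and the first_level_gaussian_surprise response function from pyhgf.response; the parameter values and array shapes are placeholders and may need adjusting to your own network and pyhgf version.

import numpy as np

from pyhgf.distribution import logp
from pyhgf.model import HGF
from pyhgf.response import first_level_gaussian_surprise

# A two-level continuous HGF used as a template for the belief updates
hgf = HGF(n_levels=2, model_type="continuous")

# One continuous input time series (time steps x features) and unit time steps
# (shapes are assumptions for this sketch)
input_data = np.random.standard_normal((200, 1))
time_steps = np.ones(input_data.shape[0])

log_prob = logp(
    mean_1=1.0,
    mean_2=0.0,
    mean_3=0.0,  # ignored when n_levels=2
    precision_1=1.0,
    precision_2=1.0,
    precision_3=1.0,  # ignored when n_levels=2
    tonic_volatility_1=-3.0,
    tonic_volatility_2=-3.0,
    tonic_volatility_3=0.0,  # only used by a three-level continuous HGF
    tonic_drift_1=0.0,
    tonic_drift_2=0.0,
    tonic_drift_3=0.0,  # only used by a three-level continuous HGF
    volatility_coupling_1=1.0,
    volatility_coupling_2=1.0,  # ignored when n_levels=2
    input_precision=1e4,
    response_function_parameters=np.ones(1),
    input_data=input_data,
    time_steps=time_steps,
    hgf=hgf,
    response_function_inputs=None,  # not needed by a Gaussian-surprise response
    response_function=first_level_gaussian_surprise,
)

In practice this function is usually not called directly but wrapped so that the log-probability can be evaluated over free parameters inside a probabilistic model; the direct call above only illustrates how the arguments documented here map onto a single model evaluation.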