pyhgf.distribution.HGFLogpGradOp

class pyhgf.distribution.HGFLogpGradOp(input_data=nan, time_steps=None, model_type='continuous', update_type='eHGF', n_levels=2, response_function=None, response_function_inputs=None)

Gradient Op for the HGF distribution.
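This Op is typically constructed internally by pyhgf's HGFDistribution, which delegates gradient computation to it, but it can also be instantiated directly. A minimal sketch, assuming a two-level continuous HGF and the first_level_gaussian_surprise response function from pyhgf.response (the data array is illustrative):

    import numpy as np
    from pyhgf.distribution import HGFLogpGradOp
    from pyhgf.response import first_level_gaussian_surprise

    # One model (first dimension) observing 200 continuous inputs.
    input_data = np.random.standard_normal((1, 200))

    hgf_logp_grad_op = HGFLogpGradOp(
        input_data=input_data,
        model_type="continuous",
        n_levels=2,
        response_function=first_level_gaussian_surprise,
    )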

Parameters:
  • input_data (Array | ndarray | bool_ | number | bool | int | float | complex)

  • time_steps (ndarray | Array | bool_ | number | bool | int | float | complex | None)

  • model_type (str)

  • update_type (str)

  • n_levels (int)

  • response_function (Callable | None)

  • response_function_inputs (ndarray | Array | bool_ | number | bool | int | float | complex | None)

__init__(input_data=nan, time_steps=None, model_type='continuous', update_type='eHGF', n_levels=2, response_function=None, response_function_inputs=None)

Initialize function.

Parameters:
input_data

An array of input time series where the first dimension is the number of models to fit in parallel. By default, the associated time_steps vector is the unit vector. A different time vector can be passed to the time_steps argument.

time_steps

An array with the same shape as input_data, containing the time_steps vectors. If one of the list items is None, or if None is provided instead, the time_steps vector defaults to an integer vector starting at 0.

model_type

The model type to use (can be “continuous” or “binary”).

update_type

The type of update to perform for volatility coupling. Can be "unbounded", "eHGF" (the default here) or "standard". The unbounded approximation was recently introduced to avoid negative precision updates, which greatly improves sampling performance. The eHGF update step was proposed as an alternative to the original definition: it updates the mean of the parent node before its precision, which generally reduces errors associated with impossible parameter spaces and improves sampling.

n_levels

The number of levels in the perceptual model's hierarchy (can be 2 or 3). If None, the node hierarchy is not created and can be provided afterwards using add_nodes().

response_function

The response function to use to compute the model surprise.

response_function_inputs

A list of tuples of the same length as the number of models. Each tuple contains additional data and parameters made accessible to the response function; illustrative shapes are sketched below.
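
A minimal sketch of the expected shapes for input_data, time_steps and response_function_inputs, assuming two models fitted in parallel over 200 observations (all values illustrative):

    import numpy as np

    n_models, n_obs = 2, 200

    # First dimension indexes the models fitted in parallel.
    input_data = np.random.standard_normal((n_models, n_obs))

    # Same shape as input_data; a vector of ones reproduces the default behaviour.
    time_steps = np.ones((n_models, n_obs))

    # One tuple per model with extra data for the response function
    # (here a hypothetical vector of observed decisions).
    response_function_inputs = [(np.zeros(n_obs),) for _ in range(n_models)]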

Methods

L_op(inputs, outputs, output_grads)

Construct a graph for the L-operator.

R_op(inputs, eval_points)

Construct a graph for the R-operator.

__init__([input_data, time_steps, ...])

Initialize function.

add_tag_trace(thing[, user_line])

Add tag.trace to a node or variable.

do_constant_folding(fgraph, node)

Determine whether or not constant folding should be performed for the given node.

grad(inputs, output_grads)

Construct a graph for the gradient with respect to each input variable.

inplace_on_inputs(allowed_inplace_inputs)

Try to return a version of self that operates in-place on as many of the inputs listed in allowed_inplace_inputs as possible.

make_node([mean_1, mean_2, mean_3, ...])

Initialize node structure.

make_py_thunk(node, storage_map, ...[, debug])

Make a Python thunk.

make_thunk(node, storage_map, compute_map, ...)

Create a thunk.

perform(node, inputs, outputs)

Perform node operations.

prepare_node(node, storage_map, compute_map, ...)

Make any special modifications that the Op needs before doing Op.make_thunk().
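
For orientation, the sketch below shows the general PyTensor pattern this Op serves: a scalar log-probability Op can delegate its grad() to a gradient Op such as this one. HypotheticalLogpOp and its internals are illustrative, not pyhgf's actual implementation:

    import numpy as np
    import pytensor.tensor as pt
    from pytensor.graph.basic import Apply
    from pytensor.graph.op import Op

    class HypotheticalLogpOp(Op):
        """Scalar log-probability Op deferring its gradients to a gradient Op."""

        def __init__(self, grad_op):
            self.grad_op = grad_op  # e.g. an HGFLogpGradOp instance

        def make_node(self, *parameters):
            inputs = [pt.as_tensor_variable(p) for p in parameters]
            return Apply(self, inputs, [pt.dscalar()])

        def perform(self, node, inputs, outputs):
            # Placeholder: a real Op would evaluate the summed model surprise here.
            outputs[0][0] = np.asarray(0.0)

        def grad(self, inputs, output_grads):
            # The gradient Op returns one partial derivative per input parameter;
            # chain each with the incoming output gradient.
            gradients = self.grad_op(*inputs)
            (output_grad,) = output_grads
            return [output_grad * g for g in gradients]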

Attributes

default_output

An int that specifies which output Op.__call__() should return.

destroy_map

A dict that maps output indices to the input indices upon which they operate in-place.

itypes

A list specifying the types of the Op's inputs; used by the default make_node() implementation.

otypes

A list specifying the types of the Op's outputs.

view_map

A dict that maps output indices to the input indices of which they are a view.