ActiveInference
Documentation for ActiveInference.
ActiveInference.Environments.bayesian_model_average
ActiveInference.Environments.calculate_bayesian_surprise
ActiveInference.Environments.capped_log
ActiveInference.Environments.capped_log
ActiveInference.Environments.capped_log
ActiveInference.Environments.capped_log
ActiveInference.Environments.capped_log_array
ActiveInference.Environments.dot_likelihood
ActiveInference.Environments.get_joint_likelihood
ActiveInference.Environments.kl_divergence
ActiveInference.Environments.normalize_arrays
ActiveInference.Environments.normalize_arrays
ActiveInference.Environments.normalize_distribution
ActiveInference.Environments.outer_product
ActiveInference.Environments.softmax_array
ActiveInference.Environments.spm_wnorm
ActiveInference.action_select
ActiveInference.array_of_any_zeros
ActiveInference.bayesian_model_average
ActiveInference.calc_expected_utility
ActiveInference.calc_free_energy
ActiveInference.calc_pA_info_gain
ActiveInference.calc_pB_info_gain
ActiveInference.calc_states_info_gain
ActiveInference.calculate_SAPE
ActiveInference.calculate_bayesian_surprise
ActiveInference.capped_log
ActiveInference.capped_log
ActiveInference.capped_log
ActiveInference.capped_log
ActiveInference.capped_log_array
ActiveInference.check_probability_distribution
ActiveInference.check_probability_distribution
ActiveInference.check_probability_distribution
ActiveInference.compute_accuracy
ActiveInference.compute_accuracy_new
ActiveInference.construct_policies
ActiveInference.create_matrix_templates
ActiveInference.create_matrix_templates
ActiveInference.create_matrix_templates
ActiveInference.create_matrix_templates
ActiveInference.create_matrix_templates
ActiveInference.dot_likelihood
ActiveInference.fixed_point_iteration
ActiveInference.get_expected_obs
ActiveInference.get_expected_states
ActiveInference.get_expected_states
ActiveInference.get_joint_likelihood
ActiveInference.get_log_action_marginals
ActiveInference.get_model_dimensions
ActiveInference.infer_policies!
ActiveInference.infer_states!
ActiveInference.init_aif
ActiveInference.kl_divergence
ActiveInference.normalize_arrays
ActiveInference.normalize_arrays
ActiveInference.normalize_distribution
ActiveInference.onehot
ActiveInference.outer_product
ActiveInference.process_observation
ActiveInference.process_observation
ActiveInference.sample_action
ActiveInference.sample_action!
ActiveInference.select_highest
ActiveInference.softmax_array
ActiveInference.spm_wnorm
ActiveInference.update_A!
ActiveInference.update_B!
ActiveInference.update_D!
ActiveInference.update_obs_likelihood_dirichlet
ActiveInference.update_posterior_policies
ActiveInference.update_posterior_states
ActiveInference.update_state_likelihood_dirichlet
ActiveInference.update_state_prior_dirichlet
ActiveInference.action_select
— Method
Selects an action from the computed action probabilities – used for stochastic action sampling
ActiveInference.array_of_any_zeros
— Method
Creates an array of "Any" with the desired number of sub-arrays filled with zeros
ActiveInference.bayesian_model_average
— Method
Calculate Bayesian Model Average (BMA)
Calculates the Bayesian Model Average (BMA), which is used for the State-Action Prediction Error (SAPE). It is a weighted average of the expected states under all policies, weighted by the posterior over policies. qs_pi_all should be the collection of expected states given all policies, which can be retrieved with the get_expected_states function.
Arguments
qs_pi_all : Vector{Any}
q_pi : Vector{Float64}
ActiveInference.calc_expected_utility
— Method
Calculate Expected Utility
ActiveInference.calc_free_energy
— Function
Calculate Free Energy
ActiveInference.calc_pA_info_gain
— Method
Calculate Observation-to-State Information Gain
ActiveInference.calc_pB_info_gain
— Method
Calculate State-to-State Information Gain
ActiveInference.calc_states_info_gain
— Method
Calculate States Information Gain
ActiveInference.calculate_SAPE
— Method
Calculate State-Action Prediction Error
ActiveInference.calculate_bayesian_surprise
— Method
Calculate Bayesian Surprise
ActiveInference.capped_log
— Method
capped_log(array::Array{Float64})
ActiveInference.capped_log
— Method
capped_log(x::Real)
Arguments
x::Real : A real number.
Return the natural logarithm of x, capped at the machine epsilon value of x.
ActiveInference.capped_log
— Method
capped_log(array::Vector{Real})
ActiveInference.capped_log
— Method
capped_log(array::Array{T}) where T <: Real
ActiveInference.capped_log_array
— Method
Apply capped_log to an array of arrays
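A brief usage sketch (illustrative values; the calls are module-qualified in case these helpers are not exported). The exact floor depends on the implementation, but zero inputs should yield a large negative number rather than -Inf:

```julia
using ActiveInference

ActiveInference.capped_log(1.0)                 # ≈ 0.0
ActiveInference.capped_log(0.0)                 # large negative value instead of -Inf
ActiveInference.capped_log([0.5, 0.25, 0.0])    # element-wise over an array
ActiveInference.capped_log_array([[0.9, 0.1], [0.0, 1.0]])   # applied to each array in a collection
```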
ActiveInference.check_probability_distribution
— Method
Check if the vector of vectors is a proper probability distribution.
Arguments
- Array::Vector{Vector{T}} where T<:Real : The vector of vectors to be checked.
Throws an error if the array is not a valid probability distribution:
- The values must be non-negative.
- The sum of the values must be approximately 1.
ActiveInference.check_probability_distribution
— Method
Check if the vector of arrays is a proper probability distribution.
Arguments
- Array::Vector{<:Array{T}} where T<:Real : The vector of arrays to be checked.
Throws an error if the array is not a valid probability distribution:
- The values must be non-negative.
- The sum of the values must be approximately 1.
ActiveInference.check_probability_distribution
— Method
Check if the vector is a proper probability distribution.
Arguments
- Vector::Vector{T} where T<:Real : The vector to be checked.
Throws an error if the array is not a valid probability distribution:
- The values must be non-negative.
- The sum of the values must be approximately 1.
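A minimal sketch of the vector method (illustrative values; module-qualified in case the helper is not exported):

```julia
using ActiveInference

# Non-negative entries summing to 1: no error is thrown.
ActiveInference.check_probability_distribution([0.25, 0.25, 0.5])

# Entries sum to 1.1, so this call is expected to throw an error.
ActiveInference.check_probability_distribution([0.5, 0.6])
```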
ActiveInference.compute_accuracy
— Method
Calculate Accuracy Term
ActiveInference.compute_accuracy_new
— Method
Edited Compute Accuracy [Still needs to be nested within Fixed-Point Iteration]
ActiveInference.construct_policies
— Method
construct_policies(n_states::Vector{T} where T <: Real; n_controls::Union{Vector{T}, Nothing} where T <: Real = nothing, policy_length::Int = 1, controllable_factors_indices::Union{Vector{Int}, Nothing} = nothing)
Construct policies based on the number of states, controls, policy length, and indices of controllable state factors.
Arguments
n_states::Vector{T} where T <: Real : A vector containing the number of states for each factor.
n_controls::Union{Vector{T}, Nothing} where T <: Real = nothing : A vector specifying the number of allowable actions for each state factor.
policy_length::Int = 1 : The length of policies (planning horizon).
controllable_factors_indices::Union{Vector{Int}, Nothing} = nothing : A vector of indices identifying which state factors are controllable.
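A usage sketch with illustrative dimensions: two state factors (4 and 2 states), two actions available on the first factor only, and a planning horizon of two steps. The call is module-qualified in case the function is not exported; the internal layout of each policy matrix follows the package's own convention:

```julia
using ActiveInference

policies = ActiveInference.construct_policies(
    [4, 2];                               # number of states per factor
    n_controls = [2, 1],                  # allowable actions per factor
    policy_length = 2,                    # planning horizon
    controllable_factors_indices = [1],   # only the first factor is controllable
)
# `policies` is a vector of policy matrices (action indices over time steps and factors).
```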
ActiveInference.create_matrix_templates
— Function
create_matrix_templates(shapes::Vector{Int64}, template_type::String)
Creates templates based on the specified shapes vector and template type. Templates can be uniform, random, or filled with zeros.
Arguments
shapes::Vector{Int64} : A vector specifying the dimensions of each template to create.
template_type::String : The type of templates to create. Can be "uniform" (default), "random", or "zeros".
Returns
- A vector of arrays, each corresponding to the shape given by the input vector.
ActiveInference.create_matrix_templates
— Method
create_matrix_templates(n_states::Vector{Int64}, n_observations::Vector{Int64}, n_controls::Vector{Int64}, policy_length::Int64, template_type::String = "uniform")
Creates templates for the A, B, C, D, and E matrices based on the specified parameters.
Arguments
n_states::Vector{Int64} : A vector specifying the dimensions and number of states.
n_observations::Vector{Int64} : A vector specifying the dimensions and number of observations.
n_controls::Vector{Int64} : A vector specifying the number of controls per factor.
policy_length::Int64 : The length of the policy sequence.
template_type::String : The type of templates to create. Can be "uniform", "random", or "zeros". Defaults to "uniform".
Returns
A, B, C, D, E : The generative model as matrices and vectors.
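A brief sketch with illustrative dimensions (one state factor with 4 states, one observation modality with 4 outcomes, 2 possible actions, planning horizon of 1), assuming the function is exported; otherwise qualify it with the module name:

```julia
using ActiveInference

n_states       = [4]   # one state factor with 4 states
n_observations = [4]   # one observation modality with 4 outcomes
n_controls     = [2]   # 2 possible actions
policy_length  = 1

A, B, C, D, E = create_matrix_templates(n_states, n_observations, n_controls, policy_length, "uniform")
```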
ActiveInference.create_matrix_templates
— Method
create_matrix_templates(shapes::Vector{Int64})
Creates uniform templates based on the specified shapes vector.
Arguments
shapes::Vector{Int64} : A vector specifying the dimensions of each template to create.
Returns
- A vector of normalized arrays.
ActiveInference.create_matrix_templates
— Method
create_matrix_templates(shapes::Vector{Vector{Int64}}, template_type::String)
Creates multidimensional templates based on the specified vector of shape vectors and template type. Templates can be uniform, random, or filled with zeros.
Arguments
shapes::Vector{Vector{Int64}} : A vector of vectors, where each inner vector specifies the dimensions of one template to create.
template_type::String : The type of templates to create. Can be "uniform" (default), "random", or "zeros".
Returns
- A vector of arrays, each having the multi-dimensional shape specified in the input vector.
ActiveInference.create_matrix_templates
— Method
create_matrix_templates(shapes::Vector{Vector{Int64}})
Creates uniform, multidimensional templates based on the specified shapes vector.
Arguments
shapes::Vector{Vector{Int64}} : A vector of vectors, where each inner vector specifies the dimensions of one template to create.
Returns
- A vector of normalized arrays (uniform distributions), each having the multi-dimensional shape specified in the input vector.
ActiveInference.dot_likelihood
— Method
Dot-Product Function
ActiveInference.fixed_point_iteration
— Method
Run State Inference via Fixed-Point Iteration
ActiveInference.get_expected_obs
— Method
Get Expected Observations
ActiveInference.get_expected_states
— Method
Get Expected States
ActiveInference.get_expected_states
— Method
Multiple dispatch for all expected states given all policies
Multiple dispatch for getting the expected states for all policies, based on the agent's currently inferred states and the transition matrices for each factor and action in the policy.
Arguments
qs::Vector{Vector{Real}}
B::Vector{Array{<:Real}}
policy::Vector{Matrix{Int64}}
ActiveInference.get_joint_likelihood
— Method
Get Joint Likelihood
ActiveInference.get_log_action_marginals
— Method
Function to get log marginal probabilities of actions
ActiveInference.get_model_dimensions
— Function
Get Model Dimensions from either A or B Matrix
ActiveInference.infer_policies!
— Method
Update the agent's beliefs over policies
ActiveInference.infer_states!
— Method
Update the agent's beliefs over states
ActiveInference.init_aif
— Method
Initialize Active Inference Agent
init_aif(A, B; C = nothing, D = nothing, E = nothing, pA = nothing, pB = nothing, pD = nothing, parameters::Union{Nothing, Dict{String,Real}} = nothing, settings::Union{Nothing, Dict} = nothing, save_history::Bool = true)
Arguments
- 'A': Relationship between hidden states and observations.
- 'B': Transition probabilities.
- 'C = nothing': Prior preferences over observations.
- 'D = nothing': Prior over initial hidden states.
- 'E = nothing': Prior over policies (habits).
- 'pA = nothing': Dirichlet prior over A (concentration parameters used when learning A).
- 'pB = nothing': Dirichlet prior over B (concentration parameters used when learning B).
- 'pD = nothing': Dirichlet prior over D (concentration parameters used when learning D).
- 'parameters::Union{Nothing, Dict{String,Real}} = nothing': Optional dictionary of parameter values.
- 'settings::Union{Nothing, Dict} = nothing': Optional dictionary of settings.
- 'save_history::Bool = true': Whether to save the agent's history over time.
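A minimal perception-action loop sketched from the functions documented on this page (create_matrix_templates, init_aif, infer_states!, infer_policies!, sample_action!). The dimensions and the observation format (a vector with one observation index per modality) are illustrative assumptions, not prescribed by this docstring:

```julia
using ActiveInference

# Illustrative generative model: one state factor (4 states), one observation
# modality (4 outcomes), 2 actions, planning horizon of 1.
A, B, C, D, E = create_matrix_templates([4], [4], [2], 1, "uniform")

aif = init_aif(A, B; C = C, D = D, E = E)

observation = [1]                 # assumed format: one observation index per modality
infer_states!(aif, observation)   # update beliefs over states
infer_policies!(aif)              # update beliefs over policies
action = sample_action!(aif)      # sample an action from those beliefs
```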
ActiveInference.kl_divergence
— Method
kl_divergence(P::Vector{Vector{Vector{Float64}}}, Q::Vector{Vector{Vector{Float64}}})
Arguments
P::Vector{Vector{Vector{Real}}}
Q::Vector{Vector{Vector{Real}}}
Return the Kullback-Leibler (KL) divergence between two probability distributions.
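A small sketch matching the triply nested vector layout in the signature (values are illustrative; module-qualified in case the helper is not exported):

```julia
using ActiveInference

P = [[[0.9, 0.1]]]   # one factor, one condition, a 2-outcome distribution
Q = [[[0.5, 0.5]]]
ActiveInference.kl_divergence(P, Q)   # non-negative; (approximately) zero when P == Q
```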
ActiveInference.normalize_arrays
— Method
Normalizes multiple arrays
ActiveInference.normalize_arrays
— Method
Normalizes multiple arrays
ActiveInference.normalize_distribution
— Method
Normalizes a Categorical probability distribution
ActiveInference.onehot
— Method
Creates a one-hot encoded vector
ActiveInference.outer_product
— Function
Multi-dimensional outer product
ActiveInference.process_observation
— Method
process_observation(observation::Int, n_modalities::Int, n_observations::Vector{Int})
Process a single-modality observation. Returns a one-hot encoded vector.
Arguments
observation::Int : The index of the observed state for a single observation modality.
n_modalities::Int : The number of observation modalities in the observation.
n_observations::Vector{Int} : A vector containing the number of observations for each modality.
Returns
Vector{Vector{Real}} : A vector containing a single one-hot encoded observation.
ActiveInference.process_observation
— Method
process_observation(observation::Union{Array{Int}, Tuple{Vararg{Int}}}, n_modalities::Int, n_observations::Vector{Int})
Process an observation with multiple modalities and return it in a one-hot encoded format.
Arguments
observation::Union{Array{Int}, Tuple{Vararg{Int}}} : A collection of indices of the observed states for each modality.
n_modalities::Int : The number of observation modalities in the observation.
n_observations::Vector{Int} : A vector containing the number of observations for each modality.
Returns
Vector{Vector{Real}} : A vector containing one-hot encoded vectors for each modality.
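Two brief sketches matching the signatures above, with illustrative observation indices (module-qualified in case the helper is not exported):

```julia
using ActiveInference

# Single modality with 4 possible outcomes; observed outcome index 2.
# Returns a one-element vector holding the one-hot encoding of outcome 2 of 4.
ActiveInference.process_observation(2, 1, [4])

# Two modalities with 3 and 2 possible outcomes; observed indices 3 and 1.
# Returns one one-hot vector per modality.
ActiveInference.process_observation((3, 1), 2, [3, 2])
```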
ActiveInference.sample_action!
— Method
Sample action from the beliefs over policies
ActiveInference.sample_action
— Method
Sample Action [Stochastic or Deterministic]
ActiveInference.select_highest
— Method
Selects the highest value from an array – used for deterministic action sampling
ActiveInference.softmax_array
— Method
Softmax Function for array of arrays
ActiveInference.spm_wnorm
— Method
SPM_wnorm
ActiveInference.update_A!
— Method
Update A-matrix
ActiveInference.update_B!
— Method
Update B-matrix
ActiveInference.update_D!
— Method
Update D-matrix
ActiveInference.update_obs_likelihood_dirichlet
— Method
Update obs likelihood matrix
ActiveInference.update_posterior_policies
— Function
Update Posterior over Policies
ActiveInference.update_posterior_states
— Method
Update Posterior States
ActiveInference.update_state_likelihood_dirichlet
— Method
Update state likelihood matrix
ActiveInference.update_state_prior_dirichlet
— Method
Update prior D matrix
ActiveInference.Environments.bayesian_model_average
— Method
Calculate Bayesian Model Average (BMA)
Calculates the Bayesian Model Average (BMA), which is used for the State-Action Prediction Error (SAPE). It is a weighted average of the expected states under all policies, weighted by the posterior over policies. qs_pi_all should be the collection of expected states given all policies, which can be retrieved with the get_expected_states function.
Arguments
qs_pi_all : Vector{Any}
q_pi : Vector{Float64}
ActiveInference.Environments.calculate_bayesian_surprise
— Method
Calculate Bayesian Surprise
ActiveInference.Environments.capped_log
— Method
capped_log(array::Array{Float64})
ActiveInference.Environments.capped_log
— Method
capped_log(x::Real)
Arguments
x::Real : A real number.
Return the natural logarithm of x, capped at the machine epsilon value of x.
ActiveInference.Environments.capped_log
— Method
capped_log(array::Vector{Real})
ActiveInference.Environments.capped_log
— Method
capped_log(array::Array{T}) where T <: Real
ActiveInference.Environments.capped_log_array
— Method
Apply capped_log to an array of arrays
ActiveInference.Environments.dot_likelihood
— Method
Dot-Product Function
ActiveInference.Environments.get_joint_likelihood
— Method
Get Joint Likelihood
ActiveInference.Environments.kl_divergence
— Method
kl_divergence(P::Vector{Vector{Vector{Float64}}}, Q::Vector{Vector{Vector{Float64}}})
Arguments
P::Vector{Vector{Vector{Real}}}
Q::Vector{Vector{Vector{Real}}}
Return the Kullback-Leibler (KL) divergence between two probability distributions.
ActiveInference.Environments.normalize_arrays
— Method
Normalizes multiple arrays
ActiveInference.Environments.normalize_arrays
— Method
Normalizes multiple arrays
ActiveInference.Environments.normalize_distribution
— Method
Normalizes a Categorical probability distribution
ActiveInference.Environments.outer_product
— Function
Multi-dimensional outer product
ActiveInference.Environments.softmax_array
— Method
Softmax Function for array of arrays
ActiveInference.Environments.spm_wnorm
— Method
SPM_wnorm