Active Inference is one of the most general theories of agency. It can be seen as a generalisation of the predictive coding theory of brain function (or, the Bayesian Brain hypothesis). Specifically, while predictive coding explains the agent's perception as Bayesian inference, Active Inference models both perception and action as inference under a single unifying objective: minimisation of the agent's VFE or EFE. Active Inference also recovers Bayes-optimal reinforcement learning, optimal control theory, and Bayesian Decision Theory (aka EDT) under different simplifying assumptions^{[1]}^{[6]}.
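As a sketch of this unifying objective, in one standard notation (symbols illustrative: $q(s)$ is the agent's approximate posterior over hidden states $s$, and $o$ its observations), the variational free energy that perception minimises can be written:

$$
F[q] \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] \;=\; D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] \;-\; \ln p(o)
$$

Since the KL divergence is non-negative, minimising $F$ over $q$ both drives $q(s)$ toward the true posterior $p(s \mid o)$ (Bayesian inference) and makes $F$ a tight upper bound on surprise $-\ln p(o)$.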

The mathematical content of Active Inference is based on Variational Bayesian methods.

FEP/Active Inference is an energy-based model of intelligence: a FEP agent minimises an informational quantity called variational free energy (VFE), and Active Inference nuances this picture further, modelling agents as minimising an informational quantity called expected free energy (EFE), which is derived from VFE. This likens FEP/Active Inference to Bengio's GFlowNets^{[4]} and LeCun's Joint Embedding Predictive Architecture (JEPA)^{[5]}, which are also energy-based. On the other hand, this distinguishes FEP/Active Inference from Reinforcement Learning, which is a reward-based model of agency, and, more generally, from utility-maximising decision theories.
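To illustrate how EFE is derived from VFE, one common formulation (notation illustrative: $\pi$ denotes a policy, and $\tilde{p}$ a generative model biased toward the agent's preferred observations) extends the free-energy functional to future, policy-conditioned outcomes:

$$
G(\pi) \;=\; \mathbb{E}_{q(o, s \mid \pi)}\big[\ln q(s \mid \pi) - \ln \tilde{p}(o, s)\big]
$$

This quantity decomposes into an epistemic term (expected information gain about hidden states $s$) and a pragmatic term (expected log preference over observations $o$), so an EFE-minimising agent automatically trades off exploration against goal-seeking, rather than needing an externally specified reward signal as in Reinforcement Learning.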