The Free Energy Principle (FEP) suggests that dynamic systems, including the brain and other physical systems, are organised to minimise prediction errors: the difference between the predictions a system makes about its environment and the outcomes it actually experiences. According to the FEP, such systems encode information about their environment so as to reduce the surprisal of their sensory inputs, and this minimisation of prediction error is what allows them to maintain stability within their environment. The FEP has been influential in neuroscience and neuropsychology, and has more recently been used to describe systems at all spatiotemporal scales, from cells and biological species to AIs and societies.
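In information-theoretic terms, the surprisal of an input $o$ under a system's generative model $p$ is $-\ln p(o)$. Surprisal is generally intractable to minimise directly, so the FEP holds that systems instead minimise a tractable upper bound on it, the variational free energy discussed below. As a sketch, using the standard notation of the variational inference literature (not tied to any one cited source):

$$
F[q, o] \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
\;=\; \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right]}_{\geq\,0} \;-\; \ln p(o)
\;\;\geq\;\; -\ln p(o),
$$

where $q(s)$ is the system's approximate posterior ("beliefs") over hidden states $s$. Minimising $F$ with respect to $q$ brings the beliefs closer to the true posterior (perception), while minimising it through action makes observations conform to predictions.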

FEP gives rise to Active Inference[1]: a process theory of agency that can be seen both as an explanatory theory and as an agent architecture. In the latter sense, Active Inference rivals Reinforcement Learning. It has been argued[2] that Active Inference as an agent architecture manages model complexity (i.e., the bias-variance tradeoff) and the exploration-exploitation tradeoff in a principled way, favours explicit, disentangled, and hence more interpretable belief representations, and is amenable to working within hierarchical systems of collective intelligence (which are themselves seen as Active Inference agents[3]). Building ecosystems of hierarchical collective intelligence can be seen both as a proposed solution to, and an alternative conceptualisation of, the general problem of alignment.
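As a concrete (and heavily simplified) illustration of how the exploration-exploitation tradeoff falls out of a single objective, the sketch below selects actions in a toy discrete world by minimising a one-step expected free energy, decomposed into a risk term (divergence of predicted from preferred outcomes, i.e. exploitation) and an ambiguity term (expected observation uncertainty, a simple epistemic drive). The toy model and all names are illustrative assumptions; this is not the full Active Inference scheme of [1] or [2], which plans over multi-step policies and updates beliefs by explicit free-energy minimisation.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

# --- Toy generative model (2 hidden states, 2 observations, 2 actions) ---
A = np.array([[0.9, 0.2],      # p(o | s): likelihood matrix, columns sum to 1
              [0.1, 0.8]])
B = np.stack([                 # p(s' | s, a): one transition matrix per action
    np.array([[0.9, 0.1],
              [0.1, 0.9]]),    # action 0: mostly stay
    np.array([[0.1, 0.9],
              [0.9, 0.1]]),    # action 1: mostly switch
])
C = softmax(np.array([2.0, 0.0]))  # preferred observation distribution (prefers o = 0)
q_s = np.array([0.5, 0.5])         # current beliefs q(s) over hidden states

def expected_free_energy(action, q_s):
    """One-step expected free energy = risk + ambiguity for a candidate action."""
    q_s_next = B[action] @ q_s     # predicted state distribution q(s' | a)
    q_o_next = A @ q_s_next        # predicted observation distribution q(o | a)
    # Risk: KL divergence between predicted and preferred observations (exploitation).
    risk = np.sum(q_o_next * (np.log(q_o_next + 1e-16) - np.log(C + 1e-16)))
    # Ambiguity: expected entropy of the likelihood mapping (epistemic term).
    H_A = -np.sum(A * np.log(A + 1e-16), axis=0)   # entropy of p(o | s) for each s
    ambiguity = H_A @ q_s_next
    return risk + ambiguity

G = np.array([expected_free_energy(a, q_s) for a in range(B.shape[0])])
action = int(np.argmin(G))     # or sample from softmax(-G) for a stochastic choice
print("EFE per action:", G, "-> chosen action:", action)

# After acting and observing o, beliefs are updated (here exact Bayes stands in
# for variational free-energy minimisation over q(s)).
o = 0
q_s = softmax(np.log(B[action] @ q_s + 1e-16) + np.log(A[o] + 1e-16))
```

Exploitation and exploration are thus not weighted by an ad-hoc bonus, as in many Reinforcement Learning schemes, but arise as two terms of the same free-energy functional.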

FEP/Active Inference is an energy-based model of intelligence: an FEP agent minimises an informational quantity called variational free energy (VFE), and Active Inference nuances this picture further, modelling agents as minimising a related quantity, expected free energy (EFE), which is derived from VFE. This likens FEP/Active Inference to Bengio's GFlowNets[4] and LeCun's Joint Embedding Predictive Architecture (JEPA)[5], which are also energy-based. On the other hand, this distinguishes FEP/Active Inference from Reinforcement Learning, which is a reward-based model of agency, and, more generally, from utility-maximising decision theories...
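The "energy-based" framing can be made explicit by rewriting the variational free energy $F$ from above as an expected energy minus an entropy, and, under one common formulation, defining the expected free energy of a policy $\pi$ as the free energy expected under the outcomes the policy is predicted to bring about. The derivation of EFE from VFE is presented differently across the literature, so the following is one standard presentation rather than the canonical form:

$$
F[q, o] \;=\; \underbrace{\mathbb{E}_{q(s)}\!\left[-\ln p(o, s)\right]}_{\text{energy}} \;-\; \underbrace{H\!\left[q(s)\right]}_{\text{entropy}},
\qquad
G(\pi) \;=\; \mathbb{E}_{q(o, s \mid \pi)}\!\left[\ln q(s \mid \pi) - \ln p(o, s)\right],
$$

where an FEP agent adjusts its beliefs to minimise $F$, and an Active Inference agent additionally selects policies $\pi$ that minimise $G(\pi)$, with preferences over outcomes encoded in the generative model's prior over $o$ rather than in a separate reward function.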
