
Free Energy Principle

Written by Roman Leventov and plex; last updated 26th Dec 2022

The Free Energy Principle (FEP) suggests that dynamic systems, including the brain and other physical systems, are organized to minimize prediction error: the difference between the predictions they make about the environment and the outcomes they actually experience. According to the FEP, such systems encode information about their environment in a way that reduces the surprisal of their inputs, minimizing prediction errors in order to maintain stability within their environment. The FEP has been influential in neuroscience and neuropsychology, and more recently has been used to describe systems on all spatiotemporal scales, from cells and biological species to AIs and societies.

FEP gives rise to Active Inference[1]: a process theory of agency that can be seen both as an explanatory theory and as an agent architecture. In the latter sense, Active Inference rivals Reinforcement Learning. It has been argued[2] that Active Inference as an agent architecture manages model complexity (i.e., the bias-variance tradeoff) and the exploration-exploitation tradeoff in a principled way, favours explicit, disentangled, and hence more interpretable belief representations, and is amenable to working within hierarchical systems of collective intelligence (which are seen as Active Inference agents themselves[3]). Building ecosystems of hierarchical collective intelligence can be seen both as a proposed solution to and as an alternative conceptualisation of the general problem of AI alignment.

FEP/Active Inference is an energy-based model of intelligence: an FEP agent minimises an informational quantity called variational free energy (VFE), and Active Inference nuances this picture further, modelling agents as minimising an informational quantity called expected free energy (EFE), which is derived from VFE. This likens FEP/Active Inference to Bengio's GFlowNets[4] and LeCun's Joint Embedding Predictive Architecture (JEPA)[5], which are also energy-based. On the other hand, this distinguishes FEP/Active Inference from Reinforcement Learning, which is a reward-based model of agency, and, more generally, from utility-maximising decision theories.
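
For reference, a standard way of writing the VFE (following common presentations such as Parr et al.[1]; the notation below is a sketch, not this wikitag's own derivation):

```latex
F[q] = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
     = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\ge\, 0} - \ln p(o)
```

Here q(s) is the agent's belief about hidden states s, p(o, s) is its generative model, and o is an observation. Because the KL term is non-negative, F upper-bounds the surprisal -ln p(o): minimising it both improves the agent's posterior beliefs (perception) and, insofar as actions change o, reduces surprisal itself.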

Active Inference is one of the most general theories of agency. It can be seen as a generalisation of the predictive coding theory of brain function (or, the Bayesian Brain hypothesis). Specifically, while predictive coding explains the agent's perception as Bayesian inference, Active Inference models both perception and action as inference under a single unifying objective: the minimisation of the agent's VFE or EFE. Active Inference also recovers Bayes-optimal reinforcement learning, optimal control theory, and Bayesian Decision Theory (aka EDT) under different simplifying assumptions[1][6].
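
To make "perception and action as inference" concrete, here is a minimal one-step sketch in the spirit of discrete-state Active Inference. The two-state world, the matrices A, B, and the preferences, as well as all function names, are illustrative assumptions rather than anything taken from the referenced texts:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# --- Illustrative generative model (all numbers made up) ---
A = np.array([[0.9, 0.2],          # p(o | s): rows = observations, cols = hidden states
              [0.1, 0.8]])
B = {0: np.array([[0.9, 0.1],      # p(s' | s, action=0): "stay"
                  [0.1, 0.9]]),
     1: np.array([[0.1, 0.9],      # p(s' | s, action=1): "switch"
                  [0.9, 0.1]])}
log_C = np.log(np.array([0.95, 0.05]))  # log-preferences over observations

def perceive(prior_s, o):
    """Perception as Bayesian inference: in this tiny exact model, the
    VFE-minimising posterior coincides with the exact Bayesian posterior."""
    unnorm = A[o] * prior_s                 # p(o | s) * p(s)
    return unnorm / unnorm.sum()

def expected_free_energy(q_s, action):
    """One-step EFE in the standard risk + ambiguity decomposition."""
    q_s_next = B[action] @ q_s              # predicted hidden states
    q_o_next = A @ q_s_next                 # predicted observations
    risk = np.sum(q_o_next * (np.log(q_o_next + 1e-16) - log_C))
    H_A = -np.sum(A * np.log(A + 1e-16), axis=0)  # entropy of p(o | s) per state
    ambiguity = H_A @ q_s_next
    return risk + ambiguity

# --- One perception-action cycle ---
q_s = np.array([0.5, 0.5])   # flat prior over hidden states
o = 1                        # an observed outcome
q_s = perceive(q_s, o)       # perception: belief update
G = np.array([expected_free_energy(q_s, a) for a in (0, 1)])
p_action = softmax(-G)       # action: favour policies with low EFE
print("posterior over states:", q_s, "| action probabilities:", p_action)
```

The risk term rewards policies whose predicted observations match the agent's preferences (the pragmatic side), while the ambiguity term penalises visiting states whose observations are noisy; this is one standard decomposition of the EFE, and it is where the exploration-exploitation balance mentioned above comes from.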

The mathematical content of Active Inference is based on Variational Bayesian methods.
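
As a toy numerical illustration of the variational bound (all numbers made up): the free energy of any candidate belief q upper-bounds the surprisal of the observation, with equality exactly when q is the Bayesian posterior.

```python
import numpy as np

p_s = np.array([0.5, 0.5])          # prior over two hidden states
p_o_given_s = np.array([0.9, 0.2])  # likelihood of the observed o under each state

def free_energy(q):
    """F[q] = E_q[ln q(s) - ln p(o, s)] for a categorical belief q."""
    log_joint = np.log(p_o_given_s) + np.log(p_s)
    return np.sum(q * (np.log(q) - log_joint))

surprisal = -np.log(p_o_given_s @ p_s)   # -ln p(o)
posterior = p_o_given_s * p_s
posterior /= posterior.sum()

for q in (np.array([0.5, 0.5]), np.array([0.9, 0.1]), posterior):
    print(q, free_energy(q), ">=", surprisal)  # F >= surprisal; equal at the posterior
```

Variational Bayesian methods exploit exactly this bound: rather than computing the posterior directly, they search a family of tractable beliefs q for the one that minimises F.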

References

  1. Parr, Thomas, Giovanni Pezzulo, and Karl J. Friston. Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. MIT Press, 2022.
  2. Friston, Karl J., Maxwell J. D. Ramstead, Alex B. Kiefer, Alexander Tschantz, Christopher L. Buckley, Mahault Albarracin, Riddhi J. Pitliya, et al. "Designing Ecosystems of Intelligence from First Principles." arXiv preprint arXiv:2212.01354 (2022).
  3. Kaufmann, Rafael, Pranav Gupta, and Jacob Taylor. "An Active Inference Model of Collective Intelligence." Entropy 23, no. 7 (2021): 830.
  4. Bengio, Yoshua. "GFlowNet Tutorial." (2022).
  5. LeCun, Yann. "A Path Towards Autonomous Machine Intelligence." Preprint posted on OpenReview (2022).
  6. Friston, Karl, Lancelot Da Costa, Danijar Hafner, Casper Hesp, and Thomas Parr. "Sophisticated Inference." Neural Computation 33, no. 3 (2021): 713-763.

Posts tagged Free Energy Principle

- Active Inference as a formalisation of instrumental convergence (Roman Leventov)
- My computational framework for the brain (Steve Byrnes)
- «Boundaries», Part 3a: Defining boundaries as directed Markov blankets (Andrew Critch)
- Agent Boundaries Aren't Markov Blankets. [Unless they're non-causal; see comments.] (Abram Demski)
- LOVE in a simbox is all you need (jacob_cannell)
- Multi-agent predictive minds and AI alignment (Jan_Kulveit)
- All the posts I will never write (Alexander Gietelink Oldenziel)
- Critique of some recent philosophy of LLMs’ minds (Roman Leventov)
- AXRP Episode 32 - Understanding Agency with Jan Kulveit (DanielFilan)
- Refinement of Active Inference agency ontology (Roman Leventov)
- AI 2027: What Superintelligence Looks Like (Daniel Kokotajlo, Thomas Larsen, elifland, Scott Alexander, Jonas V, Romeo Dean)
- Why Simulator AIs want to be Active Inference AIs (Jan_Kulveit, rosehadshar)
- FixDT (Abram Demski)
- A multi-disciplinary view on AI safety research (Roman Leventov)
- A reply to Byrnes on the Free Energy Principle (Roman Leventov)