Singular Learning Theory

Feb 16, 2023 by Alexander Gietelink Oldenziel

Singular Learning Theory (SLT) is a novel mathematical framework that extends and improves upon traditional statistical learning theory, using techniques from algebraic geometry, Bayesian statistics, and statistical physics. It holds great promise for the mathematical foundations of modern machine learning.
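As a one-formula orientation (a hedged gloss of Watanabe's free energy asymptotics; see the references below for precise statements and hypotheses): for $n$ i.i.d. samples, with $L_n$ the empirical negative log likelihood and $w_0$ an optimal parameter, the Bayes free energy expands as

$$F_n = n L_n(w_0) + \lambda \log n + O_p(\log \log n),$$

where $\lambda$ is the real log canonical threshold (RLCT) of the model's singularities. For regular models $\lambda = d/2$, half the parameter count; for singular models such as neural networks $\lambda$ can be much smaller, which is the precise sense in which singular models generalize better than classical theory predicts.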

See also the metauni seminar series on SLT.

The canonical references are Watanabe’s two textbooks:

  • The gray book: S. Watanabe, “Algebraic Geometry and Statistical Learning Theory”, 2009.
  • The green book: S. Watanabe, “Mathematical Theory of Bayesian Statistics”, 2018.

Some other introductory references:

  • Matt Farrugia-Roberts’ MSc thesis, October 2022, Structural Degeneracy in Neural Networks.
  • Spencer Wong’s MSc thesis, May 2022, From Analytic to Algebraic: The Algebraic Geometry of Two Layer Neural Networks.
  • Liam Carroll’s MSc thesis, October 2021, Phase transitions in neural networks.
  • Tom Waring’s MSc thesis, October 2021, Geometric Perspectives on Program Synthesis and Semantics.
  • S. Wei, D. Murfet, M. Gong, H. Li, J. Gell-Redman, T. Quella, “Deep learning is singular, and that’s good”, 2022.
  • Edmund Lau’s blog Probably Singular.
  • Shaowei Lin’s PhD thesis, 2011, Algebraic Methods for Evaluating Integrals in Bayesian Statistics.
  • Jesse Hoogland’s blog posts: a general intro to SLT and the effects of singularities on dynamics.
  • Announcement of the devInterp agenda. 
Posts in this sequence:

  • Neural networks generalize because of this one weird trick (Jesse Hoogland)
  • Interview Daniel Murfet on Universal Phenomena in Learning Machines (Alexander Gietelink Oldenziel)
  • Spooky action at a distance in the loss landscape (Jesse Hoogland, Filip Sondej)
  • Gradient surfing: the hidden role of regularization (Jesse Hoogland)
  • The shallow reality of ‘deep learning theory’ (Jesse Hoogland)
  • Empirical risk minimization is fundamentally confused (Jesse Hoogland)