This is a linkpost for https://arxiv.org/abs/2401.08585


Sean Tull, Razin A. Shaikh, Sara Sabrina Zemljic, Stephen Clark

Abstract

In this article we present a new modelling framework for structured concepts using a category-theoretic generalisation of conceptual spaces, and show how the conceptual representations can be learned automatically from data, using two very different instantiations: one classical and one quantum. A contribution of the work is a thorough category-theoretic formalisation of our framework. We claim that the use of category theory, and in particular the use of string diagrams to describe quantum processes, helps elucidate some of the most important features of our approach. We build upon Gärdenfors' classical framework of conceptual spaces, in which cognition is modelled geometrically through the use of convex spaces, which in turn factorise in terms of simpler spaces called domains. We show how concepts from the domains of shape, colour, size and position can be learned from images of simple shapes, where concepts are represented as Gaussians in the classical implementation, and quantum effects in the quantum one. In the classical case we develop a new model which is inspired by the Beta-VAE model of concepts, but is designed to be more closely connected with language, so that the names of concepts form part of the graphical model. In the quantum case, concepts are learned by a hybrid classical-quantum network trained to perform concept classification, where the classical image processing is carried out by a convolutional neural network and the quantum representations are produced by a parameterised quantum circuit. Finally, we consider the question of whether our quantum models of concepts can be considered conceptual spaces in the Gärdenfors sense.
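
To make the quantum picture in the abstract concrete, here is a minimal single-qubit sketch in pure numpy (the gate choice, parameter value, and projector are illustrative assumptions, not the paper's circuit): a classical encoder such as a CNN would output circuit parameters, the parameterised circuit prepares a state, and a concept is an "effect" whose Born-rule probability gives a graded membership score.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def encode(theta):
    """Hypothetical parameterised circuit: rotate |0> by theta."""
    ket0 = np.array([1.0, 0.0])
    return ry(theta) @ ket0

# A concept represented as an effect: here simply the projector onto |0>.
concept_effect = np.array([[1.0, 0.0], [0.0, 0.0]])

theta = 0.3           # in the paper's setup this would come from the CNN encoder
state = encode(theta)
membership = float(state @ concept_effect @ state)  # Born rule: <psi|E|psi>
print(membership)     # ~0.98: this encoded instance fits the concept well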

Parts of the introduction:

The study of concepts has a long history in a number of related fields, including philosophy, linguistics, psychology and cognitive science (Murphy, 2002; Margolis & Laurence, 2015). More recently, researchers have begun to consider how mathematical tools from quantum theory can be used to model cognitive phenomena, including conceptual structure. The general use of quantum formalism in psychology and cognitive science has led to an emerging area called quantum cognition (Aerts, 2009; Pothos & Busemeyer, 2013). The idea is that some of the features of quantum theory, such as entanglement, can be used to account for psychological data which can be hard to model classically. Examples include ordering effects in how subjects answer questions (Trueblood & Busemeyer, 2011) and concept combination (Aerts & Gabora, 2005; Tomas & Sylvie, 2015).

Another recent development in the study of concepts has been the application of machine learning to the problem of how artificial agents can automatically learn concepts from raw perceptual data (Higgins et al., 2017, 2018). The motivation for endowing an agent with conceptual representations, and learning those representations automatically from the agent’s environment, is that this will enable it to reason and act more effectively in that environment, similar to how humans use concepts (Lake et al., 2017). One hope is that the explicit use of concepts will ameliorate some of the negative consequences of the “black-box” nature of neural architectures currently being used in AI.

In this article we present a new modelling framework for concepts based on the mathematical formalism used in quantum theory, and demonstrate how the conceptual representations can be learned automatically from data, using both classical and quantum-inspired models. A contribution of the work is a thorough category-theoretic formalisation of our framework, following Bolt et al. (2019) and Tull (2021). Formalisation of conceptual models is not new (Ganter & Obiedkov, 2016), but we claim that the use of category theory (Fong, 2019), and in particular the use of string diagrams to describe quantum processes (Coecke & Kissinger, 2017), helps elucidate some of the most important features of our approach to concept modelling. This aspect of our work also fits with the recent push to introduce category theory into machine learning and AI more broadly. The motivation is to make deep learning less ad-hoc and less driven by heuristics, by viewing deep learning models through the compositional lens of category theory (Shiebler et al., 2021).

Murphy (2002, p.1) describes concepts as “the glue that holds our mental world together”. But how should concepts be modelled and represented mathematically? There are many modelling frameworks in the literature, including the classical theory (Margolis & Laurence, 2022), the prototype theory (Rosch, 1973), and the theory theory (Gopnik & Meltzoff, 1997). Here we build upon Gärdenfors’ framework of conceptual spaces (Gärdenfors, 2004, 2014), in which cognition is modelled geometrically through the use of convex spaces, which in turn factorise in terms of simpler spaces called domains.
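
As a concrete illustration of convex regions over factored domains, here is a minimal Python sketch (the domain names, value ranges, and box representation are assumptions for illustration, not from the paper): a concept is modelled as an axis-aligned box, a particularly simple convex region, over named domains, and membership is checked domain by domain.

```python
from dataclasses import dataclass

@dataclass
class BoxConcept:
    """A concept as a convex (box-shaped) region over named domains."""
    name: str
    bounds: dict[str, tuple[float, float]]  # domain -> (low, high)

    def contains(self, point: dict[str, float]) -> bool:
        # Membership: inside the box on every domain. Boxes are convex,
        # so any mixture of two members is itself a member.
        return all(lo <= point[d] <= hi for d, (lo, hi) in self.bounds.items())

# Illustrative concept over two normalised domains, hue and size.
red_small = BoxConcept("red small", {"hue": (0.0, 0.1), "size": (0.0, 0.3)})
print(red_small.contains({"hue": 0.05, "size": 0.2}))   # True
print(red_small.contains({"hue": 0.05, "size": 0.9}))   # False
```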

Our category-theoretic formalisation of conceptual spaces allows flexibility in how the framework is instantiated and then implemented, with the particular instantiation determined by the choice of category. First we show how the framework can be instantiated and implemented classically, by using the formalisation of “fuzzy” conceptual spaces from Tull (2021), and developing a probabilistic model based on Variational Autoencoders (VAEs) (Rezende et al., 2014; Kingma & Welling, 2014). Having “fuzzy” probabilistic representations not only extends Gärdenfors’ framework in a useful way, but also provides a natural mechanism for dealing with the vagueness inherent in the human conceptual system, and allows us to draw on the toolkit of machine learning to provide effective learning mechanisms. Our new model—which we call the Conceptual VAE—is an extension of the β-VAE from Higgins et al. (2017), with the concepts having explicit labels and represented as multivariate Gaussians in a factored conceptual space.
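
To illustrate the “fuzzy” Gaussian representation, here is a short sketch (assuming diagonal covariances and made-up domain values; this is not the authors' Conceptual VAE code): membership in a labelled concept is graded by the density of a Gaussian that factorises over the domains of the space, rather than by a crisp in/out judgement.

```python
import numpy as np

def gaussian_membership(x, mean, var):
    """Density of a diagonal Gaussian at x, one dimension per domain."""
    x, mean, var = map(np.asarray, (x, mean, var))
    # The joint density is a product of per-domain densities, mirroring
    # the factorisation of the conceptual space into domains.
    return float(np.prod(np.exp(-0.5 * (x - mean) ** 2 / var)
                         / np.sqrt(2 * np.pi * var)))

# Hypothetical labelled concept over (hue, size) domains.
mean, var = [0.05, 0.20], [0.01, 0.02]
print(gaussian_membership([0.06, 0.25], mean, var))  # high: a good instance
print(gaussian_membership([0.80, 0.90], mean, var))  # ~0: a poor instance
```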

[...]

What are some of the main reasons for applying the formalism of quantum theory to the modelling of concepts? First, it provides an alternative, and interesting, mathematical structure to the convex structure of conceptual spaces (see Section 2.7). Second, this structure comes with features which are well-suited to modelling concepts, such as entanglement for capturing correlations, and partial orders for capturing conceptual hierarchies. Third, the use of the tensor product for combining domains leads to machine learning models with different characteristics to those typically employed in concept learning, such as the Conceptual VAE (i.e. neural networks which use the direct sum as the monoidal product, plus non-linearities to capture interactions between features) (Havlicek et al., 2019; Schuld & Killoran, 2019). The advantages this may bring, especially with the advent of larger, fault-tolerant quantum computers in the future, are still being worked out by the quantum machine learning community, but the possibilities are intriguing at worst and transformational at best.
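
A small numpy sketch of the second and third points (the states and dimensions are illustrative assumptions, not from the paper): the tensor product composes domains multiplicatively rather than additively, and it admits entangled joint states that encode correlations between domains which no product of single-domain states can capture.

```python
import numpy as np

colour = np.array([1.0, 0.0])   # a state in a 2-d "colour" domain
shape  = np.array([0.6, 0.8])   # a state in a 2-d "shape" domain

# Tensor product: dimensions multiply (2 x 2 = 4; d1*d2 in general).
joint_tensor = np.kron(colour, shape)
# Direct sum: dimensions add (2 + 2 = 4 here, but d1+d2 in general).
joint_sum = np.concatenate([colour, shape])

# An entangled joint state, (|00> + |11>)/sqrt(2): it lives in the
# tensor-product space but is not the tensor product of any pair of
# single-domain states, so it encodes a colour-shape correlation.
entangled = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
# Reshaping to a matrix: rank 1 would mean separable; rank 2 = entangled.
print(np.linalg.matrix_rank(entangled.reshape(2, 2)))  # 2
```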

Note that, in this article, our goal is to set out a novel framework for concept modelling, and demonstrate empirically—with two very different implementations—how concepts can be learned in practice. Further work is required to demonstrate that the framework can be applied fruitfully to data from a psychology lab—which is one of the goals of quantum cognition (Pothos & Busemeyer, 2013)—and also to agents acting in (virtual) environments—one of the goals of agent-based AI (Abramson et al., 2020). Note also that no claims are being made here regarding the existence of quantum processes in the brain, only that some cognitive processes can be effectively modelled at an abstract level using the quantum formalism.

Comments

Though it's a bit beyond me, those folks are doing some interesting work. Here's an informal introduction from Jan. 27, 2023: Bob Coecke, Vincent Wang-Mascianica, Jonathon Liu, Our quest for finding the universality of language.