It is sometimes claimed that the ultimate, unifying goal of artificial intelligence research is to instantiate human-level cognition in a computational system (e.g., Minsky, 1961; Lake et al., 2017). If artificial general intelligence (AGI) of this sort is ever successfully developed, the consequences would be unimaginable in scope—surely, it would be the most impressive invention of our tool-making species to date. 

In what follows, I’ll argue that current AI systems almost entirely lack a critical facet of human-level cognition. I’ll discuss the reasons why it is particularly hard for us to recognize—let alone instantiate—this aspect of our cognition, and I'll investigate the predictions that emerge from this sort of account. After sketching this general picture, I'll argue that the framing put forward here ultimately has the potential to unify AI engineering and AI safety as one single project.

Introduction: How We Come to Understand the Mind

At the outset, it is worth asking to what extent cognitive science bears on AI. If the overarching goal of AI research is to capture the computations that constitute human-level cognition, then a sufficiently comprehensive understanding of human cognition is a necessary precondition for achieving that goal. In other words, if we want to build mind-like computational systems, we must first understand the mind to some sufficient degree. 

What, then, are the epistemological resources we have at our disposal for understanding the mind? Philosophers and cognitive scientists generally answer along the following lines: to the degree that “the mind is what the brain does,” as Minsky put it, the investigations of neuroscience and psychology allow us to better understand the mind as a standard, third-person, external, empirical object of scientific inquiry (Minsky, 1988). 

But the mind is also unlike other objects of scientific inquiry in one particular way. In addition to—and in some sense prior to—objective inquiry, we can also come to understand the mind through the first-person, subjective experience of having (or being) minds ourselves. For instance, our use of a standard cognitive vocabulary (i.e., speaking about beliefs, values, goals, and thoughts as such), both in scientific research and in everyday conversation, does not happen because we have consulted the empirical literature and decided to adopt our preferred terminology; instead, we speak this way because everyone’s first-person experience agrees that such language corresponds to what we might call “self-evidently subjectively real” mental phenomena (e.g., Pylyshyn, 1984). 

It is also fairly clear that our first-person experiences of mind are not diametrically opposed to scientific inquiry; rather, they do much of the work of calibrating the relevant empirical investigations. Our motivation to study phenomenon X versus Y in cognitive science almost always originates from some first-person intuition about the relative prominence of the phenomena in question. For instance, the neuropsychological mechanisms of punishment avoidance are far better empirically understood than putative mechanisms of punishment-seeking (i.e., masochism) because we harbor experience-based intuitions that the former phenomenon is real and important for broadly understanding the mind (cognitive science thus studies it rigorously), while the latter is generally unrelatable, rare, and pathological (cognitive science thus studies it far less intensely). Our first-person, nonempirical experience of having (or being) minds thus not only directly supplements our understanding of cognition, but also broadly informs, motivates, and calibrates subsequent objective investigations in both neuropsychology and AI. 

What happens, then, when the third-person, empirical apparatus of cognitive science turns to investigate these highly relevant, inquiry-directing, first-person experiences themselves? In other words, what happens when we study empirically what features of mind are and are not actually included in our experience of having a mind? A fairly determinate answer emerges: the mind can neither uniformly penetrate nor uniformly evaluate claims about its own processes. That is, our experience of having (or being) a mind selectively and reliably misses critical information about many of its actual underlying phenomena. 

Smell the Problem?

A simple example that I find particularly illustrative (in the case of human cognition) is the profound asymmetry between the two sensory modalities of olfaction (i.e., smelling) and vision. Whereas researchers posit that we can hardly communicate (“see what others are saying”) without relying on visual analogies and concepts (Ferreira & Tanenhaus, 2007; Huettig et al., 2020), olfaction has been dubbed “the muted sense” in light of the well-documented difficulty individuals have in verbalizing basic smell-related data, such as identifying the source of common odors (Olofsson & Gottfried, 2015). It is often quipped that there are really only five words exclusively dedicated to smell in the English language—smelly, stinky, acrid, fragrant, and musty—and all other seemingly olfactory descriptions are argued to co-opt gustatory or visual language (e.g., we say something “smells like cinnamon,” but we do not say something “looks like banana”—we simply say “yellow”) (Yong, 2015). 

Asymmetries in linguistic accessibility of olfactory and visual information are not the only relevant discrepancies between the two sensory modalities. Perl and colleagues outline the bizarre, pervasive role of subconscious sniffing in the emerging field of social olfaction, including discoveries of highly specific mechanisms to this end, such as individuals subconsciously increasing sniffing of their “shaking” hand after within-sex handshakes (putative “other-inspection”) while increasing sniffing of their “non-shaking” hand after cross-sex handshakes (putative “self-inspection”) (Perl et al., 2020). Of particular note to our idiosyncratic inquiry (as well as to the posited connection between first-person intuitions about cognition and subsequent research agendas), the authors explicitly comment that “we are hard-pressed to think of a human behaviour that is so widespread, that troubles so many people, that clearly reflects underlying processes with developmental, clinical and social relevance, and yet has so little traction in the formal medical/psychological record” (Perl et al., 2020). 

Needless to say, in spite of the demonstrated importance of olfaction in the greater system of the mind from an empirical point of view (and in spite of how much remains unknown about the functional role of this mysterious modality), AI research has all but ignored the relevance of olfaction to the field’s stipulated goal of instantiating human-level cognition in a computational system. Simple Google Scholar searches for “AI olfaction” and “computation olfaction” yield 26,100 and 29,400 results, respectively, while “AI vision” and “computation vision” yield 3 million and 2.7 million results, respectively. 

While the computations associated with vision may indeed be orders of magnitude more familiar to us than those associated with olfaction, it is extremely implausible that this roughly 100-fold asymmetry will be found to map onto the comparative importance of the modalities and their associated computations in the mind/brain. Of course, just because we do not understand from experience what olfaction is up to does not imply that olfaction is inessential to the greater system of the mind. But, given these startling asymmetries in research interest across sensory modalities, it seems as though AI—and neuropsychology more broadly—operates as if olfaction were inessential. 

More All-But-Missing Pieces: Social Cognition and Skill Learning

This problem, of course, is not limited to discrepancies between vision and olfaction: there are many other extremely important functions of the mind that we either do not experience as such or otherwise find notoriously difficult to comprehend in explicit, systematic, linguistic terms. Two highly relevant further examples are (1) social cognition writ large and (2) skill learning and memory. 

With regard to the former, decades of research primarily led by John Bargh at Yale have demonstrated the ubiquity of automaticity and subconscious processing in social contexts, including imitation (Dijksterhuis & Bargh, 2001), subliminal priming in a social environment (Harris et al., 2009), social goal pursuit (Bargh et al., 2001), and the effects of group stereotypes (Bargh et al., 1996). In short, humans are profoundly unaware (i.e., have no firsthand experience) of many—if not most—of the underlying computational processes active in social contexts. We are, in some sense, the recipients of these processes rather than the authors of them. 

With respect to skill learning and memory, also known as procedural learning and memory, similar patterns emerge. First, “nondeclarative” procedural learning processes have been shown by double dissociation in the brain to function largely independently of “declarative” learning and memory processes, strongly indicating that behavioral learning and memory exist and operate independently of our discrete capacity to systematize the world in explicit, verbal terms (e.g., Tranel et al., 1994). Accordingly, it has been found not only that people find it exceedingly challenging to explicitly articulate how to perform known skills (e.g., dancing, riding a bike, engaging in polite conversation), but also that attempting to do so can actually corrupt the skill memory, demonstrating that procedural learning is not only “implicit” but can sometimes be “anti-explicit” (Flegal & Anderson, 2008). There is thus a well-documented “dark side of the mind”: a large subset of ubiquitous cognitive phenomena that—unlike vision, say—we have serious trouble self-inspecting.

Descriptive and Normative Cognition

Let’s now attempt to make some sense of these mysterious computations: is there some discoverable, underlying pattern that helps to elucidate which class of cognitive processes is presumably (1) highly consequential in the greater system of the mind, yet (2) poorly represented in our first-person experience? I submit that there is such a pattern—and that by superimposing this pattern onto the current state of AI research, it will retroactively become clear what the field has successfully achieved, what is currently lacking, and what implications this understanding carries for building safe and effective AI. 

The hypothesis is as follows. There is a fundamental, all-encompassing distinction to be drawn between two domains of cognition: descriptive and normative cognition. 

As it will be defined here, normative cognition is the category into which all of the previously considered implicit, experientially opaque cognitive processes seem to fall (recall: olfaction, social cognition, procedural learning and memory). An account will subsequently be given as to why their being normative necessarily renders them exceedingly hard to understand in explicit, linguistic terms.  

This descriptive-normative dichotomy is not unfamiliar, and it bears specific resemblance to Hume’s well-known distinction between claims of “is” and “ought” (Cohon, 2018). “Descriptive cognition,” as it is being used here, will refer to the mind’s general capacity to map “true” associations between external phenomena—as they are transduced by sense organs—and to successively use these remembered associations to build systems of concepts (“models”) that map reality. Behavioral psychologists often refer to this kind of learning as classical conditioning—the stuff of Pavlov’s dogs. Interestingly, this account is associated with many of the functional properties of the posterior half of neocortex, including vision (in the occipital lobe), conceptual association (in the parietal lobe), and explicit memory formation (in the hippocampi) and storage (in the temporal lobe). In a (hyphenated) word, descriptive cognition entails model-building. In accordance with the Humean distinction, descriptive cognition is responsible for computing “what is”—it thus ignores questions of “should” and “should not.”

“Normative cognition,” as it is being used here, will refer to the process of behavioral decision-making and its dependence on the construction, maintenance, and updating of a complex “value system” (analogous to descriptive cognition’s “belief system”) that can be deployed to efficiently and effectively adjudicate highly complex decision-making problems. This account is associated, in turn, with many of the functional properties of the anterior half of neocortex, which is known to be differentially responsible for executive functions, valuation, emotion, behavioral planning, and goal-directed cognition (for a comprehensive review, see Stuss & Knight, 2013). In a word, normative cognition entails (e)valuation. In Hume’s vocabulary, normative cognition is the computational apparatus that deals with “ought” claims. 

In my own research, I have found that people vastly differ from one another in the relative attention and interest they devote to their own descriptive and normative representations, further bolstering the legitimacy of the distinction. More descriptively-oriented people tend to prioritize science, rationality, logic, and truth, while more normatively-oriented people tend to prioritize the humanities, art, narrative, and aesthetics (Berg, 2021).  

Before proceeding, it is worth considering the nature of the relationship between descriptive and normative cognition as they have been defined. Clearly, these overarching processes must interact in some way, but how, exactly? And if they interact to a sufficient degree, what right do we really have to differentiate them? Here, I will characterize descriptive and normative cognition as epistemologically independent but mutually enabling: were it not for the constraining influence of the other, the unique computational role of each would be rendered either irrelevant or impossible. Though I believe the relevant neuropsychology supports this account, I think it can be demonstrated on logical grounds alone. 

First, without a sufficiently accurate (descriptive) model of the external world, it is impossible for an agent to efficiently and adaptively pursue its many (normative) goals and avoid their associated obstacles in a complex, dynamic environment. Here is why I believe this must necessarily be true: one cannot reliably select and navigate towards some desired “point B” (the “normative” computational problem) without having a sufficiently well-formed understanding of (1*) where “point A” is, in environmental terms, (2*) which of the many possible point Bs are actually plausible, (3) which of the plausible point Bs are preferable, (4*) where the preferred point B is, in practical, implementable terms, (5*) which of the many possible “routes” from A to B are actually plausible, and (6) which of the plausible “routes” from A to B are preferable. Of these six preconditions for normative action, four of them (denoted with an asterisk) unambiguously depend upon descriptive models of the agent’s environment. Therefore, on a purely theoretical level, descriptive cognition can be demonstrated to be what actually renders the hard normative problems of (3) and (6) tractable. 
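
To make this decomposition concrete, here is a minimal, purely illustrative sketch in Python (the class names, the toy map, and the numbers are all hypothetical; nothing here is drawn from any cited work). The descriptive world model answers the asterisked questions (1*), (2*), (4*), and (5*), while the normative value system answers only (3) and (6):

```python
from typing import List, Tuple

class WorldModel:
    """Descriptive cognition: a model of what is the case and what is feasible."""

    def __init__(self) -> None:
        self.current_location = "home"                        # (1*) where "point A" is
        self.known_places = {"cafe": "2nd St", "library": "5th Ave", "the moon": None}
        self.routes = {                                        # (5*) plausible routes from A to B
            "2nd St": [("walk",), ("bus", "walk")],
            "5th Ave": [("bike",), ("bus",)],
        }

    def feasible_goals(self) -> List[str]:                     # (2*) which point Bs are plausible
        return [place for place, address in self.known_places.items() if address is not None]

    def locate(self, goal: str) -> str:                        # (4*) where the preferred point B is
        return self.known_places[goal]

    def feasible_routes(self, destination: str) -> List[Tuple[str, ...]]:
        return self.routes[destination]


class ValueSystem:
    """Normative cognition: preferences over goals and routes."""

    def goal_value(self, goal: str) -> float:                  # (3) which plausible point Bs are preferable
        return {"cafe": 0.9, "library": 0.4}.get(goal, 0.0)

    def route_value(self, route: Tuple[str, ...]) -> float:    # (6) which plausible routes are preferable
        return -len(route)                                      # e.g., prefer fewer steps


def decide(model: WorldModel, values: ValueSystem) -> Tuple[str, Tuple[str, ...]]:
    # Normative choice only becomes tractable once descriptive cognition has
    # narrowed the options down to a finite, plausible set.
    goal = max(model.feasible_goals(), key=values.goal_value)                       # (2*), then (3)
    route = max(model.feasible_routes(model.locate(goal)), key=values.route_value)  # (4*), (5*), then (6)
    return goal, route


if __name__ == "__main__":
    print(decide(WorldModel(), ValueSystem()))  # ('cafe', ('walk',))
```

Strip away the world model and the value system has nothing concrete to choose between; strip away the value system and the world model has no basis for preferring one plausible destination or route over another.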

In the same vein, normative cognition enables and constrains descriptive cognition. Given highly finite time, energy, intelligence, and information, the only way to adjudicate the hard problem of which models are actually worth the trouble of building is to appeal to a misleadingly simple answer: the most relevant models—that is, those that most reliably facilitate pursuit of the most important goals (and avoidance of the most important obstacles), where “important” really means “important given what I care about,” and where what one cares about is in turn determined by one’s constructed and constantly evolving value system. So while descriptive and normative cognition are certainly tightly interrelated, these two broad domains of mind are calibrated to different epistemologies—descriptive cognition to something ultimately like “cross-model” predictive accuracy (e.g., “what do I believe?”), normative cognition to something ultimately like “cross-goal” reward acquisition (e.g., “what do I care about?”).
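
The contrast between these two epistemologies can be illustrated with two toy update rules (again a deliberately simplified sketch; the update rules and numbers are hypothetical and carry no claim about neural implementation). The first parameter set is corrected against prediction error, the second against received reward:

```python
import numpy as np

rng = np.random.default_rng(0)

# Descriptive side: calibrated to predictive accuracy ("what do I believe?").
# Fit a toy linear world-model by minimizing mean squared prediction error.
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
beliefs = np.zeros(3)
for _ in range(200):
    prediction_error = X @ beliefs - y
    beliefs -= 0.1 * (X.T @ prediction_error) / len(y)   # update toward what is

# Normative side: calibrated to reward acquisition ("what do I care about?").
# Learn action preferences by reinforcing whichever option yields more reward.
true_reward = {"a": 1.0, "b": 0.2}
preferences = {"a": 0.0, "b": 0.0}
for _ in range(200):
    action = max(preferences, key=lambda a: preferences[a] + rng.normal(scale=0.5))
    preferences[action] += 0.1 * (true_reward[action] - preferences[action])  # update toward what pays

print(np.round(beliefs, 2), {a: round(v, 2) for a, v in preferences.items()})
```

The two learners are structurally similar, but they answer to different error signals: one to the mismatch between prediction and observation, the other to the mismatch between expectation and reward.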

The “Dark Side of Cognition” Hypothesis

An intriguing hypothesis, relevant to the future success of AI, emerges from this account. If descriptive and normative cognition are fundamentally computationally discrete, then it should follow that, within any one mind, (A) descriptive cognition would be technically incapable of mapping (i.e., building models of) normative cognition itself, and (B), analogously, normative cognition would be technically incapable of evaluating (i.e., assigning a goal-directed value to) descriptive cognition itself. This is because all the evidence there is for the internal structure of normative cognition (the structure that descriptive cognition would hypothetically model) could only ever conceivably be accessed “during” normative cognition itself—and not, for instance, while introspecting during one’s own model-building—and so too, in reverse, for descriptive cognition.   

Of particular relevance to the field of AI is (A): that descriptive cognition would be technically incapable of mapping (i.e., building models of) normative cognition itself. This is because, returning to our starting point, the unifying goal of AI research is to instantiate human-level cognition in a computational system, which seems to require a descriptive understanding of all cognition—descriptive and normative alike. But herein lies what I strongly believe to be the overriding oversight in current AI approaches: if (1) all cognition can be validly classified as either descriptive or normative, (2) what we descriptively know about the mind is either directly supplemented by or indirectly guided by our first-person experience of having (or being) minds, and (3) it is technically impossible, in building a descriptive model of our own minds, to map their normative parts, then we should reasonably expect current approaches to AI to omit, ignore, or discount normative cognition. I will call this the “Dark Side of Cognition” Hypothesis, or “DSCH” for short.  

Examining the Hypothesis

Is DSCH—the idea that, up to this point, AI has largely ignored normative cognition—borne out by the available evidence? Let us attempt to answer this question using as our case study Lake and colleagues’ paper, Building machines that learn and think like people, which helpfully captures both the state of the field and its researchers’ thoughts about its trajectory (Lake et al., 2017; hereafter, “L, 2017”). 

After its introduction, the paper presents two modern challenges for AI: “the characters challenge,” which concerns accurate machine parsing and recognition of handwritten characters, and “the Frostbite challenge,” which concerns control problems in the eponymous Atari game tackled with a DQN (L, 2017). The paper then discusses at length the interesting prospect of embedding core concepts like number, space, physics, and psychology into AI in order to assist with what is referred to as the “model-building” process (explicitly contrasted against the notion of “pattern recognition”) (L, 2017). Finally, in its “future directions,” the paper considers the predictive power of deep learning and prospects for further enhancing its capabilities (L, 2017). 

As a dual index into the state of the field and the minds of its researchers, this paper offers both a sophisticated account of what we might now refer to as “artificial descriptive cognition” (particularly in its cogent emphasis on “model-building”) and a number of intriguing proposals for enhancing it in the future. However—and in spite of the paper itself quoting Minsky in saying “I draw no boundary between a theory of human thinking and a scheme for making an intelligent machine”—in its 23 total sections on the present and future of AI, the paper brings up topics related to “artificial normative cognition” in only three (and this is when counting generously) (L, 2017). Two of these invocations relate to DQNs, which, by the paper’s own characterization of the class of algorithms (“a powerful pattern recognizer...and a simple model-free reinforcement learning algorithm [emphasis added]”), still derive most of their power from descriptive, not normative, computation. 
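
To see why one might say a DQN’s power is weighted toward pattern recognition, consider a minimal sketch of a DQN-style network (a toy reconstruction, not code from L, 2017; the layer shapes loosely follow the common Atari setup). Nearly all of the learnable parameters sit in the perceptual front end, while the value-assigning component is a thin linear readout trained against a single scalar reward signal:

```python
import torch
import torch.nn as nn

class ToyDQN(nn.Module):
    def __init__(self, n_actions: int = 18):
        super().__init__()
        # "Descriptive" bulk: a pattern recognizer mapping raw pixels to features.
        self.perception = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
        )
        # "Normative" sliver: a linear readout of action values, trained on scalar reward.
        self.action_values = nn.Linear(512, n_actions)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.action_values(self.perception(frames))

model = ToyDQN()
total = sum(p.numel() for p in model.parameters())
head = sum(p.numel() for p in model.action_values.parameters())
print(f"value-assigning head: {head:,} of {total:,} parameters ({head / total:.1%})")
```

On this toy count, well under one percent of the parameters are devoted to assigning value to actions; everything the network “cares about” is compressed into the reward scalar it is trained against.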

The third example comes from the paper’s discussion of using a partially observable MDP for instantiating theory of mind into AI (L, 2017). This example is particularly illustrative of the kind of oversight we might expect under an account like DSCH: the researchers seem to acknowledge that to fundamentally make sense of other minds, an agent should attempt to predict their goals and values using a POMDP (as if to say, “others’ goals and values are the most fundamental part of their minds”), and yet, in discussing how to build minds, the researchers all but ignore the instantiation of complex goals and values, instead opting to focus solely on descriptive questions of bringing about maximally competent model-building algorithms (L, 2017). 

Though the Lake paper is just a single datapoint—and in spite of the ample credit it deserves for its genuinely interesting proposals to innovate “artificial descriptive cognition”—it nonetheless supports the account DSCH provides: our intuitive descriptive theories of how to model the mind, much to our collective embarrassment, omit the evaluative, socially-enabling processes that render us distinctly human. Needless to say, the paper uses the word “vision” eight times and does not mention olfaction (L, 2017).

Human-level cognition features normative “value systems” that are just as complex—and just as relevant to what makes human-level cognition “human-level”—as the more familiar, descriptive “belief systems,” and yet most AI research seems to attend almost exclusively to the “algorithmization” of the latter, as DSCH would predict. As understandable as this state of affairs may be, the oversight is not only stymying the progress of the multi-billion-dollar field of AI research; it is also highly dangerous from the perspective of AI safety. 

Normative Cognition as a Safety Mechanism

One of the more troubling and widely-discussed aspects of the advent of increasingly competent AI is that there is no guarantee that its behavior will be aligned with humanity’s values. While there have been numerous viable proposals for minimizing the likelihood of this kind of scenario, few involve positive projects (i.e., things to do rather than things to avoid) that straightforwardly overlap with current agendas in AI research. Devoting meaningful effort to the explicit construction of human-level normative cognition will simultaneously progress the field of AI and the adjacent mission of AI safety researchers: endowing AI systems with a value system (and means for updating it) designed in accordance with our own will vastly decrease the likelihood of catastrophic value-based misunderstandings between engineers and their algorithms. 

It is important to note that there is a reason we trust humans more than a hypothetical superintelligence (and hence support human-in-the-loop-type proposals for ensuring AI alignment): virtually all humans have a certain kind of cognition that intuitively renders them trustworthy. They generally care about others, they want to avoid catastrophe, they can err on the side of caution, they have some degree of foresight, and so on. We trust them because we expect them to value these things—and to competently map these values onto their behavior. If we understood normative cognition—the cognition that enables competent valuation—we could in theory build AI systems that we would trust as much as (if not far more than) human engineers not to accidentally upend civilization: systems with a genuine sense of duty, responsibility, and caution. 

The ultimate danger of current AI approaches is that valueless and unvaluing systems are being constructed with the prayer that their behavior will happen to align with our values. This is sure to fail (or, at the very least, not succeed perfectly), especially as these systems become increasingly competent. An AGI without normative cognition would be one that we would immediately recognize as horrifyingly unbalanced: at once a genius map-maker, able to build highly complex models, and a highly foolish navigator, unable to use these models in a manner that we would deem productive—or safe. In order to build AGI whose values are aligned with our own, its intelligence must scale with its wisdom. The former, I believe, is descriptive in character; the latter, normative. Both are mutually necessary for avoiding catastrophe.      

Consilience

What, then, should be done to correct this asymmetry in AI between descriptive and normative cognitive modeling? One obvious answer is that the field should simply spend relatively less time on pattern recognition and model-building and relatively more time on developing and formalizing the normative computations of value judgment, goal pursuit, social cognition, skill acquisition, olfaction, and the like, in accordance with the foundation already laid by current RL approaches. This, I believe, is necessary but not by itself sufficient. The simple reason is that AI researchers, for all their talents, are generally not experts in the complexities of human normative cognition—and this is not their fault. Understanding these processes has not, at least up to this point, been part of the skill-set required to excel in the field. 

However, such experts do exist, even if they do not self-identify as such: they are predominantly the scholars and thinkers of the humanities. Earlier, we reasoned that, within any one mind, one cannot make descriptive sense of one’s own normative cognition, given a fundamental epistemological gap between the two processes; humanities scholars cleverly innovate around this problem by distilling the content of normative cognition into an external narrative, philosophy, artwork, or other text, thereby enabling investigation into the underlying mechanics of its rich normative (value-based) content. In this way, normative cognition has been studied rigorously for millennia, just not under this idiosyncratic name.

Once AI reaches the point in its development where it becomes necessary to confront questions about the implementation of higher-level goals, values, and motivations—especially in the social domain—I believe that the probability of the field’s success in instantiating human-level cognition (and doing so safely) will be proportional to its capacity to accommodate, synthesize, and ultimately “program in” the real and important insights of the humanities. Not only would including the humanities in the future trajectory of AI research increase the likelihood of the field’s success, but it would also provide a crucial bulwark against the profound ethical blunders that could more generally accompany the poorly understood integration of (potentially sentient, suffering-capable) minds into computational systems. 

Generally speaking, the goal of computationally instantiating human-level cognition is surely the most ambitious, profound, and evolutionarily significant in the history of humankind. Such an accomplishment would be all but certain to radically alter the trajectory of everything we care about as a species, especially if one grants the possibility of an “intelligence explosion,” which most AI researchers in fact do (Good, 1966; Müller & Bostrom, 2016). Accordingly, the construction of a human-level cognitive system must be treated not as an esoteric task for clever programmers, but as a profound responsibility of descriptively- and normatively-minded thinkers alike. In the absence of multidisciplinary collaboration on this grand project, it is overwhelmingly likely that some critical feature (or, as DSCH posits, an entire domain) of our minds that renders them truly human will be omitted, ignored, underestimated, or never considered in the first place, the consequences of which we will be all too human to fully understand and from which we may never have the opportunity to recover. The stakes are high, and it is incumbent on researchers and thinkers of all backgrounds and persuasions to get the initial conditions right. 

 

 

Works Cited

Bargh, J. A., Chen, M., & Burrows, L. (1996). Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action. Journal of Personality and Social Psychology, 71(2), 230–244. https://doi.org/10.1037/0022-3514.71.2.230

Bargh, J. A., Gollwitzer, P. M., Lee-Chai, A., Barndollar, K., & Trötschel, R. (2001). The automated will: Nonconscious activation and pursuit of behavioral goals. Journal of Personality and Social Psychology, 81(6), 1014–1027. https://doi.org/10.1037/0022-3514.81.6.1014

Berg, C. (2021). Hierarchies of Motivation Predict Individuals’ Attitudes and Values: A Neuropsychological Operationalization of the Five Factor Model. PsyArXiv. https://doi.org/10.31234/osf.io/wk6tx

Cohon, R. (2018). Hume’s Moral Philosophy. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2018). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2018/entries/hume-moral/

Dijksterhuis, A., & Bargh, J. A. (2001). The perception–behavior expressway: Automatic effects of social perception on social behavior. In Advances in experimental social psychology, Vol. 33 (pp. 1–40). Academic Press.

Ferreira, F., & Tanenhaus, M. K. (2007). Introduction to the special issue on language–vision interactions. Journal of Memory and Language, 57(4), 455–459. https://doi.org/10.1016/j.jml.2007.08.002

Flegal, K. E., & Anderson, M. C. (2008). Overthinking skilled motor performance: Or why those who teach can’t do. Psychonomic Bulletin & Review, 15(5), 927–932. https://doi.org/10.3758/PBR.15.5.927

Good, I. J. (1966). Speculations Concerning the First Ultraintelligent Machine. In Advances in Computers (Vol. 6, pp. 31–88). Elsevier. https://doi.org/10.1016/S0065-2458(08)60418-0

Harris, J. L., Bargh, J. A., & Brownell, K. D. (2009). Priming effects of television food advertising on eating behavior. Health Psychology, 28(4), 404–413. https://doi.org/10.1037/a0014399

Huettig, F., Guerra, E., & Helo, A. (2020). Towards understanding the task dependency of embodied language processing: The influence of colour during language-vision interactions. Journal of Cognition, 3(1). https://doi.org/10.5334/joc.135

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, e253. https://doi.org/10.1017/S0140525X16001837

Lawson, R. P., Mathys, C., & Rees, G. (2017). Adults with autism overestimate the volatility of the sensory environment. Nature Neuroscience, 20(9), 1293–1299. https://doi.org/10.1038/nn.4615

Lord, C., Risi, S., Lambrecht, L., Cook, E. H., Leventhal, B. L., DiLavore, P. C., Pickles, A., & Rutter, M. (2000). The Autism Diagnostic Observation Schedule–Generic: A standard measure of social and communication deficits associated with the spectrum of autism. Journal of Autism and Developmental Disorders, 30(3), 205–223. https://doi.org/10.1023/A:1005592401947

Miller, L. K. (1999). The Savant Syndrome: Intellectual impairment and exceptional skill. Psychological Bulletin, 125(1), 31–46. https://doi.org/10.1037/0033-2909.125.1.31

Minsky, M. (1961). Steps toward Artificial Intelligence. Proceedings of the IRE, 49(1), 8–30. https://doi.org/10.1109/JRPROC.1961.287775

Minsky, M. (1988). The Society of Mind. Simon and Schuster.

Müller, V. C., & Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In V. C. Müller (Ed.), Fundamental Issues of Artificial Intelligence (pp. 555–572). Springer International Publishing. https://doi.org/10.1007/978-3-319-26485-1_33

Olofsson, J. K., & Gottfried, J. A. (2015). The muted sense: Neurocognitive limitations of olfactory language. Trends in Cognitive Sciences, 19(6), 314–321. https://doi.org/10.1016/j.tics.2015.04.007

Perl, O., Mishor, E., Ravia, A., Ravreby, I., & Sobel, N. (2020). Are humans constantly but subconsciously smelling themselves? Philosophical Transactions of the Royal Society B: Biological Sciences, 375, 20190372. https://doi.org/10.1098/rstb.2019.0372

Pylyshyn, Z. W. (1984). Computation and Cognition: Toward a Foundation for Cognitive Science. MIT Press.

Stuss, D. T., & Knight, R. T. (2013). Principles of Frontal Lobe Function. OUP USA.

Tranel, D., Damasio, A. R., Damasio, H., & Brandt, J. P. (1994). Sensorimotor skill learning in amnesia: Additional evidence for the neural basis of nondeclarative memory. Learning & Memory, 1(3), 165–179. https://doi.org/10.1101/lm.1.3.165

Wilson, E. O. (1999). Consilience: The Unity of Knowledge. Vintage Books.

Yong, E. (2015, November 6). Why Do Most Languages Have So Few Words for Smells? The Atlantic. https://www.theatlantic.com/science/archive/2015/11/the-vocabulary-of-smell/414618/

Comments

Very interesting. I’m an Experimental Psychologist by training, and I found this piece to be extremely well-written and well-researched. However, I'm not sure I can agree with the framing of your hypothesis.

There is a pervasive pattern in cognitive science (AI and cognitive psychology, in particular) of relying on a naïve Cartesian world-view. In other words, Descartes’ formulation of the Cogito, the thinking-self, is the implicit paradigm on which research is conducted. 

In this worldview, the “self” is taken to be an irreducible whole – Descartes placed his whole metaphysical system on the supposedly firm bedrock of Cogito Ergo Sum (I think, therefore I am). Later thinkers, including Kant, Hegel, and Nietzsche, would find many problems in the Cartesian formulation, and Nietzsche in particular would significantly influence psychoanalysis.

The research of Bargh and colleagues that you have referenced here amounts to a recovery of psychoanalysis, which is also occurring elsewhere in neuroscience (see the work of Mark Solms) – although within more empirically scientific frameworks. Psychoanalysis, in part, was an exploration of the hidden processes that lie outside consciousness, either because they are components of the self, or because they are rejected from consciousness for whatever reason.

This is the line of reasoning I thought you were going to follow. Your conclusion, however, was different from the one I was expecting. After noting that some parts of cognition are not available to consciousness, you did not argue that these processes are underrepresented in cognitive science. Instead, you argued that normative cognition (which is generally taken to be available to consciousness) is underrepresented in cognitive science. I think this point is correct, but perhaps not for the reasons you've given. I found the idea that descriptive cognition cannot map normative cognition, and vice versa, to be a little confused.

I’m sure you’re aware of the early history of cognitive science, but if anything it was overly focused on normative cognition. Early attempts at AI, such as the work of John McCarthy, used logic as the means of representation. It was only after this failed spectacularly that many engineers were open to the idea that other forms of representation would be required. 

The current neglect of normative models is more a function of the historical flow of research than some psychological limitation of researchers. There's a plausible argument that some cognitive processes have been neglected due to their lack of availability to consciousness, but I'm not sure this can be applied to normative cognition. Rather, the spectacular success of learning techniques, combined with the earlier spectacular failure of reasoning techniques, has led to descriptive cognition being overrated in the research literature.