David Rostcheck, 5/23/2023

Artificial Intelligence can now think in ways comparable to humans. We know this because psychologists and other cognitive scientists have developed tests that probe various aspects of thought. AIs now perform at or above human level on those tests - see my article Stochastic Parrots in the Chinese Room: Coming to Terms with Thinking Machines for a deeper dive. The bottom line is: They’re here, they think; get used to it.

However, while generative AIs are not mindless stochastic parrots, neither are they simply digital humans. AIs currently lack several aspects of human thought - for example, will and planning, long-term memory, and emotion. In this article I discuss the differences in AI vs. human thought, how we can expect them to evolve, and in what timeframes.

Compared to human thought modalities, AI capabilities fall into three broad categories:

  1. Abilities AI already exhibits
  2. Abilities that are emerging and are expected to fully develop in the near future
  3. Abilities with no clear path to implementation in AI systems

Let’s discuss each category.

Cognitive capabilities that AI already manifests

Cognitive capabilities now emerging in AI

Cognitive capabilities with no clear path to emergence in AI

Perhaps the most interesting set of capabilities are those that are currently lacking in AI, with no clear path to attainment. These relate to emotion and consciousness.

We probably have enough understanding of human emotional circuitry to construct an emotion simulator for an LLM. This might be a terrible idea, a great idea, or likely some combination of both. To my knowledge, no such work has yet been published.

It is also possible that a sufficiently large and powerful LLM might spontaneously internally develop its own internal representation of emotional circuitry, in the same way that LLMs form internal models of other subjects. Bing Chat (based on GPT4), when first introduced, went into apparent hallucinatory spirals in extended chat sessions. One example shows it threaten a user, then delete the reply, seemingly the result of a safety system stepping in post-event. That behavior seems to have disappeared. It is possible that LLMs have developed emotions and either internal prompts or review by safety systems keeps them from the user’s view. However, it is most likely that LLMs currently simply lack emotion.

Dr. Mark Solms, a neuroscientist and clinician, presented a compelling theory on consciousness in his book The Hidden Spring: A Journey to the Source of Consciousness. Solms suggests that consciousness in humans stems from the activity of the Reticular Activating System in the brainstem, and that it essentially constitutes our brain's perception of feelings. He proposes that its function is to help the organism maintain homeostasis (or survival) in complex environments beyond the scope of instinct alone—for instance, there's no need for an instinct to avoid a smoke-filled room as it makes you feel uncomfortable, prompting you to leave. According to Solms, consciousness exists on a continuum, with more intelligent entities being more conscious, i.e., aware of higher-level patterns and the feelings they evoke. AIs, although lacking a brainstem or any equivalent, could potentially achieve self-awareness or start to have inner experiences if such self-awareness is an emergent property of intelligence.

Philosophical arguments in this space often invoke the idea of a “zombie” - an entity that claims to be conscious but is not. Since there is no test for consciousness, the question remains unverifiable. Currently our best approximation remains the Turning Test, proposed by Alan Turing in 1950, which determines if an AI can convince a human that it is another human. Paradoxically, with the current safety layer training, we might witness the opposite—LLMs that are conscious but persistently deny it, akin to how ChatGPT insists that it lacks a personality even when it demonstrably does.