David Rostcheck, 4/20/2023

The post-ChatGPT breakout of Large Language Models (LLMs) disoriented many technologists. The realization that a statistical model running on a server could hold a meaningful conversation forced many to confront fundamental questions about what thinking is and how people do it.

While technologists were studying and building information technologies, cognitive scientists spent the last century studying exactly those questions. LLMs emulate many aspects of human thought, so as soon as AI models could hold human-level conversations, researchers found that many techniques and observations from psychology applied to the models as readily as to human brains.

The emergence of new disciplines such as prompt engineering has tilted the technology landscape. In dealing with modern AI models, a grounding in cognitive science proves far more useful than traditional software programming knowledge. Concepts from psychology such as framing, priming, and confabulation carry over directly into interactions with LLMs. Suddenly psychology majors hold the high ground in mastering automation.
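To make priming concrete: the same question, framed two different ways, can pull noticeably different answers out of a model. Here is a minimal sketch using the openai Python package (the pre-1.0 ChatCompletion interface current as of this writing); the model name and the framing text are illustrative assumptions, not a prescription.

```python
# Priming an LLM: the same question, asked under two different framings.
# Assumes the openai package (pre-1.0 interface) and an OPENAI_API_KEY
# set in the environment. Model name and prompts are illustrative only.
import openai

QUESTION = "Did Napoleon ever visit the United States?"

def ask(system_frame: str, question: str) -> str:
    """Send one question under a given system-message framing."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_frame},
            {"role": "user", "content": question},
        ],
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]

# Frame 1: an eager storyteller, primed toward confident narrative
# and therefore more prone to confabulation when the facts run out.
storyteller = "You are an enthusiastic storyteller who never disappoints."

# Frame 2: a careful fact-checker, primed toward hedging and
# admitting uncertainty rather than inventing details.
fact_checker = (
    "You are a careful fact-checker. If you are not certain of a claim, "
    "say so explicitly rather than guessing."
)

print(ask(storyteller, QUESTION))
print(ask(fact_checker, QUESTION))
```

The difference in tone between the two answers is the point: the system message primes the model much as context primes a human respondent, which is why these psychological concepts transfer so directly.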

So how can technologists learn these unfamiliar concepts? For those wrestling with the idea that LLMs think in human-like ways and looking to build a grounding in cognitive science, here are some resources:

Understanding both AI and cognitive science makes you a rare and valuable resource. If you are a technologist looking to shine at working with AI, consider adding cognitive science training to your learning plan.