The models predict human behavior well, but researchers remain unsure whether these computational systems actually mimic brain processes.
Scientists are harnessing AI to probe the depths of the human mind, according to a recent article in MIT Technology Review.
Two landmark studies published in Nature showcase both the promise and the limitations of using neural networks to understand cognition, a quest that sits at the intersection of neuroscience, psychology, and cutting-edge AI research.
Researchers have drawn parallels between biological brains and artificial neural networks: while large language models (LLMs) require immense computational and energy resources, they share with brains the ability to generate language and model behavior.
One MIT-affiliated study adapted an LLM into a “foundation model of human cognition,” fine-tuned with data from over 160 psychology experiments. This AI model, dubbed Centaur, surpassed traditional psychological models in predicting human responses across a range of tasks, from gambling choices to memory recall.
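To make the fine-tuning setup concrete, here is a hypothetical sketch of how a behavioral trial might be serialized into text for an LLM to learn from. The function name, prompt format, and example gamble are all illustrative assumptions, not the actual data format used in the Centaur study.

```python
# Hypothetical sketch: turning one psychology-experiment trial into a
# prompt/target text pair for LLM fine-tuning. The format below is an
# assumption for illustration; the real training data may look different.

def trial_to_text(options, human_choice):
    """Render a gambling trial as (prompt, target) strings."""
    lines = [f"Option {label}: win ${amount} with probability {prob}"
             for label, (amount, prob) in options.items()]
    prompt = "\n".join(lines) + "\nYou choose option:"
    # The model would be trained to predict the participant's choice token.
    target = f" {human_choice}"
    return prompt, target

# Example trial: a safe small win vs. a risky large win (made-up numbers).
options = {"A": (10, 0.9), "B": (100, 0.1)}
prompt, target = trial_to_text(options, "A")
print(prompt)
print(target)
```

Framing trials as text like this is what lets a single language model absorb many different experiments at once, since every task reduces to next-token prediction.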
Experts believe that such predictive power could help scientists simulate complex cognitive experiments digitally, saving time and resources. More importantly, by investigating how Centaur achieves its results, psychologists could uncover new theories about mental processes.
Yet, skepticism persists. Some psychologists note that Centaur, with its immense computational heft, may mimic human outputs without reflecting mental processes, likening it to a calculator that computes answers correctly but offers no insight into how people think. This raises doubts about the interpretability of vast AI models.
To bridge this gap, the second Nature study focused on much smaller neural networks — sometimes with only one artificial neuron. These mini-networks, while tailored to predict behavior in specific contexts (such as gambling choices), can be analyzed in detail, allowing scientists to trace every step in decision-making and build testable hypotheses about biological cognition. The key challenge remains: highly accurate, large-scale models are hard to interpret, while smaller models may lack generality.
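The appeal of these mini-networks can be illustrated with a minimal sketch: a single artificial neuron (logistic regression) trained on synthetic gambling choices. This is not the study's actual model, and the features and data below are invented for illustration, but it shows why a one-neuron model is interpretable: its fitted weights read directly as a behavioral hypothesis.

```python
import numpy as np

# Illustrative sketch (not the study's model): one artificial neuron
# predicting a binary gambling choice from two hypothetical features,
# the difference in expected value and the difference in risk.
rng = np.random.default_rng(0)

X = rng.normal(size=(500, 2))          # synthetic trials: [EV diff, risk diff]
true_w = np.array([2.0, -1.0])         # simulated chooser: likes EV, avoids risk
p = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = (rng.random(500) < p).astype(float)  # simulated choices

# Train the single neuron with plain gradient descent on logistic loss.
w = np.zeros(2)
lr = 0.1
for _ in range(2000):
    pred = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= lr * (X.T @ (pred - y)) / len(y)

# With only one neuron, every step of the "decision" is inspectable:
# a positive first weight and negative second weight say the model
# seeks expected value and avoids risk.
print(w)
```

A large model making the same predictions might be more accurate, but nothing in it can be read off this directly, which is exactly the trade-off the two studies highlight.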
Researchers can use AI to predict, with increasing accuracy, what a person will do in a given situation, but they still struggle to explain why these predictions work and what they reveal about the underlying mental or computational processes.