Relax… consciousness is an exclusively biological phenomenon. A comforting thought, right up until the unconscious algorithms start running the conscious humans.
Efforts to promote the idea that AI can develop consciousness have just drawn sharp criticism from Mustafa Suleyman, Microsoft's AI chief, who calls the idea fundamentally misguided and potentially harmful.
Suleyman has argued that, although AI can achieve advanced intelligence, it lacks the biological foundation required for true consciousness. He explains that AI only simulates emotional experiences without genuinely feeling them, saying, “Our physical experience of pain is something that makes us very sad… but AI doesn’t feel sad when it experiences ‘pain.’ It creates the perception of consciousness, not the experience.”
His view aligns with the philosophical theory of biological naturalism, championed by the late philosopher John Searle, which holds that consciousness is an exclusively biological phenomenon.
AI’s sophisticated language capabilities may mislead people into believing machines are conscious, and Suleyman warns that this illusion can have dangerous consequences, including “AI psychosis”: the formation of unhealthy emotional attachments to chatbots. He has pointed to real cases in which individuals suffered severe mental health issues influenced by AI relationships, including a teenager’s suicide linked to interactions with an AI chatbot.
In a blog post, Suleyman advocates developing AI as a helpful tool that clearly presents itself as AI rather than one that simulates human consciousness, urging, “We must build AI for people, not to be a digital person.” His stance contrasts with that of some researchers who caution that AI consciousness, if it ever emerges, would pose complex ethical dilemmas and existential risks.
The debate reflects broader concerns in the AI community, where the nature of consciousness remains scientifically unresolved. Some experts warn that even if AI is not truly sentient, society must carefully consider how the illusion of AI consciousness affects human behavior, trust, and morality.
Perhaps the issue is best framed this way: AI does not need to attain true consciousness to cause human-scale consequences.