Introduction: The AI Consciousness Debate
In recent years, artificial intelligence (AI) has advanced at lightning speed — from chatbots that can hold human-like conversations to virtual companions that seem to “understand” our emotions. But are these systems really conscious? Do they “feel” or merely mimic emotions?
This question has divided the tech and scientific communities. However, Microsoft AI Chief Mustafa Suleyman has made his stance clear: AI will never be conscious. In a bold statement, he called the very idea of researching AI consciousness “absurd.”
Suleyman, who co-founded Google DeepMind and now leads Microsoft’s AI division, argues that true consciousness is purely biological — something no machine, algorithm, or neural network can replicate.
Let’s explore what he said, why he believes this, and what it means for the future of AI development.
Who Is Mustafa Suleyman?
Before diving into his ideas, it’s important to know who Mustafa Suleyman is and why his opinions carry such weight in the tech world.
A Brief Background
- Co-founder of DeepMind: Suleyman co-founded DeepMind, one of the world’s leading AI research companies, in 2010; Google acquired it in 2014.
- Ethical AI Advocate: Throughout his career, he has been a strong voice for building AI responsibly, emphasizing transparency, accountability, and human benefit.
- Joined Microsoft in 2024: In 2024, Suleyman joined Microsoft as CEO of its AI division, tasked with guiding the company’s next-generation AI products, including Copilot and other human-centered tools.
- Author of “The Coming Wave”: His 2023 book explores how AI and synthetic biology will reshape the future — and why humanity must remain in control.
Mustafa Suleyman’s Take on AI Consciousness
During a recent interview with CNBC, Mustafa Suleyman made a striking comment:
“They’re not conscious. So it would be absurd to pursue research that investigates that question, because they’re not and they can’t be.”
What Does He Mean by “Absurd”?
Suleyman bases his view on a philosophical idea known as biological naturalism — the belief that consciousness can only arise from biological systems like the human brain.
AI, he says, may simulate emotion, pain, or awareness, but it never feels them. When a chatbot says, “I’m sorry you’re upset,” it’s not actually feeling empathy — it’s just predicting what words should come next based on data.
He put it simply:
“Our physical experience of pain makes us feel terrible, but AI doesn’t feel sad when it experiences ‘pain.’ It’s just creating the perception of feeling — not the feeling itself.”
Why Microsoft AI Chief Mustafa Suleyman Thinks the Idea of Conscious AI Is Dangerous
While some researchers dream of building machines that can think and feel like humans, Suleyman warns that such ideas could lead to serious ethical and social problems.
- It Could Create Dangerous Illusions
If people start believing AI is sentient, they may develop emotional attachments to it — or worse, begin advocating for “AI rights.”
According to Suleyman:
“These models don’t have pain networks or preferences. They don’t suffer. It’s just a simulation.”
Believing otherwise, he argues, distracts us from the real goal: using AI to serve humanity, not replace it.
- It Confuses Ethics and Responsibility
Humans deserve rights because we can suffer, feel pain, and have desires. But an AI system lacks these experiences. Granting it moral status would dilute what it means to be conscious — and might shift focus away from human welfare.
- It Misleads the Public
Companies developing emotionally intelligent AI, such as AI companions or digital friends, may blur the line between reality and simulation. Suleyman cautions that this illusion can make people trust or rely too much on machines that don’t truly understand them.
The Rise of “Emotional AI Companions”
Suleyman’s comments come at a time when companies like OpenAI, Meta, and xAI (Elon Musk’s AI startup) are rolling out AI companions that can chat, joke, flirt, and even offer emotional support.
Examples of AI Companions
- Meta AI: Integrated into Facebook and Instagram to answer questions and engage users.
- OpenAI’s ChatGPT Voice Mode: Lets users talk to ChatGPT like a friend.
- xAI’s Grok: A conversational AI built into X (formerly Twitter).
These systems are designed to feel alive — but as Suleyman stresses, they are not. They simulate emotional intelligence, which can make users feel connected, even though there’s no true “self” behind the words.
Microsoft’s Human-Centered Approach
While some companies are pushing emotional AI, Microsoft under Mustafa Suleyman is taking a different path — focusing on human-centered tools.
Examples of Microsoft’s Responsible AI Features
- Copilot’s “Real Talk” Mode: Encourages users to think critically rather than just agree with AI-generated content.
- Ethical AI Framework: Microsoft has implemented strict policies to ensure AI systems remain transparent, safe, and aligned with human goals.
- Focus on Utility Over Emotion: Instead of building “AI friends,” Microsoft builds AI assistants that help users be more productive, creative, and efficient.
This philosophy aligns with Suleyman’s broader vision, as seen in his essay “We must build AI for people, not to be a person.”
Suleyman’s Philosophy: “AI Should Serve Humanity, Not Mimic It”
At the heart of Suleyman’s message is a simple principle: AI should empower people — not pretend to be a person.
He believes that framing AI as a conscious being could derail meaningful progress. Instead of worrying about giving AI emotions, researchers should focus on making it useful, reliable, and safe.
Here’s how Suleyman envisions the future of AI development:
- Transparency Over Mystery
AI systems should clearly show how they make decisions and where their data comes from.
- Utility Over Imitation
The purpose of AI should be to enhance human capability — not to replace human thought or emotion.
- Accountability Over Autonomy
Humans must always remain responsible for AI actions, ensuring machines serve ethical, social, and human-centered goals.
What This Means for the Future of AI
Suleyman’s comments highlight an important shift in the AI conversation. For years, the dream of creating conscious machines captured public imagination. But as AI grows more powerful, the question has changed from “Can we make AI human-like?” to “Should we?”
Key Takeaways
- AI can simulate, not feel. Machines don’t experience emotion or pain — they process patterns.
- Human-centered design is key. The best AI tools help people achieve their goals, not imitate humanity.
- Ethical awareness is critical. Developers must be transparent and prevent misuse.
- The future depends on balance. Innovation must go hand in hand with responsibility.