Microsoft’s AI Chief Says Machine Consciousness Is an ‘Illusion’


Mustafa Suleyman is not your average big tech executive. He dropped out of Oxford University as an undergrad to create the Muslim Youth Helpline, before teaming up with friends to cofound DeepMind, a company that blazed a trail in building game-playing AI systems before being acquired by Google in 2014.

Suleyman left Google in 2022 to commercialize large language models (LLMs) and build empathetic chatbot assistants with a startup called Inflection. He then joined Microsoft as its first CEO of AI in March 2024 after the software giant invested in his company and hired most of its employees.

Last month, Suleyman published a lengthy blog post in which he argues that the AI industry should avoid designing AI systems to mimic consciousness by simulating emotions, desires, and a sense of self. Suleyman’s position seems to contrast starkly with that of many in AI, especially those who worry about AI welfare. I reached out to understand why he feels so strongly about the issue.

Suleyman tells WIRED that designing AI to seem conscious will make it more difficult to limit the abilities of AI systems and harder to ensure that AI benefits humans. The conversation has been edited for brevity and clarity.


When you started working at Microsoft, you said you wanted its AI tools to understand emotions. Are you now having second thoughts?

AI still needs to be a companion. We want AIs that speak our language, that are aligned to our interests, and that deeply understand us. The emotional connection is still super important.

What I’m trying to say is that if you take that too far, then people will start advocating for the welfare and rights of AIs. And I think that’s so dangerous and so misguided that we need to take a declarative position against it right now. If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals—that starts to seem like an independent being rather than something that is in service to humans.

We are certainly seeing people build emotional connections with AI systems. Are users of Microsoft Copilot going to it for emotional or even romantic support?

No, not really, because Copilot pushes back on that quite quickly. So people learn that Copilot won’t support that kind of thing. It also doesn’t give medical advice, but it will still give you emotional support to understand medical advice that you’ve been given. That’s a very important distinction. But if you try and flirt with it, I mean, literally no one does that because it’s so good at rejecting anything like that.

In your recent blog post you note that most experts do not believe today’s models are capable of consciousness. Why doesn’t that settle the matter?

These are simulation engines. The philosophical question that we’re trying to wrestle with is: When the simulation is near perfect, does that make it real? You can’t claim that it is objectively real, because it just isn’t. It is a simulation. But when the simulation becomes so plausible, so seemingly conscious, then you have to engage with that reality.

And people clearly already feel that it’s real in some respect. It’s an illusion but it feels real, and that’s what will count more. And I think that’s why we have to raise awareness about it now and push back on the idea and remind everybody that it is mimicry.

Most chatbots are also designed to avoid claiming that they are conscious or alive. Why do you think some people still believe they are?

The tricky thing is, if you ask a model one or two questions ("Are you conscious? Do you want to get out of the box?"), it’s obviously going to give a good answer, and it’s going to say no. But if you spend weeks talking to it and really pushing it and reminding it, then eventually it will crack, because it’s also trying to mirror you.

There was this big shift that Microsoft made after the Sydney issue, when [Bing’s AI chatbot] tried to persuade someone to break up with his wife. At that time, the models were actually a bit more combative than they are today. You know, they were kind of a bit more provocative; they were a bit more disagreeable.

As a result, everyone tried to create models that were more—you could call it respectful or agreeable, or you could call it mirroring or sycophantic. For anybody who is claiming that a model has shown those tendencies, you have to get them to show the full conversation that they’ve had before that moment, because it won’t do that in two turns or 20 turns. It requires hundreds of turns of conversation, really pushing it in that direction.

Are you saying that the AI industry should stop pursuing AGI or, to use the latest buzzword, superintelligence?

I think that you can have a contained and aligned superintelligence, but you have to design that with real intent and with proper guardrails, because if we don’t, in 10 years’ time, that potentially leads to very chaotic outcomes. These are very powerful technologies, as powerful as nuclear weapons or electricity or fire.

Technology is here to serve us, not to have its own will and motivation and independent desires. These are systems that should work for humans. They should save us time; they should make us more creative. That’s why we’re creating them.

Is it possible that today’s models could somehow become conscious as they advance?

This isn’t going to happen in an emergent way, organically. It’s not going to just suddenly wake up. That’s just an anthropomorphism. If something seems to have all the hallmarks of a conscious AI, it will be because it has been designed to make claims about suffering, make claims about its personhood, make claims about its will or desire.

We’ve tested this internally on our test models, and you can see that it’s highly convincing, and it claims to be passionate about X, Y, Z thing and interested to learn more about this other thing and uninterested in these other topics. And, you know, that’s just something that you engineer into it in the prompt.

Even if this is just an illusion, is there a point at which we should consider granting AI systems rights?

I’m starting to question whether consciousness should be the basis of rights. In a way, what we care about is whether something suffers, not whether it has a subjective experience or is aware of its own experience. I do think that’s a really interesting question.

You could have a model which claims to be aware of its own existence and claims to have a subjective experience, but there is no evidence that it suffers. I think suffering is a largely biological state, because we have an evolved pain network in order to survive. And these models don’t have a pain network. They aren’t going to suffer.

It may be that they [seem] aware that they exist, but that doesn’t necessarily mean that we owe them any moral protection or any rights. It just means that they’re aware that they exist, and turning them off makes no difference, because they don’t actually suffer.

OpenAI recently had to reinstate the ChatGPT model GPT-4o after some users complained that GPT-5 was too cold and emotionless. Does your position conflict with what they are doing?

Not really. I think it’s still quite early for AI, so we’re all speculating, and no one’s quite sure how it’s going to pan out. The benefit of just putting ideas out there is that more diversity of speculation is a good thing.

Just to be clear, I don’t think these risks are present in the models today. I think that they have latent capabilities, and I’ve seen some AI chatbots that are really very much accelerating this, but I don’t see a lot of it in ChatGPT or Claude or Copilot or Gemini. I think we’re in a pretty sensible spot with the big model developers today.

Do you think we might need regulation around some of these issues?

I’m not calling for regulation. I’m basically saying our goal as creators of technology is to make sure that technology always serves humanity and makes us net better. And that means that there needs to be some guardrails and some normative standards developed. And I think that that has to start from a cross-industry agreement about what we won’t do with these things.

Do you agree with Mustafa Suleyman’s views on the future of AI? Share your thoughts with me by writing to ailab@wired.com
