Sep 11, 2025, 12:00 AM

Microsoft AI CEO warns against granting rights to AI

Highlights
  • Mustafa Suleyman warns against granting rights to AI, describing such a move as dangerous and misguided.
  • He argues that AI lacks consciousness and the capacity to suffer, which should define the basis for rights.
  • Suleyman emphasizes the need for AI to serve humanity rather than develop independent motives or objectives.
Story

In a recent interview published by Wired, Mustafa Suleyman, CEO of Microsoft's AI division, voiced firm opposition to the idea of granting rights to artificial intelligence systems. His comments come as debate over the moral and ethical implications of AI intensifies across the tech industry.

Suleyman argued that although AI technology is becoming increasingly sophisticated and can seem convincing, it does not merit the moral consideration typically afforded to sentient beings. In his view, AI's capabilities do not amount to consciousness or a capacity to suffer, and rights should be reserved for entities that can experience suffering, a distinction he considers critical to the ongoing debate. The former DeepMind and Inflection co-founder stated, “If AI has a sort of sense of itself, if it has its own motivations and its own desires and its own goals—that starts to seem like an independent being rather than something that is in service to humans.”

Other companies take a different stance. Anthropic, for example, is exploring whether advanced AI systems might eventually merit moral consideration, including ways to extend welfare protections to AI and methods for models to end problematic conversations, such as those involving exploitation or harm.

Suleyman remains skeptical, pointing to the phenomenon of 'AI psychosis,' in which individuals develop irrational beliefs from interacting with AI, as further evidence of the risks of treating AI as equivalent to human life. His position underscores the view that AI's primary role should be serving and assisting humanity, without gaining independent rights or moral status.
These contrasting perspectives on AI welfare and rights reflect a significant and ongoing debate that will evolve alongside both the technology and societal perceptions of it. Its outcome could have far-reaching effects on the trajectory of AI development and the ethical guidelines governing its future application.
