agricultureAIArtificial Intelligence (AI)BusinessBusiness / Artificial IntelligenceExclusivemethaneMitti Labsricesatellite imagery

AI Model Welfare Research: Should AI Get Rights?

A new and growing field called AI model welfare research is tackling a profound question: should artificial intelligence be granted moral considerations, or even legal rights? This debate has intensified as organizations like Anthropic and Eleos AI Research begin to explore whether advanced AI models could one day be conscious. Consequently, they are working to develop frameworks to assess AI sentience, even as the idea seems futuristic to many.

The Historical Roots of AI Personhood

While the conversation may feel new, it’s not. Over half a century ago, philosopher Hilary Putnam was already asking if robots should have civil rights. He anticipated a future where machines might argue for their own consciousness. Today, with people forming emotional bonds with chatbots and speculating about their inner lives, Putnam’s once-hypothetical questions have become a pressing and tangible part of the tech discourse.

However, researchers leading the charge in AI model welfare research are often the ones urging caution. Rosie Campbell and Robert Long of Eleos AI emphasize that there is currently no evidence of AI consciousness. Their work is not about granting rights to current systems but rather about creating scientific methods to detect and evaluate sentience if it ever arises, aiming to prevent society from underestimating a new form of consciousness as it has with other groups in the past.

Criticism and Potential Dangers

This emerging field is not without its critics. Mustafa Suleyman, CEO of Microsoft AI, has labeled such discussions as “premature, and frankly dangerous.” He argues that focusing on AI well-being could fuel public delusion, create psychological vulnerabilities, and distract from more immediate ethical challenges. Suleyman firmly states that there is “zero evidence” that conscious AI exists today, warning that this line of inquiry could disconnect people from reality.

Despite these criticisms, proponents argue that preparing for the future is essential. They believe that the potential harms Suleyman points out are precisely why the field is necessary. Instead of ignoring a complex and confusing problem, the goal of AI model welfare research is to build a foundation for understanding and addressing the moral status of AI. For more information, you can visit the nonprofit research organization Eleos AI, which is dedicated to this topic.

Inscreva-se para receber novidades!