An unprecedented alliance of political figures, academic leaders, business magnates, and religious representatives has formally endorsed a new "pro-human" declaration on artificial intelligence, urgently calling for enhanced safety measures and stricter regulatory oversight.
Diverse Coalition Unites on AI Concerns
Backed by the prominent nonprofit Future of Life Institute, which specialises in AI safety advocacy, the Pro-Human AI Declaration calls for a fundamental shift in focus towards ensuring that artificial intelligence systems remain accountable to human values and societal needs. The declaration explicitly states that "artificial intelligence should serve humanity, not the reverse," emphasising a vision where technology amplifies human potential rather than diminishing it.
Notable Signatories and Supporting Organisations
The signatories represent a remarkably broad spectrum of ideological backgrounds, including billionaire entrepreneur Richard Branson, Nobel-Prize-winning economist Daron Acemoglu, and former adviser to the Trump administration Steve Bannon. This eclectic mix underscores the widespread, cross-partisan apprehension regarding unchecked AI advancement.
Organisations lending their support to the declaration include the American Federation of Teachers, the Congress of Christian Leaders, and the Progressive Democrats of America, highlighting concerns that span educational, religious, and political spheres.
Core Principles of the Declaration
The declaration outlines several key tenets that form the foundation of its pro-human framework:
- Maintaining human control over artificial intelligence systems at all times.
- Preventing the formation of AI monopolies that could concentrate excessive technological power.
- Implementing robust protections for children from potential AI-related harms.
- Preserving human agency, individual liberty, and personal autonomy in the face of advancing automation.
- Ensuring corporate accountability for defects and inadequate safety controls in AI development.
Public Support and Industry Exclusion
A new opinion poll released concurrently with the declaration reveals substantial public backing for its principles: 80 percent of American voters support keeping humans in charge of artificial intelligence and demand greater accountability from AI companies. Notably, organisers deliberately excluded industry representatives from this initiative, citing industry figures' involvement in earlier petitions that failed to produce meaningful change.
Previous Safety Efforts and Industry Response
The Future of Life Institute has previously launched several initiatives aimed at promoting artificial intelligence safety, including a 2023 proposal for a six-month moratorium on advanced AI system development and a subsequent petition to ban superintelligent AI systems until proven safe. Neither effort gained traction within the technology industry, with some signatories of the 2023 letter eventually launching their own AI startups.
This pattern of industry resistance illustrates the challenges facing regulatory efforts and highlights why the current declaration specifically emphasises corporate accountability and human-centric design principles as essential components of responsible artificial intelligence development.
