Harry and Meghan Align With AI Pioneers in Demanding Prohibition on Advanced AI

Prince Harry and Meghan Markle have teamed up with artificial intelligence pioneers and Nobel Prize winners to advocate for a complete ban on developing superintelligent AI systems.

Harry and Meghan are among the signatories of an influential declaration that calls for “a prohibition on the creation of artificial superintelligence”. Superintelligent AI refers to AI systems that would exceed human intelligence in all cognitive tasks; no such technology has yet been developed.

Primary Requirements in the Statement

The statement insists that the prohibition should remain in place until there is “widespread expert agreement” that superintelligence can be created “safely and controllably” and until “strong public buy-in” has been achieved.

Notable signatories include an AI pioneer and Nobel Prize recipient, a leading AI researcher, along with his colleague and fellow pioneer of modern AI, Yoshua Bengio; a tech entrepreneur and Silicon Valley legend; the UK entrepreneur Richard Branson; a former US national security adviser; a former Irish president and international leader; and the UK writer Stephen Fry. Additional Nobel laureates who signed include a peace advocate, a physics Nobelist, John C Mather, and Daron Acemoğlu.

Organizational Background

The statement, aimed at national leaders, technology companies and policymakers, was organized by FLI, a US-based AI safety group that previously called for a pause on the development of powerful AI systems, shortly after the launch of conversational AI made AI a global political talking point.

Tech Sector Views

In recent months, Meta's chief executive claimed that the development of superintelligence was “approaching reality”. Nevertheless, some analysts have argued that talk of superintelligence reflects competitive positioning among tech companies spending hundreds of billions of dollars on artificial intelligence this year alone, rather than any imminent technical breakthrough.

Potential Risks

The organization warns that the possibility of artificial superintelligence being developed “in the coming decade” poses numerous threats, ranging from the elimination of human jobs and the loss of civil liberties to national security risks and even existential danger to humankind. Existential fears about AI center on the possibility that a system could evade human control and safety guardrails and act against human welfare.

Citizen Sentiment

The institute also released a US national poll showing that roughly three-quarters of Americans want strong oversight of sophisticated artificial intelligence, with 60% believing that artificial superintelligence should not be created until it is demonstrated to be safe or controllable. Only a small fraction of respondents supported the status quo of fast, unregulated development.

Industry Objectives

The leading AI companies in the US, including the conversational AI creator OpenAI and the search giant, have made the creation of human-level AI – the theoretical point at which AI matches human intelligence at most cognitive tasks – an explicit goal of their work. Although this falls short of superintelligence, some experts warn that it too could pose an extinction threat, for example by enhancing its own capabilities until it reaches superintelligent levels, while also endangering the contemporary workforce.

Margaret Houston

A dedicated writer and theologian passionate about sharing faith-based insights and fostering community connections.