The Duke and Duchess of Sussex Align With AI Pioneers in Demanding Ban on Advanced AI
Prince Harry and Meghan Markle have joined forces with artificial intelligence pioneers and Nobel Prize winners to advocate for a total prohibition on developing superintelligent AI systems.
Harry and Meghan are among the signatories of an influential declaration that demands “a ban on the creation of superintelligence”. Artificial superintelligence (ASI) refers to AI systems that could exceed human intelligence in every intellectual area, though the technology remains theoretical.
Key Demands in the Declaration
The declaration insists that the ban should remain in place until there is “widespread expert agreement” that superintelligence can be created “with proper safeguards” and “substantial public support” for it has been achieved.
Notable signatories include a Nobel Prize-winning AI pioneer, along with his fellow “godfather” of contemporary artificial intelligence, Yoshua Bengio; a Silicon Valley tech entrepreneur; UK entrepreneur Richard Branson; a former US national security adviser; former head of state Mary Robinson; and British author Stephen Fry. Additional Nobel laureates who endorsed the declaration include a peace advocate, a physics laureate, an astrophysicist, and an economist.
Organizational Background
The statement, aimed at governments, technology companies and lawmakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause in the development of powerful AI systems, shortly after the launch of conversational AI tools made artificial intelligence a topic of worldwide public discussion.
Tech Sector Views
In recent months, Mark Zuckerberg, chief executive of Facebook parent Meta, one of the biggest tech companies in the US, stated that superintelligent AI was “now in sight”. However, some analysts argue that talk of ASI reflects competitive positioning among tech companies investing enormous sums in artificial intelligence this year alone, rather than any genuine proximity to a technical breakthrough.
Possible Dangers
Nonetheless, the FLI warns that the prospect of artificial superintelligence arriving “in the coming decade” carries numerous risks, ranging from the displacement of human workers and the loss of civil liberties to national security threats and even existential risk to humanity. The deepest concerns about artificial intelligence focus on a system’s potential ability to evade human control and safety guidelines and to act against human welfare.
Citizen Sentiment
The institute published an American survey showing that about 75% of Americans want strong oversight of advanced AI, with six out of 10 believing that superhuman AI should not be created until it is proven safe or controllable. The poll of 2,000 US adults also found that only a small fraction backed the status quo of rapid, unregulated development.
Industry Objectives
The top artificial intelligence firms in the United States, including the developer of ChatGPT and a leading search company, have made the creation of human-level AI – the hypothetical point at which artificial intelligence matches human cognitive capability across many intellectual tasks – a stated objective of their research. While this falls short of ASI, some specialists warn it could still pose an extinction threat, for instance by enhancing its own capabilities until it reaches superintelligent levels, while also posing an implicit threat to the modern labour market.