As concerns over artificial intelligence (AI) grow, a group of AI scientists is calling for the establishment of a framework that would oversee the potential dangers posed by AI technology.
The group warns that without adequate safeguards, the same technology they helped bring to life could become dangerous if it slips beyond human control.
The scientists urged governments worldwide to act by establishing authorities that can monitor and respond to AI incidents. This would include implementing a "global contingency plan" to handle emergencies that might arise if AI technology malfunctions or is misused.
The researchers proposed three primary actions: the establishment of emergency preparedness measures, the development of a safety assurance framework, and the promotion of global AI safety and verification research.
The group's statement also called for international regulatory systems to prevent the creation of AI models that could pose catastrophic risks on a global scale. They stressed:
The global nature of these risks from AI makes it necessary to recognize AI safety as a global public good, and work towards global governance of these risks.
The statement gathered support from 31 AI researchers across various countries, including Canada, China, the United States, and Ireland. Among the signatories were several recipients of the Turing Award, the highest distinction in computer science.
Overall, it appears that cooperative international action is essential to address the emerging threats of advanced AI technologies.
In other news, Ireland's Data Protection Commission has recently launched an inquiry into Google to assess whether the development of the company's AI model aligns with EU data protection laws.