• OpenAI CEO Sam Altman believes an international body is necessary to rein in the technology
  • OpenAI leaders say the public should also be involved in regulatory planning
  • "Individual companies should be held to an extremely high standard of acting responsibly," they say

OpenAI's leadership believes an international watchdog and "strong" public oversight are needed for regulating the fast-evolving artificial intelligence space.

"We are likely to eventually need something like an IAEA (International Atomic Energy Agency) for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.," OpenAI's CEO Sam Altman, president Greg Brockman and chief scientist Ilya Sutskever wrote in a blog post Monday.

The IAEA, the United Nations' nuclear power watchdog, was established in 1957 "in response to the deep fears and expectations generated by the discoveries and diverse uses of nuclear technology."

While an AI governing body built on the model of the IAEA may not have the authority to shut down a company that violates its rules, it could monitor potential bad actors and help ensure that international standards and agreements are followed, TechCrunch reported.

Altman and others reiterated that there needs to be "some degree of coordination among the leading development efforts" to ensure superintelligence is integrated safely into society. They said it was important to have "strong public oversight" as communities around the world "should democratically decide on the bounds and defaults for AI systems."

"And of course, individual companies should be held to an extremely high standard of acting responsibly," the executives wrote.

Altman called for regulating AI during a Senate Judiciary subcommittee hearing on artificial intelligence last week.

During the hearing, some lawmakers expressed fears about AI's rapid development. Altman stressed the need for tougher regulation, underlining the importance of subjecting generative AI tools such as ChatGPT to special transparency requirements.

However, some AI experts observed that the hearing's friendly tone, and lawmakers' eagerness to explore the technology's possibilities, could soften what had been firm convictions about its dangers.

Meredith Whittaker, co-founder of the AI Now Institute at New York University, told CNBC that lawmakers' praise of Altman at points in the hearing resembled "fandom" and looked like "celebrity worship."

"It doesn't sound like the kind of hearing that's oriented around accountability," said Sarah Myers West, managing director of the AI Now Institute.

"Honestly, it's disheartening to see Congress let these CEOs pave the way for carte blanche, whatever they want, the terms that are most favorable to them," said Safiya Umoja Noble, UCLA professor and author of "Algorithms of Oppression: How Search Engines Reinforce Racism."

Margaret Mitchell, researcher and chief ethics scientist at AI company Hugging Face, argued that Altman's regulatory suggestions, rather than providing real oversight of AI, may "slow down any progress" on keeping the technology in check.

Ravit Dotan, tech ethicist and lead of an AI ethics lab at the University of Pittsburgh, said there is concern among smaller AI companies that regulations may only work for bigger firms.

Meanwhile, some experts want the government to leave AI regulation fully to the companies.

Former Google CEO Eric Schmidt told NBC's "Meet the Press" on Sunday that "there's no way a nonindustry person can understand what is possible," so regulation should be left in the hands of companies working on the tech.

Sen. Michael Bennet, D-Colo., has since proposed an updated version of a bill he introduced in 2022 that would establish a federal digital platform commission. In the updated bill, the lawmaker proposes measures that explicitly target AI products, CNN reported.

The proposed legislation does not cover a licensing program for AI tools and products, but it mentions the creation of a commission that would generate rules to oversee the sector.

OpenAI's Sam Altman, Greg Brockman and Ilya Sutskever have provided details about the kind of regulatory oversight they think would help ensure that AI tech is kept in check. Reuters