Supply Chain, Open Source Pose Major Challenge to AI Systems



ChatGPT’s ‘Giant Leap’ Means AI Could Achieve Human-Level Intelligence in 5 Years

From left, UC Berkeley Professor Stuart Russell, Anthropic CEO Dario Amodei and University of Montreal Professor Yoshua Bengio

Supply chain compromise, open source technology and rapid advances in artificial intelligence capabilities pose significant challenges to safeguarding AI, experts told a Senate panel Tuesday.


University of Montreal Professor Yoshua Bengio was surprised by the “giant leap” achieved by systems such as ChatGPT, which makes it tough to discern whether someone is interacting with another human or a machine. Before ChatGPT, he thought it would be decades or perhaps centuries before AI systems achieved human-level intelligence. Now, Bengio worries it could happen within as little as five years.

“If this technology goes wrong, it could go terribly wrong,” Bengio told the Senate Judiciary Committee’s subcommittee on privacy, technology and the law during a Tuesday hearing on the principles for artificial intelligence regulation and oversight.
“These severe risks could arise either intentionally because of malicious actors using AI systems to achieve tactical goals or unintentionally if an AI system developed strategies that are misaligned with our values and norms,” he said.

The U.S. must redouble its efforts to secure the AI supply chain, which spans everything from chips and semiconductor manufacturing equipment to the security of AI models stored on the servers of firms such as Anthropic, said company CEO Dario Amodei. He said there are “substantially more bottlenecks” in AI systems than in conventional software, since the AI system itself could be stolen or released in an uncontrolled way (see: 7 Tech Firms Pledge to White House to Make AI Safe, Secure).

Amodei and Bengio testified alongside University of California, Berkeley Computer Science Professor Stuart Russell.

‘Would You Allow Open Source Nuclear Bombs?’

Bengio said one big risk area around AI systems is open source technology, which “opens the door” to bad actors. Adversaries can take advantage of open source technology without huge amounts of compute or strong expertise in cybersecurity, according to Bengio. He urged the federal government to establish a definition of what constitutes open source technology – even if it changes over time – and use it to ensure future open source releases for AI systems are vetted for potential misuse before being deployed.

“Open source is great for scientific progress,” Bengio said. “But if nuclear bombs were software, would you allow open source nuclear bombs?”

Bengio said the United States must ensure that spending on AI safety matches what the private sector spends on new AI capabilities, either through incentives for businesses or direct investment in nonprofit organizations. The safety investments should address the hardware used in AI systems as well as the cybersecurity controls needed to safeguard the software that powers them.

AI eventually will be responsible for a majority of U.S. economic output, making it critical to establish a regulatory agency with oversight of artificial intelligence, according to Russell. He said the government should remove from the market AI models that engage in unacceptable behavior, which in turn should drive investment in making the systems more predictable and controllable.

Test AI for Safety, Urge Experts

Amodei called for a testing and auditing regime for newer, more powerful AI models similar to what cars and airplanes go through before being released to the general public. New AI models should pass “a rigorous battery of safety tests,” which Amodei said should include evaluations by both third parties and national security experts in government (see: US Senate Leader Champions More AI Security, Explainability).

“AI models in the near future will be powerful machines that possess great utility, but can be lethal if designed incorrectly or misused,” Amodei said.

Amodei cautioned that the science of testing and auditing AI systems is still in its infancy, meaning the harmful behaviors an AI system is capable of may not be detected until the system is broadly deployed to users, which creates greater risk. The National Institute of Standards and Technology and the National AI Research Resource are well-suited to assess and measure the testing and auditing regime for AI systems to ensure it's actually effective, Amodei said.

“Probably it will happen at least once – and unfortunately, perhaps repeatedly – that we run these tests, we think things are safe, and then they turn out not to be safe,” Amodei said. “We need a mechanism for recalling things or modifying things if the test ends up being wrong. That seems like common sense to me.”

Bengio, meanwhile, called for the government to limit who has access to powerful AI systems and to put protocols and incentives in place for those with access to act safely. He said lawmakers should ensure AI systems act as intended and in line with American values and norms, and should assess the potential for AI systems to cause harm, whether through human action or an internet connection.

From a geopolitical perspective, Russell said the level of threat China poses around AI has been “slightly overstated,” since Chinese companies are mostly building copycat systems that aren't as good as those from OpenAI or Anthropic. The primary customer for most Chinese AI startups is the Ministry of State Security, meaning Chinese AI systems excel at voice and facial recognition but struggle in areas such as reasoning and planning.

“They’re not producing the basic research breakthroughs that we’ve seen both in the academic and the private sector in the U.S.,” Russell said. “They don’t give people the freedom to think hard about the most important problems.”




