Rapid advancements in AI have spurred calls for greater regulation
[File: Dado Ruvic/Reuters]
By Erin Hale
Published On 14 Apr 2023
More than one-third of
researchers believe artificial intelligence (AI) could lead to a “nuclear-level
catastrophe”, according to a Stanford University survey, underscoring concerns
in the sector about the risks posed by the rapidly advancing technology.
The survey is among the
findings highlighted in the 2023 AI Index Report, released by the Stanford
Institute for Human-Centered Artificial Intelligence, which explores the latest
developments, risks and opportunities in the burgeoning field of AI.
“These systems demonstrate capabilities in
question answering, and the generation of text, image, and code unimagined a
decade ago, and they outperform the state of the art on many benchmarks, old
and new,” the report’s authors say.
“However, they are prone to
hallucination, routinely biased, and can be tricked into serving nefarious
aims, highlighting the complicated ethical challenges associated with their
deployment.”
The report, which was
released earlier this month, comes amid growing calls for regulation of AI
following controversies ranging from a chatbot-linked suicide to deepfake
videos of Ukrainian President Volodymyr Zelenskyy appearing to surrender to
invading Russian forces.
Last month, Elon Musk and
Apple co-founder Steve Wozniak were among 1,300 signatories of an open letter
calling for a six-month pause on training AI systems more powerful than
OpenAI's chatbot GPT-4, arguing that “powerful AI systems should be developed only once we are
confident that their effects will be positive and their risks will be
manageable”.
In the survey highlighted in
the 2023 AI Index Report, 36 percent of researchers said AI-made decisions
could lead to a nuclear-level catastrophe, while 73 percent said they could
soon lead to “revolutionary societal change”.
The survey heard from 327
experts in natural language processing, a branch of computer science key to the
development of chatbots like GPT-4, between May and June last year, before the
release of OpenAI’s ChatGPT in November took the tech world by storm.
In an Ipsos poll of the general
public, which was also highlighted in the index, Americans appeared especially
wary of AI, with only 35 percent agreeing that “products and services using AI
had more benefits than drawbacks”, compared with 78 percent of Chinese
respondents, 76 percent of Saudi Arabian respondents, and 71 percent of Indian
respondents.
The Stanford report also
noted that the number of “incidents and controversies” associated with AI had
increased 26-fold over the past decade, and that government moves to regulate
and control AI are gaining ground.