Andrew Barto and Richard Sutton, who received computing's highest honor this week for their foundational work on reinforcement learning, wasted no time using their new platform to sound the alarm about unsafe AI development practices in the industry.
The pair were announced Wednesday as recipients of the 2024 ACM A.M. Turing Award, often called the "Nobel Prize of Computing," which comes with a $1 million prize financed by Google.
Rather than simply celebrating their achievement, they immediately criticized what they see as the dangerously hasty deployment of AI technologies.
"Releasing software to millions of people without safeguards is not good engineering practice," Barto told the Financial Times. "Engineering practice has evolved to try to mitigate the negative consequences of technology, and I don't see that being practiced by the companies that are developing."
They compared current AI development practices to "building a bridge and testing it by having people use it" without proper safety controls, arguing that AI companies prioritize business incentives over responsible innovation.
The duo's journey began in the late 1970s, when Sutton was Barto's student at the University of Massachusetts. Through the 1980s they developed reinforcement learning, a technique in which AI systems learn by trial and error, receiving rewards or penalties, at a time when few believed in the approach.
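The core idea can be illustrated in a few lines of code. The sketch below is not the authors' work, just a minimal toy example of tabular Q-learning (one of the best-known reinforcement-learning algorithms, introduced in the field Barto and Sutton built): an agent in a five-state corridor learns, purely from trial, error, and a single reward, to always walk toward the goal.

```python
import random

# Toy reinforcement-learning example: tabular Q-learning on a 5-state
# corridor. The agent starts in state 0 and receives a reward of +1
# only on reaching state 4 (terminal); all other moves earn 0.
N_STATES = 5          # states 0..4; state 4 is terminal
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    """Pick the highest-valued action, breaking ties randomly."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

random.seed(0)
for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best action in the next state
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s_next

# After training, the learned greedy policy is "move right" in every state
policy = {s: greedy(s) for s in range(N_STATES - 1)}
print(policy)  # {0: 1, 1: 1, 2: 1, 3: 1}
```

No one tells the agent the right answer; the rewards alone shape its behavior, which is exactly the learning paradigm the award recognizes.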
Their work culminated in the groundbreaking textbook "Reinforcement Learning: An Introduction," which has been cited nearly 80,000 times and became a bible for a generation of AI researchers.
"Barto and Sutton's work demonstrates the immense potential of applying a multidisciplinary approach to long-standing challenges in our field," said ACM President Yannis Ioannidis in an announcement. "Reinforcement learning continues to grow and offers great potential for further advances in computing and many other disciplines."
The $1 million Turing Award arrives as reinforcement learning continues to drive innovation in robotics, chip design, and large language models, where reinforcement learning from human feedback (RLHF) has become a critical training method for systems such as ChatGPT.
Industry safety concerns
The pair's warnings echo growing concerns from other big names in computer science.
Yoshua Bengio, himself a Turing Award recipient, publicly backed their position on Bluesky.
"Congratulations to Rich Sutton and Andrew Barto on receiving the Turing Award in recognition of their significant contributions to ML," he wrote. "I also stand with them: releasing models to the public without the right technical and societal safeguards is irresponsible."
Their position echoes criticism from Geoffrey Hinton, another Turing Award winner known as the godfather of AI, as well as a 2023 statement from top AI researchers and executives, including OpenAI CEO Sam Altman, that called for mitigating extinction risks from AI as a global priority.
Former OpenAI researchers have raised similar concerns.
Jan Leike, who recently resigned as head of OpenAI's alignment initiatives and joined rival AI company Anthropic, pointed to an insufficient focus on safety, writing that "building smarter-than-human machines is an inherently dangerous endeavor."
"Over the past years, safety culture and processes have taken a backseat to shiny products," Leike said.
Leopold Aschenbrenner, another former OpenAI safety researcher, called the company's security practices "egregiously insufficient." Meanwhile, Paul Christiano, who previously led OpenAI's language model alignment team, suggested there could be a "10-20% chance of AI takeover, [with] many [or] most humans dead."
Despite their warnings, Barto and Sutton maintain a cautiously optimistic view of AI's potential.
In an interview with Axios, both suggested that current fears about AI might be exaggerated, while acknowledging that significant social disruption is possible.
"I think there are many opportunities for these systems to improve many aspects of our lives and society, assuming sufficient caution is applied," Barto told Axios.
Sutton sees artificial general intelligence as a turning point, framing it as an opportunity to introduce new "minds" into the world without their arising through biological evolution, opening the door for humanity to one day interact with conscious machines.
Edited by Sebastian Sinclair