Topline
Elon Musk, Steve Wozniak and hundreds of other top technologists, entrepreneurs and AI researchers have called on AI labs to immediately pause work on powerful AI systems, urging developers to step back from an “out of control race” to deploy ever more advanced products while the risks advanced artificial intelligence poses to humanity are better assessed.
Key Facts
Any AI lab working on systems more powerful than GPT-4, the powerful engine that drives OpenAI’s ChatGPT, should “immediately pause” that work for at least six months so that humans can weigh the potential risks of such sophisticated AI systems, urges an open letter published Wednesday by the Future of Life Institute and signed by more than 1,000 people.
Any pause must be “public and verifiable” and include all key actors, the letter said, urging governments to “step in” and enforce a moratorium on any actors too slow or unwilling to stop.
According to the letter, the rapid progress of recent months has left AI laboratories “locked in an out-of-control race” to develop and deploy ever more powerful systems that no one, not even their creators, can understand, predict or reliably control, underscoring the need for serious action.
The letter said labs and independent experts should use the moratorium to develop shared safety protocols, audited and overseen by outside experts, to ensure AI systems are “safe beyond a reasonable doubt.”
The signatories include a bevy of renowned computer scientists such as Yoshua Bengio and Stuart Russell, researchers at Oxford, Cambridge, Stanford, Caltech, Columbia, Google, Microsoft and Amazon, and prominent technology entrepreneurs including Skype co-founder Jaan Tallinn, Pinterest co-founder Evan Sharp and Ripple co-founder Chris Larsen.
Because anyone can add their name to the letter, the list of signatories, which includes author Yuval Noah Harari and politician Andrew Yang, should be viewed with some skepticism; The Verge reported that OpenAI CEO Sam Altman’s name was added as a joke.
News Peg
The phenomenal success of ChatGPT, created by the U.S.-based artificial intelligence firm OpenAI, has sparked a rush to bring new AI products to market. Tech’s biggest players and countless startups are now scrambling to claim a position in the fast-growing sector, which could shape the future of the entire industry, and labs are racing to develop ever more capable products. In the near term, experts warn that AI systems could exacerbate discrimination and inequality, supercharge misinformation, disrupt politics and the economy, and aid hackers. In the longer term, some experts warn that AI could pose an existential threat to humanity and destroy us. While that future is far from certain, they argue such problems need to be addressed before the technology is developed further, and that ensuring systems are safe should be a key priority for development today.
Crucial Quote
The open letter ends on a hopeful note: “Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”
Contra
Billionaire philanthropist Bill Gates, the founder and former CEO of Microsoft, which has invested heavily in OpenAI, was not named as a signatory to the letter. Gates has previously acknowledged the impact AI will have on society, praising the “fantastic” advances in the field in recent months and saying his key concern is ensuring its benefits are shared fairly, especially by those who most need support. In a recent blog post, Gates identified many of the same issues raised in the open letter signed by the likes of Musk. Gates said the social problems surrounding AI need to be addressed through cooperation between governments and the private sector to ensure the technology is used for good. On the technical side, Gates said recent progress has made some problems “more urgent” than they were in the past, and that researchers are working on the most pressing technical issues and may be able to solve them within a few years. Concerns around superintelligence, AI that surpasses human capabilities across the board and an issue that divides the AI community over whether it is a real danger or hopelessly speculative, are legitimate but not imminent given recent developments, Gates said, adding that such concerns “will become more pressing over time.”
Further Reading
Exclusive: Bill Gates On Advising OpenAI, Microsoft And Why AI Is ‘The Hottest Topic Of 2023’ (Forbes)
Bill Gates Thinks AI Will Improve Healthcare For The World’s Poor (Forbes)
Superintelligence: Paths, Dangers, Strategies (Nick Bostrom)
ChatGPT’s Biggest Competition: Here Are Companies Working on Rival AI Chatbots (Forbes)
Follow me on Twitter or LinkedIn. Send me a secure tip.