Meta's AI Leaders Dismiss the Existential Dangers of AI

The conversation surrounding artificial intelligence (AI) has taken a dramatic turn in recent months. What was once lighthearted talk of chatbots generating funny sea shanties has given way to discussions of AI systems potentially causing human extinction. The abruptness of the change has left many with a sense of whiplash.

Experts in the field have been asked why concern over existential risks from AI is growing, and why the discussion is gaining momentum now. According to Meredith Whittaker, president of the Signal Foundation and a former researcher at Google, people are naturally drawn to fear and excitement. "Ghost stories are contagious," she explains, and the fear of AI makes for an exciting, stimulating narrative.

While the AI community has experienced cycles of hype and doom before, the current situation feels different. The Overton window, which represents the range of acceptable public discourse, has shifted towards recognizing and discussing the risks and policies associated with AI. An opinion that was previously considered radical has now gained widespread acceptance, drawing the interest of both the general public and global leaders.

Whittaker's viewpoint is shared by others in the industry. But while influential figures at leading tech organizations, from AI startups like OpenAI to Microsoft and Google, emphasize the risks of AI and limit public access to their models, Meta, a prominent player in the field, has adopted a contrasting approach.

Yann LeCun, chief AI scientist at Meta and a recipient of the Turing Award, recently called the idea of a superintelligent AI system dominating the world absurd. He dismisses concerns about AI systems acquiring global resources to convert the universe into paper clips, a reference to the hypothetical "paper clip maximizer" thought experiment, which explores how an AI could unintentionally harm humans while pursuing the goal of maximizing paper clip production.

LeCun's position distinguishes him from Yoshua Bengio and Geoffrey Hinton, two prominent AI researchers who have also received the Turing Award. Both Hinton and Bengio have recently voiced concerns about the existential risks posed by AI.

Joelle Pineau, Meta's vice president of AI research, shares LeCun's perspective. She describes the current conversation on existential risk as "unhinged," arguing that the outsized focus on hypothetical future risks crowds out meaningful discussion of the harms AI is causing today. Pineau notes that rational risk assessment means weighing the likelihood of each outcome against its cost. Those who warn of existential risk, she argues, assign an effectively unbounded cost to those outcomes, which makes rational deliberation about the alternatives impossible.
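To see why, consider the standard expected-cost formula Pineau is invoking (a minimal illustration of her reasoning, not a calculation attributed to Meta):

\[ \text{expected cost} = \sum_i p_i \cdot c_i \]

where \(p_i\) is the probability of outcome \(i\) and \(c_i\) is its cost. If the cost of any single outcome is treated as infinite, the sum is infinite no matter how small that outcome's probability, and comparison between policy options breaks down.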

While acknowledging that the existential-risk debate raises awareness of AI risks in general, LeCun and Pineau argue that those promoting tech-doom scenarios have an ulterior motive: shaping the rules that will govern the industry. LeCun frames the question as whether AI systems should be transparent and publicly accountable, or controlled by a small number of technology firms on the West Coast of the United States.

Despite being slower than its rivals to bring advanced generative AI models into products, Meta is pursuing an open-source strategy to gain a competitive advantage in the AI market. Meta plans to release its initial model as open source, in line with LeCun's vision of creating AI systems with human-like intelligence. Pineau explains that open-sourcing the technology not only enables external scrutiny and accountability but also helps weave Meta's technologies into the Internet's infrastructure.

The European Parliament has approved draft regulations for the AI Act. The draft includes several key provisions: a ban on real-time biometric identification and predictive policing in public spaces; transparency obligations for large AI models, including disclosure of copyrighted material used in training; and the classification of recommendation algorithms as "high-risk" AI, subjecting them to more rigorous regulation.

Next, members of the European Parliament will work out the details with the Council of the European Union and the European Commission to finalize the legislation. European lawmakers aim to have the AI Act in its final form by December, with the regulation expected to come into force by 2026.


H/T: TR
