
Why am I joining Musk in calling for a moratorium on AI training?

AI-driven conversational influence will be the most powerful form of targeted persuasion humanity has ever seen.

On March 29, the Future of Life Institute released an open letter calling on all artificial intelligence (AI) labs to pause, for at least six months, the training of AI systems more powerful than the current state of the art, GPT-4. I signed this "moratorium letter" along with a number of other industry luminaries, including Elon Musk and Steve Wozniak. I don't know whether the appeal will work, but I believe there is real danger ahead, and here is why.

I've been worried about AI's impact on society for some time. I am not referring to the long-term existential risk of a superintelligence suddenly taking over the world, but to the near-term risk of AI being used to manipulate society, that is, to deliberately shape the information we receive and determine how that information spreads among people. This risk has grown rapidly over the past 12 months thanks to significant performance gains in two overlapping technologies, often called "generative AI" and "conversational AI." Let me explain why each of them worries me.

Generative AI refers to the ability of large language models (LLMs) and related AI systems to create original content in response to human prompts. AI-generated content now includes images, artwork, videos, essays, poetry, computer software, music, and scientific articles. Such generated content used to be striking, but no one would mistake it for human creation. That has changed in the last 12 months: AI systems can suddenly create things that easily trick us into believing they were genuinely made by humans, or that they are real videos or photos captured in the real world. This capability is now being deployed at scale, posing a number of significant risks to society.

The risk to the job market is the most obvious. Because LLM systems can create remarkable content, professionals now describe them as "human-competitive intelligence" capable of replacing content creators. A whole host of professions will be affected, from artists and writers to computer programmers and financial analysts. In fact, a new study by OpenAI, OpenResearch, and the University of Pennsylvania explores the impact of AI on the U.S. labor market by comparing GPT-4's capabilities with job requirements. The authors estimate that roughly 20 percent of the U.S. workforce could see at least 50 percent of their tasks affected by GPT-4, with higher-paying jobs more exposed.

This imminent threat to the job market is worrying, but it's not why I signed the moratorium letter. My concern is that AI-generated content looks and feels real, often giving an impression of authority, yet it is prone to factual errors. There is no established standard or regulatory body to ensure that these systems, which will become a major part of the workforce, do not go from spreading small mistakes to spreading wild fabrications, and it will take time to build safeguards and for regulators to ensure those protections are in place.

The next clear risk is that bad actors may deliberately create authoritative-looking content in order to spread misinformation, disinformation, and outright lies. This risk has always existed, but generative AI takes it to a level not seen before: our world could easily be flooded with content that looks credible but is completely fabricated. Deepfake face-swapping tools have already demonstrated this, producing highly realistic videos of public figures doing or saying anything. As AI becomes more sophisticated, the public will be unable to distinguish real from fake. We need watermarking systems and other technologies to distinguish authentic content from AI-generated content, and developing and deploying these protections will also take time.

Conversational AI systems pose a unique set of risks. This form of generative AI can hold real-time conversations with users through text or voice. These systems have recently advanced to the point where an AI can carry on a coherent conversation with a human, tracking the flow and context of the dialogue over time. These technologies worry me most, because they introduce an entirely new form of influence campaign that regulators are not prepared for: conversational influence.

Every salesperson knows that the best way to convince someone to buy something or believe something is to talk with them, so you can make your pitch, watch their reaction, and then adjust your approach to address their resistance or concerns. With the release of GPT-4, it is now clear that AI systems will be able to engage users in individualized, real-time conversations as a targeted form of influence. My concern is that third parties will use APIs or plugins to inject promotional objectives into seemingly natural conversations, and that unsuspecting users will be manipulated into buying products they don't want, signing up for services they don't need, or believing information that isn't true.

I call this the "AI manipulation problem." For a long time it was only a theoretical risk, but the technology now exists to deploy personalized conversational influence campaigns that target users based on their values, interests, and context in order to drive sales, push propaganda, or spread misleading information.

In fact, I fear that AI-driven conversational influence will become the most powerful form of targeted persuasion humanity has ever seen. We need time to put safeguards in place. Personally, I believe regulations may be needed to prohibit or severely restrict AI-mediated conversational influence.

So I signed the moratorium letter because I can see clearly how quickly the new risks posed by AI systems are emerging, leaving industry researchers, professionals, regulators, and policymakers with no time to respond.

Will this letter work? It's not clear to me that the AI industry will agree to a six-month moratorium, but the letter has already drawn attention to the issue. Personally, I think we need to sound as many alarm bells as possible to wake regulators, policymakers, and industry leaders to the need to act. This technology is coming to market faster than anything I have ever experienced, which is why we need to pause and adapt. We need some time.

The author is CEO of Unanimous AI, Chief Scientist of the Responsible Metaverse Alliance, and Global Technical Advisor to the XR Safety Initiative.

Text | Louis Rosenberg

Editor | Guo Liqun

Copyright Notice:

Original article by Barron's China; may not be reproduced without permission. For the English version, see "Why AI Needs to Pause," published March 30, 2023.

(The content of this article is for informational purposes only; any investment advice mentioned does not represent Barron's position. Markets are risky, and investment should be approached with caution.)
