
Thirteen Q&As on Large Models: How to keep them from "talking nonsense"? How to make them smarter? |163-1 Lecture 3

The winner of the Outstanding Question Award poses with the roundtable panelists

This lecture was co-sponsored by Wen Wei Po, the Shanghai TreeMap Blockchain Research Institute, the Institute of Modern Chinese Thought and Culture of East China Normal University, and the Ethics and Wisdom Research Center of the Department of Philosophy of East China Normal University.

The session has now been transcribed and edited; this installment presents the interactive Q&A with audience members and readers. For the keynote speech and the roundtable dialogue, please see the links at the end of the article.

The ceiling large models will hit on the path to greater intelligence is still being explored collectively

Gao Peng, a director at Convertlab in the AI industry: Since February this year we have been working hard to build automation applications on top of large models, and we have found they are not as smart as expected. Where is the technical ceiling of this sequence-to-sequence approach, and what breakthroughs does KLCII have planned?

Lin Yonghua: This is a very professional question. From a practitioner's perspective, today's domestic large models are not as smart as people think, and most use cases still rely on human tuning. Frankly, we have not yet been able to touch the ceiling. AI research teams, including KLCII itself, cannot guarantee that every step we take is the right one. Many factors are involved, including many algorithm choices and potential algorithmic breakthroughs, and a great deal of practice remains to be done. Because large models are so large, the cost of trial and error is high, which makes teams reluctant to implement more radical ideas, innovations, or breakthrough directions lightly, since they may require investments of millions or tens of millions.

Enterprise crowdfunding can reduce the cost of developing large models, but it also requires smarter mechanisms

Ping Yuanhai, secretary general of the Business Association: I graduated from a department of economics, so I am very concerned about costs. Developing large models is now expensive; have you considered mobilizing companies with shared interests to crowdfund development together, for example jointly developing a large model for parcel sorting with express delivery companies? If so, what challenges have you encountered?

Lin Yonghua: We have had this idea since the first half of this year. Each industry can find at least eight to ten companies of a similar nature to crowdfund and jointly train an industry model, and once it is built, each company can fine-tune it further for its own needs. The idea makes sense. What is needed is a wise top-level design: the difficulty lies not at the technical level but in the top-level design of the cooperation model, the business model, and how future benefits are shared.

Humanities scholars can keep asking questions and offering critique in the AI-driven era of experience

Li Xiangxiang, a doctoral student in the Department of Philosophy of East China Normal University: AI has far surpassed humans in collecting and processing information, and in the future humans may also lag behind AI in analyzing and answering problems. Based on your knowledge of science and technology and of philosophy, what meaning and value will philosophers and the humanities and social sciences have in the future? Is the task of philosophers reduced to asking questions?

Ji Quanquan: As I understand it, asking questions is what the philosopher's profession is good at. If productive forces develop and robots become the main subjects of labor, the task left to us philosophers is to ask good questions in the era of experience. The UBI experiment being tried in Britain (a basic monthly income with the freedom to choose whether or not to work) is exactly that kind of case, and it requires philosophers to pose new questions: for example, have we humans already entered an era of experience that includes simply taking in the scenery?

In addition, philosophers have the job of critique, which is always needed. In today's roundtable dialogue we found that our views differ, which means we are discussing things "critically". For example, there are many reservations about whether general artificial intelligence will have autonomous consciousness. But it is certain that once a large model reaches a certain scale, some abilities will "emerge". At present, artificial intelligence models have not reached even 1% of the human brain's parameter count. So we need an open, critical mindset.

Aesthetics can indeed create jobs, provided people do not rely on machines to judge

Ding Hongran, a doctoral student in the Department of Philosophy of East China Normal University: More and more jobs can be replaced by artificial intelligence, but human aesthetics differ greatly from person to person. Could it be that aesthetics cannot be replaced by AI, and that this will create more career choices and lifestyles?

He Liang: Under the influence of artificial intelligence, the work that must be done by humans should become less and less; of course, we humans will passively or actively find ways to design things that suit us, and aesthetics is one of those possibilities. Beauty itself has different definitions in different eras and for different people. One is the natural feeling of living beings; the other is a definition given by human beings, which is highly subjective. The current standard of beauty will change in five or ten years. Therefore, as long as people remain the subjects of the world, beauty can be expected to be defined by human standards, thus creating occupations beyond the capabilities of machines.

But the premise is that we avoid letting AI establish and judge the standard of beauty for us. Otherwise it would be reduced to the situation of today's Go, where whether a move is good or not largely depends on the machine's judgment and guidance rather than our own. Once these creative steps become mechanical certainties, they quickly become boring.

Productivity leap in translation: machine efficiency + human brain intelligence and experience

Lin Shaochun, founder of a translation company: When I joined Wenhui Lecture Hall six years ago, I was a staff member at a translation company; later I became an independent entrepreneur in response to the national call for mass entrepreneurship and innovation. How will AIGC technology, including ChatGPT, improve productivity in the translation field?

Fu Changzhen: How can productivity in translation be improved? I have no entrepreneurial experience myself, so I can only "talk on paper". The relationship between technology and people is one of two-way empowerment, and it is very necessary to keep updating the way you run your business as technology develops. ChatGPT has strong semantic understanding and contextual reasoning capabilities; through human-computer interaction, the translation process can be made smoother and more efficient, and to some extent it can indeed greatly improve productivity. In some key links, however, human individuality and imagination must still be emphasized, and translators are still needed to refine, polish, and check the output.

I think that as long as we remain humble and prudent toward technology, human-machine collaboration can better cope with complexity and continuously improve translation quality and company efficiency.

One way to protect privacy is to deploy large models into private networks

Chai Zhongyu, senior engineer in the communications industry: If we ask ChatGPT about company business, the information leaves the company, so our company prohibits the use of ChatGPT. When these models are deployed in practice, how can privacy be protected?

Lin Yonghua: Regarding data privacy, there are usually two considerations, depending on how the large model is deployed. First, services like ChatGPT, including some domestic large models, are provided on the cloud: users upload their questions and related information, and the model answers. In principle, large-model operators need the user's authorization to access the uploaded data; otherwise it is illegal. So the question here is whether you trust these large-model operators.

Second, for industries that require confidentiality, including financial institutions, medical institutions, and so on, we can support privatized deployment of large models, deploying them inside the enterprise's internal network to avoid leaking business information.
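As a rough illustration of this privatized-deployment idea, the minimal sketch below loads an open-source model from a path on the company intranet and runs inference locally, so prompts never leave the internal network; the model path and the sample question are hypothetical placeholders, not a product the speaker endorsed.

```python
# Minimal sketch of privatized deployment: run an open-source large model
# entirely on in-house hardware so business data never leaves the intranet.
# The local model path below is a hypothetical intranet mirror.
from transformers import AutoModelForCausalLM, AutoTokenizer

LOCAL_MODEL_PATH = "/opt/models/open-llm-7b"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(LOCAL_MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(LOCAL_MODEL_PATH, device_map="auto")

def answer(question: str) -> str:
    """Generate an answer locally; nothing is uploaded to an external service."""
    inputs = tokenizer(question, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(answer("Summarize the attached internal meeting notes in three points."))
```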

Encrypting copyrighted works is still being explored and should ultimately be addressed at the technical level

Ye Peisong, a human resources worker: With foundation large models driving everything, how can we protect our own copyright and intellectual property?

Lin Yonghua: At present, the discussion of copyright issues mainly concerns the knowledge content put into the pre-training data, covering pictures, videos, and written works. How can we encrypt these copyrighted works during training so as to avoid disclosing them? The academic community is studying various methods, including homomorphic encryption, but with this type of technology computing costs rise considerably.

Therefore, there are also non-technical ways to alleviate this type of copyright problem. For example, when a large amount of publicly available copyrighted data is collected, a copyright owner can ask the data collector to delete their work by declaring that they do not want it to be collected. The Stack, a recently launched project that collects global code data for training large models, published a link through which developers can request removal if they find their code in the dataset but do not want it to remain part of it.
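To make the opt-out mechanism concrete, here is a small, hypothetical sketch of how a data collector might honor deletion requests before training; the record format and the opt-out identifiers are illustrative only and are not The Stack's actual interface.

```python
# Hedged illustration of honoring copyright opt-out requests before training.
# Both the corpus format and the opt-out identifiers are hypothetical.
opt_out_ids = {
    "github.com/owner-a/repo",   # developer asked for removal
    "isbn:978-0-000-00000-0",    # publisher asked for removal
}

def filter_corpus(records):
    """Yield only records whose source has not filed a deletion request."""
    for record in records:
        if record["source_id"] not in opt_out_ids:
            yield record

corpus = [
    {"source_id": "github.com/owner-a/repo", "text": "..."},
    {"source_id": "github.com/owner-b/repo", "text": "..."},
]
training_data = list(filter_corpus(corpus))  # owner-a's work is excluded
```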

Multimodal text-to-3D is technically just around the corner, and will need continuous iteration afterward

Weng Miaochun, marketing promoter at a 3D intelligent-animation technology company: Three or four years ago our company used AI technology to develop 3D intelligent animation, which took less time and cost. Since large models appeared, we have run into a bottleneck of high R&D costs. How can this cost contradiction be resolved through large models?

Lin Yonghua: Large models are constantly making new breakthroughs in multimodality, especially visual multimodality. The most mature applications so far are text-to-image models; beyond that there is text-to-3D, which automatically generates three-dimensional images from a passage of text, and further text-to-video, although automatically generated videos are still fairly short. So the technology is just around the corner: perhaps within the next year or two, creators will be able to enter a text description into a large model and automatically generate 3D animation.

Under the dominance of strong artificial intelligence, can people still develop into well-rounded human beings?

Guo Jiangyong, a doctoral student in the Department of Philosophy of East China Normal University: As laborers, human beings have gone from independent workers to links on the machine assembly line, and in the AI era may become mere digital buttons or symbols, which means the gradual dissolution of humans' status as subjects of labor. So I am particularly puzzled: in the era of intelligence, as workers' rich experience, skills, and styles are gradually weakened, will workers lose the rights or qualifications of labor subjects? At that point, can a person still become a well-rounded person?

Pan Bin: The various intelligent agents we see today, including human-machine collaboration, humanoid robots, and robotic arms, are still dominated and controlled by people, so at present people remain the only subjects of labor, and we are still far from the era of strong or super artificial intelligence.

However, if intelligent robots or strong AI do become new subjects of labor, we cannot rule out that they will develop the idea or motivation to covet and monopolize the earth's resources. With the support of intelligent technology, whether human beings can truly secure their qualification as labor subjects depends on how we understand and regulate the research, development, and application of artificial intelligence, and on whether AI technology can remain a genuine tool and concrete extension of human beings.

From an optimistic standpoint, it is possible for humans to become well-rounded and free human beings in the age of advanced intelligence.

Obtaining high-quality training data from the publishing industry while maintaining copyright requires regulation

Hu Yeliang, a customs engineer: You just mentioned that the quantity and quality of high-quality corpora for AI development are insufficient, yet copyright protection must still be implemented, and if we defer entirely to copyright protection it will hinder the development of productivity. My idea is that the state or industry associations could pay a certain amount of copyright fees, replacing the current monopoly on copyright held by individual companies and achieving copyright sharing. Is this feasible?

Lin Yonghua: On the one hand, society and government are coming to realize that copyright protection should not stall the development of an important technology that may affect the whole of society and every industry. On the other hand, the copyright we are talking about here concerns only the training data used to build larger models. It is not about taking a book and publishing it again; it is about letting machines, rather than people, read it.

Based on these two important premises, there are currently many positive discussions in the industry, hoping to promote new laws, regulations, or industry norms that would let us organize things in a more orderly way and have the opportunity to gather the copyrighted data now scattered in various places for large-model training or paid use.

Wise top-level design is also needed here, because it will break the boundaries between two different industries.

If machines can replace human creativity, it is in line with the theory of "natural selection of the universe"

Hou Zhiwei, a doctoral student in engineering at East China University of Science and Technology: The book The Nature of Technology says that every high-tech product is in essence a combination of technologies: beneath a product lie technologies, beneath those technologies lie still more technologies, and at the bottom are phenomena that human beings harness. Artificial intelligence can already recognize and understand images, and one day it may combine various technologies organically. By this logic, will machines inevitably replace humans, or at least human creativity?

A: I think that as technology develops, whether superintelligence will appear depends on what forces are driving it. Simply put, there are two forces. One is personal curiosity. The other is what Nvidia founder Jensen Huang said recently at the National Taiwan University graduation ceremony: "Whether you are chasing food or trying to avoid becoming food, you have to run, don't walk." That is, if you don't want to fall behind, you have to run, and run faster.

Finally, what will happen to humans? It may be the creation of new intelligent species, and I think that is a good thing. If humans are to leave the earth, leave this biological system, and enter another state, they will need new intelligent species that go beyond the ecosystem of living things and achieve new modes of evolution. There is currently a theory in science of "natural selection of the universe", which advocates universal Darwinism. The electronic ecosystem in which artificial intelligences live is different from the biological ecosystem in which humans live and offers a broader space for evolution, so I believe AI will surpass humans.

One way to reduce the hallucination rate of a single large model is to fuse it with external knowledge

Gong Shengnan, a master's student in Chinese philosophy at East China Normal University: In your keynote speech you mentioned that humans participate in evaluation. When I used GPT, I found that it lies. Can its objectivity be guaranteed technically?

Lin Yonghua: Large models generate content based on probability, so a certain hallucination rate (colloquially, "talking nonsense with a straight face") is hard to avoid. The feasible approach at present is to augment the large model with external knowledge: combining it with external, objectively grounded knowledge bases, databases, or search engines minimizes the hallucination rate and makes its answers genuinely well-founded.
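A minimal retrieval-augmentation sketch of this idea is shown below: the answer is grounded in an external knowledge store before generation. The toy keyword retriever, the sample knowledge entries, and the `llm_generate` callback are hypothetical placeholders standing in for a real vector index and a real large-model API.

```python
# Sketch of grounding a large model in an external knowledge base to curb
# hallucination. The knowledge entries and `llm_generate` are placeholders.
KNOWLEDGE_BASE = {
    "parcel sorting process": "Parcels are scanned, weighed, and routed by ...",
    "company leave policy": "Employees may take up to ... days of annual leave.",
}

def retrieve(query: str) -> str:
    """Naive keyword match; a production system would use a vector index."""
    words = set(query.lower().split())
    hits = [text for key, text in KNOWLEDGE_BASE.items()
            if words & set(key.split())]
    return "\n".join(hits) if hits else "No reference found."

def answer_with_grounding(query: str, llm_generate) -> str:
    """Ask the model to answer strictly from the retrieved reference text."""
    context = retrieve(query)
    prompt = ("Answer using ONLY the reference below. "
              "If it is not covered, reply 'unknown'.\n\n"
              f"Reference:\n{context}\n\nQuestion: {query}")
    return llm_generate(prompt)
```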

Regardless of how long AI practitioners have been in the industry, the key is to keep learning and to be good at integration

Each question was limited to 60 seconds, timed by a small alarm clock on stage that urged the audience to keep their questions concise; a total of 12 questions were taken in 36 minutes

Wen Wei Po reporter Li Nian: Now that AI has entered the era of large models, how do the industry's demand for and training of talent, and universities' discipline design, differ from before? (Asked off-site, for post-event exchange)

Lin Yonghua: I don't think the cultivation of AI talent differs that much. When deep learning became popular more than a decade ago, many people from outside AI switched careers and flooded in; now that large models have appeared, another group of people from outside the large-model field is transforming. Moreover, thanks to the vigorous development of open-source platforms, the learning threshold this round is lower. For example, there are all kinds of video tutorials on open-source large-model code on Bilibili, and everyone can get started quickly.

Therefore, I think that for AI talent, what remains unchanged is, first, maintaining a lasting ability to learn. Even AI practitioners on traditional tracks, even developers in their forties or fifties, can find good development opportunities as long as they keep learning and are good at integration.

Second, I believe that large models and small models will coexist, and that putting large models to work in industry is inseparable from many existing technologies. For example, after deep learning emerged, demand for people who use OpenCV for computer-vision processing actually grew, because deep learning cannot solve every problem, and these traditional image-processing techniques are needed for pre-processing and post-processing. The same is true for large models.
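As a hedged illustration of that division of labor, the short sketch below uses classical OpenCV routines for pre-processing before handing the image to a learned model; the image path and the downstream model are placeholders, not a specific pipeline described by the speaker.

```python
# Traditional OpenCV pre-processing feeding a learned model: the kind of
# existing technique that still matters alongside deep learning and large
# models. The image path and the downstream `model` are hypothetical.
import cv2
import numpy as np

def preprocess(path: str) -> np.ndarray:
    """Denoise, resize, and normalize an image before model inference."""
    img = cv2.imread(path)                      # load as BGR uint8
    img = cv2.GaussianBlur(img, (3, 3), 0)      # light denoising
    img = cv2.resize(img, (224, 224))           # fixed input size
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # most models expect RGB
    return img.astype(np.float32) / 255.0       # scale to [0, 1]

# features = model(preprocess("example.jpg"))  # post-processing would follow
```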

In terms of discipline design, we feel that more basic disciplines need to be integrated, and that more disciplines need to be combined with AI to produce new patterns of thinking. We believe that, like computer software technology, AI will become a universal tool in the near future.

Compiled by: Li Nian

【Highlights】

1. In the warm-up session of Wenhui Lecture Hall 163-1, four related short videos produced by Wen Wei Po were played: the issue-163 opener "Digital Power", anchored with an NFT by Li Nian of Wenhui Lecture Hall; "Wang Anyi and Yu Hua Talk about ChatGPT", produced by reporter Xu Jiao's "Xu Peach Afternoon Tea"; "AI Music Becomes New Life", planned by Shaoling of the literature and art department and produced by the converged-media visual team; and "Mo Yan Talks about ChatGPT", produced by Wenhui Literary and Art Review, which opened by quoting views Jiang Xiaoyuan published in "Literary Review Weekly".

2. Starting with admission at 13:10, audience volunteers Zhang Di, Gao Ping, Hu Yang, Ding Qizheng and others helped with sign-in and distributed the newly minted NFT digital badges (above); the day's prizes for outstanding questions were NFT digital badges and the new book "10,000 Years of China", to be published in August (above).

3. After an interval of exactly two years, the offline lectures of Wenhui Lecture Hall resumed in June 2023, beginning an "on-site + livestream" format.

4. The first session was a speech by Yang Guorong, Director of the Institute of Modern Chinese Thought and Culture of East China Normal University, "Science and Technology Development and Human Life", video production / Chai Jun

5. Lin Yonghua took an early flight to Shanghai to deliver the keynote that day; she praised Wenhui Lecture Hall's strong humanistic atmosphere and said the questions from scholars and the audience further expanded her thinking

6. The roundtable dialogue featured heated discussion of four major issues, including paradigm change, whether artificial intelligence will replace human intelligence, and when general artificial intelligence will arrive

7. The 50 NFT digital badges minted by the Shanghai TreeMap Blockchain Research Institute were all distributed that day; the characters "tiger" and "feather" on them come from the calligraphy of Wang Longdao, a 90-year-old teacher in the Department of Philosophy. Process PPT / design: Ping Yuanhai

8. Host Li Nian and guests (from left to right) He Liang, Ji Quanquan, Lin Yonghua, Fu Changzhen, Pan Bin, and Liu Liangjian.

9. Before the opening, Lin Yonghua had a lively discussion with Professor He Liang of the School of Computer Science; before and after the lecture there were constant cross-disciplinary exchanges.

10. The livestream on ECNU's WeChat official account attracted more than 7,000 viewers from across the country, and a 24-hour replay was made available afterwards. The inset shows Dong Sheng of the livestream team of ECNU's Publicity Department running a directing test the day before.

11. Outstanding questioners in the back row (from left to right) Li Xiangxiang, Ye Peisong, Ding Hongran, Guo Jiangyong, and Ping Yuanhai pose for a group photo with the guests

Links at the end of the article

Lin Yonghua: AI has entered the era of big models, how will the tide rise and fall in the new decade?|163-1 Lecture 1

Digital & Human Dialogue: How AI as a Tool or Friend Grows Kindness |163-1 Lecture 2

Authors: Lecture Hall audience members, with Lin Yonghua, Ji Quanquan, Pan Bin, He Liang, Fu Changzhen

Photos: shooting / Zhou Wenqiang; post-production / Hu Yang, Li Nian; live video playback / Wang Yuan, Chen Ziyang

Editor: Li Nian