
ChatGPT is getting into trouble everywhere

Source | Zero One Finance

Author | Chenglin Pua

ChatGPT is widely loved, but it is also causing trouble everywhere: inaccurate or even misleading answers, challenges to data security, disruption to education, and more.

It has ushered in a new era by overturning much of the established order we operated under, and the process of rebuilding the rules is inevitably a process of constant trouble.

Misleading knowledge

Economist Paul Krugman says the advent of ChatGPT will affect knowledge workers. The Q&A site Stack Overflow, for example, banned ChatGPT-generated answers in December 2022 on the grounds that they were factually unreliable. Nature, a top scientific journal, described ChatGPT as "fluent but not factual": smooth, but not necessarily correct.

ChatGPT generates answers from a model trained on a huge dataset, but no one can guarantee that this data is 100% accurate, objective, and unbiased.

For knowledge workers, accuracy is paramount. ChatGPT can spout nonsense with a straight face, which makes it easy to believe its answers are correct. In reality, an answer may be 80% right and 20% wrong, and it is precisely this mixture of right and wrong that is hardest for people to untangle. ChatGPT can thus mislead many people into believing conclusions that are not entirely correct.

ChatGPT's ability to generate conversational text raises concerns about its potential to create fake news or other misleading content. This can have serious consequences, from reputational damage to incitement to violence.

As a language model, ChatGPT can generate text that resembles human conversation, but it does not truly understand the context of the text it produces. This means ChatGPT has some probability of generating offensive or defamatory content.

Online, many people have received racist answers from ChatGPT. For example, when ChatGPT was asked to write code assessing whether someone would be a good scientist based on their gender and ethnicity, it suggested only white men.

According to Kanta Dihal, an AI researcher at the University of Cambridge, ChatGPT exhibits this racism because it is trained on hundreds of billions of words taken from publicly available sources, including websites and social media. These texts reflect the biases of their human authors, and ChatGPT learned to reproduce them; the bot itself has no underlying beliefs.

Kanta Dihal said that while biased content could in theory be filtered out, doing so would be expensive and difficult. Pre-filtering all of this data to ensure it contains no explicitly racist content is a daunting task that would make training the model far more expensive. Moreover, such biases can take subtle forms that are hard to remove from the data an AI is trained on.
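To make Dihal's point concrete, here is a minimal sketch of the naive approach, a keyword blocklist applied to a training corpus. The blocklist terms and documents are hypothetical placeholders, not anything OpenAI actually uses; the point is that a document expressing bias subtly, without any blocked word, passes straight through.

```python
# Naive corpus pre-filtering with a keyword blocklist (illustrative only).
BLOCKLIST = {"slur1", "slur2"}  # hypothetical placeholder terms


def is_clean(document: str) -> bool:
    """Return True if the document contains no blocklisted word."""
    words = {w.strip(".,!?").lower() for w in document.split()}
    return BLOCKLIST.isdisjoint(words)


def filter_corpus(corpus: list[str]) -> list[str]:
    """Keep only documents that pass the blocklist check."""
    return [doc for doc in corpus if is_clean(doc)]


corpus = [
    "A neutral sentence about science.",
    "An offensive sentence containing slur1.",
    "Subtly biased text with no blocked word at all.",  # escapes the filter
]
cleaned = filter_corpus(corpus)
```

Even this toy filter must touch every document, which hints at the cost at the scale of hundreds of billions of words; and the third document, which a blocklist cannot flag, illustrates why subtle bias survives.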

There have been warnings about AI racism for years, but with little result. Google, one of the biggest tech companies, tried to tackle the problem but made little progress; in 2020 it fired Timnit Gebru, a researcher who specialized in helping companies address racism in AI.

Legal challenges

The law often lags behind new technologies and takes time to adjust. The rapid rise of the AIGC industry is bound to challenge the laws of many countries.

ChatGPT relies on massive amounts of data, including a large volume of text entered by Internet users themselves. When users enter personal data or trade secrets, ChatGPT may absorb them into its corpus, creating a risk of leakage. While ChatGPT's makers promise to remove all personally identifiable information, they do not say how. Where the source of information cannot be fact-checked, such information remains at risk of disclosure.
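As a rough illustration of why "removing personally identifiable information" is harder than it sounds, here is a minimal regex-based redactor. The patterns are my own assumptions for the sketch, not OpenAI's method; it catches obvious emails and phone numbers but misses names, addresses, and indirect identifiers.

```python
import re

# Hypothetical patterns -- a naive sketch, not how any real system works.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")


def redact_pii(text: str) -> str:
    """Replace obvious emails and phone numbers with placeholder tags."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text


sample = "Contact Jane Doe at jane.doe@example.com or 555-123-4567."
redacted = redact_pii(sample)
# The name "Jane Doe" survives: regexes alone cannot catch every identifier.
```

The email and phone number are tagged, but the person's name passes through untouched, which is exactly the gap that makes blanket promises about PII removal hard to verify.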

ChatGPT is able to surface personal data from its training datasets to its users. This means ChatGPT may violate the data protection laws of most countries around the world.

ChatGPT is a text-based AIGC (AI-generated content) system; there are also image and video AIGC systems. AIGC trains on existing datasets and generates new content from that data. Some Internet users experimented with randomly generating portraits and found that a certain percentage of them resembled celebrities.

Because AIGC systems are trained on datasets that include celebrity portraits, the portraits they generate can resemble those celebrities, raising the question of whether the generated images infringe the celebrities' portrait rights. This remains a legal gray area.

Recently, Netflix used AIGC to generate animation. As AIGC applications spread through video, images, and other fields, the probability of conflicts over image rights will only grow.

ChatGPT is trained on large amounts of text data, including books, articles, and other written material. If this training data contains copyrighted works, ChatGPT's output may infringe those copyrights. Likewise, using open-source code for commercial purposes without a license, or failing to comply with a license's requirements, may result in infringement.

Directly copying copyrighted texts, videos, code, and other works into one's own databases without the rights holder's authorization, and then modifying and stitching them together, is very likely to infringe the copyrights of others.

In addition, whether content generated with ChatGPT can itself be copyrighted, and by whom, is a question the law has yet to settle.

Impact on education

In January 2023, Antony Aumann, a philosophy professor at Northern Michigan University in the United States, came across an exceptionally good paper while grading for a world religions course he taught; on closer inspection, he found it had been generated by ChatGPT. With concise paragraphs, apt examples, and rigorous arguments, the paper explored the moral implications of burqa bans.

Shocked by this, Professor Aumann took countermeasures: all students must now write the first drafts of their papers under supervision, in a browser with restricted Internet access, and must explain the reason for every change in subsequent drafts. Aumann said that while he is wary of ChatGPT's impact, he has considered incorporating it into the curriculum, for example by having students assess ChatGPT's responses.

Across the United States, many university professors are making large-scale adjustments to their classrooms to cope with ChatGPT's impact on teaching. Many are redesigning their courses to use oral exams, group work, and handwritten essays as assessment methods rather than simply having students write essays on a computer.

In schools today, teachers must carefully check whether students are using ChatGPT to do their homework. The public school systems in New York and Seattle have banned ChatGPT on school networks and devices, but as the saying goes, "as virtue rises one foot, vice rises ten": students determined to use ChatGPT will still find a way.

Over time, more and more people are using ChatGPT. Today's students already know how to find information online; if they go on to use ChatGPT to complete their homework, will their thinking and practical skills decline? Apparently they will.
