
ChatGPT wins again: it tripled a stock price and became a must-have tool

Since its debut, ChatGPT has been in the limelight: the new AI chatbot from OpenAI can quickly generate articles, stories, lyrics, and even code from user prompts, answer all kinds of questions, and more.

At launch, it went viral thanks to the breadth and completeness of its answers, gaining millions of users almost overnight. It also helped OpenAI secure a new $10 billion investment from Microsoft, bringing OpenAI's latest valuation to $29 billion. For comparison, Google's outright acquisition of DeepMind cost only about $600 million.

A little over a month on, ChatGPT seems to have moved past the stage of being "teased" by users and begun to show real potential. Similar AI tools are starting to see genuine industry use.

News sites: a warm welcome

One of the most discussed stories in Silicon Valley over the past two days is that new-media site BuzzFeed, riding the halo of ChatGPT and OpenAI, staged a dramatic comeback: its stock price roughly tripled.


The reason was simply that BuzzFeed announced it would use OpenAI's AI API (not ChatGPT itself, as some outlets misreported) to help create some of its content.

Jonah Peretti, CEO of BuzzFeed, said in a memo: "In 2023, you'll see AI-inspired content move from an R&D stage to part of our core business, enhancing the quiz experience, informing our brainstorming, and personalizing our content for our audience."

Unlike conventional news sites, BuzzFeed, which targets young readers, is best known for its online quizzes, such as "Which Disney princess are you?" and "Which Avenger would make the best boyfriend?"


Its partnership with OpenAI will mainly apply to producing this kind of "fast food" content. Specifically, BuzzFeed will use OpenAI's AI technology to help generate quiz questions on the site and to help editors brainstorm better ideas.

"To be clear, we see the breakthroughs in AI ushering in a new era of creativity that will allow humans to harness creativity in new ways, with endless opportunities and applications," Peretti said. "In publishing, AI can benefit both content creators and audiences, inspiring new ideas and inviting audience members to co-create personalized content."

Whether or not readers actually care for AI-generated quizzes, news of the partnership alone was enough to bring BuzzFeed back to life.

Since going public via SPAC in December 2021, BuzzFeed's stock had fallen more than 90%; its third-quarter net loss widened to $27 million from $3.6 million a year earlier, and it even had to lay off about 12% of its workforce to control costs. But as soon as news of the OpenAI partnership broke, its share price jumped more than 300%.

BuzzFeed's existing cooperation with Meta may bring this AI-generated content to an even wider audience. Not long ago, Meta paid BuzzFeed millions of dollars to generate content for Meta's platforms and to train creators on them. That means that in the future, you may well find plenty of AI-generated, lighthearted quizzes on Facebook and Instagram.

However, a spokesperson said BuzzFeed does not currently use AI to help write news stories. That decision may have something to do with another outlet's recent, very public stumble when it tried using AI to create content.

CNET moved earlier than most in applying AI to news writing, and it was also among the first to taste the consequences.

According to CNET, as part of a "test" project by the CNET Money team, the newsroom began using an internally developed AI engine in November 2022, generating 77 news stories, or about 1% of the site's total articles. The stories, bylined "CNET Money Staff," were meant to help editors build "a basic set of explainers" around financial-services topics. Articles written with the AI tools included "Does a Home Equity Loan Affect Private Mortgage Insurance?" and "How to Close a Bank Account."

"Editors first generate an outline for the story, then expand, add to, and edit the AI draft before publishing," CNET editor-in-chief Connie Guglielmo wrote.

But soon, CNET Money's editorial team discovered that one of the articles was inaccurate, so they conducted a full audit. It turned out that a small number of the AI-generated articles required substantial corrections, while others had minor problems such as incomplete company names, vague language, or incorrect figures.

For example, at the end of the article "What Is Compound Interest?", the AI gave some wildly inaccurate personal finance advice. An earlier version claimed that putting $10,000 into a savings account earning 3% interest, compounding annually, would earn the saver $10,300 after one year. In fact, as anyone who has studied elementary math knows, the saver would earn only $300; $10,300 is the total balance, not the interest.
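The arithmetic is easy to check. A minimal sketch (the function name and parameters here are illustrative, not from CNET's article) showing the difference between the final balance and the interest actually earned:

```python
def compound_balance(principal, rate, years, periods_per_year=1):
    """Future balance after compounding `rate` for `years` years."""
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

principal = 10_000.00
balance = compound_balance(principal, 0.03, 1)  # 3%, compounded annually
interest = balance - principal                  # earnings, not the balance

print(f"Balance after 1 year: ${balance:,.2f}")   # $10,300.00
print(f"Interest earned:      ${interest:,.2f}")  # $300.00
```

The AI's error was conflating the two outputs: it reported the balance ($10,300) as if it were the interest earned ($300).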

Guglielmo did not say how many of the 77 published stories needed corrections, nor how many issues were "substantive" versus "minor," beyond appending correction notices to the affected articles.

However, because more than half of the stories contained factual errors or improperly sourced passages, CNET has for now stopped using the AI engine.

Automated news writing is nothing new; the Associated Press began doing it nearly a decade ago. But with the rise of ChatGPT, the issue has drawn fresh attention: when AI is applied to content production at scale, how much plausible-sounding but wrong content slips through?

Despite these issues, Guglielmo has left the door open to resuming the use of AI tools, saying CNET will start using AI writing tools again once the problems are resolved.

Education and academia: challenges

While journalism has embraced them boldly, ChatGPT-like AI tools face more skepticism in other writing scenarios, above all in the place where they are most popular and most questioned: schools.

To test ChatGPT's ability to answer exam questions, professors at the University of Minnesota Law School recently had it take the exams for four courses and graded the results blind. After completing 95 multiple-choice and 12 essay questions, ChatGPT averaged a C+, a low but passing grade in all four courses, "flying low over the passing line."

On an exam in Wharton's MBA program, ChatGPT performed better, scoring a B to B-. Wharton professor Christian Terwiesch said ChatGPT did a "fantastic job" answering basic operations-management and process-analysis questions, but handled more advanced prompts poorly and made "surprising mistakes" in basic math, some at an elementary-school level.

What does this mean? If left unchecked, ChatGPT could become the most powerful cheating tool in history, helping students write assignments and even complete exams.

So, as these test results came out, more and more schools and teachers voiced concerns about ChatGPT being used to cheat. Public schools in New York City and Seattle, for example, have banned students and teachers from using ChatGPT on district networks and devices.

Professor Terwiesch agreed that restrictions should be placed on students during exams. "A ban is necessary," he said. "After all, when you award a medical degree, you want the recipient to actually have medical knowledge, not just know how to use a chatbot. The same applies to other professional certifications, including law and business."

But Terwiesch believes the technology will eventually find a place in the classroom. "If all we end up with is the same education system as before, then we have wasted the great opportunity ChatGPT offers," he said.

In academia, ChatGPT has come under even greater scrutiny.

Holden Thorp, editor-in-chief of Science, announced an updated editorial policy banning the use of text from ChatGPT and barring ChatGPT from being listed as an author.

Thorp said scientific journals require authors to sign a statement taking responsibility for their articles. "Since ChatGPT can't do that, it can't be an author." He believes using ChatGPT is problematic even at the paper-preparation stage: "ChatGPT makes a lot of mistakes, and those can end up in the literature," he said.

Science is not alone; other publishers have made similar moves. Springer Nature, which publishes nearly 3,000 journals, also issued a statement saying ChatGPT cannot be listed as an author.

Perhaps the harshest is Stack Overflow, the online programming Q&A platform. Shortly after ChatGPT launched, it announced a blanket ban on ChatGPT-generated and other non-human-generated answers, and further stipulated that users found violating the rule would be banned.

