ChatGPT launched at the end of 2022, but 2023 was undoubtedly the year generative AI took over the public consciousness.
Not only has ChatGPT hit new highs (and new lows), but the year brought world-shaking upheavals of its own, from impressive rival products to shocking scandals and plenty in between. As the year draws to a close, we take a look back at the 9 most important AI stories of the past 12 months. It's been a strange year for AI; here's everything that has kept it on our minds since the start of 2023.
01
ChatGPT's competitors are flooding the market
A list like this wouldn't exist without the unprecedented rise of ChatGPT. OpenAI's free chatbot took the world by storm and grew at a staggering pace, winning over everyone from tech leaders to ordinary users.
The service originally launched in November 2022, but it only really took off in the first few months of 2023. ChatGPT's unexpected success sent its competitors scrambling to respond. None seemed more rattled than Google, which worried that AI could make its lucrative search business obsolete. In February, a few months after ChatGPT hit the market, Google announced its own AI chatbot, Bard, to fight back. Just a day later, Microsoft revealed its own effort, Bing Chat.
Of the two, Bing Chat fared worse. The chatbot was prone to so-called "hallucinations": it lied, invented facts, and was generally unreliable. It told one journalist that it had spied on its developers, fallen in love with one of them, and then murdered him. When we tested it, it claimed to be perfect, said it wanted to be human, and picked arguments with us. In other words, it was unhinged.
Google Bard performed slightly better: it was far less controversial and entirely avoided Bing Chat's heavy use of emojis, but it still had a tendency to be unreliable. Google and Microsoft's rapid responses to ChatGPT showed just how rushed their efforts were, and how dangerous these chatbots' propensity for misinformation can be.
02
GPT-4 caused a stir
When ChatGPT launched, it was powered by a large language model (LLM) called GPT-3.5. That model was very capable, but it had clear limitations, such as accepting only text as input. A lot of that changed with the arrival of GPT-4 in March.
OpenAI, the developer of ChatGPT, says the new LLM performs better in three key areas: creativity, visual input, and longer context. For example, GPT-4 can use images as input and can also collaborate with users on creative projects such as music, scripts, and literature.
For now, GPT-4 is locked behind OpenAI's ChatGPT Plus paywall, which costs $20 per month. But even with that limited reach, it has had a huge impact on AI. When Google announced its Gemini LLM in December, it claimed Gemini could beat GPT-4 in most tests, yet nearly a year after GPT-4's launch it did so by only a few percentage points, a sign of just how far ahead OpenAI's models are.
03
AI-generated images are starting to deceive the public
Nothing illustrated AI's power to deceive and mislead better than an image that spread in early 2023: Pope Francis wearing a large white puffer jacket.
The strikingly realistic image was created by a man named Pablo Xavier using the AI image generator Midjourney, and it fooled a huge audience on social media, including celebrities like Chrissy Teigen. It highlighted the persuasive power of AI image generators, and their ability to trick people into believing things that aren't real.
In fact, just a week before the Pope photo appeared, another set of images made the news for a similar reason: they depicted former President Donald Trump being arrested, fighting with police, and serving time in prison. When a powerful image generator meets a sensitive subject, whether political, medical, military, or otherwise, the stakes can be very high. As AI-generated images become ever more realistic, the Pope Francis picture stands as a relatively light-hearted reminder of how quickly we need to improve our media literacy.
AI-generated images only became more common as the year went on, sometimes appearing in Google search results above real photographs.
04
An open letter rings alarm bells
AI is evolving at an alarming speed and has already brought worrying consequences, so much so that many people have serious concerns about where it could lead. In March 2023, some of the world's most prominent tech leaders voiced those concerns in an open letter.
The letter called on all AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4" in order to give society time to assess the risks. Otherwise, it warned, unchecked AI development could pose "profound risks to society and humanity", including the potential destruction of jobs, threats to human life, and a "loss of control of our civilization".
The letter was signed by Apple co-founder Steve Wozniak, Tesla boss Elon Musk, and numerous researchers and academics. Whether any AI companies have really taken notice is another matter when the potential short-term profits are so large; just look at Google's Gemini, which its creators say outperforms GPT-4. Hopefully the letter doesn't prove prophetic.
05
ChatGPT connects to the internet
When ChatGPT first launched, it relied entirely on its training data to answer people's prompts. The problem was that this data couldn't be kept up to date, and the chatbot wasn't much use if you wanted it to book a restaurant table or find a link to buy a product.
That changed when OpenAI announced a series of plugins that connect ChatGPT to the internet. Suddenly, a whole new way of getting things done with AI opened up, greatly expanding the chatbot's capabilities compared with what came before. It was a huge upgrade in both functionality and practicality.
But it wasn't until May, when the Bing browsing plugin was announced at Microsoft's Build developer conference, that ChatGPT's web-browsing abilities were properly expanded. Even then, the plugin was slow to roll out and wasn't available to all ChatGPT Plus users until September.
06
Windows got a new Copilot
Microsoft launched Bing Chat earlier in the year, but the company didn't stop there. Next came Copilot, a broader AI assistant that has been woven into Microsoft's products, debuting first as Microsoft 365 Copilot.
Bing Chat is a simple chatbot, while Copilot is more of a digital assistant. It's embedded in a range of Microsoft applications, such as Word and Teams, as well as Windows 11 itself. It can create images, summarize meetings, find information and send it to your other devices, and much more. The idea is that it automates lengthy tasks for you, saving you time and effort. In fact, Bing Chat was even incorporated into Copilot in November.
By integrating Copilot so tightly into Windows 11, Microsoft is not only demonstrating its all-in approach to AI but also throwing down a challenge to Apple and its rival macOS operating system. So far, Microsoft has the upper hand, especially as we look ahead to the rumored arrival of Windows 12 in 2024.
07
Academia is grappling with AI
Given how rapidly artificial intelligence has developed, it's understandable that many people don't fully grasp how it works. But that lack of understanding had very real consequences for students at Texas A&M University's College of Business, where a professor failed students for allegedly using ChatGPT to write their essays, even though there was no evidence they had.
The trouble began when Dr. Jared Mumm copied and pasted students' papers into ChatGPT and asked the chatbot whether it could have generated the text, to which ChatGPT replied "yes". The problem? ChatGPT can't actually detect AI-written text this way.
To prove the point, a Reddit user pasted Dr. Mumm's own letter accusing his students of cheating into ChatGPT and asked whether the chatbot might have written it. The answer came back "yes, I wrote what you shared", and ChatGPT was lying. If you want a perfect example of AI hallucination, and of human confusion about what AI can actually do, this is it.
08
Hollywood and artificial intelligence
AI is clearly incredibly capable, but that very capability is why many people around the world are worried about it. One report, for example, suggested that as many as 300 million jobs could be put at risk by AI if it goes unchecked. That fear prompted Hollywood screenwriters to worry that studio bosses would eventually replace them with artificial intelligence.
The Writers Guild of America (WGA) went on strike over the issue for nearly five months, beginning in May and ending in September, and eventually won significant concessions from the studios. These include stipulations that AI cannot be used to write or rewrite material and that writers' work cannot be used to train AI. It was a major win for the affected workers, but given how quickly AI continues to develop, it may not be the last conflict between AI and the people whose work it touches.
09
The Sam Altman saga
Ever since ChatGPT captured the world's attention, OpenAI CEO Sam Altman has been one of the best-known figures in the AI industry. Then, one day in November, everything fell apart: he was unceremoniously fired from OpenAI, much to the surprise of both Altman and the wider world.
OpenAI's board accused him of not being "consistently candid" in his dealings with the company. The backlash was swift and fierce, with most of the company's employees threatening to resign if Altman was not reinstated. OpenAI investor Microsoft offered jobs to Altman and to any OpenAI staff who wanted to follow him, and for a while the company seemed to be on the verge of collapse.
Then, almost as quickly as he had been ousted, Altman was reinstated, and board members expressed regret over the whole incident. As the internet watched events unfold in real time with bated breath, one curious question hung over the entire farce: why? Had Altman really stumbled upon an AI development that raised serious ethical questions? Was the Q* project on the verge of achieving AGI? Was it a Game of Thrones-style power struggle, or was Altman simply a bad boss?
We may never know the whole truth. But no moment in 2023 better captured the hysteria, obsession, and conspiracy theorizing that AI inspired this year.