Open source takes another reputational hit, while Google and OpenAI race to be the foundation-model "model worker"

Source | InfoQ

Author | Hua Wei

"Ten years ago, none of the world's best AI systems could classify objects in images at a human level. Artificial intelligence is difficult to understand language, let alone solve the field of mathematics. Today, AI systems have broadly outperformed humans on standard benchmarks. ”

This year, Stanford HAI's AI Index report arrived on schedule. According to Ray Perrault, the AI field advanced rapidly in 2023: tech companies raced to build products, and advanced tools such as GPT-4, Gemini, and Claude 3, with their impressive multimodal capabilities, are increasingly being used by the public. Yet current AI technology still has significant shortcomings: it cannot reliably handle facts, perform complex reasoning, or explain its conclusions.

In the 393-page 2024 AI Index Report, Stanford HAI not only covers broad fundamental trends, such as technical progress in AI, public perception of the technology, and the geopolitical dynamics surrounding its development, but also analyzes more original data than ever before.

The following 15 charts from the report capture the state of the AI field in 2023 and its trajectory into 2024.

1. Generative AI investment is surging

While private investment in AI as a whole fell last year, and overall global investment in AI declined for the second year in a row, private investment in generative AI surged nearly eightfold from 2022 to $25.2 billion. Most of that generative AI investment took place in the United States.

Nestor Maslej, editor-in-chief of the report, said: "The capital landscape of the past year is representative of the reaction to generative AI, in policy and public opinion as much as in industry investment."

2. Google dominates the foundation model race

In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. Among individual organizations, Google released the most foundation models in 2023.

Tech companies release foundation models both to push the state of the art forward and to give developers a base on which to build products and services. Since 2019, Google has led in foundation model releases, followed by OpenAI.

3. Closed models outperform open-source models

One of the hottest debates in AI right now is whether foundation models should be open-source or closed: some argue that open models are dangerous, while others insist it is open models that drive innovation. The report does not take sides, instead examining release trends and benchmark performance for each.

The number of new large language models released worldwide in 2023 doubled over the previous year. Of the 149 foundation models released, 98 were open-source, 23 provided partial access through an API, and 28 were closed. Although two-thirds were open-source, the highest-performing models came from industry players with closed systems: on many commonly used benchmarks, closed models outperform open-source ones.

4. Foundation models have become extremely expensive

How much does it cost to train a large model? According to the report, the cost of training AI models has risen dramatically over time, and training an advanced model now costs an unprecedented amount. OpenAI's GPT-4 and Google's Gemini Ultra are estimated to have cost $78 million and $191 million to train, respectively.

Interestingly, Google's Transformer model, released in 2017, introduced the architecture that underpins almost all of today's large language models, and it cost only about $930 to train, roughly one two-hundred-thousandth of Gemini Ultra's training bill.

5. Training releases a massive carbon footprint

The environmental impact of training an AI model is far from negligible. And while the emissions from a single inference query may be relatively low, the cumulative impact can exceed that of training when a model is queried thousands or even millions of times a day.

Moreover, because of factors such as model size, data center energy efficiency, and the carbon intensity of the power grid, emissions vary widely from model to model. Meta's Llama 2 70B, for example, released about 291.2 tonnes of carbon during training: almost 291 times the emissions of one passenger on a round-trip flight between New York and San Francisco, and about 16 times the average American's annual emissions. Even so, Llama 2's footprint is smaller than the 502 tonnes released during the training of OpenAI's GPT-3.
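
To make these ratios concrete, here is a minimal sketch that reproduces the comparisons above. The model figures are the reported training emissions; the flight and per-capita baselines (roughly 1 tonne of CO2 per round-trip passenger and 18 tonnes per American per year) are illustrative assumptions, not numbers taken from the report.

```python
# Sanity check of the carbon comparisons above.
# Model figures are the reported training emissions; the flight and
# per-capita baselines are rough illustrative assumptions.

llama2_70b_tonnes = 291.2      # reported training emissions, Llama 2 70B
gpt3_tonnes = 502.0            # reported training emissions, GPT-3

flight_nyc_sf_tonnes = 1.0     # assumed: tonnes CO2 per passenger, round trip
us_per_capita_tonnes = 18.0    # assumed: tonnes CO2 per American per year

print(f"Llama 2 70B vs. one round-trip flight: {llama2_70b_tonnes / flight_nyc_sf_tonnes:.0f}x")
print(f"Llama 2 70B vs. an average American's year: {llama2_70b_tonnes / us_per_capita_tonnes:.0f}x")
print(f"GPT-3 vs. Llama 2 70B: {gpt3_tonnes / llama2_70b_tonnes:.2f}x")
```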

6. The United States leads in foundation models

In 2023, the majority of the world's foundation models came from the United States (109), followed by China (20) and the United Kingdom. Since 2019, the U.S. has led in both the number of foundation models released and the number of AI systems regarded as significant technical advances. The report also notes that China leads in AI patents granted and in industrial robot installations.

7. AI PhDs are increasingly concentrated in industry

Where do new AI PhDs go after graduation? According to the report, more and more AI PhD graduates are heading into industry. In 2011, the shares entering industry (40.9%) and academia (41.6%) were roughly equal; by 2022, the share joining industry had grown to 70.7%. Over the past five years, the share of AI PhD graduates taking government jobs has stayed low, holding steady at around 0.7%.

8. Diversity among CS students is increasing

Mirroring the trend in higher-education computer science, the racial and ethnic diversity of AP Computer Science test takers is increasing. While white students remain the largest group, the numbers of Asian, Hispanic/Latino/Latina, and Black/African American students taking AP CS exams have grown over time.

9. AI comes up more often on earnings calls

Over the past year, mentions of AI on Fortune 500 earnings calls rose sharply. In 2023, AI was mentioned on 394 earnings calls (nearly 80% of all Fortune 500 companies), up from 266 in 2022. Since 2018, AI mentions on Fortune 500 earnings calls have nearly doubled.

Generative AI was the most frequently cited theme, coming up on 19.7% of all earnings calls, followed by AI investment, the expansion of AI capabilities, and AI growth initiatives (15.2%), and finally company- or brand-specific AI (7.6%).

10. Costs fall and revenues increase

According to the report, AI is genuinely helping businesses' bottom lines: 42% of respondents said they had seen cost reductions, and 59% reported revenue increases, reflecting gains in worker productivity.

In addition, studies across several fields show that AI lets workers complete tasks faster and produce higher-quality work, and that it helps low-skilled workers more than high-skilled ones. Other studies caution, however, that using AI without proper oversight can actually degrade performance.

11. Businesses perceive risk

The report surveyed 1,000 companies worldwide with at least $500 million in revenue to understand how businesses view responsible AI. Privacy and data governance were seen as the biggest risks globally, while fairness (often discussed in terms of algorithmic bias) has yet to gain recognition at most companies. Businesses are acting on the risks they do perceive: most organizations in every region have implemented at least one responsible AI measure.

12. AI still hasn't beaten humans at everything

In recent years, AI systems have outperformed humans on a range of tasks, including benchmarks for image classification, visual reasoning, and English understanding. They still lag behind on more complex tasks, however, such as competition-level mathematics, visual commonsense reasoning, and planning.

13. Lack of standardized assessment of AI

The report's latest research finds a serious lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, test their models primarily against different responsible AI benchmarks, a practice that makes it hard to systematically compare the risks and limitations of top AI models.

14. Laws both promote and restrict AI

Between 2016 and 2023, 33 countries passed at least one AI-related law, with most of the activity taking place in the United States and Europe. Over that period, 148 AI-related bills were passed in total. The report sorts them into expansive laws, which aim to enhance a country's AI capabilities, and restrictive laws, which constrain AI's application and use. While many bills promote AI development, restrictive legislation is the global trend.

15. The public is more anxious about AI

In an Ipsos survey, 52% of respondents in 2023 said they were nervous about AI products and services, up 13 percentage points from 2022, and two-thirds now expect AI to profoundly change their daily lives in the coming years. The report also notes significant differences across demographic groups, with younger people more inclined to be optimistic about how AI will change their lives.

Interestingly, much of the pessimism about AI comes from developed Western countries, while respondents in places such as Indonesia and Thailand said they expect AI's benefits to outweigh its harms.

Original article: "15 charts from Stanford reveal the latest AI trends: open source takes a reputational hit, while Google and OpenAI race to be the foundation-model 'model worker'" (InfoQ)
