
CBN Weekly | Microsoft's Office suite launches AI features at up to $87 per month; Baidu launches a paid version of Wenxin Yiyan at 59.9 yuan per month; Alibaba Cloud releases the second-generation Tongyi model...

Author | 第一财经 YiMagazine

Written by | Neocortex Group

Edited by | Wu Yangyang

As the end of the year approaches, the countdown to the "GPT-3.5 moment" promised by many Chinese companies has begun: almost every Chinese company developing its own large language models (LLMs) has, to one degree or another, promised to catch up with GPT-3.5 by the end of this year.

With the deadline approaching, many companies are rushing to deliver. This week, both Baichuan Intelligence and Alibaba Cloud updated their large models. Baichuan widened its model's context window, which means users can feed in more content at once for the model to interpret (roughly 300,000 Chinese characters, for example) and the model can remember more, so it is less likely to forget what was said earlier in a conversation. Alibaba Cloud raised its model's parameter count from the tens-of-billions level to 100 billion; according to the scaling law of large models, as a model grows larger, its learning ability and level of intelligence improve.

As the first domestic tech company to release a ChatGPT-like product, Baidu is also working to consolidate its leading position (in China). After releasing "Wenxin 4.0" last week, the latest model it claims is benchmarked against GPT-4, Baidu this week loaded it into a paid version of Wenxin Yiyan priced at 59.9 yuan per month. That is only about a quarter of what Microsoft and Google charge for comparable features ($30), but it is still not a low price for the domestic market. Apart from paying entertainment platforms such as iQiyi and Tencent Video when popular films and series are released, Chinese users have rarely paid for productivity tools, including Microsoft's Office suite.

This week, the keyword for generative AI overseas was also cost. Microsoft has finally brought the long-promised AI features of its office suite online, though it is unclear how many paying users they will attract. With the extra $30 for AI features, the total cost of the Microsoft Office suite can reach $87 per person per month. That figure is not low for either individuals or businesses, and Microsoft has not yet offered enough evidence that the spend is worth it for businesses.

Microsoft's pricing is based on a survey of users' willingness to pay, but it is ultimately a matter of cost. Last month, GitHub Copilot, the Microsoft-owned coding tool that also carries AI features, was reported to be losing $20 to $80 per user per month. The cost of generative AI has become an industry-wide challenge. When Meta's large language model Llama 2 was released and open-sourced, it was expected to disrupt the market for closed-source models, but this month a startup called Cypher complained that the computing power needed to run Llama 2 costs hundreds of times more than using GPT-3.5 Turbo. If the computing-power problem is not solved, the open-source dream will be shattered. That does not mean closed-source models are worry-free, however.

OpenAI will hold a launch event on November 6 (2 a.m. Beijing time on November 7), and one of the topics will be how to reduce the cost of its models. Neocortex will track and report on it, so stay tuned.

The following was produced by the Neocortex team:

Key Points

Financing & Commercial

Anthropic receives $2 billion investment from Google;

Baidu launched a paid version of Wenxin Yiyan, 59.9 yuan per month;

Microsoft Office Suite launches AI features at a maximum of $87 per month;

DingTalk opens public beta for its generative AI features;

Meta's open-source large language model Llama2 also has a cost, and it's not cheap.

Models

Baichuan Intelligence's new model supports a 192k context window;

Alibaba Cloud released the second-generation Tongyi model.

Financing & Commercial

Anthropic receives $2 billion investment from Google

On October 28, Google said it had agreed to invest up to $2 billion in AI startup Anthropic. After this round of financing, Anthropic's valuation could reach as much as $30 billion, and its per-share price would be higher than OpenAI's.

Anthropic raised $4 billion from Amazon just a month ago

In September, Amazon reached an investment agreement with Anthropic, saying it would invest up to $4 billion, with $1.25 billion as an initial investment for a minority stake. As part of the deal, Anthropic will use Amazon Web Services (AWS) as its primary cloud provider.

Founded in January 2021, Anthropic is the developer of the chatbot Claude 2; its founder, Dario Amodei, was previously OpenAI's vice president of research. Anthropic is currently the second-most valuable company on the large language model track, after OpenAI.

Why does Anthropic need so much money?

Together with Google's $2 billion, Anthropic has raised $6 billion since September, which will be used to compete with OpenAI.

According to an internal document from April, Anthropic plans to raise no less than $5 billion to compete directly with OpenAI, including spending $1 billion by the end of 2024 to build its next-generation model, Claude-Next, which it says will be 10 times more powerful than today's most powerful models. In July, Anthropic said it had spent at least two months developing its latest chatbot, with 30 to 35 people working directly on the AI model and a total of about 150 people supporting the effort.

Google has previously invested in Anthropic

In fact, half a year ago Google had already invested $300 million in Anthropic and acquired a roughly 10% stake in the company. At that time, Anthropic was also invited, alongside OpenAI, Google, Microsoft, and others, to take part in the White House discussion on developing "responsible AI".

In addition to Google, SK Telecom Co. Ltd. invested $100 million in Anthropic in August, and Google, Salesforce Ventures and Zoom Ventures participated in a $450 million funding round in May.

Reference Links:

https://techcrunch.com/2023/10/27/ais-proxy-war-heats-up-as-google-reportedly-backs-anthropic-with-2b/

Baidu launched a paid version of Wenxin Yiyan, 59.9 yuan per month

On November 1, Baidu officially launched the professional version of its large-model chatbot Wenxin Yiyan (based on Wenxin 4.0), priced at 59.9 yuan per month, or 49.9 yuan per month with a continuous monthly subscription. That is roughly half the price of ChatGPT Plus, which currently costs $20 per month (about 140 yuan per month). In addition to the paid version, the Wenxin Yiyan basic edition (based on Wenxin 3.5) remains free to use.

Compared with the basic edition, Wenxin Yiyan Professional Edition is built on "Wenxin 4.0", the fourth-generation large model Baidu released on October 17. Baidu says the model is benchmarked against GPT-4, and that the Professional Edition offers "stronger model capabilities and image generation capabilities, support for various plug-ins, suitable for users who need Wenxin Yiyan for programming, copywriting, painting and design, and other professional work".

Wenxin Yiyan was opened to the public on August 31 this year, and Baidu says its user base is now about 45 million.

Microsoft Office Suite launches AI features for up to $87/month

On November 1, Microsoft announced that Copilot, its generative AI assistant, was officially launched for enterprise users in the Microsoft 365 office suite (including Word, Excel, PowerPoint, OneNote, Teams, OneDrive, etc.) at $30 per person per month.

There are requirements on enterprise size, and not all features are live yet

Microsoft unveiled Microsoft 365 Copilot, the latest version of its office suite, in March this year, adding a generative AI assistant that can summarize documents, draft emails, create plans from notes, and improve analysis in Excel.

The launch of Microsoft 365 Copilot is not entirely market-friendly. First, it requires enterprise customers to purchase at least 300 seats. At the same time, the generative AI features in many products are not yet available: Copilot in Excel is still in preview, Copilot in OneNote is only available on Windows, and Copilot in SharePoint and OneDrive is not even in preview.

With the base subscription fee, using Microsoft 365 Copilot costs up to $87/month

Back in July, Microsoft disclosed that a Microsoft 365 Copilot subscription would cost $30 per person per month. That means enterprise customers who subscribe to Microsoft 365 Copilot will pay up to $87 per person per month: the subscription fee they already pay for the Microsoft 365 office suite (available in $36 and $57 per-user tiers) plus the $30 for Copilot's generative AI capabilities.
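
For readers who want to check the arithmetic, here is a minimal sketch that stacks the plan prices quoted above. The plan figures ($36, $57, $30) come from this article; the variable names and the loop are purely illustrative.

```python
# Illustrative arithmetic only: base Microsoft 365 plan prices quoted in this
# article ($36 and $57 per user per month) plus the $30 Copilot add-on.
COPILOT_ADDON = 30  # USD per user per month

base_plans = {
    "lower-priced Microsoft 365 enterprise plan": 36,
    "higher-priced Microsoft 365 enterprise plan": 57,
}

for plan, base_price in base_plans.items():
    total = base_price + COPILOT_ADDON
    print(f"{plan}: {base_price} + {COPILOT_ADDON} = {total} USD per user per month")

# The higher-priced plan plus Copilot gives the $87-per-month figure cited above.
```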

Microsoft revealed the $30 pricing logic for the first time

Following Microsoft's lead, Google also priced Duet AI, the generative AI in its own office suite, at $30.

Regarding the pricing strategy, Jared Spataro, Microsoft's corporate vice president for Microsoft 365, said the $30 per month was set after studying how much customers are willing to pay for the help AI provides. Microsoft also looked at the "per-capita cost math": how $30 a month compares with the cost of hiring a knowledge worker, and how much value the AI needs to create to justify the extra spend.

According to Spataro, Microsoft has gathered enough data to show that Copilot can dramatically improve productivity. It plans to disclose the findings at its annual Ignite conference on Nov. 14.

Analysts have previously estimated that Microsoft 365 Copilot could cover about 150 million enterprise employees, but Wall Street's expectations seem modest. Derrick Wood, an analyst at investment bank TD Cowen, believes Microsoft's customers may be reluctant to roll the product out across the enterprise. By his calculations, even a successful rollout would add only about 1% to Microsoft's revenue in fiscal 2025, roughly $2 billion to $2.5 billion.

Reference Links:

https://techcommunity.microsoft.com/t5/microsoft-365-copilot/microsoft-365-copilot-is-generally-available/ba-p/3969331

https://www.ft.com/content/81db7c36-f9ae-496b-9dd4-971aefe6f9a9

DingTalk opens public beta for its generative AI features

On November 3, after internal testing with more than 500,000 enterprises, DingTalk opened a public beta of its generative AI feature, the "AI magic wand". All users can now use chat AI, document AI, Yida AI, and other functions conversationally from the "Magic Wand" entrance on the DingTalk homepage, or click the magic wand button inside 17 products, including documents, knowledge base, mind map, flash notes, and Teambition, to get skills suited to each interface.

Meta's open-source large language model Llama2 also has a cost, and it's not cheap

The Information reported that the founding team of Cypher, a product for creating virtual chat characters, found that using Meta's open-source large language model Llama 2 in their product required far more cloud computing power than they expected.

In August this year, running Llama 2 consumed about $1,200 worth of Google Cloud computing power for Cypher in that month alone; after switching to GPT-3.5 Turbo, the monthly cloud bill dropped to about $5.
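
The shape of the gap Cypher describes is easier to see with a rough back-of-the-envelope comparison. The sketch below contrasts renting a GPU to self-host an open-source model with paying per token for a hosted API; every number in it (GPU hourly rate, token price, traffic volume) is an illustrative assumption, not data from the report.

```python
# Back-of-the-envelope comparison; all figures are illustrative assumptions.

# Self-hosting an open-source model: you pay for the GPU whether or not
# anyone is chatting, so cost scales with uptime, not with traffic.
gpu_hourly_rate = 1.5          # assumed USD per GPU-hour on a cloud provider
hours_per_month = 24 * 30
self_hosted_cost = gpu_hourly_rate * hours_per_month

# Calling a hosted API: you pay per token, so cost scales with traffic.
tokens_per_request = 1_000     # assumed prompt + completion size
requests_per_month = 3_000     # assumed traffic for a small app
price_per_1k_tokens = 0.002    # assumed blended API price in USD
api_cost = tokens_per_request / 1_000 * price_per_1k_tokens * requests_per_month

print(f"self-hosted GPU: ~${self_hosted_cost:,.0f} per month")
print(f"hosted API:      ~${api_cost:,.2f} per month")
# With low traffic, the always-on GPU dwarfs the per-token bill, which is
# the shape of the $1,200-versus-$5 gap Cypher reported.
```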

Llama 2 is the second generation of the Llama family of large language models released by Facebook's parent company Meta, open-sourced to all developers in July this year. At the time, Meta claimed the model required significantly less computing power than comparable models.

The higher computing cost of using an open-source model may have to do with the "roughness" of the model itself: because it is free, it does not, as the closed-source GPT-3.5 does, hide the complexity of calling the model from users, nor does it offer a range of smaller variants with comparable capability for different scenarios.

Open-source models may indeed consume more computing power, but note that the cost discussed here covers only computing power, not the cost of licensing the model itself.

Reference Links:

https://www.theinformation.com/articles/metas-free-ai-isnt-cheap-to-use-companies-say

https://www.theinformation.com/articles/what-it-takes-to-make-open-source-ai-cheaper-than-openai-microsoft-goes-multimodal

Models

Baichuan Intelligence's new model supports a 192k context window

On October 30, Baichuan Intelligence released the Baichuan2-192K large model, whose context window can hold up to 192k tokens (note: equivalent to about 144,000 English words, though Baichuan Intelligence says the model can process about 350,000 Chinese characters), currently the longest context window in the world.
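
As a rough sanity check on the two figures quoted above, the snippet below simply divides the claimed character and word counts by the token count; the closing comment about tokenization is a general observation, not a statement about Baichuan's tokenizer.

```python
# Rough sanity check on the figures quoted above.
context_tokens = 192_000
claimed_chinese_chars = 350_000
claimed_english_words = 144_000

print(f"~{claimed_chinese_chars / context_tokens:.2f} Chinese characters per token")  # ~1.82
print(f"~{claimed_english_words / context_tokens:.2f} English words per token")       # ~0.75

# Chinese text typically packs more than one character into a token, while
# English averages well under one word per token, so both ratios are plausible.
```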

The model's context window is 6 times as long as GPT-4's

Context window length is one of the core metrics of a large language model: a longer window means the model can take in more text at once, and therefore generate content more accurately and fluently. If the window is too short, it limits the model's usefulness for lawyers, analysts, consultants, and other workers who need to analyze long documents.

However, processing ultra-long context also requires more computing power and more GPU memory, a cost that ordinary users can hardly bear. Baichuan Intelligence said the new model will be offered to enterprise users via API calls and private deployment.

• OpenAI's GPT-4: supports a 32k context window and can process about 25,000 words;

• Anthropic's Claude 2: has a 100k context window and can process about 75,000 words;

• Moonshot AI's Kimi Chat: its context window holds about 200,000 Chinese characters (note: if Baichuan2-192K really reaches 350,000 characters, it will surpass Kimi Chat's record).



The model targets media, finance, law, and other scenarios

Baichuan2-192K is the 7th model released by Baichuan Intelligence, and the first it has named after the length of its context window rather than its parameter count. As the name suggests, it is built on Baichuan2, the company's second-generation large model.

Baichuan Intelligence said that Baichuan2-192K has officially entered internal testing, and that it has reached cooperation agreements with financial media outlets, law firms, and other institutions to use the model in media, finance, law, and other scenarios.

Baichuan Intelligence is a large-model company founded in April this year by Wang Xiaochuan, the founder of Sogou, and it has raised a total of $350 million since its establishment. Baichuan is now valued at more than $1 billion, making it the Chinese startup that became a generative AI unicorn in the shortest time. The company currently has a team of more than 170 people and has released two generations of large models: the models with 7 billion and 13 billion parameters (Baichuan-7B/13B, Baichuan2-7B/13B) have been open-sourced, while the two 53-billion-parameter models (Baichuan-53B and Baichuan2-53B), its largest so far, are closed-source.

Reference Links:

https://mp.weixin.qq.com/s/lAJh6qGG27u_qCl0kI-0lA

Alibaba Cloud released the second-generation Tongyi model

On October 31, Alibaba Cloud released the Tongyi Qianwen 2.0 model at the 2023 Apsara Conference, with 100 billion parameters.

What are the improvements in 2.0 compared to 1.0?

At the Alibaba Cloud Summit on April 11 this year, Alibaba Cloud launched its first-generation large language model, Tongyi Qianwen. Alibaba Cloud did not disclose the model's parameter count at the time, but public reports put it at roughly 20 billion to 30 billion. Compared with Tongyi Qianwen 1.0, version 2.0 represents a significant jump in parameter scale, putting it on par with Tencent's Hunyuan model, though it is still not the model with the most parameters in the industry.


Parameter size sets the ceiling of a model's capabilities: in general, the more parameters, the greater the model's potential and the more it can learn. Alibaba Cloud said that, compared with version 1.0, Tongyi Qianwen 2.0 has significantly improved in understanding complex instructions, literary creation, general mathematics, knowledge memory, and resistance to hallucination. On English tasks, 2.0 is better at understanding and handling complex language structures and concepts; on Chinese tasks, its comprehension and expression have been strengthened.

Beyond the dialogue function, the official Tongyi large-model website has also launched multimodal and plug-in features, supporting image understanding and PDF document analysis. Alibaba Cloud said that Tongyi Qianwen's overall performance has now surpassed GPT-3.5 and is accelerating to catch up with GPT-4.

In addition to the basic model, Alibaba Cloud has also released 8 industry models:

• Coding assistant "Tongyi Lingma"

• Reading assistant "Tongyi Zhiwen"

• Investment research assistant "Tongyi Dianjin"

• Intelligent customer service "Tongyi Xiaomi"

• Personal health assistant "Tongyi Renxin"

• Legal assistant "Tongyi Farui"

• Personalized character creation platform "Tongyi Stardust"

• Audio transcription assistant "Tongyi Tingwu" (released in June)

According to Alibaba Cloud, the eight vertical models were trained with domain-specific data. Going forward, developers will be able to integrate the models' capabilities into their AI applications through web-page embedding and API/SDK calls.
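
Alibaba Cloud did not publish integration details in this piece, so the sketch below only illustrates the general shape of calling a hosted large model over HTTP; the endpoint URL, model name, and payload fields are hypothetical placeholders, not Alibaba Cloud's actual API, and real integrations should follow the official SDK documentation.

```python
import json
import urllib.request

# Hypothetical example of calling a hosted large-model API over HTTP.
# The endpoint, model name, and payload fields below are placeholders,
# NOT Alibaba Cloud's real interface; consult the official SDK docs.
ENDPOINT = "https://example.com/v1/chat"   # placeholder URL
API_KEY = "YOUR_API_KEY"                   # placeholder credential

payload = {
    "model": "tongyi-qianwen-2.0",         # placeholder model identifier
    "messages": [{"role": "user", "content": "用一句话介绍通义千问。"}],
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)

# Send the request and print the JSON reply (will fail against the placeholder URL).
with urllib.request.urlopen(request) as response:
    print(json.load(response))
```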

In addition, Alibaba Cloud plans to open-source a 72B (72 billion parameters) version of Tongyi Qianwen in the near future. In August and September, the company open-sourced the 7B (7 billion parameters) and 14B (14 billion parameters) versions of the model, which together have been downloaded more than 1 million times.

Zhou Jingren, CTO of Alibaba Cloud, said that half of China's large-model companies are now running on Alibaba Cloud, including the large models of Baichuan Intelligence, Zhipu AI, and others; Alibaba Cloud does not want to build consumer-facing applications itself, but rather to open up its model capabilities and serve developers well.

-END-


We are a new content IP born in the GPT wave and incubated by YiMagazine.

Like every reader who cares about technology and the fate of mankind, we hope to better understand the rapidly changing world of technology and ourselves as "advanced intelligences" in this era of uncertainty.

Under this goal, we plan to report and discuss issues related to "intelligence" from multiple perspectives such as academic, business, ethical, and regulatory. Note that when we say intelligence, we don't just talk about AI.

To reach our reporters, you can add them on WeChat (please note your company and name):

Jeff Wang's WeChat: wjfsty

Zhang Siyu's WeChat: helianthus351

Wu Yangyang's WeChat: qitianjiuye

The copyright of this article belongs to Yicai.

It may not be reproduced or translated without permission.


You can purchase the November 2023 issue of Yicai magazine
