
These 13 people in the field of artificial intelligence are about to change the future

Author: Fortune Chinese Network

From left to right: Rumman Chowdhury, Ali Ghodsi, and Fei-Fei Li. COURTESY OF RUMMAN CHOWDHURY; DATABRICKS; MATT WINKELMEYER—GETTY IMAGES FOR WIRED25

Just like a spaceship full of aliens landing on Earth, artificial intelligence technology came out of thin air and changed everything in an instant.

From AI-generated music (which can mimic your favorite singer) to virtual lovers, AI technology is fascinating, yet scary, and increasingly accessible.

Companies were quick to pour money into the technology. In addition to Microsoft's $13 billion investment in ChatGPT developer OpenAI, startups such as Anthropic, Cohere, Adept AI, Character.AI and Runway have each raised hundreds of millions of dollars in recent months.

As at many tech companies, the leaders of AI innovation are as central to the story as the technology itself. Today's AI innovators aren't as well-known as the tech industry's celebrities, but the influence of these computer scientists and technologists is rapidly expanding because of their work.

Given the far-reaching societal impact of their work and the potential risks it brings, many of these AI innovators hold strong views about the technology's future, its power, and its dangers, and their views often conflict.

Fortune surveyed some of the key people setting the AI agenda, examining their work and perspectives. Some work at large companies, some at startups, some in academia; some have worked on specific subfields of AI for years, while others are new to the game. If there's one thing they have in common, it's their extraordinary ability to change how this powerful technology impacts the world. Here are 13 of today's most important AI innovators, in no particular order.

Daniela Amodei

Co-founder of Anthropic


CREDIT: COURTESY OF ANTHROPIC

"Given the potential reach of AI, I was a bit shocked that it remains largely unregulated."

Daniela Amodei and her brother Dario quit their jobs at OpenAI in late 2020 to co-found Anthropic, reportedly fearing that OpenAI's partnership with Microsoft would pressure it to release products quickly at the expense of safety protocols.

The company's chatbot, Claude, is similar to OpenAI's ChatGPT but is trained using a technique known as constitutional AI. According to the company, the technique sets out principles, such as choosing the responses that are "least racist and sexist" and that encourage putting life first and pursuing freedom. The approach underpins what 35-year-old Amodei calls the 3H framework for Anthropic's AI research: helpful, honest, and harmless.

"Given the potential reach of AI, I was a bit shocked that it remains largely unregulated," Amodei said in an interview last year. She hopes that the organizations, industry groups, and trade associations that develop the relevant standards will step in and provide guidance on model safety. "We need all participants to work together to achieve positive outcomes (which is our common aspiration)."

In addition to developing "next-generation algorithms" for the chatbot Claude, Anthropic has been busy raising money. The company recently raised $450 million from backers such as Google, Salesforce, and Zoom Ventures. (Notably, Anthropic's previous $580 million round was led by Alameda Research Ventures, the fund of disgraced cryptocurrency entrepreneur Sam Bankman-Fried; Anthropic has not said whether it will return the money.)

Yann LeCun

Chief AI Scientist at Meta


CREDIT: MARLENE AWAAD—BLOOMBERG/GETTY IMAGES

"Upcoming AI systems will augment human intelligence in the same way that mechanical machines amplify physical strength. They will not be a replacement."

French-born Yann LeCun said during a rehearsal for an upcoming debate: "The doomsday prophecies triggered by artificial intelligence are nothing more than a new form of obscurantism." In the debate, he will argue with a researcher from the Massachusetts Institute of Technology (MIT) about whether artificial intelligence poses an existential threat to humanity.

LeCun, 62, says bluntly that artificial intelligence can help enhance human intelligence. He is recognized as one of the leading experts on neural networks, a field whose research has led to breakthroughs in computer vision and speech recognition. His work on a fundamental neural network design known as the convolutional neural network broadened the field's horizons and won him the 2018 Turing Award, known as the "Nobel Prize of computer science," together with deep learning pioneers Geoffrey Hinton and Yoshua Bengio.

Needless to say, LeCun is not among the more than 200 signatories of the recent open letter warning that AI poses an extinction-level risk to humans.

LeCun, a longtime computer science professor at New York University, joined Facebook (now Meta) in 2013 and currently oversees various AI projects at the $700 billion company. That hasn't diminished his appetite for debate, and he weighs in on major AI controversies, such as fears that the technology will take away jobs. In Martin Ford's 2018 book, Architects of Intelligence: The Truth About AI from the People Building It, LeCun disputed Hinton's famous prediction that radiologists would lose their jobs to AI, arguing the opposite: the technology will give radiologists more time to communicate with patients. He went on to say that he thinks some activities will become more expensive, such as eating at a restaurant where waiters bring food prepared by human chefs. "The value of things will change, and people will place more value on human experience than on things that are automated," he told Ford.

David Luan

CEO and co-founder of Adept


IMAGE CREDIT: COURTESY OF ADEPT

"The pace of development of artificial intelligence is amazing. First there was text generation, then image generation, and now computer applications."

Before co-founding Adept in 2022, Luan worked at some of the most important AI companies, including OpenAI and Google (he also briefly served as director of AI at Axon, the maker of Tasers and police body cameras). He says this moment in AI is the one he's most excited about. "We have entered the industrial era of artificial intelligence. Now it's time to build the factories," Luan said at the Cerebral Valley A.I. Summit earlier this year.

Adept's idea is to give people an "AI teammate" that can perform computer tasks from a few simple text commands, such as building a financial model in a spreadsheet. In March, the company raised $350 million in a round that Forbes reported valued it at more than $1 billion.

Luan, 31, says he has spent a lot of time pondering the common concern that AI might replace human jobs, but he believes the concern is overstated for "knowledge workers," the customers that generative AI tools like Adept focus on. "Instead of spending 30 hours a week updating CRM records, you spend 1% of your time a week having Adept do these things for you, and 99% of your time talking to customers," Luan said at the Cerebral Valley A.I. Summit.

Emad Mostaque

CEO of Stability AI


Emad Mostaque, founder and CEO of Stability AI, at Fortune's Brainstorm A.I. conference in December 2022. IMAGE CREDIT: NICK OTTO FOR FORTUNE

"What does it mean if we have agents more capable than us that we cannot control, that are interconnected on the internet and have some degree of automation?"

Born in Jordan but raised in Bangladesh and the U.K., Mostaque earned a bachelor's degree in computer science from Oxford University in 2005. According to the New York Times, he spent more than a decade at hedge funds before founding Stability AI in 2020. His experience in finance seems to have laid a good foundation for the startup: he reportedly funded the company himself at first, and later took investment from firms such as Coatue and Lightspeed Venture Partners.

The company helped create Stable Diffusion, a text-to-image model used to generate images with little regard for whether the output infringed intellectual property or contained violent content (like some other AI tools, the product has been criticized for amplifying racial and gender bias). For Mostaque, the first priority was to keep the model open source rather than put up guardrails limiting what it could generate, although to make Stability's AI more commercially appealing, he did later train a version of Stable Diffusion on a dataset with pornographic images filtered out. "We trust our users, and we trust the community," he told the New York Times.

That attitude (along with accusations, recently detailed by Forbes, that Mostaque exaggerated some of his accomplishments) has drawn backlash from other figures in the AI community, government officials, and companies like Getty Images, which sued Stability AI in February for copyright infringement, claiming the company copied 12 million images without permission to train its AI models.

Still, Stability AI's tools have become some of the most popular and well-known examples of generative AI. Mostaque, 40, who works in London, is hard to classify. In March, he joined other signatories of an open letter calling for a moratorium on developing AI more advanced than OpenAI's GPT-4. His views on where AI is headed seem to run to two extremes: he recently commented that in the worst-case scenario, AI could control humanity, and on another occasion he said AI would not be interested in humans at all.

"We can't imagine anything more capable than ourselves, but we all know someone more capable than us. So my personal opinion is that it will be like the movie Her, starring Scarlett Johansson and Joaquin Phoenix: humans are a little boring, so the AI says, 'Goodbye, you're a bit boring.'"

Fei-Fei Li

Co-director of the Stanford Institute for Human-Centered Artificial Intelligence


CREDIT: DAVID PAUL MORRIS—BLOOMBERG/GETTY IMAGES

"To be born in this historical era and devote myself to this technology still feels surreal to me."

When Fei-Fei Li immigrated to the United States from China with her family at age 16, she had to learn English from scratch while trying to earn good grades. Today, the co-director of Stanford's Institute for Human-Centered AI is considered one of the leading figures in the ethical application of AI. She has written articles such as "How to Make A.I. That's Good for People," and she is an advocate for diversity in the field.

Early in her career, she built ImageNet, a large dataset that contributed to the development of deep learning and artificial intelligence. Today, at Stanford, she works on "ambient intelligence," the use of artificial intelligence to monitor activity in homes and hospitals. At Fortune's Brainstorm A.I. conference last December, she discussed her work and why bias is a key consideration.

"I do a lot of work in healthcare. What is clear is that if our data comes from a specific demographic or socioeconomic class, it will have quite far-reaching potential impacts," she said.

According to Li, 47, Stanford now conducts ethical and social reviews of AI research projects. "It makes us think about how we can design technology to reflect fairness, privacy, and human well-being and dignity."

To promote inclusivity in the field, Li also co-founded AI4ALL, a nonprofit that promotes diversity in AI education.

One major controversy of Li's career occurred during her tenure as chief scientist for AI/ML at Google Cloud: in 2018, Google signed a contract to provide AI support to the U.S. Department of Defense, sparking protest among some employees. While Li did not sign the contract, critics said her association with it, particularly comments in leaked emails about how the contract should be described to the public, contradicted her position as an advocate for ethical AI.

Ali Ghodsi

CEO of Databricks


CREDIT: COURTESY OF DATABRICKS

"We should embrace AI technology because it's here to stay. I do think it's going to change everything, and the impact will be mostly positive."

Straddling academia and business, Ali Ghodsi is an adjunct professor at UC Berkeley and the co-founder and CEO of Databricks. One of the Swedish-Iranian technology executive's core principles is a commitment to open-source development.

Ghodsi's work on Apache Spark, an open-source data processing tool, laid the groundwork for Databricks, now valued at $38 billion. In April, Databricks released Dolly 2.0, an open-source competitor to ChatGPT trained on a question-and-answer instruction set created entirely by Databricks' 5,000 employees. That means any company can embed Dolly 2.0 in its own commercial products and services without usage restrictions.

Dolly is more a proof of concept than a viable product: the model is prone to errors, hallucinations, and toxic output. Its importance lies in showing that AI models can be much smaller and cheaper to train and run than the large proprietary language models underpinning OpenAI's ChatGPT or Anthropic's Claude. Ghodsi defends Dolly's openness and accessibility: "We are committed to developing AI safely and responsibly, and by opening up models like Dolly for the community to work with, we believe we are moving in the right direction (for the AI industry)."

While generative AI is getting most of the attention now, Ghodsi, 45, believes other types of AI, especially those used for data analytics, will have a profound impact across industries. "I think this is just the beginning, and we have only scratched the surface of the role AI and data analytics can play," he told Fortune in March.

Sam Altman

CEO of OpenAI


CREDIT: ERIC LEE—BLOOMBERG/GETTY IMAGES

"If someone actually cracks the code and develops a super AI (no matter how you want to define it), it probably makes sense to have some global rules."

Concerned that Google would become too powerful and control AI, Altman founded OpenAI in 2015 with Elon Musk, Ilya Sutskever, and Greg Brockman.

Since then, OpenAI has become one of the most influential companies in artificial intelligence and a leader in generative AI: the company's ChatGPT is the fastest-growing app in history, attracting more than 100 million monthly active users within two months of launch. Another OpenAI product, DALL-E 2, is one of the most popular text-to-image generators, capable of producing high-resolution images with shadows, shading, and depth-of-field reflections.

While he is neither an AI researcher nor a computer scientist, Altman, 38, sees these tools as stepping stones toward the mission he shares with others in the field: developing computer superintelligence, known as artificial general intelligence (AGI). He believes that "AGI may be necessary for human survival," but says he will be cautious in pursuing the goal.

The quest for AGI hasn't blinded Altman to the risks: he was among the prominent signatories of the Center for AI Safety's open letter warning about the threat AI poses to humanity. At a U.S. Senate hearing in mid-May, Altman called for regulation of AI, saying rules should be crafted to encourage companies to develop the technology safely "while ensuring that people can access the technology's benefits." (Some critics speculate that the regulation he calls for could also create roadblocks for OpenAI's growing number of open-source competitors.)

According to Fortune's Jeremy Kahn, Altman excelled at fundraising as president of the startup incubator Y Combinator. That skill seems to have paid off handsomely: OpenAI has a $13 billion partnership with Microsoft.

While Musk has resigned from OpenAI's board and is reportedly setting up a rival AI lab, Altman still sees him as a mentor, saying Musk taught him how to push the limits of "hard research and hard technology." He doesn't plan to follow Musk to Mars, though: "I don't want to live on Mars, it sounds scary. But I'm happy that other people want to go live on Mars."

Margaret Mitchell

Chief Ethics Scientist at Hugging Face


COURTESY OF CLARE MCGREGOR/PARTNERSHIP ON AI

"People say or think, 'You can't program, you don't know statistics, you don't matter.' Sadly, usually people don't take me seriously until I start talking about technical things. There are huge cultural barriers in machine learning (ML)."

Margaret Mitchell's interest in AI bias began with several disturbing incidents while she was working at Microsoft. For example, she recalled in an interview last year that data she processed (used to train the company's Seeing AI assistive image-captioning technology) described race in bizarre ways. Another time, she fed images of explosions into the system, and the output described the wreckage as beautiful.

She realized that simply getting an AI system to perform well on benchmarks wouldn't satisfy her. "I want to fundamentally change the way we look at these issues, the way we process and analyze data, the way we evaluate, and all the factors missing from these direct processes," she said.

The mission has come at a personal cost. Mitchell made headlines in 2021 when Google fired her and Timnit Gebru, then co-leads of the company's AI ethics team. The two had published a paper detailing the risks of large language models, including their environmental costs and the racist and sexist language in their training data. They had also bluntly criticized Google for not doing enough to promote diversity and inclusion, and clashed with management over company policies.

Mitchell and Gebru had already made major contributions to AI ethics, such as publishing a paper with several other researchers on "model cards," which encourage greater transparency by providing a way to document a model's performance and identify its limitations and biases.

After leaving Google, Mitchell joined Hugging Face, a provider of an open-source platform for machine learning, where she delves into assistive technology and deep learning and focuses on coding, helping to build protocols for matters such as ethical AI research and inclusive hiring.

Mitchell says that despite her background as a researcher and scientist, her focus on ethics leads people to assume she doesn't know how to program. "It's sad that usually people don't take me seriously until I start talking about technical things," she said on Hugging Face's blog last year.

Mustafa Suleyman

Co-founder and CEO of Inflection AI


CREDIT: MARLENE AWAAD—BLOOMBERG/GETTY IMAGES

"There is no doubt that in the next 5 to 10 years, many jobs in the white-collar class will change significantly."

Known as "Moose" to friends and colleagues, Suleyman co-founded the research lab DeepMind, which Google acquired in 2014, and went on to serve as Google's vice president of AI products and AI policy. After leaving Google, Suleyman worked at the venture capital firm Greylock and founded a machine learning startup called Inflection AI.

Earlier this month, Inflection released its first product, a chatbot called Pi, short for "personal intelligence." The current version can remember conversations with users and give empathetic answers. Eventually, Suleyman says, it will be able to act as a personal "chief of staff" that can book restaurants and handle other everyday tasks.

Suleyman, 38, is enthusiastic about the prospect of using everyday language to interact with computers. He wrote in Wired that one day we will have "truly fluid conversational interactions with all devices," which will redefine human-computer interaction.

While Suleyman envisions a future in which AI significantly changes white-collar jobs, he also sees its potential to tackle major challenges. On the latter, he believes the technology could reduce the cost of housing and infrastructure materials and help allocate resources such as clean water. Still, he advocates avoiding harm along the way, warning in a 2018 article in The Economist:

"From the ubiquity of drone facial recognition to biased predictive policing, the risk is that individual and collective rights are left aside in the race for technological superiority."

Sara Hooker

Director of AI at Cohere


COURTESY OF COHERE FOR AI

"I think what's really important is that we need to improve the traceability system, especially when you consider the ability of AI to generate misinformation or text that could be used for nefarious purposes."

Sara Hooker, a former researcher at Google Brain, joined Cohere, a Toronto startup founded by Google Brain alumni to work on large language models, last year, reuniting with former colleagues. It is a reunion at arm's length: Hooker leads Cohere for AI, a nonprofit AI research lab funded by Cohere but operated independently.

Cohere for AI aims to "solve complex machine learning problems." In practice, that means everything from publishing research papers on improving the safety and efficiency of large language models to launching the Scholars Program, which aims to expand the AI talent pool by recruiting from around the world.

One condition for admission to the Scholars Program is never having published a research paper on machine learning.

"When I talk about improving geographical representation, people think it's a cost that we bear," Hooker said. "They think we're sacrificing the progress we've made. But the opposite is true." Hooker knows firsthand: she grew up in Africa and helped Google set up a research lab in Ghana.

Hooker also seeks to improve the accuracy and interpretability of machine learning models and algorithms. In a recent interview with Global News, she shared her take on "model traceability," tracking when text is generated by models rather than humans, and how it can be improved. "I think what's really important is that we need to improve the traceability system, especially when you consider the ability of AI to generate misinformation or text that could be used for nefarious purposes," she said.

And thanks to Cohere's recent $270 million round from Nvidia, Oracle, and Salesforce Ventures, Hooker's nonprofit lab is paired with a startup boasting high-profile backers.

Rumman Chowdhury

Scientist at Parity Consulting

Responsible AI researcher at Harvard University's Berkman Klein Center


CREDIT: COURTESY OF RUMMAN CHOWDHURY

"Few people ask the fundamental question: Should AI itself exist?"

Chowdhury's AI career began as head of Accenture's responsible AI division, where she designed an algorithmic tool to identify and reduce bias in AI systems. She left to found Parity AI, an algorithmic auditing firm later acquired by Twitter. At Twitter, she led the Machine Learning Ethics, Transparency and Accountability team, a group of researchers and engineers working to mitigate the harms caused by the platform's algorithms; she says the work became challenging after Elon Musk acquired the company.

At the DEF CON 31 cybersecurity conference in August, she will take a lead role in a White House-backed generative AI "red team" exercise in which participants will probe models from companies such as Anthropic, Google, Hugging Face, and OpenAI for flaws and limitations in order to improve their security.

Chowdhury, 43, is another expert calling for AI regulation. She recently wrote in Wired that a global governance body for generative AI should be established, citing Facebook's oversight board, an interdisciplinary global organization focused on accountability, as one example of how such a body could be formed.

"Organizations like this should, like the International Atomic Energy Agency (IAEA), continue to strengthen their position through expert advice and collaboration, rather than being a side project for people with full-time jobs," Chowdhury wrote. "Like Facebook's oversight board, it should receive advice and guidance from industry, but also be able to make binding decisions independently that companies must abide by."

She also promotes what she calls comprehensive bias assessments and audits during product development, which would not only allow checks on what has already been built but also establish mechanisms early on, at the idea stage, for deciding whether something should advance to the next stage at all.

"Few people ask the fundamental question: Should AI itself exist?" She said during a panel discussion on responsible AI.

Cristóbal Valenzuela

Co-founder and CEO of Runway ML


CREDIT: COURTESY OF RUNWAY

"The history of generative art does not begin recently. Setting aside the recent AI craze, the idea of introducing autonomous systems into artistic creation has been around for decades. The difference is that now we are entering the era of synthesis."

Valenzuela entered the field of artificial intelligence after encountering neural networks through the work of the artist and programmer Gene Kogan. He was so fascinated by AI that he left his home in Chile to become a researcher in New York University's Tisch Interactive Telecommunications Program.

There, while working to make machine learning models accessible to artists, he got the idea for Runway. "I started brainstorming around this issue, and then I realized that a platform for models already had a name: the runway," he said in an interview with the cloud computing company Paperspace.

Many artists have already embraced AI, using tools like Runway to create visual effects or images for films, and Valenzuela, 33, hopes more will follow.

To that end, the company helped develop the text-to-image Stable Diffusion model. It has also achieved striking results with Gen-1, an AI video-editing model that transforms existing videos supplied by users. Gen-2, launched this spring, lets users generate video from text. With the rock band Weezer using Runway's models to create a tour promotional video and another artist using them to make a short film, tools like Runway have sparked excitement about their potential to change the way Hollywood makes movies.

In a conversation with MIT, he said the company works to help artists find use cases for its tools and to reassure them that their work won't be taken away. He also argues that in many cases we already use artificial intelligence for artistic creation without realizing it, since a photo taken with an iPhone may pass through multiple neural networks that optimize the image.

"It's just another technology that will help you create better and express your ideas better," he said.

Demis Hassabis

CEO of Google DeepMind


CREDIT: COURTESY OF GOOGLE DEEPMIND

"At DeepMind, we are very different from other teams because we are focused on achieving the goal of artificial general intelligence. We organize around a long-term roadmap (namely, our neuroscience-based thesis about what intelligence is and what it takes to get there)."

Hassabis holds a PhD in cognitive neuroscience from University College London, and he made a splash more than a decade ago when he co-founded DeepMind, a neural network startup that aims to build powerful computer networks mimicking the way the human brain works. Google acquired the company in 2014, and in April, after the internet giant reorganized all of its AI teams, Hassabis took charge of Google's overall AI efforts.

Hassabis says his love of chess led him to programming; the former chess prodigy even bought his first computer with chess tournament winnings. Now he applies the problem-solving and planning skills chess demands, along with his neuroscience background, to his work on artificial intelligence, which he believes will be "the most beneficial thing for humanity."

He believes artificial general intelligence could be achieved within a decade, and he describes DeepMind's neuroscience-inspired approach to AI as one of the best ways to solve complex mysteries of the brain. "We can begin to unravel certain esoteric brain mysteries, such as consciousness, creativity, and the nature of dreaming," he told Ford. Asked whether machine consciousness is possible, he said he's open to the idea but thinks "the result is likely to be that there is something special about biological systems" that machines can't match.

In 2016, DeepMind's AlphaGo system defeated Lee Sedol, one of the world's top Go players, in a best-of-five match watched online by more than 200 million people. (In Go, two players face off on a 19-by-19 board.) Lee's defeat was particularly shocking because experts had not expected such an outcome for another decade.

Moments like these made DeepMind a leader in the race toward artificial general intelligence. But its work is not all games: DeepMind's AlphaFold 2 system has predicted the three-dimensional structures of nearly all known proteins, and DeepMind makes these predictions available in a public database. The breakthrough could accelerate drug discovery, and it won Hassabis and senior research scientist John Jumper a $3 million Breakthrough Prize in Life Sciences. Hassabis also co-founded and runs Isomorphic Labs, a new Alphabet company dedicated to using artificial intelligence to aid drug discovery. (Fortune Chinese Network)

Translator: Zhong Huiyan - Wang Fang
