Hugging Face: Quietly building an ark in the age of AI's great exploration

Even if calling ChatGPT the iPhone of AI is an overstatement, Hugging Face is indispensable on the road to AGI.

Text | Ben

Editor | Wang & Tong

Source|Digital Krypton (ID: digital36kr)

Cover source: IC photo

A few days ago, Hugging Face's Transformers repository surpassed 100,000 stars on GitHub.


It may well be the fastest open-source project on GitHub to reach that milestone.

Needless to say, Hugging Face is one of the brightest AI companies of this era.

Over the past year, search interest in Hugging Face has far surpassed that in Stability AI, a star company in the AIGC space.


Before that, the highlights of Hugging Face's history were mostly concentrated a year earlier, in mid-2022.

Simply put, Hugging Face is an open source model library company.

On March 14, 2022, the Twitter account @BigScienceLLM began posting daily updates on the training progress of a large model called BLOOM. At the time, public awareness of large models was almost non-existent.

Led by Hugging Face, the BLOOM project involved more than 1,000 researchers and engineers from more than 200 institutions in more than 60 countries, including many employees of Microsoft, Meta, Google, and other technology giants participating in a personal capacity.

On May 10, Hugging Face announced a $100 million Series C round led by Lux Capital, with participation from Sequoia Capital, Coatue, Betaworks, NBA star Kevin Durant, and others, lifting its valuation to $2 billion.

On May 16, Hugging Face debuted on the Forbes North American Top 50 Artificial Intelligence list.

On July 2, BLOOM, with 176 billion parameters in total, finished 117 days of training; its parameter count was exactly 1 billion more than that of OpenAI's GPT-3, which had been released roughly two years earlier.

Then, in August 2022, an AI-generated painting called "Space Opera" won first prize in an art competition at the Colorado State Fair in the United States.

Since then, a new generation of AI companies, Midjourney, Stability AI, and OpenAI among them, have stepped forward in turn and become the focus of the spotlight on the AI stage.

Hugging Face has not raised any new funding over the past year, and its valuation is nowhere near super-unicorn territory.

This year Hugging Face also released HuggingChat, an open-source conversational tool positioned against ChatGPT, but it was drowned out by a flood of similar applications.

At Hugging Face you don't see the dramatic ups and downs most star startups go through; everything seems to simply fall into place.

As the industry's attention to large models has shifted from rhetoric to the deeper, harder questions that need answering, the value of Hugging Face's existence has begun to draw close scrutiny.

Hugging Face keeps its usual low profile; after all, it faces no pressure to expand aggressively and has no need to keep making grand promises to attract capital's attention.

This runs contrary to the public perception that the large-model race must burn unlimited money on massive AI computing power: according to public reports, when Hugging Face closed its $40 million Series B in September 2021, 90% of the money from its previous round was still sitting unspent in its bank account.

In a recent interview with tech podcaster Harry Stebbings, Hugging Face co-founder and CEO Clément Delangue said, humbly: without open source, if Google had not shared the epoch-making "Attention Is All You Need" paper, had not shared its BERT paper, had not shared its diffusion-model papers, we might have had to wait another 30, 40, or even 50 years to get to where we are today.


Prequel - Transformer: A paradigm shift in artificial intelligence

Before 2017, when most people saw the word "Transformer", they thought of Transformers, Autobots, and Hasbro.

In June 2017, the Transformer made its first appearance in the AI literature, in the Google team's paper "Attention Is All You Need". Taken literally, the Transformer is an attention-based encoder-decoder model.
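To make that description concrete, here is a minimal sketch of the scaled dot-product attention defined in that paper, written in plain PyTorch. The tensor names and sizes are illustrative assumptions, not code from any Hugging Face library.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, as in 'Attention Is All You Need'."""
    d_k = q.size(-1)
    # How strongly each query attends to each key, scaled to keep gradients stable.
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)  # attention weights over the keys
    return weights @ v                   # weighted sum of the values

# Toy example: a batch of 2 sequences, 5 tokens each, 64-dimensional representations.
q = k = v = torch.randn(2, 5, 64)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 5, 64])
```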

Today, GPTs of every kind, BERTs of every kind, and "alpacas" (LLaMA) of every kind take center stage in artificial intelligence, large models, and AIGC, and we will finally never again confuse the Transformer with the Transformers.

In 2018, the Transformer, then just a year old, brought a key paradigm shift to NLP: instead of initializing only the first layer of a model as before, the entire model could now be pre-trained on hierarchical representations. The Transformer opened up a whole new way of working that carried AI to today's heights, letting knowledge flow from pre-trained language models into downstream tasks and applications.
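As a sketch of that pre-train-then-fine-tune paradigm, assuming the transformers package is installed, the snippet below loads a fully pre-trained BERT checkpoint and attaches a fresh classification head for a downstream task; the checkpoint name and label count are illustrative choices, not anything prescribed by the article.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Every layer except the task head arrives pre-trained; only the new
# 2-label classification head starts from random weights and needs fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

inputs = tokenizer("Pre-train once, fine-tune everywhere.", return_tensors="pt")
outputs = model(**inputs)    # logits from the (still untrained) head
print(outputs.logits.shape)  # torch.Size([1, 2])
```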

Opportunities are always left to those who are prepared, and even more so in the field of artificial intelligence.

Hugging Face, founded in 2016 as a chatbot company, quietly pivoted and began trying something different that year. Since then, Hugging Face has no longer been just the emoji that pops up in a dialog box or on a plastic bag; it has gradually become an article of faith in the large-model field.

In October 2018, Thomas Wolf, chief scientist at Hugging Face, spent a few days creating a project on GitHub called pytorch-pretrained-BERT, which was more popular than anyone expected.

Hugging Face did not improve the Transformer model itself; instead it packaged it, together with a series of derivative models, into a new "open-source product", the Transformers library, so that researchers and developers could use Transformer models with ease.
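For a feel of that packaging, here is a minimal sketch using the transformers pipeline API; the task string is real, but exactly which checkpoint the library downloads by default is left to it and treated here as an assumption.

```python
from transformers import pipeline

# One call hides tokenization, model loading, inference, and post-processing.
classifier = pipeline("sentiment-analysis")  # pulls a default English sentiment checkpoint

print(classifier("Hugging Face made this embarrassingly easy."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}] -- exact output depends on the checkpoint
```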

Through its open-source model library, Hugging Face opens up more possibilities for the continuous iteration of AI architectures and model collections. It also keeps proving that in many frontier areas startups are better suited to empowering the developer community in new ways: the value created by open source spreads widely and is dramatically more efficient than traditional proprietary tools, by more than an order of magnitude.

The opportunity did come suddenly, and in just a few days Hugging Face was ready to forge a whole new path, which also saved it from being one of the many Siri knockoffs.

Since last year, most people tend to attribute Hugging Face's success to chance.

But when a series of coincidences occurs one after another, there is bound to be some underlying inevitability.

Clément Delangue, founder of Hugging Face, made the underlying logic of this shift clear in a 2021 interview: rather than be distracted by competition, we choose to empower both open source and scientific research.

In fact, judged from the public's perspective and by conventional commercial yardsticks, today's Hugging Face would not be called a great company.

But the importance of Hugging Face to machine learning is no longer in question.

As of June 1, 2023, Hugging Face hosted 215,693 shared models and 38,085 datasets, covering almost every field (NLP, speech, biology, time series, computer vision, reinforcement learning, and more) and forming the world's most complete AI developer ecosystem.
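That catalogue can be explored programmatically. The sketch below uses the huggingface_hub client to list a few hosted models and datasets, assuming the package is installed; exact counts and rankings change daily, and attribute names can vary slightly between library versions.

```python
from huggingface_hub import list_models, list_datasets

# The five most-downloaded text-classification models at the moment the query runs.
for model in list_models(filter="text-classification", sort="downloads", direction=-1, limit=5):
    print(model.modelId)

# A handful of hosted datasets, to show the same API covers data as well as models.
for dataset in list_datasets(limit=5):
    print(dataset.id)
```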


Especially now, at the crucial moment when AI 2.0 is breaking into the mainstream on the back of large models, Hugging Face is all but irreplaceable.

This article attempts a montage-like discussion from a few fragmentary angles; even if "the right time, the right place, and the right people" were all assembled, it would still be hard to create another Hugging Face.


AI ethics: What is Responsible AI Licensing?

On May 30, 2023, the non-profit Center for AI Safety published a joint open letter on its website, arguing that AI is a technology comparable to "pandemics and nuclear war" and may pose an existential threat to humanity in the future.

The letter contains only one sentence, 22 words in all: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

This time, representatives of star AI companies such as OpenAI founder Sam Altman, DeepMind CEO Demis Hassabis, and Anthropic CEO Dario Amodei all signed, and in total more than 350 well-known figures in the AI field put their names to the joint letter.

There are also many Chinese scholars in the list, including Zhang Yaqin, academician of the Chinese Academy of Engineering, Zeng Yi, director of the Research Center for Artificial Intelligence Ethics and Governance of the Institute of Automation of the Chinese Academy of Sciences, and Zhan Xianyuan, associate professor of Tsinghua University.

The worries of technology bigwigs about AI are not empty or grandstanding, but will really affect all aspects of everyone's life in the future.

Two months ago, Musk, Apple co-founder Steve Wozniak, Stability AI founder Emad Mostaque and other bigwigs also jointly issued an open letter, calling for a six-month pause in training AI systems more powerful than GPT-4.

Yann LeCun, one of the three giants of deep learning in the era of CNN architecture, once said: Before we can make human-level AI, we need to make cat/dog-level AI. And now we can't even do that. We are missing something very important. You know, even a pet cat has more common sense and understanding of the world than any large language model.

Although the ethical issues of artificial intelligence have a long history, now that AI 2.0 is regarded as the core of the fourth industrial revolution, it brings entirely unpredictable challenges to social governance and ethics.

Large models point toward AGI (artificial general intelligence), and once AI can genuinely generalize, it holds a double-edged sword, which is why the world is wary of the enormous social risks it may bring.

AI ethics is by no means an abstract exercise. More and more users trust Hugging Face, and it is no accident that so many models are open-sourced on Hugging Face: it is a community that is highly regarded on the ethical level.

Of course, the right time and the right place matter, but the right people (understood here as people together with AI ethics) are the key to AI's path to a higher level.

For Hugging Face, a focus on AI ethics is in its DNA.

Its BigScience project, which trains the large model BLOOM, has taken ethics into account from the beginning and has developed strict ethical guidelines. That's because large models are trained using datasets from the internet that contain a lot of personal information and often exhibit dangerous biases.

Giada Pistilli, an AI ethicist at Hugging Face, drafted BLOOM's ethical guidelines as a basic principle for model development training. The guidelines highlight details such as recruiting volunteers from different backgrounds and locations, ensuring that ordinary people can easily reproduce the project's findings, and making their findings publicly available.

At the same time, Hugging Face unveiled the new concept of "Responsible AI Licensing" as the terms-of-service agreement for using BLOOM, intended to prevent high-risk sectors such as law enforcement or health care from using the technology to harm, deceive, exploit, or impersonate people.

Rewinding the timeline, the upheaval in Google's AI ethics team in 2021 may well be considered a watershed for the future AI landscape.

In August 2021, Margaret Mitchell, former head of Google's Ethical AI research group, joined Hugging Face to help it develop tools to ensure the fairness of its algorithms.

Previously, over four years of organizing and recruiting, Margaret Mitchell had built Google's AI ethics team from scratch and established a strong ethical image for Google AI worldwide.

The "Don't be evil" image that Margaret Mitchell spent years building has also been an important part of Google's AI ethics, and in the period after Margaret Mitchell, the slogan was even more ironic for Google AI.

After joining Hugging Face, Mitchell has carried the "don't be evil" brand of AI ethics through to the end.

Commenting on what it feels like to work at Hugging Face, Mitchell said: "There are already a lot of basic ethical values in place here. Obviously, I don't have to push hard to improve the ethics process."

In fact, Hugging Face has always taken a rigorous attitude toward AI-ethics issues; as an open-source model library, it hopes to reach a consensus on AI ethics with developers and users.

Hugging Face's continuously updated AI ethics briefs give a clear picture of how it develops AI-ethics tools and safeguards, so that open-source science empowers individuals while potential harms are continuously minimized.

No matter whether AI can change the world in the future, or in what form and to what extent, the big model is already an irreversible reality. It can be a productivity tool that will bring the Fourth Industrial Revolution to a climax, and it cannot be ruled out that it will be a stumbling block to continued human progress.

Without AI ethics based on expertise, it is likely that there will be a huge backlash in the future. Humans must plan ahead and make the most careful arrangements in advance to withstand this hurricane.

In the past few years, more than 100 documents related to the ethical governance of artificial intelligence have been released around the world, and the four forces of governments, international organizations, academia and industry have paid close attention to the ethical governance of artificial intelligence.

In recent years, China has successively issued policy documents such as the "New Generation AI Governance Principles" and the "New Generation Artificial Intelligence Ethics Code", which set out eight principles and emphasize integrating ethics and morality into the whole life cycle of artificial intelligence.

However, the regulation of large models and artificial intelligence cannot stop at the policy level, because large models are particularly good at imitating real human language and are therefore all the more likely to be used to deceive humans.

Between humans using artificial intelligence to deceive humans and artificial intelligence actively deceiving humans, the gap may be paper-thin.

Humans must find the perfect technological means to harness it.


Non-commercial: Do not overestimate the short term, do not underestimate the long term

In the long run, Hugging Face's position in AI is likely to be as solid as Switzerland's.

Neutral Switzerland relies on sturdy safes to navigate between major powers.

Hugging Face, on the other hand, relies on openness and complete open source; you cannot even find a competitor against which to benchmark it.

In the future, after returning enough money to its shareholders, even OpenAI has a chance to become a public-interest organization.

But for now, Hugging Face may be the only unicorn in the world that has "risen above vulgar interests."

Hugging Face is not only free, but also helps users save money.

Hugging Face solves a core pain point of many AI companies: it spares them the difficulty of building an engineering team larger than their algorithm team. In other words, it takes on most of the dirty, tiring work, which is why most algorithm experts welcome it.

Hugging Face is changing the world, but in the process of changing it, it will not necessarily capture business value in the traditional sense.

Therefore, Hugging Face is likely not applicable to the investment logic of the past.

Considering that technology is likely to enter an "AI-native" era much like cloud native, many business logics inherited from the industrial age, the Internet era, and the mobile Internet era may face challenges in the future.

On this point, Sequoia partner Pat Grady has said: Hugging Face prioritizes usage, not monetization, which I think is the right thing to do. They saw how the Transformer model could be applied beyond NLP, and they saw an opportunity to become the GitHub not just of NLP but of every area of machine learning.

Commercializing AI is not simple. The last wave, the AI 1.0 represented by CV and CNNs, never found a real breakthrough and is still struggling in the quagmire of autonomous driving. Even when the logic makes sense, and even when the market can see a concrete future from the start, the timeline is beyond anyone's control.

Large models may well run into the same difficulty of achieving true commercialization.

On May 4, Microsoft officially announced that the Bing chatbot was fully open to all users: no more waitlist; just sign in with a Microsoft account and open Bing or the Edge browser to experience the new Bing directly.

In the eyes of the outside world, Microsoft integrating ChatGPT's capabilities into Bing, and letting GPT-4 give users a more powerful generative search experience, was undoubtedly a heavy blow to the search giant Google, one sure to upend the market order of the past 20 years.

But the actual shifts in global search market share have surprised everyone. According to data service provider StatCounter, Microsoft Bing's desktop search share stands at 7.1%, lower even than its high of 9.9% last October, before OpenAI had released ChatGPT.

By contrast, Google's desktop share reached 86.7%, up nearly 3 percentage points from October last year. Expand the sample to include mobile devices and the data is even less favorable to Microsoft: Bing's overall share is only 2.8%, while Google's is still 92.6%, an absolutely dominant position in the search market.

Despite high hopes, it's still too early to tell whether AI's iPhone moment has arrived.

At least in the short term, large models are likely to be overestimated.

Large models may really have to become the OS (operating system) of the next generation of computing platforms before they can drive truly qualitative change.

Perhaps ChatGPT's multi-turn dialogue logic is simply not suited to replacing search, and the logic of rewriting all software with large models is not reliable either.

The muscle memory that has been developed over the past two decades is clearly not friendly to the existing capabilities of artificial intelligence.

Moreover, considering that the high computing power cost of applications based on large models will eventually be passed on to end users, its business prospects are even more doubtful.

In this context, Hugging Face's "cash is king" non-commercial strategy is even more valuable.

In Clément Delangue's view, Hugging Face's business model is much simpler than that of most AI companies. As he explains, Hugging Face is first of all a platform, so it has accumulated a relatively large user base. Like most open-source service providers, it uses a freemium model to grow quickly: individual developers and companies alike can use most of the platform's services for free, and by 2022 it had more than 15,000 company users.

Of those, 20 percent, or nearly 3,000 companies, use Hugging Face's paid services, including well-known names from different fields such as Intel, Qualcomm, Pfizer, Meta, Bloomberg, and Grammarly, for which Hugging Face provides various advanced features.

After the $100 million Series C round, Hugging Face opened a modest number of positions, and the team grew from 30 people the year before to 130, which is indeed far more than Midjourney but still far smaller than other unicorns.

Hugging Face's approach to recruiting also differs from other companies'; it does not even set specific job titles and responsibilities. In its view, an open-source platform needs people who fit the company's culture and can expand the company's value.

Clément Delangue has also stated publicly that Hugging Face's goal is, through its tools and developer community, to put natural language processing tools into more people's hands, help them achieve their innovation goals, and make NLP technology easier to use and access.

He added that no company, including tech giants, can "solve AI problems" alone, and the only way we can do that is through community-centric sharing of knowledge and resources.

As Clément Delangue puts it, if Google had not shared "Attention Is All You Need", it might have been another 50 years before AI reached where it is today. Hugging Face lets the world's most NLP- and ML-savvy people work together without barriers, something that could never be achieved under a purely commercial company structure.

Even if the current big model is overestimated, AI must represent the future.

According to Straits Research, the global NLP market was valued at $13.5 billion in 2021 and is expected to expand to $91 billion by 2030, with a CAGR of 27%. Meanwhile, the ML market is expected to reach $209.9 billion by 2030.

According to a Bloomberg industry research report, by 2032 the generative AI market's operating revenue will be 32.5 times its 2022 revenue; ChatGPT will usher in a decade of generative AI prosperity, with a market size of $1.3 trillion in 2032. Amazon, Google's parent Alphabet, Nvidia, and Microsoft could all be big winners of the AI boom.

For Hugging Face, though, its ceiling cannot simply be read off market-size figures; what is clear is that without Hugging Face, NLP- and ML-related research and development across the board might well be set back.


The moment: Does China need a Hugging Face, and can one emerge?

In some research reports analyzing what Hugging Face really is, it is invariably and bluntly matched against some baffling "competitors".

These typically include OpenAI, DataRobot, and even several major cloud vendors in North America.

Trotting out these companies may seem to raise Hugging Face's standing, but in fact Hugging Face not only has no clear benchmark, it has no real competitors.

Look around the AI world, and all you seem to see is Hugging Face's circle of friends.


Interestingly, the lineup of institutions that have invested in Hugging Face is more than just star-studded; it includes Lux Capital, Sequoia Capital, Addition, Coatue, Betaworks, A.Capital, and SV Angel.

The individual investors behind it are almost all leaders of North American technology companies: OpenAI co-founder and CTO Greg Brockman, Salesforce chief scientist Richard Socher, MongoDB CEO Dev Ittycheria, Dataiku CEO Florian Douetteau, Datadog CEO Olivier Pomel, and Kong CEO Augusto Marietti.

And, of course, NBA star Kevin Durant, who has invested across several consecutive rounds; Clément Delangue, a Frenchman who does not follow American basketball, was completely unaware of Durant's on-court aura when he met him.

In the fast-changing field of machine learning (ML), it is very hard for a startup to compete with industry giants, or with the leading figures of the scientific and open-source communities, and the pressure can come from every direction.

Technology giants and famous universities run hundreds of AI research centers across North America, and although different labs have different emphases, every one of them is constantly renewing itself.

Startups may be able to surpass their competitors for a period of time and within a certain range, but the iteration speed of artificial intelligence is too fast, and any single breakthrough may be quickly overtaken.

Clément Delangue said: So instead of trying to compete, we are choosing to empower the open source community and the scientific community. With an open source model, you can provide inspiration for schema and database improvements. Elastic and MongoDB are great examples of how startups can empower communities in ways that generate a thousand times more value than building a proprietary tool.

Of course, OpenAI is no exception, although its ability to train and run ChatGPT remains beyond the reach of other companies.

Even with the full backing of a patron worth tens of billions of dollars like Microsoft, OpenAI still needs fresh capital. On April 29, North American tech media reported that OpenAI, the developer of ChatGPT, had recently raised new financing totaling more than $300 million, pushing its valuation above $27 billion.

Hugging Face is less worried about cost and similar pressures: earlier this year, Amazon Web Services announced a deeper partnership with Hugging Face.

Adam Selipsky, CEO of Amazon Web Services, said that generative AI has great potential, but its cost and the expertise required put it out of reach for most companies. The partnership between Hugging Face and Amazon Web Services aims to help users build their own generative AI applications with the highest performance and the lowest cost.

Of course, on such a cutting-edge track, whatever money can solve is not a real problem, so it is no surprise that Zhihu hosts the inevitable, well-worn debate; its most popular Hugging Face topic is "Why doesn't China have a Hugging Face?"

And there is always the standard follow-up: how can an open-source model-library company like this be commercialized today, and where is its moat?

Such sweeping soul-searching questions are hard to answer, so the comments turn in other directions, for example:

  • Material is the basis of spirit: if you are weighed down by housing prices, rent, and 996, you cannot possibly have the energy to work on open source. Staying alive comes first;
  • Copyright awareness is stronger abroad, and companies there are more willing to pay for open-source companies' services; domestic companies, by contrast, prefer to spend the money recruiting R&D staff of their own;
  • The domestic AI industry is still in the stage of primitive capital accumulation, and small companies, in order to survive, will do anything to scrounge information and technology from their peers.

In the AI 2.0 era, no technology can any longer be simply "copied to China": you can no longer make minor adjustments, ride the huge demographic dividend, and win on second-hand "innovation".

Open source has become a watershed: if you develop the habit of taking without contributing, you will naturally never grasp the core competitiveness.

Similarly, in the open source world, you can't bet on certainty, and you may never understand Hugging Face if you rely on past stereotypes.

Clément Delangue argues that startups can empower open communities in a way that generates a thousand times more value than building a proprietary tool.

This also matches the business logic of the AI era: as large models grow ever more complex, the cost of managing and deploying them becomes ever harder to control. Hugging Face's open-source model library not only helps developers and companies shorten development cycles but can also save up to tens of millions of dollars in computing resources.

Clément Delangue said: "Companies don't need to capture 100% of the value they create; they only need to monetize 1% of it, and even that 1% is enough to make you a highly valued company."


And so on: Green and clean AI will be a long-term topic

Over the past six months, public figures in science and technology have been constantly forecasting the disruptive changes large models will bring in five or ten years.

  • Li Yanhong (Robin Li) said: ten years from now, 50% of the world's jobs will be prompt engineering.
  • Sam Altman, by contrast, says that in five years we will no longer need prompt engineers, because as AI evolves it will understand natural human language and communicate with people normally, without specially written prompts.
  • Zhou Hongyi said: in three to five years, every industry will be reshaped by GPT, so whoever fails to embrace artificial intelligence now will be eliminated.

But in fact, in the first few years after the original iPhone's release, the most eye-catching applications on smart devices were nothing more than Angry Birds, Fishing Joy, Fruit Ninja, and the like.

Rather than trumpeting a so-called iPhone moment for large models, it may be easier to find opportunities by paying closer attention to the precursors, the multi-touch and QR-code style changes that appeared before the iPhone.

In Clément Delangue's view, machine learning is replacing software as the new way to build technology: in the past, traditional software architectures and programming methods might have required writing millions of lines of code, but with machine learning that is entirely unnecessary, and the results are better and faster.

Large models and artificial intelligence will inevitably be a protracted war. Rather than verbal battles, predictions, and criticism, the industry should pay more attention to practical problems; otherwise the computing resources consumed by this war of attrition may be at least comparable to crypto mining.

BLOOM was trained on the French supercomputer Jean Zay, using 384 A100 GPUs with 80 GB of memory each.

At the time of BLOOM training, Hugging Face published a paper titled "Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model", and unveiled a new method to accurately calculate the carbon emissions generated by the training model. The approach can cover the entire lifecycle of the model, not just during training.

Training large models undoubtedly consumes a lot of energy, for example, it has been publicly reported that training a BERT model produces about 1,438 pounds of carbon dioxide, equivalent to the carbon emissions of a single round-trip flight from New York to San Francisco.

BLOOM's training throughput was about 150 TFLOPS; the supercomputer runs on low-carbon nuclear power, and the heat it generates is recycled to warm campus buildings.
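As a back-of-the-envelope version of that kind of accounting (not the lifecycle method of the Hugging Face paper), the sketch below multiplies GPU energy by data-center overhead and grid carbon intensity. Every number here is an illustrative assumption, not a reported BLOOM figure.

```python
def training_co2_kg(gpu_count, gpu_power_kw, hours, pue, grid_kgco2_per_kwh):
    """Rough operational estimate: GPU energy x data-center overhead x grid carbon intensity."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kgco2_per_kwh

# Illustrative inputs only: 384 GPUs drawing ~0.4 kW each, 117 days of training,
# a PUE of 1.2, and a low-carbon grid at ~0.06 kg CO2 per kWh.
estimate = training_co2_kg(
    gpu_count=384,
    gpu_power_kw=0.4,
    hours=117 * 24,
    pue=1.2,
    grid_kgco2_per_kwh=0.06,
)
print(f"~{estimate / 1000:.1f} tonnes of CO2 (operational only, rough)")
```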

As rigorous about ESG as it is about AI ethics, Hugging Face gives people reason to believe that this is a reliable AI company.

As for the future, even Clément Delangue is not entirely sure, saying: "We realize that more computing resources are not necessarily enough to solve the problem, and the returns are starting to decline. If investors pile in, it doesn't mean they'll all succeed, but it's an interesting risk, and I'm very much looking forward to seeing what these companies can create."

Source: 36Kr
