
Under the new technology paradigm, what is the moat of future artificial intelligence enterprises?

Author: Growth Research Society

Introduction

To build a sustainable and profitable business, you need a strong defensible moat around your company. This is especially important as we go through the biggest platform shift in a generation: applications are moving to the cloud, consumed on iPhones, Echos, and Teslas, built on open source, and driven by AI and data. These upheavals have rendered some existing moats useless, making it harder than ever for CEOs to build a defensible business.

With LLaMA, Alpaca, Vicuna, RedPajama, and other models increasingly going open source, a leaked Google memo famously concluded, "We have no moat, and neither does OpenAI." The proprietary advantages of Google and OpenAI are being eroded by open source, particularly since the release of Meta's LLaMA model, which has spawned an ecosystem that builds on (and improves) it. As the memo put it, "Paradoxically, the one clear winner in all of this is Meta": because the leaked model is theirs, they effectively get the equivalent of the entire planet's worth of free labor.

Meta, however, is not the only beneficiary of this development. Startups across the market, big and small, stand to benefit as well. In the original article, "The New Moats," published six years ago, the author correctly pointed out the power of open source, but mistakenly assumed it would only benefit the large cloud providers that can offer open-source services at scale. Instead, this new generation of AI models may shift power back to startups, which can leverage the underlying models, both open source and proprietary, in their products.

In fact, some of the early beneficiaries of this new wave of AI are incumbents and startups that have been able to incorporate generative AI into their applications, such as Adobe, Abnormal, Coda, Notion, Cresta, Instabase, Harvey, EvenUp, CaseText and Fermat.

To paraphrase Darwin, "The companies that survive are not the strongest (the largest, best-capitalized, or best-known), but those most adaptable at integrating AI." The focus of this article is not to debate whether moats exist, but where the value of AI will accumulate.

Historically, open-source technology has reduced the value of the layer in which it resides and transferred value to adjacent layers. For example, open-source operating systems like Linux and Android reduced the leverage of proprietary operating systems like Windows and iOS, shifting more value to the application layer. That doesn't mean there is no value left in the operating system layer (Windows and iOS certainly capture plenty!). At the same time, you can still create value through cloud-based open-source business models such as Databricks, MongoDB, and Chronosphere.

In the article published six years ago, the author highlighted how adjacent layers could benefit most from the large cloud platforms. With open-source foundation models, however, some of the value that might otherwise have been captured by OpenAI or Google can now shift to applications, startups, and the infrastructure around LLMs. OpenAI and Google can still capture value, and the ability to build and run these giant models remains a moat. Building developer communities and network effects is still a moat too, but in a world where open-source alternatives exist, these moats capture less value.

In this article, we will review the traditional business moats that technology companies have typically relied on, and how they are breaking down. Today's startups need to build systems of intelligence, AI-powered applications, as their "new moats." A business can build several different moats, and evolve them over time.

Traditional business moats


01.

Economies of scale

Some of the greatest and oldest technology companies have strong moats. For example, Microsoft, Google, and Facebook (now Meta) all have moats based on economies of scale and network effects.

At this moment of technological transformation, a key component of building valuable AI products is the foundation models with billions or trillions of parameters, which cost hundreds of millions of dollars to train, together with the computing resources that power them. Without the release of LLaMA, most of the value would likely have gone to companies like Google, or to startups like OpenAI, Anthropic, and Inflection that have the capital (and GPUs) to train these models. One open question is the balance between trillion-parameter models and smaller models. If the race skews toward ever-larger models, then scale may become the ultimate moat.

The larger a product, the more operational leverage it has, which in turn will reduce your costs. SaaS and cloud services can have strong economies of scale: you can expand your revenue and customer base while keeping the core engineering of the product relatively stable.

As key computing partners for the startups developing foundation models, AWS, Microsoft, and Google, the world's three largest cloud providers, are leveraging economies of scale and network effects to stay competitive in the current AI boom. Training AI models has become a data-center-scale undertaking, combining compute and networking into a giant building-sized supercomputer.

Relying on large cloud providers to run complex machine learning models has even led to a resurgence of Oracle as a partner of choice. The company initially lagged in the cloud server business, and has since made a series of catch-up moves in AI, chiefly through its partnership with NVIDIA. Oracle is currently working with a number of leading startups, including Adept, Character, and Cohere.

02.

Network effects

Metcalfe's Law states that if each additional user of a product or service brings more value to all other users, then your product or service has a "network effect." Messaging apps like Slack and WhatsApp, and social networks like Facebook are good examples of powerful network effects. Operating systems like iOS, Android, and Windows have powerful network effects because the more customers use the operating system, the more apps will be built on top of it.

One of the most successful cloud vendors, Amazon Web Services (AWS), has both scale and the power of network effects. Because "there are customers and data out there," more applications and services are being built on AWS. In turn, the infrastructure ecosystem that provides solutions attracts more customers and developers who build more apps, generate more data, and continue the virtuous cycle while reducing Amazon's costs through scale advantages.

The first innovators to win user adoption can build network effects. OpenAI is rapidly establishing a first network-effect barrier around its models. In particular, its function calling and plugin architecture could turn OpenAI into the new "AI cloud." However, it is too early in the race to build network effects to declare any company the winner. In fact, many players are extending this concept by building agents with tools like LlamaIndex, LangChain, AutoGPT, and BabyAGI, all designed to automate parts of your applications, infrastructure, or life.
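Function calling is easiest to see as a request shape: alongside the chat messages, the application advertises the tools the model is allowed to invoke, and the model may respond by naming one with arguments. Below is a minimal, hedged sketch of such a payload. The `get_weather` tool and its schema are invented for illustration, and no API call is made; we only assemble the JSON-style structure an OpenAI-compatible chat endpoint would receive.

```python
# Illustrative only: build the payload shape for a function-calling chat
# request. The tool "get_weather" is a hypothetical example, not a real API.

def build_function_call_request(user_message: str) -> dict:
    """Assemble a chat request that advertises one callable tool."""
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": user_message}],
        "functions": [
            {
                "name": "get_weather",  # hypothetical tool name
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ],
    }

request = build_function_call_request("What's the weather in Paris?")
```

The moat logic follows from this shape: every tool registered against the platform deepens the ecosystem, the same dynamic that once made operating systems sticky.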

03.

Deep technology/IP/industry accumulation

Most tech companies start with their own software or methods. These trade secrets can include core solutions to hard technical problems, new inventions, new processes, new technologies, and patents that later protect the intellectual property (IP) developed. Over time, a company's IP may evolve from a specific engineering solution to accumulated operational knowledge or insight into a problem or process.

Today, some AI companies are building their own models, used both to power their applications and as services for others. Startups in this space include Adept, Inflection, Anthropic, Poolside, Cohere, and others. As mentioned earlier, a key question for these models is the trade-off around training cost. It will be interesting to see whether the pioneers of early foundation models, such as OpenAI and Google, can build moats from their deep technology, or whether their lead will be eroded by the broader body of academic research and open-source work in AI.

04.

High switching costs

Once customers start using your product, you want to make it as difficult as possible for them to switch to a competitor. You can build this stickiness through standardization, lack of alternatives, integration with other apps and data sources, or by building a deeply embedded and valuable workflow that your customers rely on. Any of these can serve as a form of lock-in that makes it hard for customers to leave.

An interesting question is whether switching costs live at the model layer or the application layer. For example, Midjourney has millions of users generating images with its diffusion model. If a better model emerges, how hard would it be for Midjourney to swap in a new one? And even if a better model exists, how hard is it for users to switch to another app? Over the next few years, we will see companies trying to establish switching costs at the application layer, and potentially at the model layer.

05.

Brand/customer loyalty

A strong brand can itself be a moat. With every positive interaction between product and customer, the brand advantage compounds over time; but if customers lose trust in the product, that strength evaporates quickly.

Trust is crucial in AI, and for many products that trust has yet to be earned. These early AI models can "hallucinate," giving wrong answers or adopting strange personas, such as Bing's Sydney. There will be a race to build trustworthy AI, with tools like TruLens, to win customers' trust.

Traditional moats will be reshaped

A strong moat can help a company survive a major platform shift, but survival should not be mistaken for thriving.

For example, high switching costs may partly explain why mainframes and other "big iron" systems still exist after all these years. Legacy businesses with deep moats may no longer be the high-growth engines of their prime, but they still generate profits. Companies need to recognize and react to industry-wide shifts so that they do not fall victim to their own success.


"Switching costs" as a moat: x86 server revenue didn't surpass mainframe and other "big iron" revenue until 2009.

We can see the shift to AI platforms in the financial performance of NVIDIA, the leading GPU provider, and Intel, the leading CPU provider. In 2020, NVIDIA surpassed Intel to become the most valuable chip company. In 2023, its market capitalization crossed the $1 trillion mark.


These massive platform shifts, such as cloud computing and mobile, are technology trends that create opportunities for new entrants and enable founders to build their own paths on existing moats.

Successful startup founders tend to adopt a two-pronged strategy: 1) attack the moats of incumbents, while 2) building their own defensible moats on the back of the new trend.

AI is becoming today's platform technology, and this new LLM wave has the potential to upend the hierarchy among incumbents. One example: through its integration with OpenAI's ChatGPT, the long-maligned Microsoft Bing may finally crack Google's search moat.

Facebook, for example, had the strongest social network, but Instagram built a mobile-first photo app that rode the smartphone wave and was acquired for $1 billion. In enterprise software, SaaS companies like Salesforce disrupted the market for on-premises vendors like Oracle. Now, with cloud computing, AWS, Azure, and Google Cloud have created direct channels to customers. These platform shifts can also change who the buyers and end users are. In the enterprise, buyers have moved from central IT teams to knowledge workers in the office, to people using iPhones, and finally to any developer with a GitHub account.

Today, the new LLM models have created a new user category: prompt engineers. As generative AI models are adapted for use across industries, user roles will become broader and more diverse. As AI becomes an intrinsic part of every product, it remains to be seen how durable the prompt engineer role will be.

New moat?

Is it still possible to build a sustainable moat amid the current wave of disruption? For founders, they may feel that every advantage they build can be replicated by another team, or at least feel that a moat can only be built on a large scale. Open source tools and cloud computing have shifted power to the "new incumbents" — those at scale, with strong distribution networks, high switching costs, and strong brands. These companies include Apple, Facebook, Google, Amazon and Salesforce, among others.

Why does it feel as if no moat can be built? In the age of cloud computing and open source, deep technology attacking hard problems is becoming a shallower moat. Open source makes technological advances harder to monetize, while delivering technology through the cloud shifts defensibility to other parts of the product. Companies that focus too heavily on technology without putting it in the context of customer problems get squeezed "between open source and cloud computing." For example, incumbent technologies like Oracle's proprietary database are being attacked in the cloud by open-source alternatives like Hadoop and MongoDB, as well as by innovations like Amazon Aurora and Google Spanner. On the other hand, companies that build great customer experiences may gain defensibility through the workflows of their software.

We believe the deep-technology moat has not completely disappeared, and reliable businesses can still be built around intellectual property. If you pick one layer of the technology stack and become the absolute best solution there, you can also create a valuable company. However, that means choosing a technical problem that has few substitutes, requires hard engineering, and demands operational know-how to scale.

Foundation models are one of today's deep technology/IP moats. The owners of foundation models publish APIs and plugins, while continuing to develop better products internally. Developers can build applications on top of open-source LLMs with relative ease, which has produced a large number of startups offering specialized products. For now, though, most startups at this layer are not building enough of a moat.

One potential possibility is that large models can solve most of the complex problems, while smaller models can solve specific problems or power edge devices such as phones, cars, or smart homes.

Today, the market is skewed toward "full-stack" companies: SaaS offerings that bundle application logic, middleware, and databases. Technology is becoming an invisible part of a complete solution (e.g., "Who cares which database your favorite mobile app's backend uses, as long as your food arrives on time!"). In the consumer space, Apple popularized the integrated, full-stack experience by seamlessly combining hardware and software. This integrated experience is now coming to dominate enterprise software. Cloud computing and SaaS make it possible to reach customers directly and cost-effectively. As a result, customers increasingly prefer to buy full-stack technology delivered as SaaS applications rather than buying the components of the tech stack and building their own applications. This emphasis on the entire application experience, the "top of the tech stack," is also why the author evaluates companies through an additional framework: the stack of enterprise systems.

The stack of enterprise systems

Under the new technology paradigm, what is the moat of future artificial intelligence enterprises?

01.

Systems of Record

The underlying layer of a system is usually a database on which applications are built. If the data and applications support critical business functions, the product becomes a "system of record." There are three main systems of record in an enterprise: customers, employees, and assets. Customer Relationship Management (CRM) manages customers, Human Capital Management (HCM) manages employees, and Enterprise Resource Planning (ERP)/financial management manages assets.

Generations of companies have been built around owning a system of record, and each wave of technology produces a new winner. In CRM, we saw Salesforce replace Siebel as the system of record for customer data, and Workday replace Oracle PeopleSoft as the system of record for employee data. Workday has since expanded into financial data. Other applications can be built around systems of record, but they are usually less valuable than the systems of record themselves. For example, marketing automation companies like Marketo and Responsys built large businesses around CRM, but never became as strategic or as valuable as Salesforce.

Foundation models do not replace existing systems of record; they are used to unlock value and understanding across all of them. As mentioned earlier, several foundation models exist today. Whether the world evolves toward a few large models that can be fine-tuned or pruned for different situations, or whether there is also a market for smaller models, is debatable. Either way, these models are key elements of what we called "systems of intelligence" in The New Moats six years ago.

02.

Systems of Engagement

Systems of engagement are the interfaces between users and systems of record, and they can be powerful businesses because they control end-user interactions.

In the mainframe era, the system of record and the system of engagement were tied together: the mainframe and the terminal were effectively the same product. The client/server wave brought companies trying to own your desktop, which were eventually disrupted by browser-based companies, which in turn were displaced by mobile-first companies.

The current generation of companies vying to own the system of engagement includes Slack, Amazon Alexa, and other startups with voice/text/conversational interfaces. In China, WeChat has become the dominant system of engagement, an all-in-one platform covering everything from e-commerce to gaming.

Systems of engagement may be replaced faster than systems of record. Successive generations of engagement systems don't necessarily disappear; rather, users keep adding new ways to interact with their apps. In a multichannel world, owning the system of engagement is most valuable if you control the majority of end-user interactions, or if you build a cross-channel system that reaches users wherever they are.

One of the most important strategic advantages of a system of engagement is that it can coexist with multiple systems of record while collecting all the data that passes through the product. Over time, you can leverage that accumulated data to evolve your engagement position into an actual system of record.

Six years ago, the author highlighted chat as a new system of engagement. Slack and Microsoft Teams tried to become the primary system of engagement for businesses and the chat front end for enterprise applications, but fell short of that goal. The chat-first vision has not yet materialized, but foundation models could change that. Instead of opening an app like Uber or Instacart to book a ride or order groceries, we could order dinner or plan a vacation by simply asking our AI assistant. In a future where everyone has a personal AI assistant, every interaction may feel like using a messaging app. Voice assistants like Siri and Alexa may be replaced by smarter conversational systems like Pi, the personal AI assistant developed by Inflection.ai.

OpenAI's release of plugins and function calling creates a new way to build and distribute applications, effectively making GPT a new platform. In this world, chat could become the front door to almost everything: our everyday system of engagement. It will be fascinating to watch how the user experience of AI applications evolves. While chat dominates today, we expect multimodal interaction models to create new systems of engagement that go beyond chat.

03.

The New Moat: Systems of Intelligence

Systems of intelligence are still the new moat.

"What is an intelligent system and why is it so defensive?"

Systems of intelligence are often valuable because they span multiple data sets and multiple systems of record. One example is combining web analytics, customer data, and social data to predict end-user behavior, churn, or lifetime value (LTV), or to deliver more timely content. You can build intelligence on a single data source or a single system of record, but that position is harder to defend against the vendor that owns the data.
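As a toy illustration of spanning multiple systems of record, the sketch below joins three hypothetical data sources on a shared customer id and blends them into a single churn-risk score. All field names, the data, and the scoring heuristic are invented for illustration; a real system of intelligence would use learned models, not hand-tuned weights.

```python
# Hypothetical systems of record, keyed by customer id.
crm = {"c1": {"plan": "pro"}, "c2": {"plan": "free"}}
web_analytics = {"c1": {"logins_30d": 22}, "c2": {"logins_30d": 1}}
support = {"c1": {"open_tickets": 0}, "c2": {"open_tickets": 3}}

def churn_risk(cid: str) -> float:
    """Blend usage and support signals into a 0..1 risk score."""
    logins = web_analytics[cid]["logins_30d"]
    tickets = support[cid]["open_tickets"]
    usage_risk = max(0.0, 1.0 - logins / 20)   # low usage -> high risk
    ticket_risk = min(1.0, tickets / 5)        # many tickets -> high risk
    return round(0.6 * usage_risk + 0.4 * ticket_risk, 2)

scores = {cid: churn_risk(cid) for cid in crm}
```

The defensibility comes from the join itself: no single vendor's system of record contains enough signal to compute this score on its own.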

For startups to thrive around established players like Oracle and SAP, they need to combine those players' data with other data sources, public or private, to create value for their customers. Incumbents have the advantage in their own data. Salesforce, for example, is building a system of intelligence called Einstein, starting from its own system of record, the CRM.

After coming up with the concept of building "intelligent systems" six years ago, we've seen some incredible AI applications emerge such as Tome, Notable Health, RunwayML, Glean, Synthesia, Fermat, and hundreds of other startups. While it's unclear where value will accumulate in this emerging stack, this transformation offers ample opportunities for startups.

But as noted earlier, the author did not initially foresee the power of large language models, which have genuinely supercharged what was then called systems of intelligence.

A "new stack" has emerged for LLM applications, including a new set of middleware tools for chaining prompts and composing models. Just as a wave of companies was created to make cloud computing and storage more manageable, we are seeing a wave of startups aiming to make foundation models easier to use.

This new middleware stack includes data frameworks such as LlamaIndex for connecting enterprise data with LLMs, and agent frameworks such as LangChain for building applications and wiring models together. In addition, a new generation of security and observability tools is needed to ensure the uptime and safety of these new applications.
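What this middleware actually does can be sketched in a few lines: each step formats a prompt with the previous step's output and calls the model. The `llm()` stub below stands in for a real LLM call, and the prompt templates are invented for illustration; frameworks like LangChain build on this same chaining principle with far more machinery (memory, tools, retries).

```python
# Minimal prompt-chaining sketch. llm() is a stub, not a real model call.

def llm(prompt: str) -> str:
    """Stub model: wraps the prompt in a canned 'answer'."""
    return f"ANSWER({prompt})"

def run_chain(steps, user_input: str) -> str:
    """Pipe the output of each prompt template into the next."""
    text = user_input
    for template in steps:
        text = llm(template.format(input=text))
    return text

chain = [
    "Summarize: {input}",
    "Translate the summary to French: {input}",
]
result = run_chain(chain, "Quarterly revenue grew 12%.")
```

Swapping the stub for a hosted or open-source model is the entire integration surface, which is why this layer is crowded and why moats here are hard to hold.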


Future generations of enterprise products will use different AI technologies to build systems of intelligence. Not only applications but also data center and infrastructure offerings will be transformed by AI. The main areas for building systems of intelligence fall into three groups: customer-facing applications centered on the customer journey, employee-facing applications (such as HR management, IT service management, and finance), and infrastructure systems (such as security, compute/storage/networking, and monitoring/management). Beyond these broad horizontal use cases, startups can focus on specific industries or markets, building systems of intelligence on unique vertical data, as Veeva does in life sciences and Rhumbix in construction.

Previous generations of applications digitized existing processes; these new AI applications will augment and enhance human capabilities, making individuals more productive.

AI tools already make design, coding, data processing, legal work, and more faster and more accurate. In the legal field, for example, companies like Harvey.AI and EvenUp are performing the tasks of paralegals and lawyers. GitHub Copilot has multiplied the productivity of developers, and new developers can now write code like seasoned professionals. Designers working with Adobe's new product, Firefly, can create digital images that previously required an entire team. Productivity apps like Tome, Coda, and Notion give every desk worker new superpowers, increasing speed and output. These are truly the AI-powered "Iron Man suits" that technology promised. As we grow more reliant on AI-based applications, managing and monitoring trustworthy AI becomes more important, so that we don't make decisions based on hallucinations.

In all these markets, the locus of competition is shifting from the old barriers (where the data comes from) to new ones (how the data is used). Companies leverage their data to sell value-added products to customers, automate support tickets, prevent employee churn, and detect security anomalies. Products that use data specific to an industry (e.g., healthcare, financial services) or to a company (customer data, machine logs, etc.) to solve strategic problems can dig a very deep moat, especially if AI can replace or automate an entire enterprise workflow, or create a new value-added workflow that this intelligence makes possible.

Enterprise applications that build systems of record have always been a strong business model. Some enduring application companies, such as Salesforce and SAP, are built on a deep foundation of intellectual property, benefit from economies of scale, and accumulate more data and operational knowledge over time in the company's workflows and business processes. However, even these incumbent giants are not immune to the platform shift as a new generation of companies is attacking their field.

In fact, we may be at risk of fatigue with AI marketing, but all the buzz reflects AI's potential to transform many industries. Machine learning (ML) is a popular AI approach that can be combined with data, business processes, and enterprise workflows to provide context for building intelligent systems. Google was an early pioneer in applying machine learning to processes and workflows: they collected more data per user and applied machine learning to deliver more timely ads in the workflow of web search. There are other AI technologies that are evolving, such as neural networks, that will continue to change our expectations for these future applications.

However, the author believes this judgment is off in two respects. First, AI fatigue has not set in yet: we are witnessing the beginning of the next great technology wave, and excitement is naturally high. Second, foundation models have become the most transformative advance in AI; as a result, many earlier machine learning/AI companies risk being overtaken by the latest LLMs (large language models).

These intelligent, AI-driven systems offer tremendous opportunities for startups. Companies that succeed here can build a virtuous cycle of data: the more data you use to build and train a product, the better the model, and thus the product, becomes. Ultimately, the product is customized for each customer, creating another moat: high switching costs. It is possible to build a company that combines systems of engagement and intelligence, or even the enterprise's entire stack, but a system of intelligence or engagement is often the best entry point for a startup targeting incumbents. Building either is no trivial task; it requires deep technical strength, especially around speed and scale. In particular, technology that provides an intelligence layer across multiple data sources will be essential.

We are already seeing this virtuous cycle of data at work. It is not just the value of data in training the original model, but also usage data, feedback loops for models and applications, and in some cases even reinforcement learning from that feedback, all of which strengthen the data moat over time.
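The feedback dynamic described above can be made concrete with a toy simulation: more users generate more feedback, feedback nudges model quality up, and a better product attracts more users. The growth constants and update rules below are invented purely to illustrate the compounding shape of the loop, not to model any real product.

```python
# Toy model of the data flywheel: users -> feedback -> quality -> users.
# All constants are illustrative, chosen only to show the compounding loop.

def flywheel(users: int, quality: float, rounds: int):
    """Simulate a few turns of the data feedback loop."""
    history = []
    for _ in range(rounds):
        feedback = users * 0.1                      # 10% of users give feedback
        quality = min(1.0, quality + feedback / 10_000)
        users = int(users * (1 + quality * 0.05))   # better product -> growth
        history.append((users, round(quality, 3)))
    return history

trajectory = flywheel(users=1000, quality=0.5, rounds=3)
```

Each turn of the loop both grows the user base and raises quality, which is why the moat compounds: a later entrant starts every turn behind on both variables.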

ChatGPT or personal AI tools like Inflection AI's Pi have clear potential to be the primary channel for every task, whether it's accessing apps, developing software, or communicating in a variety of scenarios. At the same time, data frameworks like LlamaIndex will be key to connecting personal data with LLM. The combination of models, usage data, and personal data will create a personalized app experience for each user or company.

Finally, some businesses can accelerate the development of intelligence by using customer and market data to train and improve models to provide better products for all customers.

Startups can build a defensive business model as a system of engagement, intelligence, or record. With the advent of artificial intelligence, smart applications will be the source of the next generation of great software companies as they become the new moat.

The old moat is the new moat

The rise of artificial intelligence is exciting, and today's startups have essentially come full circle in their quest for new moats. The old moats have proven more important than ever. If Google's "we have no moat" prediction comes true, and AI models let any developer with access to GPT or LLaMA build systems of intelligence, how do we build sustainable businesses? The value of an application lies in how it is delivered. Workflows, integration with data and other applications, brand and trust, network effects, scale, and cost efficiency all become creators of economic value and barriers. Companies that can build systems of intelligence still need to master go-to-market. They must find not only product-market fit, but also product-go-to-market fit.

AI doesn't change how startups must market, sell, and partner. It reminds us that while each generation of technology has its own technical foundations, the fundamentals of building a business remain the same.

Old barriers are actually new barriers.

Author | SenseAI

Source | SenseAI. Focused on the global AI frontier and technology startups, offering multi-dimensional industry perspectives.
