
What is the difference between Google PaLM 2 and OpenAI GPT-4 and who is more powerful?

Author: Snowfall

Google launched its next-generation Pathways Language Model (PaLM 2) at Google I/O 2023 on May 10, 2023. The new large language model (LLM) brings many improvements over its predecessor (PaLM) and may finally be ready to take on GPT-4, the flagship model of Google's biggest competitor, OpenAI.

But how much has Google really improved? Is PaLM 2 the differentiator Google is hoping for, and more importantly, with so many similar features, how does PaLM 2 differ from OpenAI's GPT-4?


PaLM 2 vs GPT-4: Performance Overview

PaLM 2 brings new and improved features over its predecessor. One unique advantage PaLM 2 has over GPT-4 is that it comes in smaller sizes, making it suitable for applications that don't have much onboard processing power.

These different sizes are sold as their own smaller models, named Gecko, Otter, Bison, and Unicorn, with Gecko being the smallest, followed by Otter and Bison, and Unicorn the largest.

Google also claims improved reasoning over GPT-4 on the WinoGrande and DROP benchmarks, with a narrow lead on ARC-C. Compared to the original PaLM and previous state-of-the-art results, however, the improvements are significant across the board.

According to Google's 91-page PaLM 2 research paper [PDF], PaLM 2 is also better at math. However, the way Google and OpenAI report their test results makes it difficult to compare the two models directly. Google also omitted some comparisons, probably because PaLM 2 did not perform as well as GPT-4 on them.

In MMLU, GPT-4 scored 86.4, while PaLM 2 scored 81.2. The same goes for HellaSwag, where GPT-4 scored 95.3 but PaLM 2 only reached 86.8, and ARC-E, where GPT-4 and PaLM 2 scored 96.3 and 89.7, respectively.

The largest model in the PaLM 2 family is PaLM 2-L. While we don't know its exact size, we do know it is significantly smaller than the largest PaLM model while using more training compute. According to Google, PaLM has 540 billion parameters, so a "significantly smaller" PaLM 2 should land somewhere between 10 and 300 billion parameters. Keep in mind that these numbers are only inferences based on what Google said in the PaLM 2 paper.

If that number is close to 100 billion or below, PaLM 2 likely has fewer parameters than GPT-3.5. That would be impressive, considering a model possibly under 100 billion parameters can go neck and neck with GPT-4 and even beat it on some tasks. GPT-3.5 initially blew everything out of the water, PaLM included, but PaLM 2 has more than made up that ground.

Differences in GPT-4 and PaLM 2 training data

While Google has yet to publish the size of PaLM 2's training dataset, the company reports in its research paper that the new LLM's training dataset is significantly larger than its predecessor's. OpenAI took the same approach when it launched GPT-4, making no claims about the size of its training dataset.


However, Google wants the model to have a deeper understanding of mathematics, logic, reasoning, and science, which means much of PaLM 2's training data is focused on those topics. Google says in its paper that PaLM 2's pre-training corpus is composed of multiple sources, including web documents, books, code, mathematics, and conversational data, making it an overall improvement, at least compared to PaLM.

PaLM 2's conversational skills should also be on another level, considering the model has been trained on more than 100 languages to give it better contextual understanding and translation capabilities.

As for GPT-4's training data, OpenAI has confirmed that it trained the model using publicly available data and data it licensed. GPT-4's research page describes the data as "a web-scale corpus of data including correct and incorrect solutions to math problems, weak and strong reasoning, self-contradictory and consistent statements, and representing a great variety of ideologies and ideas."

When GPT-4 is asked a question, it can produce a wide variety of responses, not all of which may be relevant to your query. To align it with the user's intent, OpenAI fine-tuned the model's behavior using reinforcement learning with human feedback.
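OpenAI hasn't published the internals of that fine-tuning pipeline, but the core idea of its reward-modeling step can be sketched in a few lines. The toy snippet below is only an illustration, not OpenAI's implementation: the RewardModel class, the random vectors standing in for response embeddings, and all the hyperparameters are invented for this example. It shows how a small scorer can be trained so that, for each pair of responses, the one a human preferred receives the higher score; such a scorer is what later guides the reinforcement-learning updates to the chat model.

```python
# Toy sketch of the reward-modeling step behind RLHF (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

EMBED_DIM = 16  # toy dimensionality; real systems use a learned text encoder


class RewardModel(nn.Module):
    """Maps a response embedding to a single scalar preference score."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)


model = RewardModel(EMBED_DIM)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in "embeddings" for human-preferred (chosen) and rejected responses.
chosen = torch.randn(64, EMBED_DIM)
rejected = torch.randn(64, EMBED_DIM)

for step in range(100):
    # Pairwise ranking loss: push the preferred response's score above the other's.
    loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```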

While we may not know the exact training data for these models, we do know that the training intent is very different. We'll have to wait and see how this difference in training intent differentiates the two models in a real-world deployment.


PaLM 2 and GPT-4 chatbots and services

The most direct way to access either LLM is through its respective chatbot: Bard for PaLM 2 and ChatGPT for GPT-4. That said, GPT-4 sits behind ChatGPT Plus' paywall, and free users only get access to GPT-3.5. Bard, on the other hand, is free for everyone and available in 180 countries.

That's not to say you can't access GPT-4 for free at all. Microsoft's Bing AI Chat uses GPT-4, is completely free, is open to everyone, and sits right alongside Bing Search, Google's biggest competitor in search.
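For developers, there is a third route: OpenAI's paid API exposes GPT-4 directly, without going through ChatGPT at all. The sketch below uses the openai Python package's chat interface as it existed in 2023; the API key and prompt are placeholders, and GPT-4 API access required a paid account (and, early on, getting off a waitlist).

```python
# Minimal sketch: querying GPT-4 through OpenAI's API (2023-era openai v0.x interface).
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder, not a working key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "In one sentence, what is PaLM 2?"}],
)
print(response.choices[0].message.content)
```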

Google I/O 2023 was full of announcements about how PaLM 2 and generative AI integration will improve the Google Workspace experience, with AI features coming to Google Docs, Sheets, Slides, Gmail, and pretty much every service the search giant offers. In addition, Google has confirmed that PaLM 2 has been integrated into more than 25 Google products, including Android and YouTube.

In contrast, Microsoft has brought AI capabilities to the Microsoft Office suite and many of its other services. Right now, you can experience both LLMs in similar products from two rival companies going head-to-head in the AI wars.

However, because GPT-4 arrived earlier and OpenAI took care to avoid many of the mistakes Google made with the original Bard, it has become the de facto choice for third-party developers, startups, and pretty much anyone else looking to fold a capable AI model into their services.

That's not to say developers won't switch to, or at least try, PaLM 2, but Google still needs to catch up with OpenAI on this front. Google making PaLM 2 broadly available to developers through its own API and products, rather than only through a consumer paywall, gives it the potential to be adopted just as widely as GPT-4.
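For what it's worth, developers could already reach PaLM 2 through Google's PaLM API and the google.generativeai Python SDK. The sketch below is a hedged example based on that 2023-era SDK: the API key is a placeholder, and the text-bison-001 model name refers to the Bison-sized PaLM 2 text model the API exposed at launch; availability varied by account and region.

```python
# Hedged sketch: calling a PaLM 2 model through Google's generative AI SDK
# (google.generativeai, the PaLM API from 2023).
import google.generativeai as palm

palm.configure(api_key="YOUR_PALM_API_KEY")  # placeholder, not a working key

completion = palm.generate_text(
    model="models/text-bison-001",  # Bison-sized PaLM 2 text model
    prompt="In one sentence, how does PaLM 2 differ from GPT-4?",
)
print(completion.result)
```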

Can PaLM 2 compete with GPT-4?

PaLM 2 is still very new, so whether it can compete with GPT-4 remains to be seen. However, given everything Google has promised and the aggressive way it has decided to roll the model out, it does look like PaLM 2 can give GPT-4 a run for its money.

However, GPT-4 is still a very capable model and, as mentioned earlier, beats PaLM 2 in many comparisons. Even so, PaLM 2's family of smaller models gives it an undeniable advantage. Gecko itself is lightweight enough to run on mobile devices, even offline. That means PaLM 2 can support an entirely different class of products and devices where GPT-4 might be difficult to use.

The AI race is intensifying

With the launch of PaLM 2, the race for AI dominance has heated up, as it may be the first opponent worthy of taking on GPT-4. And with an updated multimodal AI model called "Gemini" also in training, Google shows no signs of slowing down.