
Gorilla LLM Quick Start: AI large models for API call generation

Author: The brain in the new tank

Gorilla is an advanced large language model (LLM) designed to interact effectively with a wide range of APIs, extending what LLMs can do in real-world applications. Gorilla LLM links: official website | GitHub | paper.


1. Introduction to Gorilla LLM

By using self-instruction and retrieval techniques, Gorilla excels at selecting and leveraging tools with overlapping and evolving functionality.

Evaluated on the comprehensive APIBench dataset, which covers HuggingFace, TorchHub, and TensorHub APIs, Gorilla surpasses GPT-4 at generating API calls.


When paired with a document retrieval system, it shows an impressive ability to adapt to changes in API documentation, enhancing the reliability and applicability of its output.


2. How Gorilla works

The process of connecting Gorilla to the API involves several key steps:

  • User prompt: The user provides a natural language prompt describing the task or goal they want to accomplish with an API.
  • Retrieval (optional): In retrieval mode, Gorilla uses a document retriever such as BM25 or GPT-Index to fetch the most recent API documentation from a database. The retrieved document is then concatenated with the user prompt, along with a message instructing Gorilla to use it as a reference (see the sketch below).
  • API call generation: Gorilla processes the user prompt (and the retrieved documentation, if applicable) and generates an appropriate API call for the task. This is made possible by Gorilla's fine-tuned LLaMA-7B model, which is purpose-built for generating API calls.
  • Output: Gorilla returns the generated API call to the user, who can then use it to interact with the target API and complete the task.

Notably, Gorilla is highly adaptable and can operate in both zero-shot and retrieval modes, allowing it to keep pace with changes in API documentation and maintain accuracy over time.
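
To make the retrieval step concrete, here is a minimal sketch of how a retrieval-augmented prompt could be assembled with BM25 over a small local list of documentation strings. The rank_bm25 package, the placeholder doc snippets, and the exact wording of the reference instruction are illustrative assumptions; Gorilla's production retriever and prompt format may differ.

# Minimal sketch of retrieval-mode prompt construction, assuming the rank_bm25
# package (pip install rank_bm25) and a hypothetical in-memory list of API docs.
from rank_bm25 import BM25Okapi

api_docs = [
    "transformers.pipeline('translation_en_to_fr', model='t5-base') - translate English text to French",
    "transformers.pipeline('object-detection', model='facebook/detr-resnet-50') - detect objects in an image",
]  # placeholder documentation snippets, not real retrieved docs

# Index the docs with BM25 over lowercased, whitespace-tokenized text
bm25 = BM25Okapi([doc.lower().split() for doc in api_docs])

def build_retrieval_prompt(user_prompt: str) -> str:
    # Fetch the most relevant doc and append it as reference material for Gorilla
    top_doc = bm25.get_top_n(user_prompt.lower().split(), api_docs, n=1)[0]
    return f"{user_prompt}\nUse this API documentation for reference: {top_doc}"

print(build_retrieval_prompt("I would like to translate from English to French."))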

3. Gorilla quick start

Let's look at the code.

First, install the OpenAI Python package with pip. The snippets below use the legacy pre-1.0 interface (openai.ChatCompletion and openai.api_base), so pin the version accordingly:

pip install "openai<1.0"

Configure the api_key and api_base like this:

import openai

openai.api_key = "EMPTY"  # The key is ignored by the Gorilla server, so any value works
openai.api_base = "http://34.132.127.197:8000/v1"  # Hosted Gorilla endpoint used in the original tutorial; it may change over time

Use the OpenAI library to create a function that gets Gorilla results:

def get_gorilla_response(prompt="I would like to translate from English to French.", model="gorilla-7b-hf-v0"):
    # Send the prompt to the Gorilla server through the OpenAI-compatible chat endpoint
    completion = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}]
    )
    # The generated API call is returned as the content of the assistant message
    return completion.choices[0].message.content

Call the function with your prompt and the model you want to use, in this case gorilla-7b-hf-v0 (the HuggingFace variant):

prompt = "I would like to translate from English to Chinese."
print(get_gorilla_response(prompt, model="gorilla-7b-hf-v0"))

That's it. Gorilla will respond with the relevant HuggingFace API call and instructions on how to execute the request.
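
The same function can also be pointed at Gorilla's other fine-tuned variants. The model names below, gorilla-7b-th-v0 (TorchHub) and gorilla-7b-tf-v0 (TensorFlow Hub), are the ones listed in the Gorilla GitHub repository at the time of writing; check the repository for the current list before relying on them.

# Querying the TorchHub and TensorFlow Hub variants; the model names come from
# the Gorilla repository and may change over time.
print(get_gorilla_response("I want to detect objects in an image.", model="gorilla-7b-th-v0"))
print(get_gorilla_response("I want to classify images of flowers.", model="gorilla-7b-tf-v0"))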

4. Gorilla demo

A live Gorilla demo is linked from the project's official website and GitHub repository.


5. Concluding remarks

Gorilla is a breakthrough LLM that generates accurate API calls and adapts to real-time changes in documentation. This model paves the way for future LLMs to become more reliable and versatile in interacting with tools and systems.

Future advances in LLMs could focus on further reducing hallucination errors, improving adaptability to different APIs, and expanding their ability to handle complex tasks. Potential applications include serving as the primary interface to computing infrastructure, automating processes such as booking a vacation, and enabling seamless communication between different web APIs.

Original link: http://www.bimant.com/blog/gorilla-llm-api-calling-generation/