OpenAI Cookbook

Author: AI讓生活更美好

The OpenAI Cookbook shares example code for accomplishing common tasks with the OpenAI API.

To run these examples, you'll need an OpenAI account and API key (create a free account).

Most code examples are written in Python, though the concepts can be applied in any language.
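
Since every example needs that API key, a common pattern (an illustrative sketch, not code from the Cookbook itself) is to read it from an environment variable up front and fail with a clear message if it is missing:

```python
import os


def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    """Read the OpenAI API key from the environment.

    Raising a clear error here avoids a later request failing with a
    confusing authentication message.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"Set the {env_var} environment variable to your OpenAI API key."
        )
    return key
```

With the 0.x `openai` Python package the key was then typically assigned as `openai.api_key = load_api_key()`; the helper itself assumes nothing beyond a standard environment variable.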

Recently added/updated ✨

  • Related resources from around the web [May 22, 2023]

  • Embeddings playground (streamlit app) [May 19, 2023]

  • How to use a multi-step prompt to write unit tests [May 19, 2023]

  • How to create dynamic masks with DALL·E and Segment Anything [May 19, 2023]

  • Question answering using embeddings [Apr 14, 2023]

Guides & examples

  • API usage
    - How to handle rate limits
    - Example parallel processing script that avoids hitting rate limits
    - How to count tokens with tiktoken

  • GPT
    - How to format inputs to ChatGPT models
    - How to stream completions
    - How to use a multi-step prompt to write unit tests
    - Guide: How to work with large language models
    - Guide: Techniques to improve reliability

  • Embeddings
    - Text comparison examples
    - How to get embeddings
    - Question answering using embeddings
    - Using vector databases for embeddings search
    - Semantic search using embeddings
    - Recommendations using embeddings
    - Clustering embeddings
    - Visualizing embeddings in 2D or 3D
    - Embedding long texts
    - Embeddings playground (streamlit app)

  • Apps
    - File Q&A
    - Web Crawl Q&A
    - Powering your products with ChatGPT and your own data

  • Fine-tuning GPT-3
    - Guide: best practices for fine-tuning GPT-3 to classify text
    - Fine-tuned classification

  • DALL·E
    - How to generate and edit images with DALL·E
    - How to create dynamic masks with DALL·E and Segment Anything

  • Azure OpenAI (alternative API from Microsoft Azure)
    - How to use ChatGPT with Azure OpenAI
    - How to get completions from Azure OpenAI
    - How to get embeddings from Azure OpenAI
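
Several of the embeddings guides above rank texts by vector similarity. As a minimal sketch of the underlying idea (the toy 3-dimensional vectors below are made-up stand-ins for real embeddings returned by the API, which have far more dimensions), semantic search reduces to computing cosine similarity between a query vector and each document vector:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def rank_by_similarity(query: list[float], docs: dict[str, list[float]]) -> list[str]:
    """Return document ids sorted from most to least similar to the query."""
    return sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)


# Toy "embeddings"; real ones come from the embeddings endpoint.
docs = {
    "doc_cats": [0.9, 0.1, 0.0],
    "doc_dogs": [0.8, 0.2, 0.1],
    "doc_tax":  [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]
ranking = rank_by_similarity(query, docs)
```

The vector-database guides apply the same similarity ranking at scale, replacing the linear scan over `docs` with an approximate nearest-neighbor index.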

Related OpenAI resources

Beyond the code examples here, you can learn about the OpenAI API from the following resources:

  • Experiment with ChatGPT

  • Try the API in the OpenAI Playground

  • Read about the API in the OpenAI Documentation

  • Get help in the OpenAI Help Center

  • Discuss the API in the OpenAI Community Forum or OpenAI Discord channel

  • See example prompts in the OpenAI Examples

  • Stay updated with the OpenAI Blog

Related resources from around the web

People are writing great tools and papers for improving outputs from GPT. Here are some cool ones we've seen:

Prompting libraries & tools

  • Guidance: A handy looking Python library from Microsoft that uses Handlebars templating to interleave generation, prompting, and logical control.

  • LangChain: A popular Python/JavaScript library for chaining sequences of language model prompts.

  • FLAML (A Fast Library for Automated Machine Learning & Tuning): A Python library for automating selection of models, hyperparameters, and other tunable choices.

  • Chainlit: A Python library for making chatbot interfaces.

  • Guardrails.ai: A Python library for validating outputs and retrying failures. Still in alpha, so expect sharp edges and bugs.

  • Semantic Kernel: A Python/C# library from Microsoft that supports prompt templating, function chaining, vectorized memory, and intelligent planning.

  • Outlines: A Python library that provides a domain-specific language to simplify prompting and constrain generation.

  • Promptify: A small Python library for using language models to perform NLP tasks.

  • Scale Spellbook: A paid product for building, comparing, and shipping language model apps.

  • PromptPerfect: A paid product for testing and improving prompts.

  • Weights & Biases: A paid product for tracking model training and prompt engineering experiments.

  • OpenAI Evals: An open-source library for evaluating task performance of language models and prompts.

  • LlamaIndex: A Python library for augmenting LLM apps with data.

  • Arthur Shield: A paid product for detecting toxicity, hallucination, prompt injection, etc.

  • LMQL: A programming language for LLM interaction with support for typed prompting, control flow, constraints, and tools.

Prompting guides

  • Brex's Prompt Engineering Guide: Brex's introduction to language models and prompt engineering.

  • promptingguide.ai: A prompt engineering guide that demonstrates many techniques.

  • OpenAI Cookbook: Techniques to improve reliability: A slightly dated (Sep 2022) review of techniques for prompting language models.

  • Lil'Log Prompt Engineering: An OpenAI researcher's review of the prompt engineering literature (as of March 2023).

  • learnprompting.org: An introductory course to prompt engineering.

Video courses

  • Andrew Ng's DeepLearning.AI: A short course on prompt engineering for developers.

  • Andrej Karpathy's Let's build GPT: A detailed dive into the machine learning underlying GPT.

  • Prompt Engineering by DAIR.AI: A one-hour video on various prompt engineering techniques.

Papers on advanced prompting to improve reasoning

  • Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022): Using few-shot prompts to ask models to think step by step improves their reasoning. PaLM's score on math word problems (GSM8K) rises from 18% to 57%.

  • Self-Consistency Improves Chain of Thought Reasoning in Language Models (2022): Taking votes from multiple outputs improves accuracy even more. Voting across 40 outputs raises PaLM's score on math word problems further, from 57% to 74%, and code-davinci-002's from 60% to 78%.

  • Tree of Thoughts: Deliberate Problem Solving with Large Language Models (2023): Searching over trees of step by step reasoning helps even more than voting over chains of thought. It lifts GPT-4's scores on creative writing and crosswords.

  • Language Models are Zero-Shot Reasoners (2022): Telling instruction-following models to think step by step improves their reasoning. It lifts text-davinci-002's score on math word problems (GSM8K) from 13% to 41%.

  • Large Language Models Are Human-Level Prompt Engineers (2023): Automated searching over possible prompts found a prompt that lifts scores on math word problems (GSM8K) to 43%, 2 percentage points above the human-written prompt in Language Models are Zero-Shot Reasoners.

  • Reprompting: Automated Chain-of-Thought Prompt Inference Through Gibbs Sampling (2023): Automated searching over possible chain-of-thought prompts improved ChatGPT's scores on a few benchmarks by 0–20 percentage points.

  • Faithful Reasoning Using Large Language Models (2022): Reasoning can be improved by a system that combines: chains of thought generated by alternative selection and inference prompts, a halter model that chooses when to halt selection-inference loops, a value function to search over multiple reasoning paths, and sentence labels that help avoid hallucination.

  • STaR: Bootstrapping Reasoning With Reasoning (2022): Chain of thought reasoning can be baked into models via fine-tuning. For tasks with an answer key, example chains of thoughts can be generated by language models.

  • ReAct: Synergizing Reasoning and Acting in Language Models (2023): For tasks with tools or an environment, chain of thought works better if you prescriptively alternate between Reasoning steps (thinking about what to do) and Acting (getting information from a tool or environment).

  • Reflexion: an autonomous agent with dynamic memory and self-reflection (2023): Retrying tasks with memory of prior failures improves subsequent performance.

  • Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP (2023): Models augmented with knowledge via a "retrieve-then-read" can be improved with multi-hop chains of searches.

  • Improving Factuality and Reasoning in Language Models through Multiagent Debate (2023): Generating debates between a few ChatGPT agents over a few rounds improves scores on various benchmarks. Math word problem scores rise from 77% to 85%.
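
The self-consistency result above boils down to a simple recipe: sample several chain-of-thought completions for the same prompt, extract each one's final answer, and take a majority vote. A minimal sketch of that voting step (`sample_completion` is a hypothetical stand-in for a real API call that returns one sampled final answer):

```python
import itertools
from collections import Counter


def majority_vote(answers: list[str]) -> str:
    """Return the most common answer; Counter breaks ties by first occurrence."""
    return Counter(answers).most_common(1)[0][0]


def self_consistent_answer(prompt: str, sample_completion, n: int = 5) -> str:
    """Sample n chain-of-thought completions and vote on their final answers.

    `sample_completion` stands in for an API call (made at nonzero
    temperature, so the samples differ) that returns the model's final
    answer string for the given prompt.
    """
    answers = [sample_completion(prompt) for _ in range(n)]
    return majority_vote(answers)


# Simulated sampler: most samples reason their way to "57", one slips to "18".
fake_answers = itertools.cycle(["57", "57", "18", "57", "57"])
answer = self_consistent_answer("Q: ...", lambda prompt: next(fake_answers), n=5)
```

The papers vote over many more samples (40 in the PaLM experiments); the mechanism is the same, only the sampler and the answer-extraction step are model-specific.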

Contributing

If there are examples or guides you'd like to see, feel free to suggest them on the issues page. We are also happy to accept high quality pull requests, as long as they fit the scope of the repo.
