
Use Ali Tongyi Qianwen and Semantic Kernel to build a knowledge assistant in 10 minutes!

Author: Programming Fun
A veteran programmer with 10 years of .NET development experience. Click "Follow" in the upper right corner for daily sharing of open source projects and programming knowledge.

Preface

Tongyi Qianwen: a super-large-scale language model family launched by Alibaba. The 72-billion-parameter model Qwen-72B has been announced as open source, and the 1.8-billion-parameter model Qwen-1.8B and the audio large model Qwen-Audio have also been open-sourced. So far, four large language models with 1.8 billion, 7 billion, 14 billion, and 72 billion parameters have been released, along with two multimodal large models for visual understanding and audio understanding.

Semantic Kernel: an open-source project launched by Microsoft for integrating large language models with applications, making it easier for developers to build smarter and more efficient applications. For more details, see my earlier article "Officially produced by Microsoft: a GPT large-model orchestration tool, with C#, Python, and other language versions".

01

Deploy Tongyi Qianwen

1. Deployment requirements

Python 3.8 or above;

PyTorch 1.12 or above, 2.0 or above recommended;

CUDA 11.4 or above recommended (only relevant for GPU users);

For CPU inference, more than 32 GB of memory is recommended;

For GPU inference, more than 24 GB of memory is recommended;

A Linux server is recommended.

2. Download the source code

git clone https://github.com/QwenLM/Qwen-VL           

The downloaded source code is as follows:


Among the source files, openai_api.py is built on FastAPI and imitates the OpenAI API interface.

3. Install the large model dependency library

Go to the source code directory and run the following command to install Tongyi Qianwen's dependencies:

pip install -r requirements.txt           

4. Install fastapi dependencies

Run the following command to install the required libraries:

pip install fastapi uvicorn openai pydantic sse_starlette           

5. Start FastAPI

Run the following command to start the project:

python3 openai_api.py           

After the service starts successfully, open http://127.0.0.1:8000 in a browser to view the API documentation.
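
If you want to verify the endpoint before integrating it, you can call the OpenAI-compatible interface with any HTTP client. Below is a minimal C# sketch, assuming the default port 8000, the /v1/chat/completions path, and the model name "Qwen"; adjust these values to match your deployment:

using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;

// Sanity check against the locally deployed OpenAI-compatible API.
// Assumptions: default port 8000, path /v1/chat/completions, model name "Qwen".
using var http = new HttpClient { BaseAddress = new Uri("http://127.0.0.1:8000") };

var payload = JsonSerializer.Serialize(new
{
    model = "Qwen",
    messages = new[] { new { role = "user", content = "Hello, who are you?" } }
});

var response = await http.PostAsync(
    "/v1/chat/completions",
    new StringContent(payload, Encoding.UTF8, "application/json"));

// A normal JSON reply confirms the service is reachable and the model is loaded.
Console.WriteLine(await response.Content.ReadAsStringAsync());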


02

Integrate Semantic Kernel

1. Create a console project


2. Install dependencies

Install the Microsoft.SemanticKernel package from NuGet.


3. Add the Tongyi Qianwen extension

Add a Tongyi Qianwen extension method to Semantic Kernel; the code is as follows:

using Azure.AI.OpenAI;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.TextGeneration;


namespace Chat2KnowL.Console
{
    public static class QwenServiceCollectionExtensions
    {
        /// <summary>
        /// Add the Tongyi Qianwen chat completion service
        /// </summary>
        /// <param name="builder">The kernel builder</param>
        /// <param name="modelId">Model name, e.g. "Qwen"</param>
        /// <param name="apiKey">API key (any placeholder value for a local deployment)</param>
        /// <param name="url">Address of the OpenAI-compatible endpoint</param>
        /// <param name="orgId">Optional organization id</param>
        /// <param name="serviceId">Optional keyed service id</param>
        /// <returns>The same kernel builder, for chaining</returns>
        public static IKernelBuilder AddQwenChatCompletion(this IKernelBuilder builder, string modelId, string apiKey, string url, string? orgId = null, string? serviceId = null)
        {
            // Reuse the OpenAI connector, but point it at the local OpenAI-compatible endpoint.
            Func<IServiceProvider, object?, OpenAIChatCompletionService> implementationFactory = (IServiceProvider serviceProvider, object? _) =>
                new OpenAIChatCompletionService(modelId, new OpenAIClient(new Uri(url), new Azure.AzureKeyCredential(apiKey)), serviceProvider.GetService<ILoggerFactory>());

            // Register the service for both chat completion and text generation.
            builder.Services.AddKeyedSingleton((object?)serviceId, (Func<IServiceProvider, object?, IChatCompletionService>)implementationFactory);
            builder.Services.AddKeyedSingleton((object?)serviceId, (Func<IServiceProvider, object?, ITextGenerationService>)implementationFactory);
            return builder;
        }
    }
}           

4. Integrate Tongyi Qianwen with Semantic Kernel

Integrate Semantic Kernel with Tongyi Qianwen to implement a simple knowledge assistant:

using Chat2KnowL.Console;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;


var builder = Kernel.CreateBuilder();


// Tongyi Qianwen: point the connector at the local OpenAI-compatible endpoint
builder.AddQwenChatCompletion(
         "Qwen", "none", "http://127.0.0.1:8000/v1"
         );


var kernel = builder.Build();


// Wait for user input
Console.Write("User: ");
var input = Console.ReadLine();


// Conversation loop
while (input != "quit")
{
    var prompt = @$"<message role=""user"">{input}</message>";
    var summarize = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings { MaxTokens = 100, Temperature = 0.5 });
    Console.Write("Knowledge assistant: ");
    var result = kernel.InvokeStreamingAsync(summarize);
    await foreach (var item in result)
    {
        Console.Write(item.ToString());
    }
    Console.WriteLine();
    Console.WriteLine();
    Console.Write("User: ");
    input = Console.ReadLine();
}           
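
Because AddQwenChatCompletion registers an IChatCompletionService, the conversation can also be driven through Semantic Kernel's chat abstraction so that earlier turns are remembered across the loop. The following is only a minimal sketch of that variant, not part of the project code above; the prompt strings are illustrative:

using System.Text;
using Chat2KnowL.Console;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var builder = Kernel.CreateBuilder();
builder.AddQwenChatCompletion("Qwen", "none", "http://127.0.0.1:8000/v1");
var kernel = builder.Build();

// Resolve the chat completion service registered by AddQwenChatCompletion.
var chat = kernel.GetRequiredService<IChatCompletionService>();

// ChatHistory accumulates the turns, so the model sees the whole conversation.
var history = new ChatHistory();

Console.Write("User: ");
var input = Console.ReadLine();
while (input != "quit")
{
    history.AddUserMessage(input!);

    Console.Write("Knowledge assistant: ");
    var reply = new StringBuilder();
    await foreach (var chunk in chat.GetStreamingChatMessageContentsAsync(history))
    {
        Console.Write(chunk.Content);
        reply.Append(chunk.Content);
    }
    // Store the assistant reply so the next turn has the full context.
    history.AddAssistantMessage(reply.ToString());

    Console.WriteLine();
    Console.WriteLine();
    Console.Write("User: ");
    input = Console.ReadLine();
}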

Caution:

A. Our locally deployed interface imitates the OpenAI API, so the connector must be pointed at an OpenAI-compatible interface address;

B. Streaming requests must be enabled.

To do this, modify the create_chat_completion interface in openai_api.py.


In addition, if Tongyi Qianwen is deployed on a server and needs to be accessed from other machines, you also need to change the value of the --server-name parameter to 0.0.0.0, or specify it when starting the project.


5. Test the effect


03

Finally

I have put the above code on GitHub; the open-source project address is:

https://github.com/bianchenglequ/chat2KnowL

Everyone is welcome to star and bookmark it!


Project introduction: a knowledge document Q&A tool that uses a large model to talk to documents, providing AI analysis, reading, and Q&A tools to help you quickly understand document content.

If you have any questions, please leave me a message!

Programming Fun: a veteran programmer with 10 years of .NET development experience, focusing on sharing open source projects and programming knowledge.

Reply [888] via private message to receive .NET video tutorials.

- End -
