Xiao Sa Team | Is Open Source the Future? AI Models: Does Sharing = Win-Win?

On April 16, the Artificial Intelligence Law (Model Law) 2.0 (hereinafter the "Model Law 2.0") was released, a major iteration of the "Artificial Intelligence Law (Model Law) 1.0 (Expert Suggestion Draft)" published in August 2023. One of its most striking provisions is Article 71, which for the first time provides for the reduction or exemption of legal liability for open-source AI developers. According to the Model Law 2.0, those who provide code modules required for AI research and development in a free and open-source manner, and who publicly and clearly explain their functions and security risks, shall not bear legal liability; individuals or organizations that provide AI for free and open source and can prove that they have established an AI compliance governance system meeting national standards and have taken corresponding security governance measures may have their legal liability mitigated or exempted. This signals a clear attitude within the legal profession on using legal norms to promote AI development: the draft supports AI developers who adopt an open-source development model and encourages the disclosure and sharing of AI research and development through measures such as dedicated liability-reduction clauses. In previous columns, Sister Sa's team has consistently emphasized that "openness" and "open source" are core values of AGI. As artificial intelligence begins to empower productivity across fields, only through well-designed rules that balance the "private interests" grounded in intellectual property rights against the "public welfare" of technological development for all humankind can we truly achieve a win-win and promote the healthy development of the AGI field.

01

The debate between open source and closed source for large models

In the field of AI model research and development, the debate between "open source" and "closed source" has existed for a long time. The latest clash between the two camps, Musk v. OpenAI filed in early March 2024, marks the expansion of the conflict from pure technological competition into legal and compliance territory such as antitrust. As a representative closed-source developer, OpenAI is accused by Musk of having, after accepting Microsoft's funding, de facto become a closed-source subsidiary controlled by Microsoft, turning GPT-4 and its other products into tools for Microsoft to reap enormous commercial profits. To achieve its profit goals, the complaint argues, OpenAI has gradually deviated from its initial commitment to public and open-source artificial general intelligence (AGI) and has begun to build technological hegemony in related fields. Musk's accusations have so far not been supported by sufficient evidence, and given his long-standing "internet celebrity persona," filing the lawsuit looks in part like a marketing move for the open-source large model newly released by his own company xAI. Nevertheless, the impact of the lawsuit makes it clear that there are real conflicts of interest and of development philosophy between emerging open-source model developers and the "not so open" OpenAI, which has long held a leading position in the industry.

On the one hand, the conflict exists because OpenAI has indeed achieved both fame and fortune through closed-source large model development. Although at its founding in 2015 OpenAI set as its core mission "ensuring that artificial general intelligence benefits all of humanity" and tried to build a non-profit organization different from traditional technology companies, its real breakthrough with ChatGPT was built on the olive branches Microsoft kept extending to it. According to public information, Microsoft has invested a total of 13 billion US dollars in OpenAI since its establishment and holds a 49% stake in OpenAI's for-profit arm. Musk's accusations are therefore not entirely groundless: OpenAI's achievements with large AI models are inseparable from its departure from its original intention and its insistence on closed-source product development for Microsoft. On the other hand, the conflict exists because GPT-4 remains the industry's technological leader to this day. Amid intensifying global competition in large model research and development, xAI's 314-billion-parameter Grok-1 and Meta's 400-billion-parameter Llama 3, released just days ago, are both representatives of open-source large models and have challenged GPT-4 in their respective areas of strength. Yet challengers remain challengers: GPT-4 Turbo, the "reigning king" among closed-source large models, still occupies an absolute leading position in the field of large language models. For open-source large model developers who have given up a commercial orientation and made public-interest data acquisition and research their primary goal, watching closed-source developers make enormous profits while still holding the industry's technological lead is hard to stomach.

02

Balancing the open-source/closed-source conflict through the legal system

Therefore, when the contradiction between open-source and closed-source model developers cannot be resolved at the technical and economic levels, the legal system is the first to propose a solution. The Model Law 2.0's provisions on reducing and exempting the legal liability of open-source AI developers share with the world, to a large extent, an approach to AI governance with mainland-Chinese characteristics.

First, Article 71 of the Model Law 2.0 makes clear that "open-source AI developers" refers to entities that "provide AI in a free and open-source manner," a definition that directly excludes for-profit AI developers. Non-profit open-source AI developers are in turn divided into two categories through the two paragraphs of Article 71: those who "publicly and clearly explain the functions and security risks" of what they provide, and those who "can prove that they have established an AI compliance governance system meeting national standards and have taken corresponding security governance measures." The former bear no legal liability at all; the latter may have their legal liability reduced as appropriate. In a sense, this reflects the civil-law principle that "whoever benefits bears the responsibility": the law should encourage entities that have not profited from the AI products they developed and that have fully completed compliance work for the risks of their self-developed products, so their liability is reduced or exempted to varying degrees according to their level of compliance. Second, the Model Law 2.0 does not enumerate the types of liability relief, but directly uses the expressions "shall not bear legal liability" and "may have legal liability mitigated or exempted." This means that, for non-profit open-source AI developers who have completed their own compliance construction, the scope of liability reduction is very broad. This approach not only balances the personal interests of open-source AI developers against their contributions to the public good of scientific research, but also encourages them to strengthen their own compliance, fully consider possible risks during research and development, and publicly explain security risks to society.

03

Open-sourcing technology and the future AGI ecosystem

In fact, although the closed-source model can attract more R&D investment through intellectual property protection and use that advantage to further consolidate a developer's leading position in the industry, the open-source model has many advantages that closed-source developers must forgo. For example, open-source development relies on community building to drive innovation, allowing interested developers from around the world to participate, which makes model training and data acquisition inherently more efficient. Open-source code also greatly improves a program's flexibility and customizability and makes a large model more auditable: through continuous community review of source code and parameters, security risks can be reduced and the model's compliance improved. In addition, closed-source development always carries the risk of antitrust review, and it is often difficult for the developers concerned to show that they have no intention of restricting fair market competition through a technology monopoly. Developers who adopt the open-source approach will therefore enjoy a considerable advantage in avoiding legal risk and building their own compliance systems.

04

A final word

From the perspective of the current development of the AGI field, if we consider the balance of interests alone, the protection of "private interests" in intellectual achievements should make appropriate concessions to the "public welfare" of humanity's technological future. Of course, such "concessions" do not mean damaging the rights and interests developers enjoy in their intellectual achievements. Rather, through the approach taken in the Model Law 2.0, balancing rights against responsibilities, AI developers are encouraged to choose their track more rationally. Whether open source or closed source, the ultimate goal is to promote the development of the AGI industry and to drive global scientific and technological innovation and economic growth.

That is all for today's sharing. Thank you, readers!

If you have friends who are interested in new technology and the digital economy, feel free to forward this to them.


For more information, please contact our team

[email protected]

[email protected]

Sister Sa's work WeChat: 【xiaosalawyer】

Sister Sa's work phone: 【+86 171 8403 4530】

