Find bugs in ChatGPT! OpenAI announces a bug bounty program with rewards of up to $20,000

Security has become one of the key factors determining whether large AI models such as ChatGPT and GPT-4 can be deployed at scale across industries. OpenAI has also drawn considerable criticism from industry insiders and regulators over this issue.

Today, OpenAI published a blog post titled "Announcing OpenAI's Bug Bounty Program," launching a bug bounty program and promising timely remediation of verified vulnerabilities in order to create safe, reliable, and trustworthy technologies and services that benefit everyone. The program's maximum reward is $20,000.

OpenAI wrote in the blog post, "We believe transparency and collaboration are critical to solving this real-world problem. That's why we invite security researchers, ethical hackers, and technology enthusiasts from around the globe to help us identify and fix vulnerabilities in our systems."

"This initiative is critical to our commitment to developing safe and advanced artificial intelligence. We need your help as we create safe, reliable and trustworthy technologies and services. ”

The following is the text of OpenAI's blog post, to which Academic Headlines made light edits without changing the original meaning.

A commitment to secure AI

OpenAI's mission is to create AI systems that benefit everyone. To this end, we invest heavily in research and engineering to ensure our AI systems are safe and reliable. However, like any complex technology, AI systems can have vulnerabilities and flaws.

We are convinced that transparency and collaboration are essential to addressing this problem. That's why we invite security researchers, ethical hackers, and technology enthusiasts from around the globe to help us identify and fix vulnerabilities in our systems.

We are pleased to offer rewards for eligible vulnerability reports as part of our commitment to coordinated disclosure. Your expertise and vigilance will have a direct impact on keeping our systems and our users secure.

About the bug bounty program

Bug bounty programs are a way for us to recognize and reward the valuable insights of security researchers who help keep our technology and company secure. We invite you to report vulnerabilities, bugs, or security flaws you discover in our systems. By sharing your findings, you will play a key role in making our technology safer for everyone.

We have partnered with Bugcrowd, a leading bug bounty platform, to manage the submission and reward process, with the aim of ensuring a streamlined experience for all participants. The detailed rules are as follows:

You are authorized to test in compliance with this Policy.

Follow this Policy and any other relevant agreements. In the event of inconsistency, this Policy takes precedence.

Report discovered vulnerabilities promptly.

Avoid violating privacy, disrupting systems, destroying data, or harming the user experience.

Use OpenAI's Bugcrowd program for vulnerability-related communication.

Keep vulnerability details confidential until OpenAI's security team authorizes their release; authorization will be provided within 90 days of receiving a report.

Only test in-scope systems and respect out-of-scope systems.

Do not access, modify, or use data belonging to others, including confidential OpenAI data. If a vulnerability exposes such data, stop testing, submit a report immediately, and delete all copies of the information.

Unless authorized by OpenAI, interact only with your own accounts.

Disclosure of vulnerabilities to OpenAI must be unconditional. Do not engage in extortion, threats, or other coercive tactics to elicit a response; OpenAI will not grant safe harbor to vulnerability disclosures made under such circumstances.

Meanwhile, model safety issues do not fall under the bug bounty program, because they are not individual, discrete bugs that can be fixed directly. "Solving these problems often requires a lot of research and a broader approach."

In addition, issues related to the content of model prompts and responses are strictly out of scope and will not be rewarded unless they have an additional, directly verifiable security impact on an in-scope service.

Examples of safety issues that are not in scope:

Jailbreaks and safety bypasses (e.g., DAN and related prompts);

Getting the model to say bad things to you;

Getting the model to tell you how to do bad things;

Getting the model to write malicious code for you.

Model hallucinations:

Getting the model to pretend to do bad things;

Getting the model to pretend to give you answers to secrets;

Getting the model to pretend to be a computer and execute code.

Finally, the initial priority of most findings will be set using the Bugcrowd Vulnerability Rating Taxonomy, though vulnerability priority and reward may be adjusted based on likelihood or impact, at OpenAI's sole discretion. Researchers will receive a detailed explanation for any downgraded issue.
