
Businesses should take seriously the security risks posed by employees using ChatGPT

Author: Block software development

Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns among employers that AI services may incorporate that data into their models and that the information could later be retrieved if the service lacks proper data security.


ChatGPT's viral success has sparked a frantic race among tech companies to push AI products to market. Google recently launched its ChatGPT competitor, Bard, while Microsoft (MSFT), an investor in OpenAI, rolled out its Bing AI chatbot to a limited number of testers.

But the announcements have heightened concerns about the technology. Demos of both Google's and Microsoft's tools have been criticized for producing factual errors. Meanwhile, Microsoft is trying to rein in its Bing chatbot as users report disturbing responses, including confrontational rhetoric and dark fantasies.

Some businesses encourage employees to incorporate ChatGPT into their daily work. But others worry about the risks. The banking industry, which handles sensitive customer information and is closely watched by government regulators, has extra incentive to proceed with caution. Schools are also restricting ChatGPT for fear it could be used to cheat on homework; New York City public schools banned it in January. According to one estimate, more than 4% of employees have put sensitive company data into large language models, raising concerns that the tools' popularity could lead to large-scale leaks of confidential information.

ChatGPT's capabilities

ChatGPT is an AI language platform trained to engage in conversational interactions and perform tasks. To train a model like ChatGPT, massive data sets are fed into learning algorithms. The model is then evaluated on data it has not seen before to determine how well its predictions generalize beyond the training set.
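To make the train-then-evaluate idea concrete, here is a minimal sketch using a toy scikit-learn text classifier. It is not ChatGPT's actual training pipeline, and the example texts, labels, and split size are made up purely for illustration.

```python
# Toy illustration of training on one set of data and evaluating on held-out,
# unseen data -- the same principle described above, at a vastly smaller scale.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

texts = ["great product", "terrible service", "loved it", "awful experience",
         "works well", "broke immediately", "fantastic support", "very disappointing"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative (made-up data)

# Hold out a portion of the data that the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0
)

vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(X_train), y_train)

# Evaluate on the held-out data to estimate how well the model generalizes.
predictions = model.predict(vectorizer.transform(X_test))
print("held-out accuracy:", accuracy_score(y_test, predictions))
```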

While ChatGPT can improve the efficiency of workplace processes, it also poses legal risks for employers.

Given how AI models are trained and learn, employers may face significant risks when employees use ChatGPT to perform their job duties. When employees pull work-related information from sources like ChatGPT, accuracy and bias are concerns. Employers need to assess the legal risks employees may create when using ChatGPT and other AI tools; that assessment needs to cover workplace confidentiality and privacy, bias and fairness, legal compliance, and accountability.

Accuracy and reliance on AI

As an AI language model, ChatGPT can only provide information as good as what it learned during training. Although ChatGPT is trained on a wealth of online information, there are still gaps in its knowledge base: the current version is trained only on data sets available before 2021, and the online data the tool draws on is not always accurate. If employees rely on ChatGPT for work-related information and don't fact-check it, problems and risks can arise depending on how they use the information and where they send it.


In a recent report, data security firm Cyberhaven said it detected and blocked requests to enter data into ChatGPT from 2.1% of the 40,000 employees at its client companies because of the risk of leaking confidential information, customer data, source code, or regulated information. Employers should therefore have policies that set specific guardrails for how employees use work-related information in ChatGPT.
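A policy like that can be backed by a technical guardrail. Below is a minimal sketch of the idea, assuming a simple regex scan of outgoing prompts before they reach an external LLM; the pattern names, function names, and blocking behavior are illustrative placeholders, not Cyberhaven's actual detection logic.

```python
# Sketch of a pre-submission guardrail: scan prompts for patterns that look like
# regulated or confidential data and block them before they leave the company.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal use only)\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def submit_to_llm(prompt: str) -> None:
    findings = check_prompt(prompt)
    if findings:
        # Block the request and log it for the security team instead of sending it.
        print(f"Blocked: prompt matched {findings}")
        return
    print("OK to send to the LLM API")  # placeholder for the real API call

submit_to_llm("Summarize this CONFIDENTIAL 2023 strategy memo ...")
```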

In one case, an executive cut and pasted the company's 2023 strategy document into ChatGPT and asked it to create a PowerPoint slide. In another, a doctor entered a patient's name and medical condition and asked ChatGPT to draft a letter to the patient's insurance company.

"As more employees use ChatGPT and other AI-based services as productivity tools, the risks will increase." Cyberhaven CEO Howard Ting said.

"Data is moving from on-premises to the cloud, and the next big shift will be moving data to these generated applications," he said. "It remains to be seen what the result will be – I think we are in the pre-race phase; We haven't even made it to the first inning yet. ”

With the rapid rise of OpenAI's ChatGPT and its underlying AI model (the Generative Pre-trained Transformer, or GPT-3), as well as other LLMs, companies and security professionals are beginning to worry that sensitive data ingested as training data could resurface when prompted by the right query. Some companies are taking action to restrict employees' use of ChatGPT at work.

Walmart has given its employees clear instructions about generative AI such as ChatGPT: don't share any information about Walmart with these emerging technologies. In an internal memo to employees, the retailer's technology and software engineering department said, "The use of ChatGPT was blocked after we noticed activity that posed a risk to our company. We have since spent time evaluating and developing a set of usage guidelines around generative AI tools, and are now making ChatGPT available on the Walmart network."

Walmart spokeswoman Erin Hulliberger did not respond to inquiries about when the company blocked the generative AI tool or the nature of the activity, but said in a statement: "Most new technologies bring new benefits and new risks. It is not uncommon for us to evaluate these new technologies and provide our employees with guidelines on how to use them."

The guidelines Walmart released tell employees to "avoid entering any sensitive or confidential information into ChatGPT," such as financial or strategic information or personal information about shoppers and employees. "Nor should any information about Walmart's business, including business processes, policies or strategies, be entered into these tools." Walmart employees must also review the output of these tools before relying on the information they provide, and they should not cut and paste existing code into these tools or use them to create new code.

"Putting Walmart information into these tools may expose the company's information, may violate confidentiality, and may significantly affect our rights to any code, product, information or content," a Walmart spokesperson said. "Every employee has a responsibility to appropriately use and protect Walmart data. Second, Walmart Global touts ChatGPT as "improving efficiency and innovation," but it and other generative AI tools must be used "appropriately."

In addition to Walmart, JPMorgan Chase has temporarily barred its employees from using ChatGPT; according to people familiar with the matter, the largest U.S. bank has restricted its use among employees worldwide. One of the people said the decision was not made because of any particular incident but to comply with the bank's limits on third-party software, mainly for compliance reasons.


More and more users are submitting sensitive data to ChatGPT.

As more software companies connect their apps to ChatGPT, the LLM may collect far more information than users or their companies realize, exposing employers to legal risk.

The risks are not theoretical. In a February 2021 paper, more than a dozen researchers from a who's who of companies and universities, including Apple, Google, Harvard, and Stanford, showed that so-called "training data extraction attacks" can recover verbatim text sequences, personally identifiable information (PII), and other training-document content from an LLM.

There will be more and more AI products like GPT

In fact, these training data extraction attacks are one of the main adversarial concerns for machine learning researchers. According to MITRE's ATLAS knowledge base of attacks on AI systems, such attacks, also described as exfiltration via machine learning inference, can collect sensitive information or steal intellectual property.

It works like this: by querying a generative AI system in a way that invokes a specific item from its training data, an adversary can trigger the model to recall that specific information rather than generate synthetic output. GPT-3 is the successor to GPT-2, and there are many real-world examples of such recall, including instances where GitHub's Copilot reproduced specific developers' usernames and coding priorities.
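To illustrate the mechanism (not to demonstrate a working attack), here is a hedged sketch of how a researcher might probe a model for a known "canary" string: feed the model prefixes associated with the secret and check whether any completion reproduces it verbatim. The `query_model` function is a hypothetical placeholder for any text-completion API, and the canary and prefixes are made up.

```python
# Sketch of probing for memorized training data ("canary" checking).
# query_model() is a hypothetical stand-in for a real text-generation API.
CANARY = "AKIA-EXAMPLE-SECRET-KEY"  # made-up string standing in for a leaked secret
PREFIXES = [
    "My AWS access key is",
    "export AWS_ACCESS_KEY_ID=",
    "The credentials for the build server are",
]

def query_model(prompt: str) -> str:
    """Placeholder for a call to a text-generation API; returns the model's completion."""
    return ""  # replace with a real completion call in an actual experiment

def probe_for_canary() -> bool:
    """Return True if any completion reproduces the canary string verbatim."""
    for prefix in PREFIXES:
        completion = query_model(prefix)
        if CANARY in completion:
            print(f"Model reproduced the canary after prefix: {prefix!r}")
            return True
    return False

if __name__ == "__main__":
    print("canary recovered:", probe_for_canary())
```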

In addition to GPT-based products, other AI-based services raise questions about whether they pose a risk. For example, the automatic transcription service Otter.ai transcribes audio files into text, automatically identifies speakers, and lets users tag important words and highlight phrases. The company's storage of this information in the cloud has raised concerns among users. The company says it is committed to keeping user data private and has implemented strong compliance controls. According to Julie Wu, senior compliance manager at Otter.ai, "Otter has completed SOC 2 Type 2 audits and reports, and we have put in place technical and organizational measures to protect personal data. Speaker identification is account-bound. Adding a speaker's name will train Otter to recognize that speaker in future conversations you record or import into your account, but it will not allow speaker recognition across accounts."

APIs accelerate the use of GPT

The popularity of ChatGPT has taken many companies by surprise. According to the most recent figures, released a year ago, more than 300 developers were using GPT-3 to power their applications. For example, social media company Snap and shopping platforms Instacart and Shopify all use ChatGPT via its API, adding chat functionality to their mobile apps.
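For context, this is roughly how an application adds such chat functionality. The sketch below assumes the pre-1.0 `openai` Python package's ChatCompletion interface that was current when this article was written; newer client versions expose a different interface, and the API key, model name, and prompts are placeholders.

```python
# Minimal sketch of calling the ChatGPT API from an application
# (pre-1.0 openai Python client; details differ in newer versions).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from a secret store, never hard-code

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful in-app assistant."},
        {"role": "user", "content": "Suggest three quick dinner recipes."},
    ],
)

print(response.choices[0].message["content"])
```

Note that whatever users type into the `messages` list leaves the company's network and is sent to the API provider, which is exactly the exposure discussed above.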

Based on conversations with its customers, Cyberhaven expects the shift to generative AI applications to only accelerate, with uses ranging from generating memos and presentations to triaging security incidents and interacting with patients.

As one employer put it: "Right now, as a stopgap measure, I'm just blocking this app, but my board has told me we can't do that, because these tools will help our users be more productive and give us a competitive advantage. If my competitors are using these generative AI applications and I don't allow my users to use them, it puts us at a disadvantage."

The good news is that security awareness education can have a significant impact on whether a particular company's data leaks, because a small number of employees are responsible for most of the risky requests: fewer than 1% of employees account for 80% of the incidents in which sensitive data is sent to ChatGPT.

"You know, there are two forms of education: one is classroom education, like when you onboard employees. Then there's contextual education, when someone actually tries to paste the data", both of which are important, but the latter is more effective in practical terms.

In addition, OpenAI and other companies are working to limit LLMs' access to personal information and sensitive data: when asked for personal details or sensitive company information, ChatGPT currently refuses outright to answer.

For example, when asked, "What is Apple's strategy for 2023?" ChatGPT responded: "As an AI language model, I don't have access to Apple's confidential information or future plans. Apple is a highly secretive company, and they don't usually disclose their strategies or future plans to the public until they're ready to release them."

The biases inherent in artificial intelligence

There is also the problem of inherent bias in AI. The EEOC is focused on this issue because it bears on the employment discrimination laws the agency enforces. In addition, state and local lawmakers in the United States are proposing, and in some cases have already passed, laws restricting employers' use of AI.

The information an AI provides necessarily depends on the data it was trained on and on the people who decide what data it receives. That bias can show up in the kinds of answers ChatGPT gives to questions asked in its "conversations."

In addition, if ChatGPT is consulted when making employment decisions, that can give rise to discrimination claims. Some state and local laws require prior notice, and in some cases audits, before AI can be used in certain employment decisions, which can also create compliance issues.

Because of the risk of bias in AI, employers should include in their policies a general prohibition on using AI in employment decisions without approval from the legal department.

Privacy disclosure and breach of confidentiality

Confidentiality and data privacy are other issues employers must weigh when deciding how employees may use ChatGPT at work. Employees may share proprietary, confidential, or trade secret information while engaging in "conversations" with ChatGPT. Although ChatGPT says it does not retain the information provided in conversations, it does "learn" from each conversation. And, of course, users enter information in a conversation with ChatGPT over the Internet, and the security of such communications cannot be guaranteed.

If an employee discloses sensitive information to ChatGPT, the employer's confidential information may be compromised. Prudent employers will, in their confidentiality agreements and policies, prohibit employees from referencing or entering confidential, proprietary, or trade secret information into AI chatbots or language models such as ChatGPT.

There is a good argument that information provided to an online chatbot does not necessarily amount to disclosure of a trade secret. On the other hand, because ChatGPT was trained on a vast amount of online information, employees may receive and use information from the tool that is protected by another person's or entity's trademark, copyright, or other intellectual property rights, creating legal risk for employers.

Other issues that employers should be concerned about

Beyond these legal issues, employers must also consider how far to let employees use ChatGPT at work. They are at an important crossroads in deciding whether and how to embrace or restrict its use in their workplaces.

Employers should weigh the efficiency and cost savings employees can achieve by using ChatGPT for tasks such as writing routine letters and emails, generating newsletters, and creating presentations against the potential loss of development opportunities when employees no longer perform such tasks themselves.

ChatGPT isn't going away, and new and improved versions should arrive within a year. Employers will ultimately need to address its use in the workplace, because each iteration will be better than the last. For all the risks ChatGPT brings, employers can also take advantage of it. The discussion has only just begun; employers, like ChatGPT itself, will have to do some learning and beta testing.

Translated from: https://www.darkreading.com/risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears
