
A Look at Enterprise Security Through Generative AI: Amazon CSO Steve Schmidt Speaks

Author: Brother Tao Said Things

Recently, it was reported that a doctor at a hospital in Chongqing ran into trouble: although his medical insurance card was in his own possession, he was told that someone had used its QR code to buy tens of thousands of yuan worth of health products. Only when the police came to his door did he learn that someone with ulterior motives had stolen access to his medical insurance card using "AI face-swapping" software.

Over the past year or so, generative AI has developed rapidly, bringing greater convenience and new possibilities to work and daily life, but also raising new security challenges and pressures. The security threats AI brings include not only offensive-defensive confrontation at the technical level, but also vulnerabilities in the systems themselves, as well as deeper ethical issues.

According to Gartner, the top factors driving cybersecurity trends in 2024 include generative AI, continuous threat exposure, and third-party risk. Gartner predicts that by 2025, the adoption of generative AI will cause a surge in the cybersecurity resources organizations require, driving an increase of more than 15% in security spending on applications and data.

In the face of the rising wave of generative AI, enterprise security and risk managers must take precautions and prepare in advance across multiple dimensions: technology, solutions, organizational structure, and management.

Security is a corporate culture

In the face of the security challenges posed by generative AI, governments as well as enterprises have taken action. In July 2023, the Cyberspace Administration of China (CAC), together with six other government departments, jointly issued the Interim Measures for the Administration of Generative AI Services (hereinafter the "Measures"). To guide generative AI service providers and the relevant competent authorities in implementing the requirements of the Measures, the National Cybersecurity Standardization Technical Committee issued the Basic Security Requirements for Generative AI Services. Since then, the CAC, working with relevant departments, has been promoting the orderly filing of generative AI services in accordance with the Measures.

On March 13, 2024, the European Parliament formally voted to approve the EU Artificial Intelligence Act. The act regulates foundation models, including generative AI, while restricting governments' use of real-time biometric surveillance in public places to handling certain criminal cases, preventing genuine threats such as terrorist attacks, and searching for people suspected of the most serious crimes. The act is reportedly expected to enter into force in early 2025 and take effect in 2026.


Steve Schmidt, Chief Security Officer at Amazon

To prevent and defend against the security risks generative AI may bring, vendors across related fields such as cloud computing, big data, artificial intelligence, and cybersecurity should collaborate as broadly and deeply as possible. In an interview with the Wall Street Journal, Amazon's chief security officer, Steve Schmidt, said, "The security team's job is to help organizations understand the benefits and risks of innovative technologies like generative AI, and how they can use it to improve their security efficiency."

In fact, security has been internalized as part of Amazon Web Services' corporate culture. For Amazon Web Services, a pioneer and leader in cloud computing, security is the top priority, and the company has built an effective security culture by distilling lessons from practice. Steve Schmidt himself is one of the pioneers and advocates of Amazon's security culture. "Not only have we built and operated security systems at a scale never before seen in an enterprise, but we've also developed a culture that's unusual in the security space," he said. "One of the things I'm most proud of is creating the right culture in the business."

Security is the bottom line for any company to operate properly, and establishing security thinking and culture is indispensable. Amazon Web Services holds a weekly security meeting attended by the CEO to align business needs with a focus on security issues. Needless to say, security should be the responsibility of everyone in the company.

Businesses can gradually improve efficiency and competitiveness with automated tools, and security should be embedded throughout the development process. By using different tools for basic and complex problems, developers gain a clear understanding of security boundaries, making development more secure and reviews more efficient.
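One common way to embed security into the development process is an automated pre-commit check that blocks obvious mistakes before review. The sketch below is a minimal illustration with invented regex rules; real scanners such as gitleaks or truffleHog ship far larger, vetted rule sets.

```python
import re

# Illustrative patterns only; a production scanner needs many more rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_source(text: str) -> list[tuple[str, int]]:
    """Return (rule_name, line_number) for every suspected hardcoded secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

code = 'db = connect()\napi_key = "sk_live_0123456789abcdef0123"\n'
print(scan_source(code))
```

Wired into a commit hook or CI step, a check like this makes the security boundary concrete for developers: the tool, not a reviewer, catches the basic problems.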

In short, security should be a baseline for the enterprise, and a successful security team is a must-have. Security team members should be diverse, ideally with different personalities, backgrounds, and cultures. When engaging with the business, they should not simply say "you can't do this, you can't do that," but should tell business teams what can be done.

Security recommendations when applying generative AI

During this year's Two Sessions (the National People's Congress and the Chinese People's Political Consultative Conference), artificial intelligence and "new quality productive forces" became high-frequency terms, and the state has called for an "artificial intelligence +" initiative. Ensuring that AI applications land and that the digital economy flourishes requires coordinating development with security. Deputies and delegates also offered suggestions for cybersecurity in the era of digital intelligence. For example, some suggested further improving the cybersecurity system and strengthening personal information protection and data security management; some proposed building national, industry-level, and city-level digital security public service infrastructure; some suggested that relevant enterprises pursue joint innovation around real offensive-defensive practice and application scenarios to achieve breakthroughs in cutting-edge "AI + security" technology; and some called for an artificial intelligence law to be enacted as soon as possible and an AI algorithm governance system to be built.

Steve Schmidt offers security advice on using generative AI from his professional perspective. He believes there are three questions any business should ask itself about the security of generative AI.


The first question: where is the data? Companies must be clear about where data comes from and how it is processed and protected throughout the workflow of training a model on that data.

The second question: what happens to my queries and any related data? Training data is not the only sensitive dataset a business needs to protect. As businesses adopt generative AI and large language models, they quickly learn to make queries more effective, adding more detail and specific requirements to get better results. An enterprise using generative AI must clearly understand how the service handles both the data fed into the model and the query results. Queries are inherently sensitive and should be part of the data protection plan: from an outside perspective, a great deal can be inferred from the questions users ask, and in many cases that information is highly sensitive and needs to be handled with care.
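One practical way to treat queries as part of the data protection plan is to redact obvious identifiers before a prompt leaves the enterprise boundary. The sketch below is a minimal illustration; the patterns and placeholders are assumptions, and a real deployment would use a vetted DLP library rather than a handful of regexes.

```python
import re

# Illustrative redaction rules; real DLP tooling covers far more data types.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD>"),
]

def redact(prompt: str) -> str:
    """Replace likely identifiers with placeholders before calling a model."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Why was alice@example.com charged twice on 4111 1111 1111 1111?"))
```

Redacted prompts still carry enough context for the model to answer, while the raw identifiers never reach the service.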

The third question: is the output of the generative AI model accurate enough? From a security perspective, the use case defines the risk; different scenarios demand different levels of accuracy. If you are using a large language model to generate custom code, you need to make sure the code is well written and follows best practices.
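Checking generated code before it is used can start with cheap, automated gates. The sketch below, a simple first-pass filter using Python's standard `ast` module, flags a couple of constructs that rarely belong in production code; the checks are illustrative and no substitute for linting, testing, and human review.

```python
import ast

def basic_code_checks(source: str) -> list[str]:
    """Cheap first-pass checks on generated Python; not a full review."""
    problems = []
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc.msg} (line {exc.lineno})"]
    for node in ast.walk(tree):
        # Flag constructs that rarely belong in generated production code.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in {"eval", "exec"}:
                problems.append(f"dangerous call: {node.func.id} (line {node.lineno})")
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            problems.append(f"bare except (line {node.lineno})")
    return problems

generated = "try:\n    eval(user_input)\nexcept:\n    pass\n"
print(basic_code_checks(generated))
```

A gate like this rejects clearly unsafe output early, so reviewers and test suites spend their time on the code that passes.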

With these questions in mind, there are three things companies must pay special attention to when using generative AI for business innovation.

First, it is easy for security teams to say "no," but that is not the right approach. Companies should train employees on their AI-use policies so that employees know how to use these tools safely. Steve Schmidt notes that when a security team simply says "no," it is just as easy for business teams and developers to bypass it. For a business to use generative AI properly and safely, the best practice is therefore to educate, inform, coach, set guardrails, and use cloud services that meet preset goals, while understanding exactly how those services use and retain data.

Second, visibility is very important. Businesses need visibility tools to understand how employees are using data, and they should limit access to data beyond what a job requires. If a clear policy violation occurs, such as access to sensitive data outside work needs, stop it immediately. In other cases, such as an employee using less sensitive data in a way that may violate policy, the company should proactively contact the employee to understand the real purpose and find an appropriate solution.

Finally, solve problems through mechanisms. Mechanisms are reusable tools that drive specific behaviors precisely over time. For example, when an employee violates a rule, the system can prompt the employee through a pop-up window, suggest specific internal tools, and report the issue.
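The visibility-and-mechanism idea above can be sketched as a reusable access-evaluation rule: block sensitive access beyond job needs immediately, and route lower-tier policy issues to a human follow-up rather than a hard stop. Dataset names, sensitivity tiers, and outcome labels here are all invented for illustration.

```python
from dataclasses import dataclass

# Illustrative tiers: 0 = public, 1 = internal, 2 = sensitive.
SENSITIVITY = {"public_docs": 0, "sales_reports": 1, "customer_pii": 2}

@dataclass
class AccessRequest:
    user: str
    dataset: str
    clearance: int  # highest tier the user's role actually needs

def evaluate(request: AccessRequest) -> str:
    """Reusable mechanism: decide how to react to a data-access request."""
    tier = SENSITIVITY[request.dataset]
    if tier > request.clearance:
        if tier >= 2:
            return "block"      # sensitive data beyond job needs: stop immediately
        return "follow_up"      # lower-tier policy issue: contact the employee
    return "allow"

print(evaluate(AccessRequest("li", "customer_pii", clearance=1)))
print(evaluate(AccessRequest("li", "sales_reports", clearance=0)))
```

Encoding the policy as code is what makes it a mechanism: the same rule runs consistently over time instead of depending on case-by-case judgment.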

That is Steve Schmidt's advice from experience. "Leveraging generative AI to improve the writing of security code is an effective way to move the industry to the next level of security," he said.

Fight AI with AI

"Fighting AI with AI" is a widely accepted approach. In defending against attackers, generative AI has improved the efficiency of security engineers. Businesses can use generative AI models to build automated response processes that react quickly to predefined events and produce output. Especially where human interaction is involved, large models let non-technical managers quickly understand what is happening when a security incident occurs. For example, Amazon Detective has a generative AI-based process for building textual descriptions of security incidents, which means security engineers can take what is generated, adjust it, and use it to interpret an incident as it happens, saving hours of work.
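The automated-response idea can be illustrated with a simple dispatch from event type to a predefined playbook; the event names and steps below are invented for the example, and a real pipeline would invoke ticketing, isolation, and notification APIs rather than return strings.

```python
# Illustrative playbooks mapping event types to ordered response steps.
PLAYBOOKS = {
    "credential_leak": ["rotate affected keys", "notify owner", "open ticket"],
    "anomalous_login": ["require re-authentication", "flag account for review"],
}

def respond(event_type: str) -> list[str]:
    """Return the response steps for a known event; escalate the unknown."""
    steps = PLAYBOOKS.get(event_type)
    if steps is None:
        return ["escalate to on-call security engineer"]
    return steps

print(respond("credential_leak"))
print(respond("unknown_event"))
```

The value of the pattern is the default branch: known events are handled instantly and consistently, while anything unrecognized goes to a human instead of being silently dropped.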

In addition, generative AI can help alleviate the current shortage of cybersecurity talent. According to Steve Schmidt, AI and machine-learning techniques let businesses identify and address security issues faster and more effectively. For example, generative AI has played a major role in detecting anomalous behavior in customer accounts, helping businesses more accurately isolate highly suspicious behavior and alert the individual users affected. This allows a security team to focus on strategic initiatives and higher-value tasks, rather than only discovering and responding to incidents.


In terms of technology and services, Amazon Web Services is well prepared to address the security challenges generative AI brings. At re:Invent 2023, Amazon Web Services launched security capabilities powered by generative AI. Amazon Inspector's code scanning for Amazon Lambda functions leverages generative AI and automated reasoning for code repair: it can create contextually relevant patches for multiple categories of vulnerabilities, letting developers quickly verify and replace code to resolve issues. Amazon Detective, meanwhile, can use generative AI to summarize finding groups: it automatically analyzes groups of investigative findings and provides insights in natural language to accelerate security investigations. The Detective service aims to make it easier for users to analyze, investigate, and quickly identify the root cause of potential security issues or suspicious activity. For security experts, this helps validate observations and streamlines their work; for software developers, it makes the advanced security knowledge needed in investigations easier to grasp. Together, these services dramatically improve the efficiency of enterprise security work and demonstrate the potential of generative AI in the security space.
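To make the finding-group idea concrete, the sketch below turns a structured group of findings into a one-line plain-language summary. It is a template-based illustration only: the field names are invented, and Amazon Detective's own generative-AI summaries are produced quite differently.

```python
# Hypothetical finding records; "resource", "type", and "severity" are
# invented field names for this illustration.
def summarize(findings: list[dict]) -> str:
    """Render a finding group as one plain-language sentence."""
    if not findings:
        return "No findings in this group."
    resources = sorted({f["resource"] for f in findings})
    worst = max(findings, key=lambda f: f["severity"])
    return (
        f"{len(findings)} related finding(s) across {len(resources)} resource(s) "
        f"({', '.join(resources)}); most severe: {worst['type']} "
        f"(severity {worst['severity']})."
    )

group = [
    {"resource": "i-0abc", "type": "port probe", "severity": 3},
    {"resource": "i-0abc", "type": "crypto mining", "severity": 8},
    {"resource": "sg-12f", "type": "open ingress", "severity": 5},
]
print(summarize(group))
```

Even this trivial rendering shows why natural-language summaries help: a manager reads one sentence instead of three raw finding records.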

On March 7, 2024, Amazon Web Services announced the launch of Amazon Network Firewall, a fully managed service, in the Amazon Web Services China (Beijing) and (Ningxia) Regions in collaboration with Sinnet and NWCD, helping customers more easily provide network security protection for workloads running on Amazon Web Services.

In addition, Amazon Web Services provides customers with a more secure operating environment by strengthening collaboration with partners. For example, Palo Alto Networks leveraged the Amazon Web Services China Regions to accelerate the launch of its security solutions in China, covering network security, cloud security, security operations, and threat intelligence and consulting, providing a globally consistent security experience. As another example, the NVIDIA GB200 will benefit from the enhanced security of the Amazon Nitro System, which protects customer code and data during processing, both on the customer side and in the cloud. This unique capability has been independently verified by NCC Group, a leading cybersecurity firm.

Steve Schmidt said Amazon's security team has spent years applying various AI technologies to improve the customer experience. From early on, writing more secure code was a major impact of generative AI. From a security and cost standpoint, it is far better to write secure code from the start than to fix it after it is written, after integration testing, or even after delivery to the customer. Arguably, how code is written is one of the biggest levers in information security. Amazon's security team leverages ready-made generative AI applications to keep raising the bar for the security industry, starting at the code stage.
