
Why is Google promoting Bard while warning employees to use AI chatbots with caution?

Author: AI dazzling technology

AI chatbots have been a hot topic in artificial intelligence in recent years: they can converse with humans in natural language and provide a wide range of services and functions. Among the most prominent is OpenAI's GPT-3, one of the world's largest language models, with 175 billion parameters, capable of generating fluent and coherent text. Not far behind, Google has launched its own large language model, Bard, which has fewer parameters but was trained on more data and supports 20 languages. Google has also published Bard's code and model cards on GitHub so that other researchers can use and study the model.


However, even as Google vigorously promotes Bard, it has made a surprising request of its employees: use AI chatbots, including its own Bard, with caution. According to Reuters, four people familiar with the matter said that Google's parent company Alphabet has advised employees not to enter confidential material into AI chatbots and to avoid directly using AI-generated computer code. The policy is meant to protect information security, since AI chatbots can leak data or generate inappropriate text or code.


The move has sparked questions and discussion: why would Google impose restrictions on its own employees while promoting Bard's advantages and potential to the world? Does this imply that Bard has deficiencies or hidden dangers? Is Google applying a double standard? This article examines the question from the following angles.

Google's concerns

Google's warning to employees about using AI chatbots is neither unjustified nor malicious. In fact, it reflects an awareness of, and respect for, the characteristics and limitations of AI chatbots themselves. Powerful and intelligent as they are, AI chatbots are neither perfect nor fully reliable. They carry the following problems and risks:

  • Data breaches. AI chatbots are trained on, and generate text from, large amounts of text data. Information entered by users can be used as training material and may be reused or leaked in later outputs. If a user enters sensitive or confidential data, such as personal information, account passwords, or business plans, that data may be exposed to other users or third parties through the chatbot. This is a serious security risk for both users and companies.
  • Text quality. Although AI chatbots can generate fluent, coherent text, they cannot guarantee that the text is correct, reasonable, or lawful. Because they generate output from statistical patterns in their training data, they may produce biased, toxic, or untrue text that misleads or harms users or society. For example, they may generate discriminatory, abusive, or defamatory statements, or fabricate false news, historical claims, or scientific facts.
  • Code quality. AI chatbots can generate not only text but also computer code. This is an attractive feature for programmers because it can improve programming efficiency and quality. However, the code generated by AI chatbots is not necessarily correct, efficient, or secure. Because chatbots generate code from grammar rules and code snippets in their training data, they may produce buggy, vulnerable, or even malicious code that causes a program to misbehave, crash, or be attacked (see the sketch after this list).

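To make the point about code quality concrete, here is a minimal, hypothetical sketch (the table name, function names, and payload are invented for illustration, not taken from any real Bard output): a query helper of the kind an AI chatbot might plausibly suggest, which works on normal input but is open to SQL injection, alongside the parameterized version a human reviewer should insist on.

```python
import sqlite3

# Hypothetical example of plausible-looking but unsafe AI-suggested code:
# the SQL statement is built by string formatting, so a crafted username
# can change the query's meaning (SQL injection).
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

# The reviewed alternative: a parameterized query, where the driver
# handles escaping and the input cannot rewrite the statement.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    payload = "x' OR '1'='1"                # crafted input
    print(find_user_unsafe(conn, payload))  # leaks every row in the table
    print(find_user_safe(conn, payload))    # returns nothing
```

Both functions run, and on friendly input they return the same result; only a careful review reveals that one of them quietly discloses the whole table. This is exactly the kind of flaw Alphabet's guidance asks employees to check for before using AI-generated code directly.
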
In summary, Google's warning to employees about using AI chatbots is based on awareness and prevention of these problems and risks. Google is not trying to dismiss or discredit Bard or other AI chatbots, but rather to remind employees to be careful when using these tools and so avoid unnecessary losses and trouble.

Google's double standards

While Google's warning to employees about using AI chatbots is justified and necessary, it also exposes a double standard in how Google promotes Bard. The double standard shows up mainly in the following respects:

  • External promotion versus internal restriction. When promoting Bard to the world, Google highlights its strengths and potential, such as support for multiple languages and tasks and openly published code and model cards. When addressing its own employees, however, it highlights Bard's limitations and risks, such as the potential to leak data or generate inappropriate text or code. This contrast between external promotion and internal restriction gives the impression that Google is deceiving or misleading outside users, and it makes people doubt Bard's true performance and credibility.
  • Admiration for technology versus skepticism of technology. As a technology-driven company, Google has actively developed and applied advanced technologies, especially artificial intelligence. When promoting Bard, Google shows respect for and trust in the technology, arguing that Bard can help researchers and engineers explore AI applications and related capabilities. When facing its own employees, however, it shows skepticism and distrust, warning that Bard might offer inappropriate text or code suggestions. This mix of admiration and skepticism gives the impression that Google is contradicting itself, and it makes people doubt Bard's true value and reliability.

Google's way out

Faced with Bard's problems and risks, Google should not adopt the double standards of external promotion versus internal restriction, admiration for technology versus skepticism of it, or support for open science versus obstruction of it. Instead, it should take the following measures:

  • Be transparent both externally and internally. When promoting Bard to the world, Google should openly acknowledge and explain Bard's limitations and risks, so that outside users clearly understand its true performance and trustworthiness and know what to watch for when using it. At the same time, when presenting Bard to its own employees, Google should transparently demonstrate and explain Bard's advantages and potential, so that employees can make full use of Bard's features and resources and help improve its quality and safety.
  • Treat technology with respect and responsibility. When promoting Bard, Google should recognize and accept Bard's strengths and limitations, neither exaggerating nor belittling its capabilities and value. At the same time, Google should use Bard responsibly, supervising and managing its output and behavior, and neither blindly trusting nor reflexively doubting its suggestions and results.
  • Uphold and improve open science. When promoting Bard, Google should persistently support and commit to the ideals and goals of open science, and not abandon or betray that spirit because of particular problems and risks. At the same time, Google should use Bard to refine and improve the practices and methods of open science, and not ignore or undermine its principles because of particular advantages and potential.

In short, as a technology-driven company, Google should promote and use AI chatbots like Bard with an open, transparent, respectful, responsible, and consistent attitude, rather than dealing with the issue through double standards. Only then can Google truly realize Bard's advantages and potential while avoiding the problems and risks it brings, and so make a greater contribution to the field of artificial intelligence.
