KEY POINTS

  • Alphabet reportedly told AI engineers to avoid directly using computer code that chatbots can generate
  • The company previously barred employees from giving Bard internal information
  • Bard's EU launch was postponed earlier this week over privacy concerns

Google parent Alphabet appears concerned about how its AI-powered chatbot Bard handles sensitive data, as it has reportedly warned employees against feeding confidential information to the ChatGPT rival.

Four people familiar with the matter told Reuters that the search engine giant advised employees not to input confidential data into AI chatbots, citing company policies on information security. Alphabet has also reportedly told its artificial intelligence engineers to avoid directly using computer code that chatbots can generate.

Alphabet told the outlet that Bard can make undesired code suggestions but that it still helps programmers. Google also said it wanted to be transparent about the limits of its technology.

Experts say Google's warning to its employees reflects how companies working on AI are becoming more attentive to data security.

This is not the first time Google has alerted its employees about Bard's training.

In February, Insider reported that Google told workers testing Bard ahead of its launch that they shouldn't give the chatbot internal information. The company also provided guidelines on how to train Bard, reportedly telling staff not to "describe Bard as a person, imply emotion, or claim to have human-like experiences."

"Safety is a top priority. If you find an answer [from Bard] that offers legal, medical, financial advice, his hateful, harmful, false, illegal or abusive, or solicits sensitive information (e.g. personally identifiable information), give it a thumbs down and mark it unsafe," Google reiterated.

Earlier this month, the California-based company issued a privacy notice warning users to "not include information that can be used to identify you or others in your Bard conversations." It said users should not include "confidential or sensitive information" when conversing with the chatbot.

Steve Mills, chief AI ethics officer at Boston Consulting Group, explained that the biggest concern companies have about chatbots is the "inadvertent disclosure of sensitive information."

If sensitive company information fed into AI chatbots is used to further train the models, companies will ultimately have "lost control of that data," Mills said.

It was earlier reported that Google employees had pleaded with the search giant to hold back Bard's launch because the chatbot was not ready for general use. Internal messages showed that employees saw Bard as a "pathological liar" and "worse than useless." One employee reportedly warned that the chatbot's answers to scuba diving queries "would likely result in serious injury or death."

Google's latest warning to employees came days after Bard's launch in the European Union was postponed over privacy concerns.

Graham Doyle, deputy commissioner of the Irish Data Protection Commission, said the Irish watchdog "had not had any detailed briefing nor sight of a data protection impact assessment or any supporting documentation" from Google, Politico reported.

A Google spokesperson told Gizmodo that the company would launch Bard in the EU "responsibly" after it engages with industry experts, policymakers and regulators.

Earlier this month, the EU's commissioner for transparency, Vera Jourova, said she has asked 44 companies and organizations that signed up for the bloc's voluntary Code of Practice to "clearly label" online content generated by AI.

Companies such as Bing maker Microsoft and Google "should build in necessary safeguards so that these services cannot be used by malicious actors to generate disinformation," she added.

Bard was unveiled in early February. Google describes the ChatGPT rival as a chatbot that "seeks to combine the breadth of the world's knowledge with power, intelligence and creativity."

A March review of ChatGPT, Bing Chat and Bard by Wired noted that while the three generative AI models were "smart" and "interactive," they were also "pretty little liars."

Illustration: Alphabet was reportedly planning to launch Bard in the EU this week, but privacy concerns blocked the ChatGPT rival's European debut. Reuters