KEY POINTS

  • One employee said Bard was 'worse than useless,' and others called it 'cringeworthy'
  • Some employees said the ethics team has become 'disempowered and demoralized'
  • A comparative study of ChatGPT, Bing and Bard showed that Bard made 'bizarre' errors

Google employees pleaded with the company not to launch its rival to the Microsoft-backed ChatGPT, believing the chatbot, called Bard, was "worse than useless" and not ready for general use, a new report revealed.

Google's AI language model was dubbed a "pathological liar" by employees in internal messages, Bloomberg reported Wednesday. The outlet spoke with 18 former and current Google workers, who said employees were concerned not just because Bard was "worse than useless" but also because of ethical issues.

One employee reportedly said Bard's answers to queries about scuba diving "would likely result in serious injury or death," while another said the ChatGPT rival gave dangerous advice on how to land a plane. Bloomberg added that employees referred to the chatbot as "cringeworthy."

"Bard is worse than useless: please do not launch," one employee said, according to Bloomberg. The outlet also paraphrased testimonials of current and former employees about Google "providing low-quality information in a race to keep up with the competition, while giving less priority to its ethical commitments."

Former and current employees told Bloomberg that Google's ethics team has become "disempowered and demoralized" as they have been "told not to get in the way or try to kill any of the generative AI tools in development."

Brian Gabriel, a spokesperson for Google, told Bloomberg that the company continues to invest "in teams that work on applying our AI Principles to our technology."

In a comparison of Microsoft's Bing, OpenAI's ChatGPT and Google's Bard, The Verge found that when asked for a chocolate cake recipe, Bard made "some changes that meaningfully affect flavor" and understated the cake's baking time.

The comparison also revealed that Bard offered "a bizarre series of errors" when answering a question about the average salary for a plumber in New York City. When asked for a marathon training plan, Bard gave a "confusing" strategy, as per The Verge.

In February, Google employees used the internal forum Memegen to express their thoughts on the unveiling of Bard. Some staffers referred to Google's work on Bard as "rushed" and "botched," according to messages and memes viewed by CNBC.

One highly rated meme on the forum stated that Google's leadership was "being comically short sighted and un-Googlely in their pursuit of 'sharpening focus,'" as per CNBC. Another meme said that "rushing Bard to market in a panic validated the market's fear about us."

Google opened access to Bard for consumers in the U.S. and the U.K. last month as the race toward AI dominance intensified among big tech firms. At the time, Google senior product director Jack Krawczyk said internal and external testers used Bard for "boosting their productivity, accelerating their ideas, really fueling their curiosity," Reuters reported.

During a demo of how Bard worked, a pop-up notice warned that the chatbot "will not always get it right." The company also highlighted to Reuters some of the mistakes Bard made in the demo, including an instance in which the chatbot produced nine paragraphs when it was asked for four.

In 2021, Google announced a restructuring of its responsible AI teams after Timnit Gebru, who co-led the company's ethical AI team, revealed that she had been abruptly fired. Margaret Mitchell, the team's other co-lead, was fired later, multiple outlets reported.

Before her dismissal, Gebru was working on a paper that laid out the dangers of large language models like those underlying ChatGPT and Bard. Google Research vice president Megan Kacholia asked her to withdraw the paper from publication, but Gebru said the company should be more transparent, as per The Verge. Gebru was reportedly fired after she pushed back against the order.

Following the removal of Gebru, Google's AI team wrote a letter to the leadership, stating that "words that are not paired with action are virtue signaling; they are damaging, evasive, defensive, and demonstrate leadership's inability to understand how our organization is a part of the problem," CNBC reported.

As per CNBC, the employees' letter demanded "structural changes" following the "retaliation" against Gebru, if the search engine giant wanted work on ethical AI to continue.

[Image: A 3D-printed Google logo is seen in this illustration. Google's Bard was rolled out to U.S. and British consumers last month, but a new report revealed that employees didn't want the ChatGPT rival to launch at all. Reuters]