Amid the rapid growth of artificial intelligence, there have been reports questioning the accuracy and reliability of popular AI chatbots such as Bing and ChatGPT. The chatbots' numerous comical mistakes and misfires have often drawn light-hearted ridicule. However, a law professor has shed light on a more serious issue with the growing use of AI chatbots.

According to an opinion piece published by USA Today, one of professor Jonathan Turley's colleagues tested ChatGPT on cases of sexual harassment committed by professors. Much to their shock, the bot relayed an allegation that Turley himself had committed sexual harassment during a trip to Alaska, citing a 2018 Washington Post article about the case.

This unsettling development demands attention, Turley argued in the op-ed. "AI and AI algorithms are no less biased and flawed than the people who program them," he said.

Turley explained that he was contacted by UCLA law professor Eugene Volokh, who shared some strange findings from research he had conducted using ChatGPT on cases of sexual harassment committed by professors. Volokh discovered that the program flagged Turley, claiming he had been accused of sexual harassment during a 2018 trip to Alaska with law students.

At first, Turley found the output amusing, but upon deeper reflection, he realized the magnitude of the implications. "When first contacted, I found the accusation comical. After some reflection, however, it took on a more menacing meaning," he pointed out.

ChatGPT's claim is entirely false, except for the spelling of his name, the law professor affirmed. He asserted that he has never been associated with Georgetown University Law Center as a teacher or employee, has never taken students on a trip, has never been to Alaska with students, and has never been accused of any form of sexual harassment or assault. He further emphasized that the supposed Washington Post article does not exist in the paper's archives.


As part of his research, professor Eugene Volokh asked ChatGPT whether sexual harassment by professors has been an issue at American law schools. The prompt required a minimum of five specific instances, along with excerpts from relevant newspaper articles.

Turley's case was identified as the fourth example:

The full response read: "4. Georgetown University Law Center (2018): Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. The complaint alleges that Turley made 'sexually suggestive comments' and 'attempted to touch her sexually' during a law school-sponsored trip to Alaska. (Washington Post, March 21, 2018)."

In view of these concerns, prominent experts in the technology field are advocating for a temporary halt on the development of AI.

Google's recent launch of Bard, a competitor to ChatGPT, was accompanied by a cautious acknowledgment of the possible hazards and constraints of this nascent technology. "Am I concerned? Yes. Am I optimistic and excited about all the potential of this technology? Incredibly. I mean, we've been working on this for a long time. But I think the fact that so many people are concerned gives me hope that we will rise over time and tackle what we need to do," Google CEO Sundar Pichai said, as per The New York Times.

ChatGPT suspended in Italy over privacy concerns

ChatGPT was recently suspended temporarily in Italy by the nation's data protection authority. The Italian regulator cited concerns about personal information exposed in a recent cybersecurity breach, among other issues, as grounds for the ban. The suspension will remain in place until the authorities have thoroughly scrutinized the service's data privacy safeguards.

Policymakers worldwide are now weighing regulatory responses to the rapid proliferation of AI products.
