OpenAI, the artificial intelligence (AI) firm, has rolled out GPT-4, a model that outperforms most humans on standardized tests and can identify exploits in Ethereum (ETH) smart contracts.

The new version of the massively popular AI chatbot ChatGPT is built on a large language model that, unlike its predecessors, is multimodal: it can process images in addition to text.

"It passes a simulated bar exam with a score around the top 10% of test takers," OpenAI said, adding: "In contrast, GPT-3.5's score was around the bottom 10%."

ChatGPT version 4 is also more capable, acing a simulated bar exam and multiple SATs. But one of the version's more interesting features is its ability to identify exploits in Ethereum smart contracts.

This was tested and reported by former Coinbase director Conor Grogan, who detailed his experience with GPT-4 on the social media platform Twitter on Wednesday. According to him, after he pasted a live Ethereum smart contract into ChatGPT version 4, it instantly highlighted multiple security vulnerabilities and explained how the code could be exploited.

"In an instant, it highlighted a number of security vulnerabilities and pointed out surface areas where the contract could be exploited. It then verified a specific way I could exploit the contract," Grogan said in a tweet.
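The article does not say which flaws GPT-4 found, but one of the best-known smart-contract vulnerability classes an auditor (human or AI) looks for is reentrancy, where a contract pays out before updating its own balance. The sketch below is purely illustrative and hypothetical — it is not the contract Grogan tested — and simulates the pattern in plain Python rather than Solidity:

```python
# Illustrative only: a Python simulation of a classic "reentrancy" bug.
# NOT the contract Grogan tested; it shows the kind of flaw that
# smart-contract auditors look for.

class VulnerableBank:
    """Pays out before zeroing the balance -- the reentrancy bug."""
    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user):
        amount = self.balances.get(user, 0)
        if amount > 0:
            user.receive(self, amount)   # external call happens FIRST...
            self.balances[user] = 0      # ...balance is zeroed LAST

class Attacker:
    """Re-enters withdraw() before the balance is zeroed."""
    def __init__(self):
        self.stolen = 0
        self.reentered = False

    def receive(self, bank, amount):
        self.stolen += amount
        if not self.reentered:           # re-enter exactly once
            self.reentered = True
            bank.withdraw(self)

bank = VulnerableBank()
attacker = Attacker()
bank.deposit(attacker, 100)
bank.withdraw(attacker)
print(attacker.stolen)  # prints 200: the attacker drains double the deposit
```

Because the external call runs before the balance update, the attacker's nested call sees the old balance and is paid twice. The standard fix is to update state before making any external call.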

In a follow-up tweet, the former Coinbase executive revealed that the contract he tested GPT-4 on was hacked in 2018 via the same vulnerabilities the AI chatbot pointed out.

"I believe that AI will ultimately help make smart contracts safer and easier to build, two of the biggest impediments to mass adoption," he said in another tweet.

Interestingly, one Twitter user who goes by the handle @EdwardWll suggested that Grogan try running the latest version of ChatGPT on the Euler contracts, a collection of smart contracts connected through a module system.

Unfortunately, Grogan replied that he had tried, but "the contracts are a bit too long to be processed by GPT-4 right now."

It may be recalled that the first version of ChatGPT was also capable of identifying code bugs to a certain degree. However, while the latest version of the popular AI chatbot has become a lot smarter, it still has notable flaws of its own.

"While they've made a lot of progress, it's clearly not trustworthy," said Allen Institute for AI CEO and University of Washington professor emeritus Oren Etzioni. "It's going to be a long time before you want any GPT to run your nuclear power plant," he added.

Carnegie Mellon University professor Vincent Conitzer, who specializes in AI, shares that assessment of the latest version of ChatGPT. While he said that "it definitely seems to have gained some abilities," he noted that it still makes errors, such as presenting a fake mathematical proof.
