Artificial Intelligence continues to develop in all parts of the globe. Pictured: An AI robot (L) by CloudMinds is seen during the Mobile World Conference in Shanghai on June 27, 2018. AFP/Getty Images

The Elon Musk-backed artificial intelligence group OpenAI recently revealed that its AI software can predict and generate text, and even produce highly convincing fake content, drawing on words and ideas from the 8 million web pages fed into its system.

The language model, called GPT-2 (a successor to GPT), was trained to predict the next word in 40GB of internet text. Compared to the original GPT, GPT-2 has more than 10 times the parameters and was trained on more than 10 times the amount of data.
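The article does not spell out what "predicting the next word" looks like in practice, but the idea can be sketched with the smaller GPT-2 checkpoint OpenAI made public. The snippet below is an illustrative sketch only: it assumes the Hugging Face transformers library and its "gpt2" model name (neither is mentioned in OpenAI's post), and the prompt is invented for demonstration.

```python
# Illustrative sketch: a GPT-style language model scores every possible next
# token given the text so far. The library, the "gpt2" model name (the small
# released checkpoint) and the prompt are assumptions for demonstration only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The quick brown fox jumps over the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # scores for every vocabulary token at each position
next_token_logits = logits[0, -1]          # scores for the token that would come next
top = torch.topk(next_token_logits, k=5)   # the five most likely continuations

for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode([int(token_id)])), float(score))
```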

According to a blog post OpenAI published on Feb. 14, the company has trained a large-scale unsupervised language model that generates coherent paragraphs of text.

"It achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training,” the article stated. This is where the problem begins.

According to BGR, the language model is considered so powerful that it could easily produce full manuscripts and even convincing fake news simply by stringing together patterns learned from the millions of web pages fed into its system. The system was described as “chameleon-like” because it adapts to the style and content of the conditioning text, which allows GPT-2 to generate realistic and coherent continuations on whatever topic it is prompted with.

OpenAI conducted a test using a sample text about unicorns, the mythical creatures. The prompt, which described a sham discovery that unicorns are real, was convincingly ‘continued’ by the GPT-2 model.
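For readers who want a feel for what such a ‘continuation’ looks like in code, the sketch below samples a continuation of a unicorn-style prompt from the small released GPT-2 model. The prompt is a loose paraphrase and the sampling settings are arbitrary choices, so this should not be read as a reproduction of OpenAI's actual experiment.

```python
# Illustrative sketch of conditional text generation with the small GPT-2 model.
# The prompt is a paraphrase for demonstration, not OpenAI's original prompt,
# and the sampling parameters are arbitrary choices.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = ("In a shocking finding, scientists discovered a herd of unicorns "
          "living in a remote valley.")
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=60,                    # length of the generated continuation
        do_sample=True,                       # sample instead of always taking the top token
        top_k=40,                             # consider only the 40 likeliest tokens at each step
        pad_token_id=tokenizer.eos_token_id,  # avoids a missing-pad-token warning
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```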

The implications of these capabilities could pose problems if the system were used in a large-scale disinformation drive, such as during elections, per the site. This concern is not lost on OpenAI, which acknowledged that the system could be misused with serious consequences.

“Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper,” OpenAI said in the same blog post.

Although there was no mention of fake news in the blog post, sources pointed out that the system could be misused in contexts such as presidential campaigns.

The ‘unicorn’ sample showed that the system is already capable of producing continuations with a coherent, human-like quality. There are failures, of course, but OpenAI also revealed that during its research the model produced reasonable samples about 50 percent of the time.