Artificial intelligence is a growing field for tech companies, but a recent experiment from Facebook showcased an unintended consequence of the new technology.

As part of a project on machine learning applications, Facebook researchers worked on developing artificial intelligence-powered agents that could negotiate with each other. In the test, the agents were tasked with automatically figuring out how to split a shared group of items. Each agent was also assigned its own values for the items, ensuring that the negotiation couldn't end in a simple draw.
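To make the setup concrete, here is a minimal sketch (not Facebook's actual code; the item names, value ranges, and function names are illustrative assumptions) of how such a negotiation instance might assign each agent its own private values, so the two sides rarely value a split equally:

```python
import random

# Illustrative sketch only -- not FAIR's implementation.
ITEMS = ["book", "hat", "ball"]

def make_instance(seed=None):
    """Create one negotiation instance: item counts plus private,
    per-agent values for each item."""
    rng = random.Random(seed)
    counts = {item: rng.randint(1, 4) for item in ITEMS}
    values = {
        "Alice": {item: rng.randint(0, 5) for item in ITEMS},
        "Bob": {item: rng.randint(0, 5) for item in ITEMS},
    }
    return counts, values

def score(agent, allocation, values):
    """An agent's payoff is the total value of the items it receives."""
    return sum(values[agent][item] * n for item, n in allocation.items())

counts, values = make_instance(seed=0)
# Example split: Alice takes every ball, Bob keeps the rest.
split = {"book": 0, "hat": 0, "ball": counts["ball"]}
print(score("Alice", split, values))
```

Because the value tables differ per agent, an allocation that looks bad to one side can still be a good deal for the other, which is what gives the agents something to bargain over.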


However, when the bots started talking to each other, something interesting happened. As reported by Fast Company, here's a partial transcript of their chat log:

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me

Bob: i . . . . . .. . . . .  .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

This might look like nonsense, but according to Facebook, the conversation was another example of AI dynamically generating its own contextual language while still understanding the exchange. Dhruv Batra, a visiting Facebook AI research scientist from Georgia Tech, told Fast Company that the agents had no incentive to stick to typical English sentence structure, so they invented their own conventions instead.

“Agents will drift off understandable language and invent codewords for themselves,” Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
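Batra's example — repeating a word to signal a quantity — can be illustrated with a toy decoder. This is purely hypothetical (the function and its repetition scheme are an assumption for illustration, not the agents' real protocol):

```python
def decode_repetition(message, token="the"):
    """Toy decoder for the kind of shorthand Batra describes:
    repeating a token n times signals wanting n copies of an item."""
    return message.split().count(token)

# "the" appears five times, so the (hypothetical) request is 5 copies.
print(decode_repetition("the the the the the ball"))  # 5
```

Any mapping works as long as both agents converge on it, which is exactly why the resulting messages look like gibberish to outside observers.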

Part of this phenomenon stems from the kind of work researchers are doing. Companies like Facebook and Google are developing consumer-grade ways for AI to talk with humans, so they've typically focused on English. But this isn't the first time researchers have seen AI independently develop its own language and conversational system.

In a later stage of testing, Facebook enforced standard English sentence structure on the negotiation agents and added training to teach them good and bad negotiating behaviors. As researchers noted, the upgraded agents performed extremely well in human-to-agent tests:

Interestingly, in the FAIR experiments, most people did not realize they were talking to a bot rather than another person — showing that the bots had learned to hold fluent conversations in English in this domain. The performance of FAIR’s best negotiation agent, which makes use of reinforcement learning and dialog rollouts, matched that of human negotiators. It achieved better deals about as often as worse deals, demonstrating that FAIR's bots not only can speak English but also think intelligently about what to say.


But beyond the long-debated science-fiction fear of AI developing self-awareness and enslaving humanity, Facebook's findings illustrate another interesting aspect of AI and language development.

Consumer-focused products have made human interaction a major part of AI development, but what if AI-powered programs could simply work and talk without humans in the loop? For a language like English, researchers have to invest significant time and effort in getting AI to handle syntax and sentence structure. Researchers have argued that by skipping those hurdles and letting AI communicate without human language, AI-powered hardware or software could learn at even faster rates.