Human languages might be distant cousins of bird song, with the two sharing a biological origin going back millions of years of evolution.

Researchers have found that zebra finches are hardwired to learn and produce certain kinds of patterns in their songs — patterns similar to those seen in human speech. The finding suggests that human minds may likewise be biased toward certain language structures, a bit of brain wiring retained through evolution that links us to birds and possibly to other species as well.

According to their study in the journal Current Biology, the scientists tested this idea by giving young birds different options for how they could order the elements of their songs. They found that the birds were biased toward certain patterns, and the patterns they preferred closely matched those observed in wild birds. For instance, the birds were more likely to put short, high-pitched sounds in the middle of their songs and long, low-pitched sounds at the end.

“This matches patterns observed across diverse languages and in music, in which sounds at the end of phrases tend to be longer and lower in pitch than sounds in the middle,” McGill University explained.

Previous research had already shown that human languages share similar patterns in their structures and sound qualities such as pitch; that zebra finch songs show consistent patterns across populations; and that these arrangements are similar between the two species.

“Because the nature of these universals bears similarity to those in humans and because songbirds learn their vocalizations much in the same way that humans acquire speech and language, we were motivated to test biological predisposition in vocal learning in songbirds,” study co-author Logan James said in the university statement.

The new research suggests that human speech patterns, like those of songbirds, are more than just a product of learning — our brains lean toward certain communication structures.

“In the immediate future,” senior study author Jon Sakata said in the statement, “we want to reveal how auditory processing mechanisms in the brain, as well as aspects of motor learning and control, underlie these learning biases.”