• The human brain fires different neural patterns depending on whether it hears a sentence or a phrase
  • The findings may offer insights for making machine learning systems that process spoken language more efficient
  • The research was conducted with native Dutch speakers, using a set of Dutch words as stimuli

Conversations take on a life of their own when they are stimulating enough. The vernacular is not always in proper sentences; phrases and slang words may be in the mix. Just how does the brain connect these abstract structures and grasp meaning from them? A study published in PLOS Biology aims to tackle this question.

The neuroimaging study, conducted by the Max Planck Institute for Psycholinguistics and Radboud University in Nijmegen, recorded the brain's neural activity patterns as participants listened to sentences and phrases.

Two sets of words were used in the study: "the vase is red" (a sentence) and "the red vase" (a phrase). The brain was found to fire different neural patterns when presented with a sentence than when presented with a phrase.

The study, led by Lise Meitner Group Leader Andrea Martin together with first author and Ph.D. candidate Fan Bai and MPI director Antje Meyer, was performed on 15 native Dutch speakers, whose brain activity was recorded through the scalp with electroencephalography (EEG).

The stimuli were presented in Dutch, such as "de vaas is rood" for "the vase is red" and "de rode vaas" for "the red vase"; the paired items were similar in meaning, matched in syllable count, and of equal duration. The results were in line with computer simulations Martin had theorized.

To test this "time-based" model of language structure, which she had developed with Leonidas Doumas of the University of Edinburgh, an experiment was conducted in which participants performed a set of three tasks, in random order, for each spoken stimulus.

The first was a structure-based task, in which participants pressed a button to indicate whether they had heard a phrase or a sentence. The second and third tasks were meaning-based: participants had to match the color or the object of the spoken stimulus with pictures that were provided. "Our findings show how the brain separates speech into the linguistic structure by using the timing and connectivity of neural firing patterns. These signals from the brain provide a novel basis for future research on how our brains create a language," said Martin.

She added that the time-based mechanism could be used in machine learning systems that interface with spoken language comprehension, in order to represent abstract structure, something machine systems currently struggle with. "We will conduct further studies on how knowledge of the abstract structure and countable statistical information, like transitional probabilities between linguistic units, are used by the brain during spoken language comprehension," she said.
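To make the "transitional probabilities between linguistic units" that Martin mentions concrete: a transitional probability is simply the chance of one unit (e.g. a word) following another, estimated from counts in a corpus. The sketch below is purely illustrative and is not code from the study; the toy corpus and function name are invented for the example.

```python
from collections import Counter

def transitional_probabilities(words):
    """Estimate P(next word | current word) from a word sequence."""
    pair_counts = Counter(zip(words, words[1:]))   # count adjacent word pairs
    first_counts = Counter(words[:-1])             # count each word as a pair's first element
    return {
        (w1, w2): count / first_counts[w1]
        for (w1, w2), count in pair_counts.items()
    }

# Tiny toy corpus echoing the study's stimuli (illustrative only)
corpus = "the vase is red the red vase the vase is red".split()
probs = transitional_probabilities(corpus)
print(probs[("vase", "is")])  # "is" follows "vase" in 2 of its 3 occurrences: ~0.667
```

Listeners are thought to track statistics like these implicitly; the follow-up studies Martin describes would ask how such statistical cues combine with knowledge of abstract structure during comprehension.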

Image: A brain-computer interface (BCI) uses an EEG cap to measure brain signals. Credit: Graz University of Technology, Austria