DOI: 10.1162/nol_a_00118 ISSN: 2641-4368

Localizing Syntactic Composition with Left-Corner Recurrent Neural Network Grammars

Yushi Sugimoto, Ryo Yoshida, Hyeonjeong Jeong, Masatoshi Koizumi, Jonathan R. Brennan, Yohei Oseki

Abstract

In computational neurolinguistics, it has been demonstrated that hierarchical models such as recurrent neural network grammars (RNNGs), which jointly generate word sequences and their syntactic structures via syntactic composition, explain human brain activity better than sequential models such as long short-term memory networks (LSTMs). However, the vanilla RNNG employs a top-down parsing strategy, which the psycholinguistics literature has identified as suboptimal, especially for head-final/left-branching languages; the left-corner parsing strategy has instead been proposed as a psychologically plausible alternative. In this article, building on this line of inquiry, we investigate not only whether hierarchical models like RNNGs explain human brain activity better than sequential models like LSTMs, but also which parsing strategy is more neurobiologically plausible. To this end, we developed a novel fMRI corpus in which participants read newspaper articles in a head-final/left-branching language, namely Japanese, in a naturalistic fMRI experiment. The results revealed that left-corner RNNGs outperformed both LSTMs and top-down RNNGs in the left inferior frontal and temporal-parietal regions, suggesting that certain brain regions localize syntactic composition under the left-corner parsing strategy.
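To make the contrast between the two parsing strategies concrete, the following minimal Python sketch (not the authors' implementation) derives the oracle action sequences of a top-down and a left-corner generative parser, using RNNG-style action names (NT, GEN, REDUCE), on a schematic Japanese SOV clause. The tree, example sentence, and helper functions are illustrative assumptions.

```python
def top_down(node):
    """Pre-order traversal: a nonterminal is predicted before any of its children."""
    if isinstance(node, str):                        # terminal: generate the word
        return [f"GEN({node})"]
    label, *children = node
    actions = [f"NT({label})"]                       # predict the node first
    for child in children:
        actions += top_down(child)
    return actions + ["REDUCE"]                      # close the node last

def left_corner(node):
    """Left-corner traversal: a nonterminal is projected only after its
    leftmost child is complete; the remaining children are attached after."""
    if isinstance(node, str):
        return [f"GEN({node})"]
    label, first, *rest = node
    actions = left_corner(first) + [f"NT({label})"]  # bottom-up projection
    for child in rest:
        actions += left_corner(child)
    return actions + ["REDUCE"]

# Schematic Japanese-like SOV clause: "watashi-ga hon-o yonda" ('I read a book').
tree = ("S",
        ("NP", "watashi-ga"),
        ("VP", ("NP", "hon-o"), ("V", "yonda")))

print(top_down(tree))
# ['NT(S)', 'NT(NP)', 'GEN(watashi-ga)', 'REDUCE', 'NT(VP)', 'NT(NP)',
#  'GEN(hon-o)', 'REDUCE', 'NT(V)', 'GEN(yonda)', 'REDUCE', 'REDUCE', 'REDUCE']

print(left_corner(tree))
# ['GEN(watashi-ga)', 'NT(NP)', 'REDUCE', 'NT(S)', 'GEN(hon-o)', 'NT(NP)',
#  'REDUCE', 'NT(VP)', 'GEN(yonda)', 'NT(V)', 'REDUCE', 'REDUCE', 'REDUCE']
```

Note the asymmetry: on left-branching structures, a top-down oracle must open the entire spine of nonterminals before generating the first word, whereas the left-corner oracle introduces each node only after its leftmost child is complete. This memory-cost difference is the standard psycholinguistic motivation for left-corner parsing in head-final languages such as Japanese.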
