The Society for Computation in Linguistics (SCiL) is devoted to facilitating and promoting research on computational and mathematical approaches in linguistics. SCiL aims to provide a central forum for the exchange of ideas and the dissemination of original research results on computational approaches in any area of linguistics. To this end, SCiL hosts regular meetings (the first of which was co-located with the LSA 2018 Annual Meeting in Salt Lake City, Utah) featuring high-quality research presentations, with peer-reviewed proceedings published both in the Association for Computational Linguistics (ACL) Anthology and in our own standalone open-access publication, the Proceedings of the Society for Computation in Linguistics.
Current Volume: Volume 5 (2022)
Papers
SCiL 2022 Editors' Note
Allyson Ettinger, Tim Hunter, and Brandon Prickett
A split-gesture, competitive, coupled oscillator model of syllable structure predicts the emergence of edge gemination and degemination
Francesco Burroni
ANLIzing the Adversarial Natural Language Inference Dataset
Adina Williams, Tristan Thrush, and Douwe Kiela
Evaluating Structural Economy Claims in Relative Clause Attachment
Aniello De Santo and So Young Lee
How Well Do LSTM Language Models Learn Filler-gap Dependencies?
Satoru Ozaki, Dan Yurovsky, and Lori Levin
Learning Argument Structures with Recurrent Neural Network Grammars
Ryo Yoshida and Yohei Oseki
Learning Stress Patterns with a Sequence-to-Sequence Neural Network
Brandon Prickett and Joe Pater
Linguistic Complexity and Planning Effects on Word Duration in Hindi Read Aloud Speech
Sidharth Ranjan, Rajakrishnan Rajkumar, and Sumeet Agarwal
Modeling human-like morphological prediction
Eric R. Rosen
Parsing Early Modern English for Linguistic Search
Seth Kulick, Neville Ryant, and Beatrice Santorini
Remodelling complement coercion interpretation
Frederick G. Gietz and Barend Beekhuizen
Extended Abstracts
Analysis of Language Change in Collaborative Instruction Following
Anna Effenberger, Eva Yan, Rhia Singh, Alane Suhr, and Yoav Artzi
Can language models capture syntactic associations without surface cues? A case study of reflexive anaphor licensing in English control constructions
Soo-Hwan Lee and Sebastian Schuster
Learning Constraints on Wh-Dependencies by Learning How to Efficiently Represent Wh-Dependencies: A Developmental Modeling Investigation With Fragment Grammars
Niels Dickson, Lisa Pearl, and Richard Futrell
Masked language models directly encode linguistic uncertainty
Cassandra Jacobs, Ryan J. Hubbard, and Kara D. Federmeier
MaxEnt Learners are Biased Against Giving Probability to Harmonically Bounded Candidates
Charlie O'Hara
Universal Dependencies and Semantics for English and Hebrew Child-directed Speech
Ida Szubert, Omri Abend, Nathan Schneider, Samuel Gibbon, Sharon Goldwater, and Mark Steedman
Horse or pony? Visual Typicality and Lexical Frequency Affect Variability in Object Naming
Eleonora Gualdoni, Thomas Brochhagen, Andreas Mädebach, and Gemma Boleda
Abstracts
When Classifying Arguments, BERT Doesn't Care About Word Order... Except When It Matters
Isabel Papadimitriou, Richard Futrell, and Kyle Mahowald
The interaction between cognitive ease and informativeness shapes the lexicons of natural languages
Thomas Brochhagen and Gemma Boleda
Learning Input Strictly Local Functions: Comparing Approaches with Catalan Adjectives
Alexander Shilen and Colin Wilson