OpenAlex Citation Counts

OpenAlex is an open-access bibliographic catalogue of scientific papers, authors, and institutions, named after the ancient Library of Alexandria. Its citation coverage is excellent, and I hope you will find this listing of citing articles useful!

If you click an article title, you'll navigate to that article as listed in CrossRef. If you click an Open Access link, you'll navigate to the article's "best Open Access location". Clicking a citation count opens this same listing for that article. Finally, at the bottom of the page you'll find basic pagination options.
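A listing like this can also be reproduced programmatically with the OpenAlex REST API. Below is a minimal Python sketch, under stated assumptions: the work ID W0000000000 is a placeholder standing in for the requested article's real OpenAlex ID (which you would first look up, e.g. via the API's search filter), and the contact email is likewise a placeholder for the API's polite pool. It fetches the first 25 citing works, matching the page size shown here.

import requests

# Placeholder work ID -- look up the real ID for the requested article first,
# e.g. via https://api.openalex.org/works?search=...
WORK_ID = "W0000000000"

# Request the first 25 works that cite it, mirroring this listing's page size.
resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "filter": f"cites:{WORK_ID}",
        "per-page": 25,
        "page": 1,
        "mailto": "you@example.org",  # placeholder polite-pool contact address
    },
    timeout=30,
)
resp.raise_for_status()
data = resp.json()

# meta.count is the total number of citing works; results holds this page.
print(f"Total citing works: {data['meta']['count']}")
for work in data["results"]:
    print(f"{work.get('title')} ({work.get('publication_year')}) "
          f"- cited by {work.get('cited_by_count')}")

To walk past the first page (all 170 citing articles here), OpenAlex also offers cursor pagination: pass cursor=* on the first request, then the meta.next_cursor value from each response on the following one.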

Requested Article:

Neural language models as psycholinguistic subjects: Representations of syntactic state
Richard Futrell, Ethan Wilcox, Takashi Morita, et al.
(2019), pp. 32-42
Open Access | Times Cited: 170

Showing 1-25 of 170 citing articles:

On the Opportunities and Risks of Foundation Models
Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, et al.
arXiv (Cornell University) (2021)
Open Access | Times Cited: 1553

What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models
Allyson Ettinger
Transactions of the Association for Computational Linguistics (2020) Vol. 8, pp. 34-48
Open Access | Times Cited: 544

Syntactic Structure from Deep Learning
Tal Linzen, Marco Baroni
Annual Review of Linguistics (2020) Vol. 7, Iss. 1, pp. 195-212
Open Access | Times Cited: 178

Lack of selectivity for syntax relative to word meanings throughout the language network
Evelina Fedorenko, Idan Blank, Matthew Siegelman, et al.
Cognition (2020) Vol. 203, Article 104348
Open Access | Times Cited: 154

Large Language Models Demonstrate the Potential of Statistical Learning in Language
Pablo Contreras Kallens, Ross Deans Kristensen‐McLachlan, Morten H. Christiansen
Cognitive Science (2023) Vol. 47, Iss. 3
Closed Access | Times Cited: 72

A Systematic Assessment of Syntactic Generalization in Neural Language Models
Jennifer Hu, Jon Gauthier, Peng Qian, et al.
(2020)
Open Access | Times Cited: 130

So Cloze Yet So Far: N400 Amplitude Is Better Predicted by Distributional Information Than Human Predictability Judgements
James A. Michaelov, Seana Coulson, Benjamin Bergen
IEEE Transactions on Cognitive and Developmental Systems (2022) Vol. 15, Iss. 3, pp. 1033-1042
Closed Access | Times Cited: 46

A fine-grained comparison of pragmatic language understanding in humans and language models
Jennifer J. Hu, Sammy Floyd, Olessia Jouravlev, et al.
(2023)
Open Access | Times Cited: 29

Computational Language Modeling and the Promise of In Silico Experimentation
Shailee Jain, Vy A. Vo, Leila Wehbe, et al.
Neurobiology of Language (2023) Vol. 5, Iss. 1, pp. 80-106
Open Access | Times Cited: 27

Strong Prediction: Language Model Surprisal Explains Multiple N400 Effects
James A. Michaelov, Megan D. Bardolph, Cyma K. Van Petten, et al.
Neurobiology of Language (2023) Vol. 5, Iss. 1, pp. 107-135
Open Access | Times Cited: 23

A-maze of Natural Stories: Comprehension and surprisal in the Maze task
Veronica Boyce, Roger Lévy
Glossa Psycholinguistics (2023) Vol. 2, Iss. 1
Open Access | Times Cited: 23

SyntaxGym: An Online Platform for Targeted Evaluation of Language Models
Jon Gauthier, Jennifer Hu, Ethan Wilcox, et al.
(2020), pp. 70-76
Open Access | Times Cited: 66

Assessing Phrasal Representation and Composition in Transformers
Lang Yu, Allyson Ettinger
(2020), pp. 4896-4907
Open Access | Times Cited: 56

Mechanisms for handling nested dependencies in neural-network language models and humans
Yair Lakretz, Dieuwke Hupkes, Alessandra Vergallito, et al.
Cognition (2021) Vol. 213, Article 104699
Open Access | Times Cited: 53

Cross-Linguistic Syntactic Evaluation of Word Prediction Models
Aaron Mueller, Garrett Nicolai, Panayiota Petrou-Zeniou, et al.
(2020), pp. 5523-5539
Open Access | Times Cited: 50

Single‐Stage Prediction Models Do Not Explain the Magnitude of Syntactic Disambiguation Difficulty
Marten van Schijndel, Tal Linzen
Cognitive Science (2021) Vol. 45, Iss. 6
Open Access | Times Cited: 46

Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models
Matthew Finlayson, Aaron Mueller, Sebastian Gehrmann, et al.
(2021)
Open Access | Times Cited: 41

Structural Persistence in Language Models: Priming as a Window into Abstract Language Representations
Arabella Sinclair, Jaap Jumelet, Willem Zuidema, et al.
Transactions of the Association for Computational Linguistics (2022) Vol. 10, pp. 1031-1050
Open Access | Times Cited: 37

Prompting is not a substitute for probability measurements in large language models
Jennifer Hu, Roger Lévy
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (2023)
Open Access | Times Cited: 17

Exploring BERT’s Sensitivity to Lexical Cues using Tests from Semantic Priming
Kanishka Misra, Allyson Ettinger, Julia Taylor Rayz
(2020), pp. 4625-4635
Open Access | Times Cited: 47

Using Priming to Uncover the Organization of Syntactic Representations in Neural Language Models
Grusha Prasad, Marten van Schijndel, Tal Linzen
(2019), pp. 66-76
Open Access | Times Cited: 43

Semantic Structure in Deep Learning
Ellie Pavlick
Annual Review of Linguistics (2021) Vol. 8, Iss. 1, pp. 447-471
Open Access | Times Cited: 34

How well does surprisal explain N400 amplitude under different experimental conditions?
James A. Michaelov, Benjamin Bergen
(2020), pp. 652-663
Open Access | Times Cited: 36

Page 1 - Next Page
