Michael Hahn

I am a Tenure-Track Professor (W2) at the Saarland Informatics Campus, Saarland University, where I direct the Language, Computation, and Cognition Lab (LaCoCo). I am affiliated with the Departments of Language Science and Technology and of Computer Science.

I received my PhD from Stanford University in 2022, advised by Judith Degen and Dan Jurafsky.

Email: mhahn@lst.uni-saarland.de


Please send me an email if you are interested in joining our group! We're always looking for highly motivated student research assistants.

See our Lab Website for more information!

Research Interests

Our research addresses the following topics:

  • Machine Learning for Language: How can neural language models comprehend language? What explains their success, and what are their limitations? For example, we have investigated the expressive power of transformer models (TACL 2020, ACL 2024) and probed the linguistic knowledge of language models (TACL 2019).

  • Computational Cognitive Science: How does the human mind process information? How does this process shape language? In recent work, we have augmented GPT-2 with memory limitations to examine what makes recursion difficult for humans (PNAS 2022a), proposed that grammar reflects pressures toward efficient language use (PNAS 2020, PNAS 2022b), and developed a unifying theory of biases in human perception across domains (Nature Neuroscience 2024).

Teaching

Publications

In Preparation
A theory of emergent in-context learning as implicit structure induction
Michael Hahn, Navin Goyal
arXiv preprint.
[bibtex] [pdf]
2024
A unifying theory explains seemingly contradicting biases in perceptual estimation
Michael Hahn, Xue-Xin Wei
Nature Neuroscience, 2024.
[bibtex] [pdf] [preprint] [code] [supplement]
More frequent verbs are associated with more diverse valency frames: Efficient language design at the lexicon-grammar interface
Siyu Tao, Lucia Donatelli, Michael Hahn
Proceedings of the 2024 Annual Conference of the Association for Computational Linguistics, 2024.
[bibtex]
Why are Sensitive Functions Hard for Transformers?
Michael Hahn, Mark Rofin
Proceedings of the 2024 Annual Conference of the Association for Computational Linguistics, 2024.
[bibtex] [pdf] [preprint]
2023
Modeling task effects in human reading with neural network-based attention
Michael Hahn, Frank Keller
Cognition, 2023.
[bibtex] [pdf] [preprint] [code]
A Cross-Linguistic Pressure for Uniform Information Density in Word Order
Thomas Hikaru Clark, Clara Meister, Tiago Pimentel, Michael Hahn, Ryan Cotterell, Richard Futrell, Roger Levy
Transactions of the Association for Computational Linguistics, 2023.
[bibtex] [pdf] [doi]
2022
A resource-rational model of human processing of recursive linguistic structure
Michael Hahn, Richard Futrell, Roger Levy, Edward Gibson
Proceedings of the National Academy of Sciences of the United States of America, 2022.
[bibtex] [pdf] [doi] [code] [supplement]
Crosslinguistic word order variation reflects evolutionary pressures of dependency and information locality
Michael Hahn, Yang Xu
Proceedings of the National Academy of Sciences of the United States of America, 2022.
[bibtex] [pdf] [preprint] [doi] [code] [supplement]
Morpheme ordering across languages reflects optimization for processing efficiency
Michael Hahn, Rebecca Mathew, Judith Degen
Open Mind: Discoveries in Cognitive Science, 2022.
[bibtex] [pdf] [doi] [code] [supplement]
Explaining patterns of fusion in morphological paradigms using the memory--surprisal tradeoff
Neil Rathi, Michael Hahn, Richard Futrell
Proceedings of the 44th Annual Meeting of the Cognitive Science Society (CogSci), 2022.
[bibtex] [pdf]
Modeling fixation behavior in reading with character-level neural attention
Songpeng Yan, Michael Hahn, Frank Keller
Proceedings of the 44th Annual Meeting of the Cognitive Science Society (CogSci), 2022.
[bibtex] [pdf]
Information theory as a bridge between language function and language form
Richard Futrell, Michael Hahn
Frontiers in Communication, 2022.
[bibtex] [pdf] [doi]
2021
Modeling word and morpheme order in natural language as an efficient tradeoff of memory and surprisal
Michael Hahn, Judith Degen, Richard Futrell
Psychological Review, 2021.
[bibtex] [pdf] [preprint] [doi] [code] [supplement] [slides]
Sensitivity as a complexity measure for sequence classification tasks
Michael Hahn, Dan Jurafsky, Richard Futrell
Transactions of the Association for Computational Linguistics, 2021.
[bibtex] [pdf] [preprint] [code] [slides]
An Information-Theoretic Characterization of Morphological Fusion
Neil Rathi, Michael Hahn, Richard Futrell
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021.
[bibtex] [pdf] [code]
2020
Universals of word order reflect optimization of grammars for efficient communication
Michael Hahn, Dan Jurafsky, Richard Futrell
Proceedings of the National Academy of Sciences of the United States of America, 2020.
[bibtex] [pdf] [code] [supplement]
Theoretical limitations of self-attention in neural sequence models
Michael Hahn
Transactions of the Association for Computational Linguistics, 2020.
[bibtex] [pdf] [preprint] [doi] [supplement] [slides]
RNNs can generate bounded hierarchical languages with optimal memory
John Hewitt, Michael Hahn, Surya Ganguli, Percy Liang, Christopher Manning
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), 2020.
[bibtex] [pdf] [preprint] [code]
2019
Tabula nearly rasa: Probing the linguistic knowledge of character-level neural language models trained on unsegmented text
Michael Hahn, Marco Baroni
Transactions of the Association for Computational Linguistics, 2019.
[bibtex] [pdf] [preprint] [code]
Estimating predictive rate-distortion curves via neural variational inference
Michael Hahn, Richard Futrell
Entropy, 2019.
[bibtex] [pdf] [code] [reviews]
Character-based surprisal as a model of human reading in the presence of errors
Michael Hahn, Frank Keller, Yonatan Bisk, Yonatan Belinkov
Proceedings of the 41st Annual Meeting of the Cognitive Science Society (CogSci), 2019.
[bibtex] [pdf] [preprint] [code]
2018
An information-theoretic explanation of adjective ordering preferences
Michael Hahn, Judith Degen, Noah Goodman, Dan Jurafsky, Richard Futrell
Proceedings of the 40th Annual Meeting of the Cognitive Science Society (CogSci), 2018.
[bibtex] [pdf] [code] [slides]
Wreath Products of Distributive Forest Algebras
Michael Hahn, Andreas Krebs, Howard Straubing
Logic in Computer Science (LICS), 2018.
[bibtex] [pdf] [preprint] [code] [slides]
2016
Modeling human reading with neural attention
Michael Hahn, Frank Keller
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2016.
[bibtex] [pdf] [code]
2015
Visibly Counter Languages and the Structure of NC¹
Michael Hahn, Andreas Krebs, Klaus-Jörn Lange, Michael Ludwig
Proceedings of Mathematical Foundations of Computer Science (MFCS) 2015, 2015.
[bibtex] [pdf]
Henkin Semantics for Reasoning with Natural Language
Michael Hahn, Frank Richter
Journal of Language Modeling, 2015.
[bibtex] [pdf] [code]
2014
Predication and NP Structure in an Omnipredicative Language: The Case of Khoekhoe
Michael Hahn
Proceedings of the 21st International Conference on Head-Driven Phrase Structure Grammar (Stefan Müller, ed.), CSLI Publications, 2014.
[bibtex] [pdf]
On deriving semantic representations from dependencies: A practical approach for evaluating meaning in learner corpora
Michael Hahn, Detmar Meurers
Dependency Theory (Kim Gerdes, Eva Hajičová, Leo Wanner, eds.), IOS Press, 2014.
[bibtex]
2013
CoMeT: Integrating different levels of linguistic modeling for meaning assessment
Niels Ott, Ramon Ziai, Michael Hahn, Detmar Meurers
Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval), 2013.
[bibtex] [pdf]
Word Order Variation in Khoekhoe
Michael Hahn
Proceedings of the 20th International Conference on Head-Driven Phrase Structure Grammar (Stefan Müller, ed.), 2013.
[bibtex] [pdf]
2012
Evaluating the Meaning of Answers to Reading Comprehension Questions: A Semantics-Based Approach
Michael Hahn, Detmar Meurers
Proceedings of the 7th Workshop on Innovative Use of NLP for Building Educational Applications (BEA7), Association for Computational Linguistics, 2012.
[bibtex] [pdf]
Arabic Relativization Patterns: A Unified HPSG Analysis
Michael Hahn
Proceedings of the 19th International Conference on Head-Driven Phrase Structure Grammar, 2012.
[bibtex] [pdf]
2011
Null Conjuncts and Bound Pronouns in Arabic
Michael Hahn
Proceedings of the 18th International Conference on Head-Driven Phrase Structure Grammar, 2011.
[bibtex]
On deriving semantic representations from dependencies: A practical approach for evaluating meaning in learner corpora
Michael Hahn, Detmar Meurers
Proceedings of the International Conference on Dependency Linguistics (Depling 2011), 2011.
[bibtex] [pdf]