A Measure-Theoretic Characterization of Tight Language Models

Published in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

Paper: https://aclanthology.org/2023.acl-long.543/

Language modeling, a central task in natural language processing, involves estimating a probability distribution over strings. In most cases, the estimated distribution sums to 1 over all finite strings. However, in some pathological cases, probability mass can “leak” onto the set of infinite sequences. In order to characterize the notion of leakage more precisely, this paper offers a measure-theoretic treatment of language modeling. We prove that many popular language model families are in fact tight, meaning that they will not leak in this sense. We also generalize characterizations of tightness proposed in previous works.
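To make the notion of leakage concrete, below is a minimal Python sketch, not taken from the paper, of a toy language model whose only choices at each step are "emit a symbol" or "emit EOS". The helper finite_mass and both EOS schedules are illustrative assumptions. When the per-step EOS probability is bounded below by a constant, the mass on finite strings tends to 1 and the model is tight; when it decays quickly enough that its sum converges, the survival product stays positive and a constant fraction of the mass leaks onto infinite sequences.

def finite_mass(eos_prob, num_steps=10_000):
    """Probability mass a toy LM places on finite strings, truncated
    after num_steps generation steps (illustrative only)."""
    survive = 1.0  # probability that EOS has not yet been emitted
    mass = 0.0
    for t in range(1, num_steps + 1):
        p = eos_prob(t)          # P(EOS at step t | no EOS so far)
        mass += survive * p      # string terminates exactly at step t
        survive *= 1.0 - p       # string continues past step t
    return mass

# Tight: a constant EOS probability drives the finite-string mass to 1.
print(finite_mass(lambda t: 0.1))              # ~1.0

# Non-tight: P(EOS at step t) = 2^-(t+1) decays so fast that sum_t p_t
# converges; only ~0.42 of the mass falls on finite strings, and the
# remaining ~0.58 "leaks" onto infinite sequences.
print(finite_mass(lambda t: 2.0 ** -(t + 1)))  # ~0.42

The dichotomy in this toy example mirrors the kind of condition the paper generalizes: tightness holds whenever the per-step EOS probabilities are bounded below by a sequence whose sum diverges.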

@inproceedings{du-etal-2023-measure,
    author = {
        Li Du and
        Lucas Torroba Hennigen and
        Tiago Pimentel and
        Clara Meister and
        Jason Eisner and
        Ryan Cotterell
    },
    booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
    title = {A Measure-Theoretic Characterization of Tight Language Models},
    year = {2023},
    url = {https://aclanthology.org/2023.acl-long.543/},
    pages = {9744--9770},
    publisher = {Association for Computational Linguistics},
    address = {Toronto, Canada},
}