Language models assign probability to sequences of words. They have many applications, including machine translation, predictive text on smartphones, and information retrieval, though I’m most familiar with them through speech recognition.
For many years, the probabilities of N-grams – that is, sequences of N words – have been estimated by counting occurrences in a text corpus.
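As an illustrative sketch (not any particular toolkit), a bigram model can be estimated by counting each word pair and dividing by the count of the history word – the maximum-likelihood estimate; real systems also apply smoothing to handle unseen N-grams:

```python
from collections import Counter

def bigram_probabilities(corpus):
    """Estimate bigram probabilities P(w2 | w1) by counting occurrences."""
    tokens = corpus.split()
    # Count history words (every token except the last can start a bigram).
    unigram_counts = Counter(tokens[:-1])
    # Count adjacent word pairs.
    bigram_counts = Counter(zip(tokens, tokens[1:]))
    # Maximum-likelihood estimate: count(w1, w2) / count(w1).
    return {
        (w1, w2): count / unigram_counts[w1]
        for (w1, w2), count in bigram_counts.items()
    }

probs = bigram_probabilities("the cat sat on the mat the cat ran")
print(probs[("the", "cat")])  # 2 of the 3 'the' tokens are followed by 'cat'
```

In practice these raw counts are smoothed (for example with Kneser–Ney discounting) so that unseen word pairs receive non-zero probability.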
One of the key problems for speech recognition is obtaining text that represents the way we speak. The web and other archived resources contain a large amount of written text, but probabilities estimated from it do not match the way people actually speak: ungrammatically, with hesitations, corrections, and fillers such as “um”, “ah” and “er”. Obtaining transcribed spontaneous speech is much more expensive and labour-intensive.
More recently, neural network models have had some success in language modelling, and a publicly released toolkit is available. The amount of data available for language modelling has also increased, and Google has recently released a 1 billion word language model project.