A paper on which I am joint first author, previously posted here as a preprint, has just (after a long time!) been published in the journal Language, Cognition and Neuroscience.
Take a look!
It should be open-access, but if for some reason you can't see it properly there, I can give you a copy.
Telephone is a game in which participants whisper a phrase person-to-person and watch how it evolves as people guess at words they mishear.
The following music video for True Thrush takes this a step further, giving participants one shot to view and memorise a short video, before asking them to recreate it.
Telephone is entertaining because people's natural automatic error correction (the tendency to recognise and reproduce actual words) fights with the noisy communication channel of a quiet whisper. The True Thrush video is more about the unreliability of memory and creativity, and about which details seem salient.
I have recently submitted a paper based on work I have been doing at the Embodied Cognition Lab at Lancaster University. In it, we look at a large set of linguistic distributional models commonly used in cognitive psychology, evaluating each on a benchmark behavioural dataset.
Linguistic distributional models are computer models of knowledge, which learn representations of words and their associations from statistical regularities in huge collections of natural language text, such as databases of TV subtitles. The idea is that, just like people, these algorithms can learn something about the meanings of words only by observing how they are used, rather than through direct experience of their referents. To the degree that they do, they can then be used to model the kind of knowledge which people could gain in the same way. These models can be made to perform various tasks which rely on language, or to predict how humans will perform those tasks under experimental conditions, and in this way we can evaluate them as models of human semantic memory.
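To give a flavour of the idea, here is a minimal sketch of the simplest kind of distributional model: count how often words co-occur within a small window, treat each word's count profile as its vector, and compare words by cosine similarity. (The toy corpus and window size here are illustrative assumptions; the models in the paper are trained on vastly larger corpora and include more sophisticated variants.)

```python
from collections import defaultdict
from math import sqrt

# Toy corpus standing in for a large text collection (hypothetical data).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
]

WINDOW = 2  # how many words either side count as "co-occurring"

# Count how often each word appears near each other word.
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    for i, word in enumerate(sentence):
        lo, hi = max(0, i - WINDOW), min(len(sentence), i + WINDOW + 1)
        for j in range(lo, hi):
            if j != i:
                cooc[word][sentence[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (dicts)."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

# Words used in similar contexts end up with similar vectors:
sim_cat_dog = cosine(cooc["cat"], cooc["dog"])  # high: similar usage
sim_cat_on = cosine(cooc["cat"], cooc["on"])    # lower: different usage
```

Even on this tiny corpus, "cat" and "dog" come out more similar to each other than either is to a function word like "on", purely from patterns of usage, with no access to actual cats or dogs.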
We show, perhaps unsurprisingly*, that different kinds of models are better or worse at capturing different aspects of human semantic processes.
A preprint of the report is available on Psyarxiv.
*Unsurprising to you as you read this, perhaps; but this is in fact the largest systematic comparison of such models yet undertaken, and so the first to properly weigh the evidence on this question.
I'm curious/trepidatious about this recurrent theme in the icons of successful free-to-play iOS games.
Like, there must be a reason. But I'll bet it's either superstition or awful psychology.