New paper in Language, Cognition and Neuroscience: Understanding the role of linguistic distributional knowledge in cognition

A paper on which I am joint-first author, which I previously posted here as a preprint, has just (after a long time!) been published in the journal Language, Cognition and Neuroscience.

Take a look!

It should be open-access, but if for some reason you can't see it properly there, I can give you a copy.

"Have one fewer child"

I’ve been looking for suggestions of individual actions one can take to reduce CO2 emissions, in particular ones which actually make a big difference (unlike, e.g., switching to LED lightbulbs). Everywhere I look, all I see is "have one fewer child" dwarfing all other actions in effectiveness. It's all anyone talks about.

Graph showing the comparative impacts of different climate actions. "Have one fewer child" so overshadows the others that the y-axis has been clipped to accommodate it.
Wynes & Nicholas (2017, Environmental Research Letters)

The Web is full of figures like this one. It shows the one-fewer-child recommendation as equivalent to about 60 tonnes of CO2-equivalent per year, whereas the next-best options like "sell your cars", "stop flying" or "be vegan" are mostly in the 1–2 tonne range. The figure is so high that the graph's range had to be clipped and compressed just so you could see anything else. That looks pretty stark: it seems as if nobody could ever hope to live sustainably if they had even one child. They might as well be taking a long-haul flight every 10 days for the rest of their life! Now, we definitely shouldn't discount conclusions just because they are surprising or uncomfortable, but we should scrutinise them. That feeling of "…really?" is the first hint that something might not be right.
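
To show where that flight comparison comes from, here's the back-of-the-envelope arithmetic as a quick Python sketch. Both numbers are approximate: the ~60 tCO2e/year is the figure quoted above, and ~1.6 tCO2e is the estimate usually quoted for one round-trip transatlantic flight from the same paper.

```python
# Rough arithmetic behind the "flight every 10 days" comparison.
# Both figures are approximate estimates from Wynes & Nicholas (2017).
child_tco2e_per_year = 60.0   # "have one fewer child"
flight_tco2e = 1.6            # one round-trip transatlantic flight

flights_per_year = child_tco2e_per_year / flight_tco2e   # ~37.5 flights
days_between_flights = 365 / flights_per_year             # ~10 days

print(f"{flights_per_year:.1f} flights/year, i.e. one every {days_between_flights:.1f} days")
```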

New preprint: "Understanding the role of linguistic distributional knowledge in cognition"

I have recently submitted a paper based on some work I have been doing in my job at the Embodied Cognition Lab at Lancaster University. In it, we look at a large set of linguistic distributional models commonly used in cognitive psychology, evaluating each against a benchmark behavioural dataset.

Linguistic distributional models are computer models of knowledge, which learn representations of words and their associations from statistical regularities in huge collections of natural language text, such as databases of TV subtitles. The idea is that, just like people, these algorithms can learn something about the meanings of words by only observing how they are used, rather than through direct experience of their referents. To the degree that they do, they can then be used to model the kind of knowledge which people could gain in the same way. These models can be made to perform various tasks which rely on language, or predict how humans will perform these tasks under experimental conditions, and in this way we can evaluate them as models of human semantic memory.
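
To make the basic idea concrete, here is a toy sketch of a count-based distributional model, very much simpler than the models we actually evaluate: count how often each word co-occurs with others within a small window, then compare words by the cosine similarity of their count vectors. The corpus, window size and words here are made up purely for illustration.

```python
from collections import Counter, defaultdict
from math import sqrt

# Toy corpus standing in for the huge text collections real models learn from.
corpus = "the cat sat on the mat the dog sat on the rug".split()

WINDOW = 2  # count co-occurrences within +/- 2 words

# Build a word -> context-word count vector from co-occurrence statistics.
vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if j != i:
            vectors[word][corpus[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Words appearing in similar contexts end up with similar vectors.
print(cosine(vectors["cat"], vectors["dog"]))  # higher
print(cosine(vectors["cat"], vectors["on"]))   # lower
```

Even in this tiny example, "cat" and "dog" end up closer together than "cat" and "on", because they occur in similar contexts; that is the regularity these models exploit at scale.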

We show, perhaps unsurprisingly*, that different kinds of models are better or worse at capturing different aspects of human semantic processes.

A preprint of the paper is available on PsyArXiv.


*Unsurprising to you as you read this, perhaps, but this is actually the largest systematic comparison of such models yet undertaken, and thereby the first to effectively weigh the evidence on this question.

New(ish) paper: "Entrainment to the CIECAM02 and CIELAB colour appearance models in the human cortex"

Not so long ago I had a paper published in Vision Research.  It's on some work I did some years ago with my friend and collaborator Andrew Thwaites.  In it we look at the entrainment of magnetoencephalographic activity in early visual cortex to colour information in visual stimuli, using two competing computational models of colour.  In other words, we ask when and where people's brainwaves directly track the colour of moving images they see on a screen, under two theories of how colour could be represented in the brain.
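
If you're wondering what "entrainment" looks like in practice, here's a toy sketch of the general idea, with made-up data and not the actual analysis from the paper: describe each stimulus frame in a colour appearance space such as CIELAB, then ask at which lag that colour signal best correlates with a sensor's activity. I use CIELAB here simply because a converter is readily available in scikit-image; CIECAM02 would need a dedicated colour library.

```python
import numpy as np
from skimage.color import rgb2lab  # CIELAB conversion

# Stand-in data: stimulus frames (time x height x width x RGB in [0, 1])
# and one MEG sensor's time series sampled at the same rate as the frames.
rng = np.random.default_rng(0)
frames = rng.random((1000, 32, 32, 3))
meg = rng.standard_normal(1000)

# Describe each frame by its mean CIELAB a* value: one crude "colour information" signal.
a_star = np.array([rgb2lab(f)[..., 1].mean() for f in frames])

def lagged_correlation(stimulus, response, max_lag):
    """Correlate the stimulus signal with the response at a range of lags (in samples)."""
    lags = np.arange(0, max_lag + 1)
    r = np.array([
        np.corrcoef(stimulus[: len(stimulus) - lag], response[lag:])[0, 1] if lag else
        np.corrcoef(stimulus, response)[0, 1]
        for lag in lags
    ])
    return lags, r

lags, r = lagged_correlation(a_star, meg, max_lag=100)
print("peak correlation at lag", lags[np.argmax(np.abs(r))], "samples")
```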

The paper is in Elsevier's "open archive", which hopefully means you can read it for free.  If not, hit me up.

I don't talk about my work too much here, but if you're interested you can read more about what I do on my more professional website.

New paper: "Relating dynamic brain states to dynamic machine states: Human and machine solutions to the speech recognition problem"

Hey!

I just had a paper published in PLOS Computational Biology.  It's on some work I did with the Centre for Speech, Language and the Brain at Cambridge University.  In it, we used a machine model of speech recognition to map phonetic sensitivities in human auditory cortex using magnetoencephalography (MEG) data.
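
For a flavour of how a machine model's internal states can be related to brain recordings, here's a generic sketch with made-up data (one common approach, not necessarily the exact analysis in the paper): build a representational dissimilarity matrix over the same set of stimuli from the model's features and from the MEG response patterns, then ask how well the two geometries agree.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Stand-in data: for the same set of speech stimuli, a feature vector from
# the machine model's internal states and a response pattern from MEG sensors.
rng = np.random.default_rng(0)
model_features = rng.normal(size=(40, 20))   # 40 stimuli x 20 model dimensions
brain_patterns = rng.normal(size=(40, 300))  # 40 stimuli x 300 sensor/time features

# Representational dissimilarity matrices: pairwise distances between stimuli.
model_rdm = pdist(model_features, metric="correlation")
brain_rdm = pdist(brain_patterns, metric="correlation")

# Agreement between the two representational geometries, by rank correlation.
rho, p = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.3f}")
```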

The paper is open-access, so you can read it here.

If you're interested, you can read more about the kind of research I do over at my "professional" website.

Working as a postdoc at Lancaster University


I've just started a new job at Lancaster University, working as a postdoc in Louise Connell's lab.  I'll be looking at the roles of linguistic representations and sensory simulations in human cognition.  You can read more about the lab's research aims on its website.

Update: Louise has now moved to Maynooth University, and the old lab's page at Lancaster has ceased to exist. But you can still easily find out the kinds of things Louise and her colleagues work on from her new page.