Guess what?! Eye gaze and conversation in Parkinson’s research published

This month we published a paper in the International Journal of Language and Communication Disorders entitled “Gaze–speech coordination during social interaction in Parkinson’s disease”. The research used mobile eye tracking to examine how people with and without Parkinson’s use eye movements during spoken conversation. To get people talking, we asked them to play a card guessing game based on the children’s guessing game “HedBanz”, in which one player describes an object written on a card and the other player has to guess it. A link to the full open access publication is available here.

Previous research has shown that eye movements are important in signalling “turn taking” in conversation, whereby the speaker indicates to their conversation partner that the end of a speech turn is coming. Although you might not be aware of doing it, we often direct our gaze towards the other person’s face to indicate that it is their turn to speak next. Other work by myself and others has shown how the voluntary control of eye movements is affected in Parkinson’s disease. Taking these findings together, we wondered whether there were differences in how patients used their eyes during speech and conversation.

The results showed that people with Parkinson’s tend to make longer periods of eye fixation, both on the other person’s face and elsewhere. They did less well when describing cards to the other player, suggesting problems with speech production, but guessed just as many objects when listening to someone else describing, suggesting that their condition didn’t affect their ability to understand others. We also found that the timing of speech turns was subtly different when a patient was playing the game, with a tendency towards shorter gaps and more interruptions, indicating that people with Parkinson’s may be slightly more impulsive and sometimes “jump in” to interrupt others.

The results confirm something I have personally noticed about people with Parkinson’s over the years: that they sometimes have a subtly different pattern of gaze during conversation, including extended periods of eye contact. This can be slightly disconcerting if you are not aware of it and, together with phenomena like reduced facial expression, might adversely affect social interaction and communication. We think that wider public knowledge and awareness of some of these more subtle features of the condition could itself improve the quality of life and social connectedness of people with Parkinson’s.

The research was conducted in collaboration with Gemma Ezard (Lincolnshire NHS) and Frouke Hermans (formerly University of Lincoln, now at the Open University in the Netherlands) and was funded by a BA/Leverhulme small grant (Ref: SG152231).

Eyelander at CVRS London 2023

Last week I attended the 18th Biennial Child Vision Research Society meeting at the UCL Great Ormond Street Institute of Child Health. The meeting included a symposium in honour of the late Oliver Braddick, who co-founded the UCL/Oxford Visual Development Unit with Janette Atkinson; as part of the session, she gave an impressive overview of the unit’s pioneering work on infant vision over the decades.

Amongst the many other highlights of the meeting, Cathy Williams (University of Bristol) presented evidence for sub-types of Cerebral Visual Impairment (CVI) in children. It has recently become apparent that a surprisingly high proportion of children in mainstream classrooms may have such brain-based visual problems.

My own talk about the Eyelander game was the last presentation of the conference, but everyone stayed right to the end and seemed to really enjoy it! Eyelander is a gamified version of compensatory visual search training for children with loss of vision on one side (hemianopia). Selective loss of vision to the left or right can occur following brain injury or neurosurgery, but may also be found in many children with CVI (see above).

I really enjoyed the CVRS meeting. It was great to meet so many new people. In fact, it was one of the friendliest and most enjoyable meetings I’ve ever been to, so I will be sure to go again in two years’ time!

Eyelander game at VIEW conference

Recently I attended the VIEW conference for visual impairment (VI) specialist teachers and professionals in Birmingham, to raise awareness of our Eyelander online game for children with partial visual field loss (hemianopia).

Eyelander is based on visual search training programmes that have been shown to be effective in improving functional visual abilities in adults with homonymous visual field loss following stroke. In the game you have to search for coloured shapes amongst “distractor” shapes, with the task difficulty varying as the game progresses. As you complete more searches, your character (the “Eyelander”) makes progress in escaping a volcanic desert island.

The game is available to play via a web browser, either on a computer or laptop with a mouse or on a touch-screen device. Its efficacy was validated in a published small-scale trial with a group of children and young people with well-defined hemianopia, which suggested that playing the game every day over a period of 4 to 6 weeks can lead to improved visual abilities.

There was lots of interest in the game from VI teachers, who feel that partial visual field loss and problems seeing on one side are very common among the children they work with. Estimates suggest that many children have some form of brain-based visual impairment (also known as Cerebral Visual Impairment or CVI), and many of these may have visual field problems. Given this, we are keen for more parents, children, teachers and others working with children with VI to know about Eyelander and give it a go.

So visit and register to play!

Visual Attention and Palaeolithic Stone Tools

This month we published a study describing how people direct eye movements while viewing ancient stone tools. The work was led by Maria Silva Gago, Emiliano Bruner and other researchers from CENIEH, University of Burgos, Spain, together with myself and former Lincoln PhD student Flora Ioannidou (now at the University of Aberdeen).

The research used eye tracking and the “mouse click” attention tracking technique to measure which areas of stone tools attracted people’s interest most. Participants were shown high-resolution photographs of Palaeolithic hand axes (example right), along with much more ancient (approximately 2-million-year-old) roughly worked tools (example below). We found that people’s attention was drawn to the “knapped” surfaces of the tools, as well as to areas such as the base where the tool would normally be grasped. This was the case even though participants were just viewing pictures, not handling the objects, and weren’t told that the objects were tools.

We also ran the images through a computer model that calculated the “salience” of different regions of the images. Some very influential models of vision suggest that attention is simply attracted to regions that stand out visually from the background (i.e. are more “salient”), but the computer model did not explain our results very well. Instead, we think it was how the different parts of the tool might be grasped and used, rather than just how visually striking each part was, that directed the viewers’ eyes.
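For readers curious about what a salience model actually computes, here is a minimal toy sketch in Python. It uses a simple centre–surround intensity contrast, which is only one ingredient of full salience models (such as Itti and Koch’s, which also combine colour and orientation channels across scales) and is not the specific model used in our study:

```python
import numpy as np

def salience_map(image, surround=8):
    """Crude salience: how much each pixel's intensity differs from
    the mean intensity of its local surround (a toy centre-surround
    contrast, normalised to the range 0-1)."""
    img = image.astype(float)
    h, w = img.shape
    sal = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            # Clip the surround window at the image borders
            y0, y1 = max(0, y - surround), min(h, y + surround + 1)
            x0, x1 = max(0, x - surround), min(w, x + surround + 1)
            sal[y, x] = abs(img[y, x] - img[y0:y1, x0:x1].mean())
    return sal / sal.max() if sal.max() > 0 else sal

# A bright patch on a dark background "stands out" and so scores high
img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0
sal = salience_map(img)
```

On this kind of model, attention is predicted to go wherever the map peaks; our finding was that viewers’ fixations on the tools were not well predicted by such bottom-up contrast alone.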

The results support the theory of object–action affordances proposed by James J. Gibson in the 1970s. He suggested that the brain very quickly detects features of objects that are relevant for action, priming a tendency for us to carry out the associated action. One possibility is that, through evolution, our ancestors’ brains became increasingly sophisticated at detecting action affordances in stone objects. This led in turn to the manufacture of more sophisticated tools with enhanced features designed to activate those affordances, which may in turn have driven further development of the brain’s object–action affordance network in an ongoing process of coevolution between human cognitive capacities and tool complexity.

The full paper is published in the journal Perception and can be read online here.

Some example eye movement sequences recorded in the study are shown below.