I am very honoured to have been invited to be Branch President of the Lincoln and District Branch of Parkinson's UK for 2017–18.
I have been working with members of the local group over the last four years investigating the control of eye movements in Parkinson's. This has been a long-standing research interest, off and on, since my days as a post-doctoral research fellow at Charing Cross Hospital in London.
Our research has shown that there is a particular “marker” of Parkinson's in eye movements, namely “multi-stepping” or jerkiness of eye movements, measurable under certain conditions with a computerised eye tracker. In other situations, some people with Parkinson's appear to be slightly more distractible and to organise their eye movements less efficiently than healthy people when carrying out problem-solving and memory tasks.
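To give a flavour of what detecting multi-stepping in eye-tracker data might look like, here is a minimal Python sketch. The velocity threshold, sampling assumptions and trace shapes are illustrative only, not our actual analysis pipeline:

```python
def count_saccade_steps(positions, times, velocity_threshold=30.0):
    """Count discrete sub-movements ("steps") in an eye-position trace.

    positions: eye position samples in degrees of visual angle.
    times: sample timestamps in seconds.
    A step is counted each time eye velocity (deg/s) rises above the
    threshold after having been below it, so a smooth single saccade
    yields 1 and a multi-stepping (staircase-like) movement yields >1.
    """
    steps = 0
    moving = False
    for i in range(1, len(positions)):
        dt = times[i] - times[i - 1]
        velocity = abs(positions[i] - positions[i - 1]) / dt
        if velocity > velocity_threshold and not moving:
            steps += 1       # velocity crossed the threshold: new sub-movement
            moving = True
        elif velocity <= velocity_threshold:
            moving = False   # eye has paused; the next crossing is a new step
    return steps
```

Run on a smooth single saccade this returns 1; on a staircase-like trace it counts each sub-movement separately, which is one simple way the “jerkiness” marker could be quantified.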
In the future, the research has the potential to help in the early diagnosis of Parkinson's and the assessment of cognitive impairments in Parkinson's, as well as helping people with Parkinson's understand the subtle ways in which the condition might affect them beyond the obvious symptoms seen in other sorts of movement.
I’ve also observed that people with Parkinson's are extraordinarily nice and generous, with a real enthusiasm for research. I continue to do whatever I can in my own small way to go the extra mile (or 20!) to support them, so I was very pleased to accept the appointment as Branch President. I am looking forward to meeting established and new members at the Annual General Meeting next month in Bracebridge Heath and giving a short update on recent research.
We have a paper published this month in the journal Experimental Brain Research, based upon two years of data collected at the Lincoln Summer Scientist event, which looked at how children’s saccadic eye movements are affected by directional socio-biological cues (see previous posts here and here).
We report results from 137 children who performed a pro-saccade task presented as a computer game, in which they had to keep their eyes on a cartoon bee that jumped unpredictably from the middle to the left or right of a computer screen. The EyeLink II system was used to examine how quickly and accurately the children followed the bee while pictures of arrows and photos of pointing hands and eyes appeared in the middle of the screen just before the “busy bee” character moved (see YouTube video).
We found that children were distracted by the direction of the pointing pictures, such that their eyes moved more quickly towards the cartoon bee when she jumped in the same (congruent) direction as the pointing finger, eye or arrow rather than the opposite (incongruent) direction. Interestingly, for the youngest group of children (3–5 years) this effect was strongest for pointing fingers; only older children showed the effect for eyes and arrows. The paper makes the case that children have to learn to link what they see in the world around them with the direction of interesting information and events. One of the first “cues” to attention that young children learn may be the direction of an adult’s pointing index finger.
Another interesting finding was that when eye-gaze cues overlapped with the onset of the peripheral target bee, the youngest group of children made a large proportion of “omission errors”: they missed the target completely and didn’t make a saccade at all. I was involved in the testing myself at Summer Scientist and found this feature of young children’s behaviour particularly fascinating. It seems strikingly similar to the stimulus “extinction” and neglect seen in adult stroke patients. Rather than simply not moving their eyes, I think 3–4 year olds didn’t “see” the bee under these conditions, and this is something I hope to follow up at future Summer Scientist weeks.
The paper is fully open access and available here. See here for another recent Experimental Brain Research study I’ve only recently seen, by a group in Oslo, showing ERP responses to finger-pointing cues in babies.
Last month we published a paper in Frontiers in Human Neuroscience. This was the latest, and last, functional magnetic resonance imaging (fMRI) study to come out of a Wellcome Trust funded project run by myself (whilst at the University of Exeter) and Dr Ben Parris (now at Bournemouth), investigating the organisation and function of cognitive control processes in the human frontal cortex.
Humans are perhaps uniquely able to represent information in an abstract way, which allows us to generalise rules and knowledge across different situations and tasks. For example, we might learn to get a reward by pressing a button on the left with a finger when we see a blue shape, and a button on the right when we see a yellow shape. But we can also generalise the rule to different situations: instead of pressing a left or right button, we can make an eye movement to the left or right when we see the right colour… or say the word “left” or “right”… or wiggle the toes on our left or right foot!
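The separation between an abstract rule and the many concrete actions it can drive can be sketched in a few lines of Python. This is purely illustrative (the effector names and response strings are my own invention, not anything from the study):

```python
# The abstract rule: a mapping from stimulus colour to an abstract side.
RULE = {"blue": "left", "yellow": "right"}

# Effector-specific "output pathways": each turns the abstract side into
# a concrete action. The same rule generalises across all of them.
EFFECTORS = {
    "finger": lambda side: f"press {side} button",
    "eye":    lambda side: f"saccade {side}",
    "voice":  lambda side: f"say '{side}'",
    "toes":   lambda side: f"wiggle {side} toes",
}

def respond(colour, effector):
    """Apply the abstract colour-to-side rule, then route it to one effector."""
    side = RULE[colour]          # abstract decision, shared by every effector
    return EFFECTORS[effector](side)  # concrete, effector-specific output
```

So `respond("blue", "eye")` gives `"saccade left"` while `respond("blue", "toes")` gives `"wiggle left toes"`: one rule, many outputs. The puzzle discussed below is how networks of real neurons, which only wire specific inputs to specific outputs, could implement this separation.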
This problem might seem trivial at first, but it is actually very difficult to see how neurons in the brain could achieve it, when ultimately they just make connections (“synapses”) linking inputs (a sensory stimulus) with outputs (a specific muscle movement). Imagine a neuron that truly represented the abstract concept “blue things = leftness”. The only way this clever “concept neuron” could make us contract just our left interosseous muscle (a muscle in the index finger), or just our lateral extraocular muscle, would be to have synapses that change their configuration completely within a fraction of a second, so that signals were sent only down the connections to the finger or the eye muscles. Otherwise the signals from the neuron would either do nothing or have to make our whole body (including our toes!) move left. The idea that cells can change their synapses very rapidly (<1 second) is called neural functional pleomorphism, and we don’t think it happens in the human brain.
In our study we had participants perform a blue/yellow colour to left/right response rule-switching task in the brain scanner. Sometimes they had to make just an eye movement, and at other times just a left/right button press. We found areas of the brain that responded just for eyes and just for hands, as you might expect. But other areas in the prefrontal cerebral cortex didn’t care about the response and showed activity more related to the rules. Yet when we zoomed in on these same areas and analysed the pattern of activity at a finer scale during hand and eye “epochs”, there were significant differences between the two. So what’s going on?
The answer, we think, is that there aren’t any “concept neurons” in your brain. Instead, rules and concepts are embedded in the distributed pattern of activity across networks of neurons. Some individual cells might prefer eye movements to the right when we see a blue square, others may send signals to our left finger when we see a blue circle, and still others make connections relating to all sorts of different versions and combinations of the general rule “blue things = left”. When all the neurons representing the various examples of the general rule work together, that is when we experience ourselves thinking about the general rule as a concept. But in order to actually do anything useful, only the sub-set of neurons coding one specific “exemplar” of the rule is active.
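The logic of reading information from fine-grained patterns, as in the epoch analysis above, can be illustrated with a toy correlation analysis. All the numbers here are made up, and the paper's actual analysis is far more sophisticated; the point is only that a region can look identical on average across two conditions while its pattern still tells them apart:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length activity patterns."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def classify(trial, templates):
    """Assign a trial's pattern to the condition whose template it
    correlates with most strongly (a minimal pattern classifier)."""
    return max(templates, key=lambda name: pearson(trial, templates[name]))

# Toy "voxel" patterns for one region (invented numbers): both conditions
# drive the region to the SAME average level of activity (0.5) ...
templates = {
    "hand": [0.9, 0.1, 0.8, 0.2],
    "eye":  [0.1, 0.9, 0.2, 0.8],
}
# ... yet the spatial pattern still separates hand from eye epochs.
```

A univariate analysis comparing the region's mean signal would call the two conditions identical; the correlation classifier, looking at the pattern across voxels, distinguishes them easily.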
So there is no single place in our brain where task rules can be said to be represented and exist!
The actual paper is less philosophical, but it is Open Access and you can download it here.