When we listen to somebody talking, we not only rely on hearing,
we also see speech.
Talking faces carry speech information, which is helpful when
the acoustic information is degraded, e.g. in noisy surroundings,
or for the hearing-impaired. For instance, vision may give cues as
to when to listen for speech against a noisy background.
But also in normal, quiet conditions, vision interacts with
hearing. In the so-called McGurk effect, visual lip movements change the
way we hear an acoustic speech token. This is another phenomenon of
audiovisual integration, here changing the phonetic percept. But is
this due to vision changing the way the brain processes sensory
inputs from our ears, or is it due to the integration of two
conflicting pieces of phonetic information?
My project investigates what actually happens when vision
changes the way we hear.
To start answering this question, we need to know how early in
the underlying brain processes vision starts to merge with
hearing. To measure this, EEG is the perfect method, as it allows
accurate tracking of brain processes down to the millisecond.
The aim is to be able to map where hearing and vision meet in
the brain and how they interact.