Lloyd Watts: Reverse-Engineering the Human Auditory Pathway - WCCI 2012 Plenary Talk


Abstract: By 2003, we had a good understanding of the characterization of sound carried out in the cochlea and auditory brainstem, and computer models capable of running these processes in isolation at near-biological resolution in real time. By 2007, these advances had permitted the development of products in the area of two-microphone noise reduction for mobile phones, which led to a viable business by 2010. From 2003 to 2011, new fMRI, multi-electrode, and behavioral studies have illuminated the cortical brain regions responsible for separating sounds in mixtures, understanding speech in quiet and noisy environments, producing speech, recognizing speakers, and understanding music. Over the same period, advances in computing and visualization hardware have permitted more advanced models of auditory brain processes to be simulated and displayed simultaneously, giving a rich perspective on the concurrent and interacting representations of sound and meaning that are developed and maintained in the brain. While much remains to be discovered and implemented in the next 15 years, we can show demonstrable progress on the scientifically ambitious and commercially important goal of reverse-engineering the human auditory pathway.
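The cochlear characterization of sound mentioned above is, at its core, a frequency decomposition along the basilar membrane. A common textbook approximation of that stage is a gammatone filterbank with center frequencies spaced on the ERB (equivalent rectangular bandwidth) scale. The sketch below is only an illustrative simplification of that idea, not Watts's near-biological-resolution model; the function names, channel count, and parameter values are my own assumptions, and the filtering is done by direct convolution with gammatone impulse responses for clarity rather than efficiency.

```python
import numpy as np

def erb_space(low_hz, high_hz, n):
    """n center frequencies between low_hz and high_hz, spaced on the
    ERB-rate scale (Glasberg & Moore constants), returned high-to-low."""
    ear_q, min_bw = 9.26449, 24.7
    lo = ear_q * min_bw
    return -lo + np.exp(
        np.arange(1, n + 1) / n
        * (np.log(low_hz + lo) - np.log(high_hz + lo))
    ) * (high_hz + lo)

def gammatone_ir(fc, fs, dur=0.025, order=4):
    """Impulse response of a 4th-order gammatone filter centered at fc:
    t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t), with b tied to the ERB of fc."""
    t = np.arange(0.0, dur, 1.0 / fs)
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)   # ERB bandwidth at fc (Hz)
    b = 1.019 * erb                            # conventional bandwidth scaling
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))

def cochleagram(x, fs, n_channels=32):
    """Filter signal x through the bank; returns (center_freqs, channels)."""
    cfs = erb_space(100.0, 0.9 * fs / 2, n_channels)
    out = np.stack([np.convolve(x, gammatone_ir(fc, fs), mode="same")
                    for fc in cfs])
    return cfs, out
```

Feeding a pure tone through the bank concentrates energy in the channel whose center frequency is nearest the tone, which is the basic place-coding behavior the cochlear models exploit. For production use, SciPy's `scipy.signal.gammatone` provides a ready-made filter design.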
