Exploring the functional organization and computational properties of human auditory cortex
By S. Norman-Haignere

Sam Norman-Haignere talks to ADASP about two of his studies that provide a framework for understanding the organization and computations of human auditory cortex.


Humans derive a remarkable amount of information from sound. Auditory cortex is critical to this process, but its organization and functional properties remain controversial. In this talk, I describe two studies that provide a framework for understanding the organization and computations of human auditory cortex. The first study tested whether a standard model, based on tuning for spectrotemporal modulations in a spectrogram, can account for human cortical responses to natural sounds. We used ‘model-matched’ stimuli to test the model: for each natural sound, we synthesized a new sound constrained to yield the same neural response under the model. In primary auditory cortex, responses to natural and model-matched sounds were nearly equivalent, but model-matched sounds produced little response in non-primary regions, likely because they lack the higher-order structure that these regions are sensitive to and that the spectrotemporal model does not make explicit. This finding suggests a hierarchy in which non-primary regions compute higher-order properties from responses to spectrotemporal features in primary regions.
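The model-matching logic can be sketched in a toy form. Here the "model response" is coarsely pooled 2-D modulation power of a spectrogram, and a model-matched spectrogram is synthesized by imposing the target's Fourier magnitudes on random noise, so that target and synthetic yield the same response under this toy model while differing in their detailed structure. The function names, the pooling scheme, and the magnitude-imposition step are illustrative assumptions, not the actual procedure used in the study.

```python
import numpy as np

def model_response(spec, n_bands=4):
    """Toy 'spectrotemporal model': pool 2-D modulation power of a
    spectrogram into coarse bands of its 2-D Fourier transform."""
    power = np.abs(np.fft.fft2(spec)) ** 2
    f_edges = np.linspace(0, power.shape[0], n_bands + 1).astype(int)
    t_edges = np.linspace(0, power.shape[1], n_bands + 1).astype(int)
    resp = np.empty((n_bands, n_bands))
    for i in range(n_bands):
        for j in range(n_bands):
            resp[i, j] = power[f_edges[i]:f_edges[i + 1],
                               t_edges[j]:t_edges[j + 1]].mean()
    return resp

def synthesize_matched(target_spec, n_iter=10, seed=0):
    """Build a 'model-matched' spectrogram: start from noise and
    repeatedly impose the target's 2-D Fourier magnitudes while
    keeping the noise phases, so the toy model response is matched."""
    rng = np.random.default_rng(seed)
    synth = rng.standard_normal(target_spec.shape)
    target_mag = np.abs(np.fft.fft2(target_spec))
    for _ in range(n_iter):
        phase = np.exp(1j * np.angle(np.fft.fft2(synth)))
        synth = np.real(np.fft.ifft2(target_mag * phase))
    return synth
```

In this toy version the match is exact because the model response depends only on Fourier magnitudes; in the actual study, synthesis had to be iterative because the model's filter-bank responses are a nonlinear function of the sound.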

In the second study, we identified primary dimensions of tuning in auditory cortex by modeling voxel responses to natural sounds as a weighted combination of a small number of canonical response profiles (‘components’). Although the analysis was not constrained by prior functional hypotheses, it revealed six components, each with interpretable response properties. Four components reflected selectivity for acoustic features captured by the spectrotemporal model. The other two were highly selective for speech and music and could not be explained by tuning for spectrotemporal modulations. The music-selective component spatially overlapped with other components, diluting the music selectivity evident in raw voxel responses and potentially explaining why prior studies have not observed such selectivity. Anatomically, music and speech selectivity were concentrated in distinct non-primary regions, suggesting that representations of speech and music diverge along separate non-primary pathways.