
Presenter: David Medler
Title: Using Contrastive Hebbian Learning to Model Early Auditory Processing
Abstract:
We present a model of early auditory processing using the Symmetric Diffusion Network (SDN) architecture, a class of multi-layer, parallel distributed processing models based on the principles of continuous, stochastic, adaptive, and interactive processing. From a computational perspective, an SDN can be viewed as a continuous version of the Boltzmann machine; that is, time is intrinsic to the dynamics of the network. Furthermore, SDNs embody Bayesian principles in that they develop internal representations based on the statistics of the environment. One of the main advantages of SDNs is that they can learn probabilistic mappings (i.e., mappings from m → n, where m ≪ n) for a single input pattern, a task impossible for many other classes of neural networks. SDNs are trained using the Contrastive Hebbian Learning (CHL) algorithm, which is based on positive and negative learning phases. The basic model has been trained on two separate tasks: (i) a signal detection task, and (ii) a phonetic/nonphonetic discrimination task. In the signal detection task, the model captured the accuracy data of human participants but only grossly approximated their reaction time data. Reanalysis of the human data, however, showed that the network correctly predicted reaction times in the early phases of the experiment. In the phonetic/nonphonetic discrimination task, the network showed both categorical and continuous perception of the stimuli. Importantly, the model predicted learning curves for categorical perception of nonphonetic stimuli that were subsequently confirmed in a human learning study. It is concluded that this simple type of network, based on correlational learning, can effectively model early auditory processing.
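To make the positive/negative phase idea concrete, the following is a minimal Python/NumPy sketch of a generic contrastive Hebbian update on a small symmetric network. The layer sizes, learning rate, and deterministic sigmoid settling are illustrative assumptions; this is not the continuous, stochastic SDN dynamics described in the abstract, only the shape of the CHL weight update (Hebbian in the clamped "plus" phase, anti-Hebbian in the free "minus" phase).

```python
# Sketch of Contrastive Hebbian Learning (CHL) on a small symmetric network
# with one hidden layer. Deterministic settling is used here for simplicity;
# the SDN in the talk uses continuous, stochastic dynamics.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 8, 2
W_ih = rng.normal(scale=0.1, size=(n_in, n_hid))   # input  <-> hidden (symmetric weights)
W_ho = rng.normal(scale=0.1, size=(n_hid, n_out))  # hidden <-> output (symmetric weights)
eta = 0.1          # learning rate (illustrative)
settle_steps = 30  # relaxation steps per phase

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def settle(x, y_clamped=None):
    """Relax hidden (and, in the free phase, output) units toward equilibrium."""
    h = np.zeros(n_hid)
    y = np.zeros(n_out) if y_clamped is None else y_clamped
    for _ in range(settle_steps):
        h = sigmoid(x @ W_ih + y @ W_ho.T)   # hidden receives input plus output feedback
        if y_clamped is None:
            y = sigmoid(h @ W_ho)            # output free to settle in the minus phase
    return h, y

def chl_update(x, t):
    global W_ih, W_ho
    # Negative (free) phase: clamp inputs only, let hidden and output settle.
    h_minus, y_minus = settle(x)
    # Positive (clamped) phase: clamp both inputs and target outputs.
    h_plus, y_plus = settle(x, y_clamped=t)
    # CHL update: plus-phase co-activity minus minus-phase co-activity.
    W_ih += eta * (np.outer(x, h_plus) - np.outer(x, h_minus))
    W_ho += eta * (np.outer(h_plus, y_plus) - np.outer(h_minus, y_minus))

# Toy usage: associate a 4-unit input pattern with a 2-unit output pattern.
x = np.array([1.0, 0.0, 1.0, 0.0])
t = np.array([1.0, 0.0])
for _ in range(200):
    chl_update(x, t)
print(settle(x)[1])  # free-phase output should move toward the target
```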