
Brief summary of major research

How we learn to recognize objects has been a major focus of my research. The brain has to learn to recognize and categorize an object as the same over a wide range of circumstances that can lead to very different activation patterns across the sensory receptors. For example, the activation pattern across the retina for two different views of the same face can be very different, yet our perception and the neural responses far along the temporal visual pathway can be almost identical. The problem of transforming from the retinal representation to that observed in the higher visual cortical areas is the problem of invariant visual recognition. Much is known about the transformation from the retina to the primary visual cortex (V1) but relatively little is known about the transformation from V1 to the end of the temporal pathway. To understand invariant visual recognition, we need to understand the transformation that occurs within the cortex and how it develops.

To study the learning aspect of recognizing objects, I created a model of unsupervised learning of object recognition. In the model, correlations between different sensory modalities steer the learning process. Feedback connections play a crucial role in bringing processed information from the other sensory modalities, so I also study the enigmatic properties of cortical feedback connections (using physiology and modeling) as well as the properties of sensory modality integration (using psychophysics and modeling). Careful analysis of the "self-supervised" algorithm reveals that there are crucial architectural constraints for optimal performance. This has led me to examine other machine learning algorithms to look at similar issues of optimal architecture for learning.
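The idea that correlations between modalities can steer learning without external labels can be illustrated with a minimal sketch. This is a toy illustration under assumed synthetic data, not the actual model described above: each of two "modalities" sees a noisy one-dimensional view of the same underlying object class, and each modality's current labelling of the data supervises the other's classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "modalities": each sample has a view in each modality.
# Class 0 is centered at -1 and class 1 at +1 in both views,
# with independent noise per modality.
n = 500
labels = rng.integers(0, 2, n)            # true classes (never shown to learner)
centers = 2.0 * labels - 1.0              # -1 or +1
view_a = centers + 0.6 * rng.standard_normal(n)   # e.g. a "visual" feature
view_b = centers + 0.6 * rng.standard_normal(n)   # e.g. an "auditory" feature

# Each modality has a simple 1-D threshold classifier; start them misaligned.
thr_a, thr_b = 0.8, -0.8

# Self-supervised loop: each modality's labelling supervises the OTHER modality.
for _ in range(20):
    pred_a = (view_a > thr_a).astype(int)
    pred_b = (view_b > thr_b).astype(int)
    # Re-fit each threshold to lie midway between the class means implied
    # by the other modality's labelling.
    thr_a = 0.5 * (view_a[pred_b == 0].mean() + view_a[pred_b == 1].mean())
    thr_b = 0.5 * (view_b[pred_a == 0].mean() + view_b[pred_a == 1].mean())

# Evaluate against the true labels (allowing for a global label flip).
pred_a = (view_a > thr_a).astype(int)
accuracy = max((pred_a == labels).mean(), ((1 - pred_a) == labels).mean())
```

Despite starting from badly misaligned thresholds and never seeing a true label, both classifiers converge toward the correct decision boundary near zero, because agreement across modalities is only achievable when each modality tracks the shared underlying cause.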

To study the transformation along the cortical visual pathway in more detail, I design stimuli and analysis techniques to apply to neural recordings (collected by collaborators including my student Hsin-Hao). We have started studying receptive fields in V1 as a first step towards studying the computation (both feedforward and feedback) between the first two visual cortical areas (V1 and V2). We use models to constrain the search for effective stimuli and machine learning to extract more information from the physiological recordings.

To learn more about how these receptive fields give rise to perception, we try to design clever psychophysical experiments and infer underlying structure from the results. We study visual illusions to reveal the assumptions that the brain makes. We study aftereffects to reveal processing interactions; their properties can sometimes indicate the level at which different features interact. Finally, we study cross-modal interactions, as they can inform our model of self-supervised learning.

Most recently, we have started using machine learning and signal processing to improve the decoding of EEG signals for Brain-Computer Interfaces. In this work, we use our neuroscience knowledge to choose tasks that increase the signal strength, and our machine learning techniques to improve the decoding of the noisy signals. We hope that this work will eventually enable us to help those who are unable to interact directly with the world through their muscles.


Virginia de Sa 2007-08-10