Next: About this document ... Up: Statement of major areas Previous: Computational Modeling

Future

My research plan is to continue a multi-pronged approach to understanding the nature of human perception and learning. I plan to continue my studies of cortical computation and the theoretical underpinnings of machine learning algorithms. We will also continue to use machine learning techniques to study neural coding, both through analysis of data collected by Hsin-Hao in Professor Sereno's laboratory and through a planned new collaboration with the Neurophysiology laboratory of Professor Chiba. In addition, I plan to expand our use of visual and multi-sensory psychophysics as another window onto the neural basis of perception.

To continue investigating the hypothesis that sensory modalities interact for learning, I am developing experiments to test the effect of auditory stimuli on the perception of visual stimuli and vice versa. We will examine how different sensory modalities interact in classification tasks and aftereffects and compare the results with the predictions of our modeling work. We are starting to examine whether an auditory stimulus can bias the perception of a synchronous visual stimulus in a dichoptic viewing situation (where one eye views the synchronous visual stimulus and the other eye views a phase-shifted version). These matching experiments should shed light on the nature of crossmodal integration.

I am also interested in psychophysical experiments to investigate the striking difference between cross-modal contingent aftereffects and contingent aftereffects within one modality. In a cross-modal contingent aftereffect, prior exposure to paired stimuli leads to an increased probability of perceiving the partner stimulus in Modality B when its mate is presented to Modality A. For example, after prolonged exposure to red stimuli occurring with high-frequency tones and green stimuli with low-frequency tones, there is a small but significant tendency to perceive stimuli occurring with a high tone as slightly reddish. In contrast, contingent aftereffects within a single modality, such as the McCollough effect, show the opposite pattern: prior exposure to red-on-black vertical bars and green-on-black horizontal bars leads to a strong, long-lasting perception that white (on black) vertical bars are greenish. I wish to explore the reasons for these opposite forms of interaction to help clarify the special nature of cross-modal interaction and integration. This work is important because it will help uncover the details of cortical computation. Because cross-modal interactions in cortex require some feedback component, it may also shed more light on the properties and purpose of cortical feedback connections. In fact, a recent abstract at the Society for Neuroscience meeting shows that when synesthetes are shown McCollough-effect-style stimuli with black letters (which they perceive as colored through their synesthesia) in place of the colored bars, the McCollough effect is inverted. This fits with the idea that associations brought together through feedback (or cross-area) connections are treated exactly opposite to those brought together through feedforward connections.

A complementary theoretical study is to examine computationally the relative benefits of self-supervised (or co-training) style algorithms versus clustering in the joint space. This work will build on my recent multi-view spectral clustering work, which makes it straightforward to compare clustering in the joint space against minimizing the number of times co-occurring patterns appear in differently labeled clusters. I hope to compare the two methods under different statistical data distributions.
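As a starting point for that comparison, the two approaches can be sketched on synthetic two-view data (assuming scikit-learn is available). Here the elementwise product of per-view affinities is an illustrative stand-in for the multi-view objective, chosen because it keeps only pairs that agree in both views; it is not the published algorithm.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import adjusted_rand_score
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Two well-separated clusters, each observed through two "views"
# (e.g. an auditory and a visual feature vector for the same event).
n = 100
labels_true = np.repeat([0, 1], n)
view1 = np.vstack([rng.normal(0, 0.5, (n, 2)), rng.normal(3, 0.5, (n, 2))])
view2 = np.vstack([rng.normal(0, 0.5, (n, 2)), rng.normal(3, 0.5, (n, 2))])

# (a) Clustering in the joint space: concatenate the views.
joint = np.hstack([view1, view2])
labels_joint = SpectralClustering(n_clusters=2, affinity="rbf",
                                  random_state=0).fit_predict(joint)

# (b) Simplified multi-view variant: combine per-view affinities so
# that only pairs similar in BOTH views stay strongly connected,
# discouraging co-occurring patterns from landing in different clusters.
A = rbf_kernel(view1) * rbf_kernel(view2)
labels_mv = SpectralClustering(n_clusters=2, affinity="precomputed",
                               random_state=0).fit_predict(A)

print("joint-space ARI:", adjusted_rand_score(labels_true, labels_joint))
print("multi-view ARI: ", adjusted_rand_score(labels_true, labels_mv))
```

On data this clean both methods recover the clusters; the interesting comparisons arise when the views have unequal noise or view-specific structure.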

We have recently started to explore how eye movements influence perception. How do we experience the visual world seamlessly despite the constant discrete nature of our saccadic sampling? We believe that one clue to this important question lies in recent findings of temporal compression (and even inversion) just prior to an eye movement (Morrone et al., Nature Neuroscience, 2005). We have performed psychophysics studies that reveal more about the temporal processing of individual flashes at various times relative to saccade onset. We believe that the temporal distortions are related to the findings of receptive field remapping (Duhamel, Colby, and Goldberg, 1992) and to saccadic suppression. It is known that prior to and during a saccade the visual system begins to anticipate the stimulus to be perceived by the end of the saccade. This "remapping" means that briefly flashed stimuli presented during an eye movement have a neural signature that persists after the eye movement. The hypothesis is that this remapping results in the perception of a pre-saccadic flashed stimulus persisting past the eye movement. Perception and remapping are, however, severely attenuated just prior to the eye movement (within approximately 60 msec), and this may result in the perception that these later peri-saccadic stimuli do not continue as long after the eye movement. Stimuli later in the eye-movement preparation window would then be perceived as ending earlier than those earlier in the window (temporal inversion). Our goal is to create a computational model that relates these findings and then explains them in terms of optimal information extraction through active sensation with saccades.
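The hypothesized inversion can be illustrated with a toy calculation: if remapping extends a flash's perceived offset past the saccade, and remapping strength falls off in the final tens of milliseconds before saccade onset, then a later flash is perceived as ending earlier. All numeric parameters below are illustrative assumptions, not fitted values, and the linear attenuation is a placeholder for whatever form the eventual model takes.

```python
import numpy as np

# Times are in ms relative to saccade onset at 0; the saccade is
# assumed to end at SACCADE_END.  A flash's perceived offset is the
# saccade end plus a remapping-driven persistence, attenuated for
# flashes inside the final TAU ms before onset.
SACCADE_END = 40.0       # assumed saccade duration
MAX_PERSISTENCE = 80.0   # assumed maximal post-saccadic persistence
TAU = 60.0               # assumed attenuation window before onset

def perceived_offset(t_flash):
    """Perceived end time of a brief flash presented at t_flash (ms)."""
    # Full remapping for flashes well before the saccade; linearly
    # attenuated (down to zero) within the final TAU ms.
    strength = np.clip(-t_flash / TAU, 0.0, 1.0)
    return SACCADE_END + strength * MAX_PERSISTENCE

early = perceived_offset(-80.0)   # flash early in the window
late = perceived_offset(-10.0)    # flash just before the saccade
print(early, late)  # the later flash is perceived as ending earlier
```

Even this caricature reproduces the qualitative inversion, which is the behavior a full model of optimal information extraction would need to explain mechanistically.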

In addition, I will continue to look for new ways to use machine learning to help uncover new knowledge about the brain. We will continue to apply information theory, machine learning, and signal processing techniques to EEG recordings to improve brain-computer interfaces. I will also pursue collaborations with physiologists who record multiple neurons in awake, behaving animals in order to look for neural firing patterns that we can correlate with behavioral states. These two projects will strengthen the link between behavior and neural activity.


Virginia de Sa 2007-08-10