How do humans conceptualize time? One clear pattern is that temporal concepts are grounded in spatial ones; however, exactly how this grounding works is not fixed in the human brain and varies significantly across cultures.
What information can young children use to aid them in understanding spoken language? Recent work in the Creel lab shows that preschoolers are able to use who is talking to limit the set of things that person might talk about.
Though prediction has been proposed across a variety of neural domains, language has not traditionally been one of them - until recently. Using event-related brain potentials, we show that prediction is part and parcel of sentence comprehension.
Artificial agents such as humanoid robots and interactive animated characters are rapidly becoming participants in many aspects of social and cultural life. With applications in domains such as education and health care, we need to understand human factors guiding our perceptions of and interactions with these agents.
Inhibitory control is the ability to withhold or modify prepotent or planned actions that are no longer appropriate in a behavioral context. We are studying the computational and neurophysiological basis of inhibitory control in healthy individuals and those affected by conditions such as ADHD and stimulant abuse.
The ability to recall our experiences as they evolved over time is truly an impressive feat, accomplished in large part through the workings of a thumb-sized portion of the brain called the hippocampus. How the brain encodes memories is a difficult, but exciting and burgeoning, area of neuroscientific research.
The introduction of computer workstations into the medical interview process makes it important to consider the impact of such technology on older patients as well as new types of interfaces that may better suit the needs of older adults.
ChronoViz is a system to aid annotation, visualization, navigation, and analysis of multimodal time-coded data. Exploiting interactive paper technology, ChronoViz also integrates researchers' paper notes into the composite data set. The goal is to decrease the time and effort required to analyze multimodal data by providing direct indexing and flexible mechanisms to control data exploration.
What factors constrain whether tool use modulates the user's body representations? To date, studies on representational plasticity following tool use have primarily focused on the act of using the tool. Here, we investigated whether the tool's morphology also serves to constrain plasticity. In 2 experiments, we varied whether the tool was morphologically similar to a target body part (Experiment 1, hand; Experiment 2, arm). Participants judged the tactile distance between pairs of points applied to their tool-using target body surface and forehead (control surface) before and after tool use. We applied touch in 2 orientations, allowing us to quantify how tool use modulates the representation's shape. Significant representational plasticity in hand shape (increase in width, decrease in length) was found when the tool was morphologically similar to a hand (Experiment 1A), but not when the tool was arm-shaped (Experiment 1B). Conversely, significant representational plasticity was found on the arm when the tool was arm-shaped (Experiment 2B), but not when hand-shaped (Experiment 2A). Taken together, our results indicate that morphological similarity between the tool and the effector constrains tool-induced representational plasticity. The embodiment of tools may thus depend on a match-to-template process between tool morphology and representation of the body.
Many studies have examined language acquisition under morphosyntactic or semantic inconsistency, but few have considered word-form inconsistency. Many young learners encounter word-form inconsistency due to accent variation in their communities. The current study asked how preschoolers recognize accent-variants of newly learned words. Can preschoolers generalize recognition based on partial match to the learned form? When learning in two accents simultaneously, do children ignore inconsistent elements, or encode two word forms (one per accent)? Three- to five-year-olds learned words in a novel-word learning paradigm but did not generalize to new accent-like pronunciations (Experiment 1) unless familiar-word recognition trials were interspersed (Experiments 3 and 4), which apparently generated a familiar-word-recognition pragmatic context. When exposure included two accent-variants per word, children were less accurate (Experiment 2) and slower to look to referents (Experiments 2, 5) relative to one-accent learning. Implications for language learning and accent processing over development are discussed.
Recent technologies have made it cost-effective to collect diverse types of genome-wide data. Computational methods are needed to combine these data to create a comprehensive view of a given disease or a biological process. Similarity network fusion (SNF) solves this problem by constructing networks of samples (e.g., patients) for each available data type and then efficiently fusing these into one network that represents the full spectrum of underlying data. For example, to create a comprehensive view of a disease given a cohort of patients, SNF computes and fuses patient similarity networks obtained from each of their data types separately, taking advantage of the complementarity in the data. We used SNF to combine mRNA expression, DNA methylation and microRNA (miRNA) expression data for five cancer data sets. SNF substantially outperforms single data type analysis and established integrative approaches when identifying cancer subtypes and is effective for predicting survival.
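The two-step procedure described above (build one patient-similarity network per data type, then iteratively fuse them) can be sketched in a few lines. This is a toy illustration of the idea only: the Gaussian kernel, the fixed number of diffusion steps, and all function names here are our assumptions, not the published SNF implementation (which additionally sparsifies each network with a k-nearest-neighbor kernel).

```python
import math

def similarity_network(X, sigma=1.0):
    """Row-normalized Gaussian similarity between patients (rows of X)."""
    n = len(X)
    W = [[math.exp(-sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                   / (2 * sigma ** 2))
          for j in range(n)] for i in range(n)]
    return [[w / sum(row) for w in row] for row in W]

def matmul(A, B):
    """Plain square-matrix product on nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def fuse(networks, iterations=10):
    """Diffuse each network toward the average of the others, so structure
    shared across data types is reinforced and type-specific noise fades."""
    n = len(networks[0])
    P = [[row[:] for row in W] for W in networks]
    for _ in range(iterations):
        new_P = []
        for v, W in enumerate(networks):
            others = [P[u] for u in range(len(P)) if u != v]
            avg = [[sum(O[i][j] for O in others) / len(others)
                    for j in range(n)] for i in range(n)]
            Wt = [[W[j][i] for j in range(n)] for i in range(n)]
            M = matmul(matmul(W, avg), Wt)
            new_P.append([[m / sum(row) for m in row] for row in M])
        P = new_P
    # Final fused network: the average of the diffused networks.
    return [[sum(P[v][i][j] for v in range(len(P))) / len(P)
             for j in range(n)] for i in range(n)]

# Toy cohort of four patients with two "data types" (e.g., expression
# and methylation features); patients 0/1 and 2/3 form two clusters.
expr = similarity_network([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
meth = similarity_network([[1.0], [1.1], [9.0], [9.2]])
fused = fuse([expr, meth])
```

Because patients 0 and 1 agree across both data types, they end up far more similar to each other in `fused` than to patients 2 and 3, which is the kind of cross-datatype structure SNF exploits for subtype discovery.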
Children seem able to efficiently interpret a variety of linguistic cues during speech comprehension, yet have difficulty interpreting sources of nonlinguistic and paralinguistic information that accompany speech. The current study asked whether (paralinguistic) voice-activated role knowledge is rapidly interpreted in coordination with a linguistic cue (a sentential action) during speech comprehension in an eye-tracked sentence comprehension task with children (ages 3–10 years) and college-aged adults. Participants were initially familiarized with 2 talkers who identified their respective roles (e.g., PRINCESS and PIRATE) before hearing a previously introduced talker name an action and object (“I want to hold the sword,” in the pirate’s voice). As the sentence was spoken, eye movements were recorded to 4 objects that varied in relationship to the sentential talker and action (target: SWORD, talker-related: SHIP, action-related: WAND, and unrelated: CARRIAGE). The task was to select the named image. Even young child listeners rapidly combined inferences about talker identity with the action, allowing them to fixate on the target before it was mentioned, although there were developmental and vocabulary differences on this task. Results suggest that children, like adults, store real-world knowledge of a talker’s role and actively use this information to interpret speech.
In everyday life there is a boundary between our bodies and the external environment. Is this perceived boundary fixed, or can it be altered? What happens to your body perception when you use a tool? What about when you are immersed in virtual reality? The Cognitive Neuroscience and Neuropsychology Lab (http://www.sayginlab.org) ...
The perception and comprehension of others’ actions and body movements is ubiquitous and important. Our lab carries out a range of behavioral, neuroimaging, and neuropsychological experiments on how people perceive others' body movements. In many experiments, we use body movements depicted by point-lights (like this: http://sayginlab.org/bio-highkick.gif). We are also exploring ...
While it is clear that people around the world talk and think about time in terms of spatial concepts, many questions remain regarding the link between spatial and temporal concepts. The Embodied Cognition lab is interested in understanding cognition from the perspective of the embodied mind, investigating how the peculiarities ...
The Center for Human Development (CHD) at UCSD conducts research projects focusing on factors that influence developing minds and personalities. For example, researchers at the CHD ask questions like: How and why do we become individuals? What role is played by our experiences? By our genes? How does developing behavior ...
We are evaluating two interventions for dyslexia that involve training the temporal dynamics of the visual system (magnocellular pathway) and the auditory system, and whether the two interventions together have super-additive effects. As a Research Assistant, you would be traveling to one or two of five participating local elementary schools ...
Movement through the environment demands constant change in how we take in information (our attentional set) and how we use that information to make decisions. The Nitz laboratory studies this dynamic process at its core, by directly examining the neural substrates of attention and spatial cognition through multiple single neuron ...
“Raednig thees wrods semes to be esaeir tahn you mgiht hvae tohuhgt; waht colud epxlian tihs?” Could you read the sentence above? Did you have any trouble understanding or recognizing these words? How is it possible to understand such a sentence without fully recognizing the individual words? What could explain your effortless ability ...
Lab: Foundation for Learning Equality @ Calit2 Can information technology radically change the way we learn? Who can gain the most from free, open access to resources? At the Foundation for Learning Equality, a non-profit based at Calit2, we are harnessing the power of technology for education to take it ...