How do humans conceptualize time? One clear pattern is that temporal concepts are grounded in spatial ones; however, how this mapping is realized is not universally determined in the human brain and varies significantly across cultures.
What information can young children use to aid them in understanding spoken language? Recent work in the Creel lab shows that preschoolers are able to use who is talking to limit the set of things that person might talk about.
Though prediction has been proposed across a variety of neural domains, language has not traditionally been one of them - until recently. Using event-related brain potentials, we show that prediction is part and parcel of sentence comprehension.
Artificial agents such as humanoid robots and interactive animated characters are rapidly becoming participants in many aspects of social and cultural life. With applications in domains such as education and health care, we need to understand human factors guiding our perceptions of and interactions with these agents.
Inhibitory control is the ability to withhold or modify prepotent or planned actions that are no longer appropriate in a behavioral context. We are studying the computational and neurophysiological basis of inhibitory control in healthy individuals and those affected by conditions such as ADHD and stimulant abuse.
The ability to recall our experiences as they evolved over time is truly an impressive feat accomplished in large part through the working of a thumb-sized portion of the brain called the hippocampus. How the brain encodes memories is a difficult, but exciting and burgeoning area of neuroscientific research.
The introduction of computer workstations into the medical interview process makes it important to consider the impact of such technology on older patients as well as new types of interfaces that may better suit the needs of older adults.
ChronoViz is a system to aid annotation, visualization, navigation, and analysis of multimodal time-coded data. Exploiting interactive paper technology, ChronoViz also integrates researchers' paper notes into the composite data set. The goal is to decrease the time and effort required to analyze multimodal data by providing direct indexing and flexible mechanisms to control data exploration.
Previous research suggests that preschool-aged children use novel information about talkers’ preferences (e.g. favorite colors) to guide on-line language processing. But can children encode information about talkers while simultaneously learning new words, and if so, how is talker information encoded? In five experiments, children learned pairs of early-overlapping words (geeb, geege); a particular talker spoke each word. Across experiments, children learned labels for novel referents, showing an advantage for original-voice repetitions of words which appeared to stem mainly from semantic person-referent mappings (who liked what referent). Specifically, children looked to voice-matched referents when a talker asked for their own favorite (“I want to see the geege”) or when the liker was unspecified (“Point to the geege”), but they looked to voice-mismatched referents when a talker asked on behalf of the other talker (“Conor wants to see the geege”). Initial looks to voice-matched referents were flexibly corrected when later information became available (Anna saying “Find the geege for Conor”). Voice-matching looks vanished when talkers labeled the other talker’s favorite referent during learning, possibly because children had learned two conflicting person-referent mappings: Anna-likes-geeb vs. Anna-talks-about-geege. Results imply that children’s language input may be conditioned on talker context quite early in language learning.
Children seem able to efficiently interpret a variety of linguistic cues during speech comprehension, yet have difficulty interpreting sources of nonlinguistic and paralinguistic information that accompany speech. The current study asked whether (paralinguistic) voice-activated role knowledge is rapidly interpreted in coordination with a linguistic cue (a sentential action) during speech comprehension in an eye-tracked sentence comprehension task with children (ages 3–10 years) and college-aged adults. Participants were initially familiarized with 2 talkers who identified their respective roles (e.g., PRINCESS and PIRATE) before hearing a previously introduced talker name an action and object (“I want to hold the sword,” in the pirate’s voice). As the sentence was spoken, eye movements were recorded to 4 objects that varied in relationship to the sentential talker and action (target: SWORD, talker-related: SHIP, action-related: WAND, and unrelated: CARRIAGE). The task was to select the named image. Even young child listeners rapidly combined inferences about talker identity with the action, allowing them to fixate on the target before it was mentioned, although there were developmental and vocabulary differences on this task. Results suggest that children, like adults, store real-world knowledge of a talker’s role and actively use this information to interpret speech.
Concepts of data and its role in science will be introduced, as well as the ideas behind data-mining, text-mining, machine learning, and graph theory and how scientists and companies are leveraging those methods to uncover new insights into human cognition.
Music is ubiquitously present in human culture. As much as it is ubiquitous, music is diverse in both form and usage. From sacred ritual to war, music is a component of many human activities. Free from the semantic necessities of language, music is constrained only by the aesthetics of those making it. Ethnomusicology seeks to understand music in its cultural context--how and why people make the specific types of music they do.
Cognitive ethnomusicology takes a broad approach to the study of musical culture, perception, and processing. The course will explore fundamental components of musical behavior, such as synchronized rhythm or the use of visual symbols to enhance recall of musical ideas, while also exploring specific genres or styles of music that have unique characteristics, such as the timbre-melodies of Tuvan vocal music or the complex rhythmic patterns of Carnatic Mrdangam playing.
Students learn how to use Matlab and the Psychophysics Toolbox for experimental research in cognitive science, neuroscience, psychology, linguistics, and related fields… Topics include stimulus presentation, response collection, and analyzing and displaying data. Programming is an applied skill; like playing an instrument or a sport, it needs to be practiced. COGS219 provides information, support, motivation, structure, and "coaching". Students acquire skills they can apply in graduate school and beyond, feel more confident in their programming and research abilities, and develop code that can be adapted for research projects. For cognitive science students, 219 counts as a methods course; with professor approval, it can count as a behavioral or computational issues course.
Prepares students to conduct original HCI research by reading and discussing seminal and cutting-edge research papers. Topics include design, social software, input techniques, mobile, and ubiquitous computing. Student pairs perform a quarter-long mini research project that leverages campus research efforts. TuTh 3:30pm-4:50pm in CSE 2154. Prerequisites: (Cogs14a or CSE20) and (an A- or higher in Cogs120 or Cogs102C). Please contact Thanh Maxwell at email@example.com for departmental approval.
We are evaluating two interventions for dyslexia that involve training the temporal dynamics of the visual system (magnocellular pathway) and the auditory system, and whether the two interventions together have super-additive effects. As a Research Assistant, you would be traveling to one or two of five participating local elementary schools ...
We study human cognition and decision making: how do people combine sparse information with their prior knowledge about the world to make decisions? And how do limitations of memory and attention influence this process? Different projects investigate these issues in different domains; examples include: visual attention, consumer behavior, intuitive reasoning, ...
Help shape the future of entrepreneurship at UCSD. What is E-Connect? Part LinkedIn, part Kickstarter, UCSD Entrepreneur Connect (E-Connect) is the future for entrepreneurial-minded students. Meet, share ideas, form teams, create: this is the aim of E-Connect. E-Connect is an idea. We need creative individuals looking to broaden their ...
It is often suggested that people make predictions about the world by simulating how the world might unfold, but considerably less is known about how people make retrodictions: inferring the past state of the world based on the present. The current research investigates the shared and differing cognitive processes underlying ...
Movement through the environment demands constant change in how we take in information (our attentional set) and how we use that information to make decisions. The Nitz laboratory studies this dynamic process at its core, by directly examining the neural substrates of attention and spatial cognition through multiple single neuron ...
The Center for Human Development (CHD) at UCSD conducts research projects focusing on factors that influence developing minds and personalities. For example, researchers at the CHD ask questions like: How and why do we become individuals? What role is played by our experiences? By our genes? How does developing behavior ...
“Raednig thees wrods semes to be esaeir tahn you mgiht hvae tohuhgt; waht colud epxlian tihs?” Could you read the sentence above? Did you have any trouble understanding or recognizing these words? How is it possible to understand such a sentence, with or without recognizing its words? What could explain your effortless ability ...
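The scrambled sentence above follows a simple recipe: each word keeps its first and last letters in place while the interior letters are shuffled. As an illustration only (not the course's actual materials), stimuli of this kind might be generated like this; the sketch ignores punctuation and capitalization:

```python
import random

def scramble_word(word, rng):
    """Shuffle a word's interior letters; first and last letters stay put."""
    if len(word) <= 3:          # nothing to shuffle in very short words
        return word
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def scramble_sentence(sentence, seed=0):
    """Scramble every word independently, with a fixed seed for repeatability."""
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in sentence.split())

print(scramble_sentence("Reading these words seems easier than you might have thought"))
```

Because only interior letters move, the word shape and boundary letters that readers rely on are preserved, which is one common explanation for why such sentences remain readable.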
The Language Acquisition & Sound Research lab is seeking enthusiastic, motivated, and reliable undergraduate research assistants to assist with a study. The study investigates how different people interpret sounds when processing language. Successful applicants will receive course credit and gain valuable experience with language research! Interested students should contact Prof. ...
Andrea Chiba, associate professor of cognitive science, for “Socially Situated Neuroscience: Creating a Suite of Tools for Studying Sociality and Interoception.” “The ‘interoceptive’ system is said to be a neural system that is critical to our physiological self-awareness and the feelings we share with others,” said Chiba. “This project aims to co-develop light, wireless, flexible recording sensors, an iRat (a robotic ‘animat’ with rat-like social behavior) and a set of experiments to interrogate the ‘interoceptive system’ by simultaneously examining physiological measures, neural activity and complex social behavior.” Primary researchers on the grant, in addition to Chiba, are Laleh Quinn, Todd Coleman and Marcelo Aguilar-Rivera of UC San Diego and Janet Wiles of the University of Queensland, Australia.
Former graduate student Ben Cipollini received a modeling award and $1000 prize at the 36th annual meeting of the Cognitive Science Society, for his work with Prof. Gary Cottrell on lateralization in visual processing.
Sebastian Thrun, CEO of Udacity, will tell his story, from accidentally creating an early MOOC to leading a mid-size company that offers alternatives to college degrees to millions of online learners. Udacity focuses on education for jobs in the tech industry. Its content is built by leading Silicon Valley companies, like Cloudera, Facebook, and Google. Thrun will discuss a new style of pedagogy for learning on mobile devices, online services, and new ...
Wed, Oct 1st, 4:00pm-5:00pm (Atkinson Hall Auditorium)
We are entering an era where we are surrounded by devices, environments and vehicles that aim to support and assist us. These services, however, are only useful to the everyday consumer if the interaction is simple and easily understandable. By researching how people in public social spaces act as ephemeral teams to jointly perform short tasks--like opening the door for one another, offering a drink, or taking away trash--we can better understand how to design the interactions ...
Mon, Oct 6th, 4:00pm-5:00pm (Atkinson Hall Room 1601)
A single model explains both visual and auditory precortical coding

Precortical neural systems encode information collected by the senses, but the driving principles of the encoding used have remained a subject of debate. We present a model of retinal coding that is based on three constraints: information preservation, minimization of neural wiring, and response equalization. The resulting novel version of sparse principal components analysis successfully captures a number of known characteristics of the retinal coding system, such as center-surround ...
Wed, Sep 24th, 3:00pm-4:00pm (Sanford Consortium - Duane J. Roth Auditorium)
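The abstract above appeals to a sparse variant of principal components analysis. As a rough illustration only (not the speakers' actual model), a leading sparse principal component can be approximated by interleaving power-method updates with soft-thresholding, so that small loadings are driven to exactly zero, a crude stand-in for penalizing "neural wiring":

```python
import numpy as np

def sparse_pc1(X, alpha=0.2, n_iter=200, seed=0):
    """Leading sparse principal component via thresholded power iteration.

    After each power-method step, loadings below alpha * max|loading| are
    soft-thresholded to exactly zero. Requires 0 <= alpha < 1 so that at
    least the largest loading always survives.
    """
    Xc = X - X.mean(axis=0)             # center the data
    C = Xc.T @ Xc / len(Xc)             # empirical covariance matrix
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(C.shape[0])
    for _ in range(n_iter):
        v = C @ v                                        # power-method step
        cut = alpha * np.max(np.abs(v))
        v = np.sign(v) * np.maximum(np.abs(v) - cut, 0)  # soft-threshold
        v /= np.linalg.norm(v)                           # renormalize
    return v

# Toy data: features 0 and 1 co-vary strongly; feature 2 is weak noise,
# so the sparse component should zero out the third loading entirely.
rng = np.random.default_rng(1)
z = rng.standard_normal(500)
X = np.column_stack([z,
                     z + 0.1 * rng.standard_normal(500),
                     0.1 * rng.standard_normal(500)])
v = sparse_pc1(X)
print(np.round(v, 3))
```

The alpha knob trades variance captured for sparser weights, mirroring the tension between information preservation and wiring cost described in the abstract.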