Traditional approaches to meaning in cognitive science have imported from linguistics the assumption that natural language is a formal system, or at least an approximation of one. Perhaps due to the interdisciplinary nature of the field, this limited view of language has spread to other areas and has even become institutionalized in its practices. For example, the idea that the meaning of a sentence is built from the meanings of its component parts suggests that the research program can and should be divided accordingly. An unfortunate consequence of this assumption is that much modern psycholinguistic research is directed at processes of word recognition and sentence processing as if they could be addressed independently of the formulation of the message-level representation.
Moreover, it is the assumption that expression meaning is the fundamental component of utterance meaning that underlies cognitive scientists' tendency to build and test models of simple language use in minimal contexts. Unfortunately, by restricting our attention to cats on mats in typical settings (complete with their gravitational fields and their standard orientational frames), we are prevented from seeing how contextual knowledge and background assumptions interact, and from appreciating their constitutive role in on-line meaning construction. For the speaker, it is quite a nice thing that we can rely on knowledge of typical situations for meaning construction. For the researcher, however, it is often more valuable to turn the cat upside-down and look at the atypical, and even the exotic. It is these examples that allow us to assess more adequately the nature of the information and the processes speakers recruit in on-line meaning construction.
Our investigation of the semantic leaps in jokes, arguments, counterfactuals, and analogies suggests that on-line meaning construction requires a multi-tiered integration system involving hierarchical slot-filler structures all the way down. At bottom, sentential integration is a process in which speakers integrate abstract grammatical constructions with more specific frames evoked by lexical items. Grammatical information such as space builders and verbal morphology can cue the construction of new spaces. Similarly, other sorts of grammatical information, such as clause and sentence boundaries, can cue their delimitation. However, both the opening and closing of spaces can go unmarked by grammatical features, relying on language users to detect the changing background assumptions necessary for continued interpretation.
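The architecture described above can be given a schematic rendering in code. The sketch below is a toy illustration only: the class names, attributes, and the particular counterfactual example are assumptions introduced here for exposition, not part of any published formalism. It shows a space builder opening a subordinate mental space whose content is a slot-filler frame.

```python
# Toy sketch of hierarchically nested mental spaces holding slot-filler
# frames. All names here are illustrative inventions for exposition.

class Frame:
    """A schematic frame: a name plus slot-filler pairs."""
    def __init__(self, name, **slots):
        self.name = name
        self.slots = dict(slots)       # e.g. {"seller": None, "goods": None}

    def fill(self, slot, filler):
        self.slots[slot] = filler

class MentalSpace:
    """A partition of discourse structure, possibly cued by a space builder."""
    def __init__(self, builder=None, parent=None):
        self.builder = builder         # e.g. "If I were you"
        self.parent = parent           # spaces nest hierarchically
        self.frames = []

    def add_frame(self, frame):
        self.frames.append(frame)

# "If I were you, I'd sell the house."
base = MentalSpace()
counterfactual = MentalSpace(builder="If I were you", parent=base)

sell = Frame("commercial_transaction", seller=None, goods=None)
sell.fill("seller", "speaker-as-addressee")   # a blended filler
sell.fill("goods", "the house")
counterfactual.add_frame(sell)

print(counterfactual.builder)                 # prints "If I were you"
```

Note that nothing in the sketch forces a space to be opened by an overt builder: a `MentalSpace` created with `builder=None` corresponds to a space whose opening goes grammatically unmarked, as described above.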
Data which suggest that meaning is underspecified by the grammar indicate that grammatical information is neither necessary nor sufficient for the computation of on-line meanings. Such data reveal the constructive nature of comprehension and point to the crucial role played by noncompositional processes. Rather than a deterministic process of composition in which the meaning of the resultant sentence is truth-functionally related to the meanings of its parts, the space structuring model appeals to the constructional productivity of the imaginative processes of conceptual integration.
The space structuring model does not eliminate the need for combinatorial mechanisms of meaning construction, nor does it preclude the possibility of there being dedicated structures and/or specialized processes for parsing. However, by casting the noncompositional processes of blending in a leading role, it does undermine the significance of the parser. Structural regularities in the language still help us to figure out who did what to whom, but our understanding of how they do so is quite different from the original vision of, say, Chomsky (1965) or Montague (1974), in which an autonomous syntax composes semantic representations that connect up with the language of thought. Instead, structural regularities are construed as having semantic content of their own, albeit very abstract content.
In many ways, structure in the blended space can be seen as an overly literal interpretation of the linguistic input. However, while the assumption in many traditional accounts of meaning is that such interpretations are rejected as irrelevant to the resultant interpretation, in the space structuring model, the implausible representations constructed in the blend are related to resultant message-level representations in principled ways. For instance, speakers often recruit metaphoric, metonymic, and other sorts of cross-space mappings that rely on the induction of shared relational structure.
A recurring theme has been that certain processes appear to operate at a number of different levels: partitioning, mapping, contextual variation of meaning, conceptual integration, scalar implicature, frame-shifting, and the relevance of salient counterfactuals. In chapters 3 and 4 we reviewed empirical data from psychology and cognitive neuroscience that suggest higher-level discourse factors operate in the processing of single words. Moreover, we have seen analytically how the demands of sentential integration are not qualitatively different from those of text processing.
Of course, the finding that discourse-level considerations affect the processing of individual words is counterintuitive only from the traditional building blocks approach to meaning construction. That is, once we abandon the notion that contextual and background knowledge are brought to bear after the assembly of a context-invariant meaning, the finding that the same factors operate at the lexical, sentential, and intersentential levels should come as no surprise. If language is designed to prompt the construction of cognitive models, the cuing of projections, and the elaboration of blends, we should actually predict that words, sentences, and groups of sentences can prompt the same sorts of operations.
In fact, as cognitive scientists we need to make our own leap, akin to the shift Suchman recommends in the quote at the chapter's outset. This leap involves abandoning the old assumption that the systematicity and productivity of human cognition are the necessary result of a system which formally composes static symbols. Moreover, it involves embracing the situated character of on-line meaning, the constructive nature of comprehension, and the constitutive role of context. Because language use, in particular, is firmly rooted in human experience and social interaction, we need to construe meaning construction as a set of routines for assembling cognitive models that enable interpretation, action, and interaction. Besides acknowledging the crucial role of the physical and social world within which we function, the leap towards situativity is congruent with the rising consciousness in cognitive neuroscience of the importance of the motor system (see e.g. Rizzolatti & Craighero, 1998), and the growing realization that attention, perception, and memory are all intimately connected with action (see e.g. Arbib & Rizzolatti, 1996; Ballard, 1997; Milner & Goodale, 1995).
Neuroscience gives us a picture of information processing as involving partitioning of sensory information into parallel streams, each computing different sorts of information, and each with its own hierarchical structure (Van Essen, Anderson, & Felleman, 1992; Ungerleider & Mishkin, 1982). These massively interconnected systems allow information to be continuously mapped and remapped between intertwined processing streams. The space structuring model, though motivated by very different issues and sorts of data, portrays meaning construction in an analogous way: partitioning of information into parallel streams, extensive mapping, and the integration of disparate information needed for adequate message-level comprehension. While the establishment of abstract mappings in mental space theory is not directly comparable to mapping in the visual system, perhaps computationally similar mechanisms of information regulation underlie the flexibility evident in both meaning construction and visual processing.
In fact, recent research in cognitive psychology (Barsalou, 1999; Glenberg et al., 1994; Mandler, 1993) points to the import of what Barsalou (1999) calls perceptual symbols. Perceptual symbols are mental representations which are neither perceptual, that is, strictly dependent on sensory input systems, nor symbolic, that is, completely amodal. As outlined in Barsalou (1999), schematic representations of perceptual experience are stored around a common frame which promotes schematized simulations. Importantly, such simulations need not be accompanied by the experience of visual imagery, and are not to be construed as mental "pictures." Indeed, perceptual symbols recruit neural machinery activated in perceptual experience from all modalities, auditory, olfactory, somatosensory, and kinesthetic, as well as from the visual modality. As abstracted perceptual experience, perceptual symbols develop in order to support categorization, inference, and interaction with the world around us. Frames built from perceptual symbols present themselves as representations which can sustain the creative blending mechanisms of composition, completion, and elaboration while maintaining the representational advantages of hierarchically organized slot-filler structures.
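The three blending mechanisms named above can be sketched schematically. The following is a minimal illustration, assuming a bare dictionary representation of slot-filler frames; the function names and the use of the well-known "This surgeon is a butcher" blend are expository choices, not an implementation drawn from the source.

```python
# Hedged sketch of composition, completion, and elaboration over
# slot-filler frames represented as plain dicts (an assumption made
# here for illustration only; None marks an unfilled slot).

def compose(input1, input2):
    """Composition: project filled slots from both input spaces."""
    blend = dict(input1)
    blend.update({k: v for k, v in input2.items() if v is not None})
    return blend

def complete(blend, background):
    """Completion: recruit background knowledge to fill missing slots."""
    out = dict(background)
    out.update({k: v for k, v in blend.items() if v is not None})
    return out

def elaborate(blend, rule):
    """Elaboration: 'run' the blend by applying an inference rule."""
    return rule(blend)

# "This surgeon is a butcher."
surgeon = {"agent": "surgeon", "instrument": None, "action": "operate"}
butcher = {"agent": None, "instrument": "cleaver", "action": None}

blend = compose(surgeon, butcher)
# → {"agent": "surgeon", "instrument": "cleaver", "action": "operate"}
blend = complete(blend, {"patient": "person"})
verdict = elaborate(
    blend, lambda b: f"{b['agent']} operates with a {b['instrument']}")
print(verdict)   # prints "surgeon operates with a cleaver"
```

The point of the sketch is structural: emergent content (a surgeon wielding a cleaver, hence incompetence) arises only in the blend, and does so precisely because the slot-filler organization lets fillers from one input occupy slots projected from the other.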