
The automatic and the ballistic: Modularity beyond perceptual processes


Abstract

Perceptual processes, in particular modular processes, have long been understood as being mandatory. But exactly what mandatoriness amounts to is left to intuition. This paper identifies a crucial ambiguity in the notion of mandatoriness. Discussions of mandatory processes have run together notions of automaticity and ballisticity. Teasing apart these notions adds an important tool to the modularist's toolbox. Different putatively modular processes appear to differ in their kinds of mandatoriness. Separating the automatic from the ballistic can help the modularist diagnose and explain away some putative counterexamples to multimodal and central modules, thereby helping us to better evaluate the evidentiary status of modularity theory.

Notes

[1] Siegel's focus, like Macpherson's (2012), pertains to how perceptual experience can be modulated by one's cognitive and conative states. These discussions differ from traditional debates about modularity, which focus on the modularity of perceptual processing. Questions of processing, not experience, are at issue in what follows.

[2] Initially, modules were also posited to be neurally localized and to have “shallow outputs,” but these properties have since dropped out of most presentations of modularity. To my eyes, the neural localization criterion has been dropped both because of the increasing amount of anti-localization evidence (see, e.g., Anderson, 2010) and because of the confusion it engendered between neurological modularity and psychological modularity, the latter being the topic at issue here. I suspect that discussions of shallow outputs have ceased in part because people had trouble making sense of exactly what constituted a shallow output, and in part because, insofar as one does have an idea of what it is, it's hard to see what would constitute a shallow output of, say, an auditory or gustatory module.

[3] There is also one further sort of ambiguity here that won't be the focus of the discussion; see note 9 for further details.

[4] I'll leave out the ‘through psychological means’ modifier from here on. Of course, non-psychological variables could short-circuit either of these processing types; a mental process might be cut short by any number of lower-level factors such as aneurysms, unfortunately aimed projectiles, or untimely death. This is just to say that here, as elsewhere in the special sciences, ceteris paribus clauses abound.

[5] Could there be a non-ballistic processor that is unable to use relevant outside information? We can certainly envision a processor that stops all the time yet is still encapsulated; it would just be a relatively inefficient processor. Nonetheless, there are some considerations that suggest human vision is indeed ballistic. For example, vision will process stimuli that appear only for incredibly short presentation times. Take the classic masked priming effect (e.g., Forster & Davis, 1984; the following numbers are from there, but are common to many such demonstrations). A forward-masking stimulus (e.g., a barcode) appears for 500 ms and is immediately followed by a target prime (e.g., a word) presented for a mere 30 ms. The target is then overwritten by a backward-masking stimulus presented for a much longer duration (e.g., 500 ms). In these paradigms, subjects successfully visually process the target stimulus. That is, the stimulus is processed to completion even after it has faded from view, and even in the face of heavy interference effects. If vision had halting points, the masked priming paradigms would presumably be the sort of place to detect them. A schematic of this trial structure appears below.
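
For concreteness, here is a minimal sketch of the trial timeline just described. It is purely illustrative: the timings (500 ms / 30 ms / 500 ms) come from the note above, while the `TrialEvent` class and `masked_priming_trial` function are hypothetical names introduced here, not code from Forster and Davis or any experiment-presentation library.

```python
# Illustrative sketch of the masked-priming trial structure in note 5.
# Timings follow the note; names and the print-based "display" are
# hypothetical stand-ins, not a real stimulus-presentation toolkit.

from dataclasses import dataclass
from typing import List

@dataclass
class TrialEvent:
    stimulus: str       # what is shown on screen
    duration_ms: int    # how long it stays up

def masked_priming_trial(prime: str) -> List[TrialEvent]:
    """One trial: forward mask, briefly flashed prime, backward mask."""
    return [
        TrialEvent("#########", 500),  # forward mask (e.g., barcode-like pattern)
        TrialEvent(prime, 30),         # prime: too brief for conscious report
        TrialEvent("#########", 500),  # backward mask overwrites the faded prime
    ]

if __name__ == "__main__":
    for event in masked_priming_trial("nurse"):
        print(f"{event.duration_ms:>4} ms: {event.stimulus}")
```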

[6] In fact, dichotic listening experiments show that you needn't even be conscious of the stream in order to unconsciously process it as language and encode its semantics (Bentin, Kutas, & Hillyard, 1995).

[7] Burnston and Cohen (forthcoming) cleverly suggest that we reinterpret the conception of modularity in light of what they take to be evidence that central (or at least hierarchically “higher”) information affects perceptual processing. They argue that informational encapsulation shouldn't be understood in terms of a proprietary database of intramodular information; rather, it should be interpreted as the rigidity with which a processor integrates information, regardless of the locus of that information. Thus, for them, anisotropic processes are modular because they can only process a delimited set of informational inputs, regardless of which other processes can also access those inputs. But the introduction of contextual information in linguistic processing would spell doom for their conception of modularity too, since contextual information can be any information a person has access to, in which case the language parser would be as isotropic as any process could be. Hence they too will need to capture something like the automatic/ballistic distinction in order to deal with putative counterexamples to traditional modules. They could of course just accept that language processing isn't modular, but that wouldn't patch up a lacuna in their view: their reconceptualization of modularity leaves out the mandatory and fast nature of modular processing altogether. Anisotropy crosscuts both the automaticity and ballisticity criteria (for example, an automatic process can, for all automaticity cares, take in inputs from anywhere, so long as it starts its processing once it encounters those inputs; mutatis mutandis for ballisticity). Thus, for all their view cares, a modular process could take a week to complete its processing. But surely modularity was introduced precisely to separate processes like long-term planning from processes that are not just encapsulated but also fast and mandatory, like vision. The fact that their criteria cannot capture this difference is problematic.

[8] And parsing a garden path is a special circumstance if anything is. Part of what makes garden paths so jarring is how infrequently they are actually encountered in everyday speech. People are just not apt to utter, for example, sentences with doubly center-embedded relative-clause constructions. The rarity of garden paths in the wild gives us some reason to think that they are very much an exception to the rule of normal speech processing (and production).

[9] Perhaps it's worth adding that there is at least one further disambiguation of the notion of mandatoriness, at least as it's used in the literature. Most confusingly, Fodor not only ran both the automaticity and ballisticity readings together in the original presentations of modularity, but he also introduced a third, wholly separate notion and claimed that it was the most conservative one, even though it appears to be the most tendentious. When discussing mandatoriness, Fodor writes:

Perhaps the most conservative claim is this: input analysis [i.e., modular processing] is mandatory in that it provides the only route by which transducer outputs can gain access to central processes; if transduced information is to affect thought at all, it must do so via the computations that input systems perform. (1983, p. 54)

Fodor's basic idea is that no raw sensations (i.e., transduced information) can get into central cognition without first moving through a modular process. I suspect that this supposedly conservative test of mandatoriness has been the source of much confusion in debates about modularity. For example, Bar (2003); Fenske, Aminoff, Gronau, and Bar (2006); and Kverga, Ghuman, and Bar (2007) all claim that modularity is false because of certain sorts of top-down facilitation of object recognition. However, these claims are made on the basis of data using low-spatial-frequency images (that is, images that are very hard to see as images of anything). The modularist might object (as in fact many do; see, e.g., Fodor, 1988) on the basis that degraded stimuli encourage guessing, and guessing is cognitive, not perceptual; degraded stimuli would thus not count as proper inputs. However, for this type of explanation to work, there has to be a bound on what can set off the module; that is, it has to be allowed that certain sorts of raw, transduced information, such as the retinal array of low-spatial-frequency images, can bypass a module completely. It seems that the “conservative” sense is thus much more tendentious than either the automatic or the ballistic sense. Moreover, I am satisfied that the work of Bar and others shows that the putatively conservative sense is quite probably false, and therefore have ignored this third sense in this essay.

Additional information

Notes on contributors

Eric Mandelbaum

Eric Mandelbaum is Assistant Professor of Philosophy at Baruch College, City University of New York.
