Essays and Stories
by Seyed P. Razavi

Are our minds like computers? (Part 3)

This post concludes my current exploration of whether our minds are like computers (see parts 1 & 2). In this final instalment, I look at the mechanistic theory of computation and its role in cognitive neuroscience. I argue that, despite some limitations, it is a favourable approach for research into cognition.

What is a mechanistic explanation?

The account of computationalism I will introduce hinges on mechanistic explanation: analysing a system through its components, their organisation, and their functional relations [1]. Such a multi-level analysis applies from the whole system (e.g. the human organism) down to its most basic components (e.g. neuronal structures). A component plays a different role in the analysis depending on which perspective we adopt for explaining the system [2]. For example, the mouth plays different roles when considered as part of the digestive, respiratory, or speech systems.

A mechanism with certain capacities is a set of components whose functions, plus their relevant relations, are such that the mechanism possesses these capacities because:
(a) the mechanism contains the set of components;
(b) the set has functions organised in the relevant relations; and
(c) because of the way the functions are so organised, they constitute the capabilities of the mechanism [3].
Put another way, the functional organisation of the mechanism is necessary for the mechanism to do what it does.

For example, the human body (mechanism) can circulate blood (capacity). It has this capacity because of the heart (component), whose function is to pump blood, and because of the way the heart connects to the arterial network (organisation).
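To make the schema concrete, here is a minimal sketch in Python of the definition above, using the blood-circulation example. It is an illustration only; the class and attribute names are my own, not drawn from Piccinini.

    class Component:
        """A part of a mechanism with a function: what the part does."""
        def __init__(self, name, function):
            self.name = name
            self.function = function

    class Mechanism:
        """A set of components (a) organised by relevant relations (b)."""
        def __init__(self, components, organisation):
            self.components = components
            self.organisation = organisation  # pairs of functionally related components

        def capacities(self):
            # (c) Capacities are constituted by the organised functions,
            # not by any single component on its own.
            return [f"{a.function}, then {b.function}"
                    for a, b in self.organisation]

    heart = Component("heart", "pump blood")
    arteries = Component("arterial network", "carry blood to the tissues")
    body = Mechanism([heart, arteries], organisation=[(heart, arteries)])

    print(body.capacities())  # ['pump blood, then carry blood to the tissues']

Remove the heart, or sever its connection to the arteries, and the capacity disappears: the capacity belongs to the organised whole, not to any part in isolation.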

Computation as mechanism

The mechanistic account says computations are best explained as a special kind of mechanical process, defined in part by the manipulation of strings of discrete symbols [4]. The most familiar example is the digital computer, which executes programs: strings of instructions. Yet the mechanistic account isn’t limited to this kind of computing. It can also accommodate universal Turing machines [5], as well as more novel constructs such as Gandy machines [6] and hypercomputers [7]. Even connectionist systems, such as neural networks, decompose into explainable mechanisms, though only some neural networks perform computations as such [8].
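As a toy illustration of computation as rule-governed manipulation of discrete symbol strings, consider a rewrite system that computes unary addition. The rule set is a hypothetical example of mine, not one from the literature; what matters is that the process is defined entirely over strings of symbols.

    def step(s, rules):
        """Apply the first matching rewrite rule once; None if no rule applies."""
        for old, new in rules:
            if old in s:
                return s.replace(old, new, 1)
        return None

    def run(s, rules):
        """Rewrite the string until no rule applies, then halt."""
        while (next_s := step(s, rules)) is not None:
            s = next_s
        return s

    # '11+111' denotes 2 + 3 in unary; erasing '+' yields the sum.
    print(run("11+111", [("+", "")]))  # -> '11111', i.e. 5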

Mechanistic theory is another non-semantic account of computation: the computational analysis doesn’t depend on representation to determine the computational function. Let’s take David Marr’s paradigmatic computational analysis of vision as an example [9]. Marr provides three levels at which to describe a system:
(a) computationally, as the function computed by the system;
(b) algorithmically, as the way the system is specified to compute the function;
(c) finally, as the implementation of the algorithm in the physical system.

Frances Egan argues that representation isn’t necessary to explain the computational core, the first two levels of Marr’s description [10]. These levels need only a mathematical, function-theoretic description. Of course, for a full description of the vision system, Marr does use representations, particularly to explain how what we perceive refers to objects in the world. But these are not necessary for the computational aspects of his description: the computations are pure mathematical functions over mathematical entities.
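The function-theoretic point can be illustrated with a deliberately simple, hypothetical example (not Marr’s vision case). The computational level fixes a mathematical function; the algorithmic level fixes a procedure for computing it; nothing at either level says what, if anything, the numbers represent.

    # Computational level: the function f(n) = 0 + 1 + ... + n.

    def f_iterative(n: int) -> int:
        """Algorithmic level, option 1: accumulate term by term."""
        total = 0
        for i in range(n + 1):
            total += i
        return total

    def f_closed_form(n: int) -> int:
        """Algorithmic level, option 2: Gauss's formula n(n + 1) / 2."""
        return n * (n + 1) // 2

    # Both procedures compute the same function; the implementation level
    # is whatever physical system (silicon, neurons, ...) runs either one.
    assert all(f_iterative(n) == f_closed_form(n) for n in range(100))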

The mechanistic account also resists efforts to trivialise computation. One trivialising account says that any physical system, interpreted in a certain way, is in a computational state. John Searle argued that we could describe any sufficiently complex object as implementing a program at the molecular level [11]. His example was the concrete wall of his room, which, he claimed, could be said to implement a word-processing program. The wall’s molecular structure, appropriately labelled, would represent the internal states of the program; at some later time, the shifting molecules would be in some other internal state matching the program. It was all a matter of interpretation, Searle claimed. Hilary Putnam offered a different argument, one that didn’t rely on interpretation, but it shared the aim of demonstrating that every physical object is a potential realisation of every possible program [12]. If so, computation doesn’t really explain anything.

The mechanistic account is far more restrictive about what makes a computational system, at least compared with the classic view of computation inherited from Alan Turing. It is not the representational structures involved that define the computing system; nor do the arrangements of molecules or changes of state make something a computation. Rather, it is the functional properties, as specified, that make it computational. This view of computation also resists the bolder ambitions of those who claim everything can be explained computationally: at the root of every mechanism are non-computational components.

Cognitive Neuroscience

A significant advantage of the mechanistic account is its compatibility with cognitive neuroscience, the interdisciplinary investigation of cognition that brings together psychology, computer science, linguistics, anthropology, neuroscience, and philosophy [13]. This approach describes cognitive phenomena using different levels of explanation, abandoning the traditional separation between psychological and neuroscientific explanations. Psychological factors are no longer described without recourse to mechanisms. Instead, we iterate over the structures of cognitive systems, explaining each component’s mechanisms in turn [14].

A full description within cognitive neuroscience has the following characteristics [15]:

  1. It identifies the molecular events that contribute to neural events.
  2. It explains how neural events contribute to neural networks and circuits.
  3. It explains how these networks and circuits in turn contribute to the relevant system events.
  4. Finally, it explains how the relevant systems, such as a human being in her environment, produce behaviour.

Together these criteria aim at a full account of cognition, drawing on the methods of the various disciplines involved. Underpinning it all is the methodology of mechanistic explanation.

Limits of mechanistic theory

There are two current limits to mechanistic theory. First, it does not obviously accommodate analog computing, the manipulation of continuous rather than discrete variables [16]. If biological neural networks are analog, then digital computation cannot give an exact account of how they work. [Update: Since I wrote this, I’ve seen a comment by Gualtiero Piccinini on my previous post that there is an account of analog computing within mechanistic theory. An updated version will be in his upcoming book.]

Second, putting questions of representation to one side makes computation tractable, but this comes at a price. Many of our paradigmatic cases of computation involve representation, and the received view is that computation depends on it. This is particularly important where two systems provide the same function in different ways (what is called input/output equivalence) [17]. For example, program A and program B may both scale an image, yet use different scaling algorithms. Further, their output files, though encoded differently, may still represent the same image. The programs are not identical, but they fulfil the same functional role. This is only apparent when representation is part of the analysis, or, as likely, when we observe the output through a final medium such as a computer monitor.
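A minimal sketch, again with hypothetical programs of my own, makes the point. Both programs below double an image’s dimensions by pixel replication, so they compute the same input/output function, but by different algorithms:

    def scale_a(image):
        """Program A: explicit nested loops, duplicating pixels and rows."""
        out = []
        for row in image:
            new_row = []
            for pixel in row:
                new_row.extend([pixel, pixel])
            out.append(new_row)
            out.append(list(new_row))
        return out

    def scale_b(image):
        """Program B: index arithmetic over the enlarged output grid."""
        h, w = len(image), len(image[0])
        return [[image[y // 2][x // 2] for x in range(2 * w)]
                for y in range(2 * h)]

    img = [[0, 255], [255, 0]]
    assert scale_a(img) == scale_b(img)  # same function, different algorithms

From a purely mechanistic standpoint the two programs are distinct mechanisms; that they ‘do the same thing’ only comes into view once their outputs are treated as representing the same image.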

Conclusion

Despite these limitations, the mechanistic theory of computation has its advantages, not least its role in a progressive research programme such as cognitive neuroscience. Further work in providing mechanistic explanations for analog computing may overcome the first limitation. As for the role of representation in cognition, we can still consider this separately, as discussed in the case of Marr’s analysis of vision: the computational core and representation can jointly form the full account without the former depending on the latter.

To the extent that the mechanistic theory of computation proves explanatory, the mind is at least somewhat like a computer. But this comes with the modesty of knowing how unlike digital computers our brains truly are. Even so, as a tool for considering cognitive systems on the way to a better explanation, the computer is an analogy that has yet to run its full course.

Notes

[1] Piccinini, 2007, pp. 505-6.

[2] Piccinini, 2007, p. 516.

[3] Piccinini, 2010, p. 82.

[4] Piccinini, 2009, p. 21.

[5] Turing machines are hypothetical machines which apply rules to symbols on a tape, altering internal states and generating outputs, in order to perform mathematical calculations (Turing, 2004 [1936]).

[6] A Gandy machine is a parallel computing machine sometimes conceptualised as multiple Turing machines running in tandem with a shared input/output stream (Fresco, 2012, p. 361).

[7] A theoretical machine that exceeds the power of Turing machines, for example by solving the ‘halting problem’: determining, prior to execution, whether an arbitrary program will finish running or continue indefinitely (Siegelmann, 2003).

[8] Piccinini, 2008.

[9] Marr, 2010 [1982].

[10] Egan, 1995.

[11] Searle, 1990.

[12] Putnam, 1988.

[13] Boone and Piccinini, 2016, p. 1510.

[14] Boone and Piccinini, 2016, p. 1515.

[15] Boone and Piccinini, 2016, p. 1514.

[16] Piccinini, 2009, p. 522.

[17] Sprevak, 2010.

References

Boone, Worth, and Gualtiero Piccinini. 2016. ‘The Cognitive Neuroscience Revolution’. Synthese 193 (5): 1509–34.

Egan, Frances. 1995. ‘Computation and Content’. The Philosophical Review 104 (2): 181–203.

Fresco, Nir. 2012. ‘The Explanatory Role of Computation in Cognitive Science’. Minds and Machines 22 (4): 353–80.

Marr, David. 2010 [1982]. Vision: A Computational Investigation Into the Human Representation and Processing of Visual Information. MIT Press.

Piccinini, Gualtiero. 2007. ‘Computing Mechanisms’. Philosophy of Science 74 (4): 501–26.

———. 2008. ‘Some Neural Networks Compute, Others Don’t’. Neural Networks: The Official Journal of the International Neural Network Society 21 (2-3): 311–21.

———. 2009. ‘Computationalism in the Philosophy of Mind’. Philosophy Compass 4 (3): 515–32.

———. 2010. ‘The Mind as Neural Software? Understanding Functionalism, Computationalism, and Computational Functionalism’. Philosophy and Phenomenological Research 81 (2): 269–311.

Putnam, Hilary. 1988. Representation and Reality. MIT Press.

Searle, John R. 1990. ‘Is the Brain a Digital Computer?’ Proceedings and Addresses of the American Philosophical Association 64: 21–37.

Siegelmann, Hava T. 2003. ‘Neural and Super-Turing Computing’. Minds and Machines 13 (1): 103–14.

Sprevak, Mark. 2010. ‘Computation, Individuation, and the Received View on Representation’. Studies in History and Philosophy of Science. Part B. Studies in History and Philosophy of Modern Physics 41 (3): 260–70.

Turing, Alan M. 2004 [1936]. ‘On Computable Numbers, with an Application to the Entscheidungsproblem’. In The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life plus The Secrets of Enigma, edited by B. J. Copeland. Oxford: Oxford University Press.