Essays and Stories
by Seyed P. Razavi

Are our minds like computers? [1]

Human beings have used different metaphors to explain the mind over the years. Plato talked about a charioteer guiding two horses: reason controlling the horses of desire and spiritedness. Pre-industrial thinkers talked about hydraulics. Then came steam engine metaphors and telegraph switchboards. Now, of course, we have the ubiquitous computer analogy. In this and later posts, I explore to what extent ‘the mind is like a computer’.

How the brain is not like a computer

Let’s start by stating the ways the brain is not like a computer. There are the obvious differences in the materials and their organisation. The neurons in the brain interconnect in ways that no human creation does. Nor are the hundred trillion (10¹⁴) connections between neurons like the wirings of silicon chips. Their firing strengths are real-valued quantities that we cannot specify by finite means [1]. They’re analogue, not digital.
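To see the digital half of that contrast concretely, here is a small illustration of my own (not from Siegelmann): a digital machine names numbers with finitely many bits, so even a value as simple as 0.1 is stored only as its nearest representable approximation.

```python
from decimal import Decimal

# A double-precision float is a finite string of bits, so it can pick out
# only finitely many values exactly. 0.1 has no exact binary representation;
# printing the stored value via Decimal reveals the approximation.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```

An analogue quantity is not confined to such a finite menu of values, which is the heart of Siegelmann’s point.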

No two brains, even those of identical twins, are the same. We talk about general patterns and behaviour in the brain using statistics, but at scales much coarser than the actual neuronal groupings.

Each brain is unique in ways that make a nonsense of the idea that it is following instructions like a computer. Evolution and development make each brain rich in individual variability. The complexity of its activity is beyond anything human beings have devised. This variability is essential to how the brain deals with the diverse situations of everyday animal life.

There are also special features of the brain that distinguish it further from computers. First, the brain processes many signals from the environment, the body and other cognitive processes in parallel, in ways most digital computers cannot. It regulates biological functions as it moves the body. It categorises perceptions and manages memory in ways we don’t yet understand in full.

Second, the brain contains a core ‘value system’ which tells the nervous system that some event is significant. This changes the strength of various synaptic connections, aiding learning and adaptation. The brain of each animal suits its particular body type. Altogether these form a set of unique constraints on animal development and species-specific perception.

The final feature that makes the brain startlingly different to any computer is ‘reentry’. Reentry may be the most significant feature of the way the brain works. It is also difficult to appreciate. Edelman and Tononi [2] provide a metaphor that may help:

Imagine a string quartet. Each individual player improvises according to their own ideas and environmental cues. There is no score, so each player provides their own tune. At first, these are not coordinated with the other players. Now imagine the players become connected to one another through a mesh of fine threads. Every motion by one player would be immediately conveyed to the others through this mesh. Each player would then take these as further cues to guide their own playing. Over time these signals would lead to coordination. The quartet produces an integrated sound. Whilst each player would remain independent, they would become ever more coordinated. New emergent tunes would arise. The music would become more coherent, all without a conductor directing the players.

Signals in the brain, particularly in the thalamocortical meshwork, are in constant parallel interchange. Recursion of signals coordinates activity across interconnected areas, mapping them together in space and time. Unlike a feedback loop, there are many parallel paths and nothing like an error function aiding synchronicity. Yet widespread synchronisation is its dramatic consequence. Dispersed neurons over a wide area are thus connected by reentry. An integrated system of neuronal connections across different brain regions emerges. This helps explain perceptual categorisation and timely motor responses.
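As a loose analogy only (a toy of my own, not Edelman and Tononi’s model), coupled-oscillator models show how global synchrony can emerge from mutual signalling alone. In the classic Kuramoto model sketched below, each oscillator nudges its phase towards the others’; coherence rises from near zero to near one with no conductor and no error function.

```python
import math
import random

# Toy Kuramoto model: N oscillators, each with its own preferred tempo,
# each adjusting only to the signals it receives from the others.
random.seed(0)
N, K, dt = 50, 2.0, 0.05                            # population, coupling, time step
freqs = [random.gauss(1.0, 0.1) for _ in range(N)]  # each player's own 'tune'
phases = [random.uniform(0.0, 2 * math.pi) for _ in range(N)]

def coherence(ph):
    """Order parameter r in [0, 1]: low is incoherent, near 1 is synchronised."""
    re = sum(math.cos(p) for p in ph) / len(ph)
    im = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(re, im)

print(f"coherence before: {coherence(phases):.2f}")  # low: no ensemble yet
for _ in range(2000):
    phases = [p + dt * (f + K * sum(math.sin(q - p) for q in phases) / N)
              for p, f in zip(phases, freqs)]
print(f"coherence after:  {coherence(phases):.2f}")  # near 1: one ensemble
```

This is not reentry itself, which involves massively parallel recursive signalling between neuronal maps rather than simple phase coupling; it only illustrates how coordination can arise without central control.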

These features differentiate the way the brain works from the way a computer works. There is no conductor and no central processing unit in the brain. It dwarfs computers in scale of complexity and ways of organisation. Digital computers are closer to telegraph switches and hydraulic pumps than they are to the animal brain. So thinking about brains as computers seems misguided.

Is cognition based on computation?

Not so fast, you might say. Sure, the brain is not like a computer in that sense. But that’s not what we’re talking about when we say the mind is computational. Or, more modestly, that cognition is at least partly computation. A brain, or an artificial system, capable of cognition is in some sense computing. Perception, language processing, and reasoning are paradigmatic cases of computation: good examples of symbol processing and rule-following. Let’s call this the Computational Basis of Cognition thesis [3].

To show why this view is mistaken, Daniel Hutto and his collaborators argue that it rests on circular reasoning. Computation depends on representation. Representation in turn depends on socio-cultural factors. Socio-cultural factors depend on cognition. So claiming cognition depends on computation leads to a vicious circle.

Let’s unpack this a bit, starting with computation depending on representation. This is the view that computations are always operations carried out on symbols. Symbols have two essential properties: representation and syntax.

Words and numbers are quintessential symbols. A word stands for something else, as when ‘bird’ represents something out in the world with wings, a beak and so forth. Numbers expressed as digits also stand for quantities. In the most basic sense, a number stands for how many things we can count.

Syntax is how symbols relate to each other: how we infer what word fits into a sentence, or how to apply a series of arithmetical operations. When I say ‘X is wet’, you know what kind of word should fit where I have placed X (e.g. a noun). Likewise, syntax is what makes ‘10 + 1’ make sense to those familiar with basic arithmetic, whilst ‘x++’ won’t make sense unless you’re familiar with certain programming languages.
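Whether a string is well-formed is exactly the kind of thing a parser checks without any appeal to meaning. A quick illustration of my own in Python: the same string can be good syntax in one symbol system and gibberish in another.

```python
import ast

# Syntax without semantics: Python's parser accepts or rejects a string
# purely on its form, knowing nothing about what the symbols stand for.
for expr in ("10 + 1", "x++"):
    try:
        ast.parse(expr, mode="eval")
        print(f"{expr!r} is well-formed Python")
    except SyntaxError:
        print(f"{expr!r} is not well-formed Python (though it is fine in C)")
```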

Two important assumptions underlie the representational theory of computation. One is that a symbol is picked out as an individual partly by its participation in syntax. The symbol on its own doesn’t give us everything we need to know to make sense of it.

Consider the word ‘and’. On its own it is not much use. Placed between two words it conjoins them. Depending on what kinds of words it joins together, its function changes. It may be enumerating a list of objects or describing the shared properties of one object. It may be putting together two different events or two separate clauses in a sentence. With the right punctuation it may be a prompt for more information (‘and?’).

Second, and as important, symbols are partly picked out by what they are about. The non-semantic part of the symbol is not enough to determine what role a symbol is playing in a computational function. The content matters as much as the syntax.

For example, consider an electrical system which receives 0V or 5V as inputs [4]. It will output 5V only if both its input nodes are at 5V. This seems like classic AND-gate behaviour. But that reading starts with the assumption that 5V represents ‘1’ or ‘true’. If instead 5V were to represent ‘0’ or ‘false’, then the system could be implementing an OR-gate.
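Here is a small sketch of Sprevak’s point in Python (the device and the two conventions are illustrative, not from the original). The physical behaviour is held fixed; which truth function the device computes depends entirely on the encoding we choose.

```python
# One physical device, two computations, depending on interpretation.
def device(in_a: float, in_b: float) -> float:
    """Physical behaviour: output 5V only when both inputs are 5V."""
    return 5.0 if (in_a == 5.0 and in_b == 5.0) else 0.0

# Convention 1 reads 5V as true; convention 2 reads 5V as false.
enc1, dec1 = {True: 5.0, False: 0.0}, {5.0: True, 0.0: False}
enc2, dec2 = {True: 0.0, False: 5.0}, {5.0: False, 0.0: True}

for a in (True, False):
    for b in (True, False):
        under_1 = dec1[device(enc1[a], enc1[b])]
        under_2 = dec2[device(enc2[a], enc2[b])]
        assert under_1 == (a and b)   # under convention 1 it is an AND-gate
        assert under_2 == (a or b)    # under convention 2 it is an OR-gate
print("Same voltages, different computations.")
```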

We can’t make sense of computation without at least some guidance from the content. This content is the representational component of the symbol. The syntax computers manipulate isn’t enough to tell us what computation is being performed. As Fodor puts it, “there is no computation without representation” [5].

If this is so, then how do we make sense of representation itself? What determines that 5V represents ‘0’ or ‘1’? At least when it comes to computers, we appeal to norms and practices outside of the computation itself. The engineer sets what the variable represents. By social convention, we agree that 5V represents ‘1’. It could have been otherwise, but it is what we as a community have decided to mean by 5V.

We can now see the looming vicious circle. If representation depends on socio-cultural practice, what does socio-cultural practice depend upon? It depends on cognition. But we have said cognition depends on computation, and so on around the circle.

There are a few ways out of this. The temptation may be to deny the circle is vicious: we bootstrap cognition with representation drawn from socio-cultural practices as basic as gestures. In effect, we say ‘no cognition without sociality’. Thinking depends on language, and language depends on culture. Whatever non-social animals are doing, it ain’t thinking. This strikes me as unnecessarily chauvinistic.

Another way is to deny Hutto et al.’s claim that there are no tenable naturalised theories of content. In other words, content does not have to depend on socio-cultural practices at all; we can find an explanation of semantics some other way. Perhaps one of the more promising avenues lies with teleosemantic or phenomenal intentionality theories.

Yet another way out may be to deny that computation is always about symbol manipulation. Our models may be inescapably symbolic only because of the way we currently build them. Connectionism holds that cognition doesn’t depend on symbolic representation [6]. Some connectionists deny cognition is computational at all; others point towards a kind of biocomputing that lacks the dependency on representation.
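To give a feel for the contrast, here is a textbook toy of my own choosing (not an example from Piccinini): a tiny network whose weights realise XOR. Nothing inside it functions as a symbol for the operation being performed; the behaviour is carried by numbers spread across connections.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Hand-picked weights (w_a, w_b, bias) realising XOR with one hidden layer.
HIDDEN = [(20.0, 20.0, -10.0),    # unit roughly computing OR(a, b)
          (-20.0, -20.0, 30.0)]   # unit roughly computing NAND(a, b)
OUTPUT = (20.0, 20.0, -30.0)      # output roughly computing AND of the two

def net(a: int, b: int) -> int:
    h = [sigmoid(wa * a + wb * b + bias) for (wa, wb, bias) in HIDDEN]
    return round(sigmoid(OUTPUT[0] * h[0] + OUTPUT[1] * h[1] + OUTPUT[2]))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, net(a, b))    # prints the XOR truth table
```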

Provisional conclusion

At this point, I hope to have at least set out the limits of talk about the mind being like a computer. We can see that thinking of the brain as a computer may not be particularly helpful. There is also some explaining to do when we say cognition is based on computation, at least if we want to avoid a vicious circle in which we presume the very thing we are arguing for.

In the following posts, I will look at a couple of theories that put forward non-semantic accounts of computation as the basis of cognition: first, the mechanistic computationalism advanced by Gualtiero Piccinini; then the functional computationalism argued for by David Chalmers. Both attempt to break the vicious circle by removing the dependence on semantics. This may lead to a different understanding of how computationalism may yet be helpful when thinking about the mind.

Notes

[1] Siegelmann, 2003, p. 105.

[2] Edelman and Tononi, 2008, p. 49.

[3] Hutto et al., 2018, p. 272.

[4] From Sprevak, 2010, cited in Hutto et al., 2018, p. 273.

[5] Fodor, 1987, p. 180.

[6] Piccinini, 2009, pp. 6–7.

References

Edelman, Gerald, and Giulio Tononi. 2008. A Universe of Consciousness: How Matter Becomes Imagination. Basic Books.

Fodor, Jerry A. 1987. Psychosemantics: The Problem of Meaning in the Philosophy of Mind. MIT Press.

Hutto, Daniel D., Erik Myin, Anco Peeters, and Farid Zahnoun. 2018. ‘The Cognitive Basis of Computation: Putting Computation in Its Place’. In The Routledge Handbook of the Computational Mind, edited by Mark Sprevak and Matteo Colombo, 272–82. London: Routledge.

Koch, Christof. 2019. The Feeling of Life Itself: Why Consciousness Is Widespread but Can’t Be Computed. MIT Press.

Piccinini, Gualtiero. 2009. ‘Computationalism in the Philosophy of Mind’. Philosophy Compass 4 (3): 515–32.

Siegelmann, Hava T. 2003. ‘Neural and Super-Turing Computing’. Minds and Machines 13 (1): 103–14.