Essays and Stories
by Seyed P. Razavi

Are our minds like computers? [2]

In the previous post, I gave some reasons from neuroscience for thinking the brain is not like a computer. I also raised an issue with treating computation as the basis of cognition: if computation depends on representation, this can lead to a vicious circle. That is a significant challenge to the semantic view of computation. In this post, I present the first of two possible non-semantic views.

Minimal Computationalism

David Chalmers [1] offers a minimal view of computationalism that rests on two theses. The first is the thesis of computational sufficiency. This states that having the right kind of computational structure is enough for possessing a mind, including a wide range of different mental properties. This doesn’t mean everything mental needs such an explanation. But the bar is low enough that the right computational system would have a mind.

The second thesis is that of computational explanation. This underpins much of the work in computational neuroscience and artificial intelligence. On this idea, computation provides a framework for explaining cognitive processes and behaviour. Exactly which kinds of organisational structures count as computational still needs explaining.

A weak notion of computation is that it is information processing [2]. Most organisms gather and process information about their environment. Cognition does involve information processing, but this doesn’t tell us much about how it works. On this view, computation doesn’t have much explanatory power.

It’s important to note that information and computation are different concepts in the history of ideas. They have different underlying mathematical theories. They play different explanatory roles in cognition. Furthermore, the most relevant theories of computation apply to digital computers. It is from the notion that cognition works like a digital computer that the idea of computationalism arose. To find a more explanatory version of computationalism, we need to trace the idea through history.

Is minimal enough?

Our modern idea of computationalism arises from the work of Warren McCulloch and Walter Pitts [3]. They built the idea of the artificial neuron upon the concept of a Turing machine: a hypothetical machine that uses rules to perform mathematical calculations on a set of input variables. McCulloch and Pitts proposed a mathematical theory of the brain as a kind of Turing machine. Their artificial neural network was the basis of John von Neumann’s bold claim that anything that can be unambiguously described can be realised on a suitable neural network [4].
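
To make the idea concrete, here is a minimal sketch of a McCulloch-Pitts unit in Python. It is my own illustration rather than code from any of the cited sources: binary inputs feed a fixed threshold, inhibitory inputs veto firing outright, and familiar logic gates fall out of the choice of threshold.

```python
# A McCulloch-Pitts unit, sketched: binary inputs, a fixed threshold,
# and inhibitory inputs that veto firing outright. Illustrative only.

def mp_neuron(excitatory, inhibitory, threshold):
    """Fire (return 1) iff no inhibitory input is active and the
    count of active excitatory inputs meets the threshold."""
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Logic gates fall out of the threshold choice alone:
AND = lambda a, b: mp_neuron([a, b], [], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [], threshold=1)
NOT = lambda a:    mp_neuron([1], [a], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0
```

Networks of such units can compute any finite boolean function, which is part of what lent von Neumann’s claim its initial plausibility.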

As Gualtiero Piccinini argues, this is a misunderstanding of the Church-Turing thesis [5]. The Church-Turing thesis says that anything mathematically computable can be computed by a Turing machine. It doesn’t follow that Turing machines can simulate everything. Not everything needs to be computational. For example, one of Turing’s motivations was to show that the decision problem for first-order logic had no algorithmic solution [6].

Still, for some, this is an article of faith in science’s eventual capacity to describe all phenomena in the way von Neumann hoped. This view leads to pancomputationalism: roughly stated, everything is describable at some level as computation.

The worry with this kind of wide notion of computation is that it fails to explain cognition. If everything is computation then what is the mind above and beyond everything else? If we say rocks in space are performing computations and so is the brain, this only satisfies a very shallow inquiry into what minds are doing. We’re not saying very much at all when we say everything is computation.

Chalmers provides a useful distinction to show this isn’t a worry, at least in the case of minimal computationalism [7]. Consider the case of digestion. In some sense, the digestive system implements a computation in the way it works. Various digestive rules apply to food inputs and various outputs result. Yet this is irrelevant to its being a digestive system. The same computation simulated on a digital computer wouldn’t make a digestive system. By contrast, a cognitive system is cognitive in virtue of implementing some computation.

Computation without representation

I have raised the worry about computation depending upon representation before. Whether a 5V electrical input means ‘1’ or ‘0’ turns a simple circuit from an AND-gate into an OR-gate, or vice versa. The semantics of the symbol matters for the operational role performed.
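
To see why, here is a small sketch, with invented names, of one physical circuit read under two conventions. Under ‘high voltage means 1’ the device computes AND; under ‘high voltage means 0’ the very same voltages compute OR.

```python
# One physical circuit, two interpretations. Under "high = 1" it is
# an AND-gate; under "high = 0" the same voltages form an OR-gate.

HIGH, LOW = 5.0, 0.0

def gate(v_a, v_b):
    """The physical device: output is high only when both inputs are high."""
    return HIGH if (v_a == HIGH and v_b == HIGH) else LOW

def as_positive(v):  # convention 1: high voltage means '1'
    return 1 if v == HIGH else 0

def as_negative(v):  # convention 2: high voltage means '0'
    return 0 if v == HIGH else 1

for v_a in (LOW, HIGH):
    for v_b in (LOW, HIGH):
        v_out = gate(v_a, v_b)
        p = (as_positive(v_a), as_positive(v_b), as_positive(v_out))
        n = (as_negative(v_a), as_negative(v_b), as_negative(v_out))
        assert p[2] == (p[0] and p[1])  # reads as an AND-gate
        assert n[2] == (n[0] or n[1])   # reads as an OR-gate
```

Nothing in the physics changes between the two readings; only the assignment of symbols to voltages does.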

Chalmers denies such a role for semantics in computation [8]. He claims computations are syntactic; any semantic role is a consequence of the implementation. This keeps the prospects for AI and cognitive science open. We don’t yet have a good enough explanation of computational content, he claims. So if our computational theories depend upon an explanation of content, we will not progress very far.

Instead, Chalmers suggests we should press forward with a common foundation, one that works for both computation and computational content. To achieve this, we need a shared notion of causation: how some causes give rise to certain effects. This common notion would keep the two theories from drifting far apart, yet it wouldn’t hold us up in our pursuit of a computational theory. When a good theory of computational content comes along, we can adopt it because of the shared underlying causal explanation.

A role for functionalism

So far I’ve focused on what computationalism signs us up for under Chalmers’ view. An important element of his view that we still need to consider is its functionalism. Functionalism is the view that the mind is the functional organisation of the brain [9]: mental states are functional states, described by their causal relations. Each functional state plays a role within a network of functional relations.

Consider a fast food burger restaurant, which has roles for the people who work there: taking orders at the point of sale, flipping burgers, frying potatoes, or preparing milkshakes. The organisation of the restaurant determines what kind of service it provides. Likewise, the organisation of the brain determines what kind of mind it provides.

The specifics of Chalmers’ view on the organisation of the mind need not concern us too much for the moment [10]. The more important takeaway is a principle that underpins his functionalism. The principle of organisational invariance [11] states that certain properties of a functional system hold even as the system changes. These invariant properties remain as long as the functional organisation remains.

Reconsider the fast food restaurant from the prior example. As long as you have someone taking orders, someone cooking burgers, and someone making fries, it remains a burger joint. It doesn’t matter whether you chip the potatoes with a machine or by hand using knives. It doesn’t matter whether the patties are fresh or from frozen. But if you stop making burgers and fries, then it is no longer a burger joint.
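
For programmers, organisational invariance resembles coding against an interface: the system is defined by its roles and their relations, not by what fills them. The sketch below is my own rendering of the restaurant example; every name in it is invented for illustration.

```python
# Organisational invariance, sketched: a burger joint is three roles
# wired together. Swap what realises each role and the invariant
# property (being a burger joint) survives; drop a role and it doesn't.
from typing import Callable

class BurgerJoint:
    def __init__(self, take_order: Callable[[], str],
                 cook_patty: Callable[[], str],
                 make_fries: Callable[[], str]):
        self.take_order = take_order
        self.cook_patty = cook_patty
        self.make_fries = make_fries

    def serve(self) -> str:
        return f"{self.take_order()}: {self.cook_patty()} with {self.make_fries()}"

# Two very different realisations of the same organisation:
artisan = BurgerJoint(lambda: "order 1", lambda: "fresh patty",
                      lambda: "hand-cut fries")
chain   = BurgerJoint(lambda: "order 2", lambda: "frozen patty",
                      lambda: "machine-chipped fries")

print(artisan.serve())  # both count as burger joints,
print(chain.serve())    # whatever fills the roles
```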

Chalmers argues that most mental properties are these kinds of invariant properties. The only exceptions are those that partly depend on the external environment [12]. Knowledge, for example, depends on your belief turning out to be a true proposition about the world.

What connects Chalmers’ functionalist view with computationalism are the two theses of minimal computationalism. Each functional state is explained as a computation (the thesis of computational explanation). The functional organisation is the right kind of structure for a mind (the thesis of computational sufficiency). The brain realises the functional organisation; the functional organisation is specified as computation.

Taking stock

Before wrapping things up, there are a couple of concerns about Chalmers’ view worth putting out there [13]. First, functionalism as a whole holds that the material that makes up the system is not critical to its being a mind. It’s the organisation that is important. As long as the organisations of two things are equivalent, both realise the same cognitive process. Yet he also denies that every physical process has this feature, as the case of digestion shows. Changing the material of the digestive system while keeping the same organisation does not guarantee digestion. So what gives cognition its special status? What makes it organisationally invariant in ways that other systems are not?

Following on from that, what if, contrary to Chalmers’ view, the material that makes up the brain turns out to be important, even crucial? As Daniel Dennett says, “in order to detect light… you need something photosensitive” [14]. The neuronal material may turn out to have just the right properties for some kinds of cognition to occur. This doesn’t prevent the brain from being computational. But it may mean functionalism turns out to be false. The property of organisational invariance may not be enough to pick out mental properties.

This post has covered a lot of ground. Hopefully we’re a little clearer on what computationalism is about. We’ve seen there is some disagreement about how much explanatory work computation does. It should be clear that a lot is at stake in the sciences of cognition over whether computationalism is a good explanatory framework. I’ve also considered how it would integrate with the metaphysical position of functionalism.

For the final post on this question, I will consider an alternative non-semantic theory from Gualtiero Piccinini. His approach avoids paying the price of committing to functionalism.

Notes

[1] Chalmers, 2011, p. 323.

[2] Piccinini, 2009, p. 518.

[3] Piccinini, 2010, p. 272.

[4] von Neumann, 1951, p. 311.

[5] Piccinini, 2010, p. 273.

[6] Piccinini, 2004, p. 377.

[7] Chalmers, 2011, p. 328.

[8] Chalmers, 2011, p. 330. c.f. Crane, 1990.

[9] Putnam, 1960, p. 149.

[10] Chalmers (2011) talks about a type of vector-based (combinatorial) state machine. The origins of this idea go back to the 1990s, and it’s not clear whether this is a view Chalmers currently holds.

[11] Chalmers, 1996, pp. 247–9.

[12] Chalmers, 2011, pp. 331–3.

[13] Hutto et al., 2018, pp. 275–6.

[14] Dennett, 1997, p. 73.

References

Chalmers, David J. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.

———. 2011. ‘A Computational Foundation for the Study of Cognition’. Journal of Cognitive Science 12 (4): 323–57.

Dennett, Daniel C. 1997. Kinds Of Minds: Toward An Understanding Of Consciousness. Basic Books.

Hutto, Daniel D., Erik Myin, Anco Peeters, and Farid Zahnoun. 2018. ‘The Cognitive Basis of Computation: Putting Computation in Its Place’. In The Routledge Handbook of the Computational Mind, edited by Mark Sprevak and Matteo Colombo, 272–82. London: Routledge.

Neumann, John von. 1951. ‘The General and Logical Theory of Automata’. In John von Neumann: Collected Works, edited by A. H. Taub, 288–326. Oxford: Pergamon Press.

Piccinini, Gualtiero. 2004. ‘Functionalism, Computationalism, and Mental Contents’. Canadian Journal of Philosophy 34 (3): 375–410.

———. 2009. ‘Computationalism in the Philosophy of Mind’. Philosophy Compass 4 (3): 515–32.

———. 2010. ‘The Mind as Neural Software? Understanding Functionalism, Computationalism, and Computational Functionalism’. Philosophy and Phenomenological Research 81 (2): 269–311.

Putnam, Hilary. 1960. ‘Minds and Machines’. In Dimensions of Minds, edited by Sidney Hook, 138–64. New York, USA: New York University Press.