“[D]eep learning systems are ‘already pushing their way into real-world applications. Some help drive services inside Google and other Internet giants, helping to identify faces in photos, recognize commands spoken into smartphones, and so much more’. If deep learning networks systematically classify the world’s patterns in ways that are at variance with our ordinary human classifications, and if those networks are lodged in the workings of the technology that organizes and shapes our cognitive lives, then those lives will be organized and shaped by those variant classifications.
“However, surely we will notice this divergence, I hear you say. It is here that a second point becomes relevant. What if the networks in question are fluidly and expertly integrated into our everyday activities, such that they are transparent in use? Imagine such networks operating as part of a cognitive-assistant-style wearable that classifies situations and transmits the results via an optical head-mounted display… such behaviour-guiding technology, even though it enhances cognitive performance, and even though it is operating in cognition central, could be transparent. On some occasions, no doubt, its variant classifications of the world would lead to mismatches to which the human user will be sensitive before anything detrimental occurs. However, it seems just as likely that subtle changes in one’s engagement with the world—changes that, for example, have potentially damaging social consequences for how one classifies others…
“The final aspect of this worrying scenario comes to light once one realizes that [such technologies] are at least on the way to being correctly treated as genuine parts of the user’s own cognitive architecture… that unconsciously guide my behaviour… be part of what I unconsciously believe to be the case, and thus presumably will have the same status as my more familiar, internally realized unconscious beliefs when it comes to any moral judgments that are made about my resulting thoughts and actions.”
– Michael Wheeler
When a skilled person uses some technology without difficulty, she is no longer consciously aware of the technology. It disappears from her awareness just as, in the flow of writing, she doesn't notice the pen in her hand. This lack of active awareness of the technology in use is labelled transparency.
The philosopher Heidegger analysed this everyday use of the tools and objects in our environment, in which we manipulate such equipment in a hitch-free manner. During use, equipment is no longer encountered as an object independent of us. Instead, it becomes part of the process of our normal activity in much the same way as our bodies do.
Consider an example provided by Merleau-Ponty: a blind person uses her cane to feel her way around without being consciously aware of the cane itself. As Wheeler points out, this is not just an example of transparency, although it is that as well. It is also a case of the device being used by the blind person to discover facts about the world. The cane acts like one of her other biological senses. When we see or hear without difficulty, we are not aware of our eyes or ears doing the mediating for us, and likewise for the blind person and her cane. “Put another way, the blind person’s experiential interface is with the world beyond the cane, not with the cane itself” (p. 3).
Wheeler provides two more modern examples of technologies which stand in for sensory inputs, both by augmenting tactile experience. The first is the North Sense wearable, which vibrates to indicate when it is facing magnetic north. The other is the tactile-vision sensory substitution (TVSS) system, which, when worn by a congenitally blind person, converts video images into vibrations on the wearer’s body. What these two technologies do is change the way the person wearing them perceives the world around them. After a period of adaptation, the technologies become transparent to their users. The TVSS wearer doesn’t see through their eyes, nor does the North Sense wearer develop a distinct biological sense for magnetic north, but over time the tactile stimulation comes to be treated in much the same way.
However, it seems to me the similarity only goes so far. The TVSS or tactile wearable may disappear through the trick of familiarity and adaptation, but an attentive person can bring it back to mind simply by remembering the device is present. The same is true of other immersive technologies such as virtual reality, or even more mundane cases such as eyeglasses. However, no matter how much we focus our attention on our actual eyes doing the seeing or our ears doing the hearing, it is impossible to bring them into focus as mediating organs without some external feedback. The difference between mediated (through another sensation) and immediate perception may be negligible in the relaxed, everyday use a person makes of such technologies, but for the attentive person the difference becomes vast, a matter of kind rather than degree.
Even so, transparency is generally considered a desirable goal of interface design. The interface being “‘invisible’ goes hand in hand with ‘seamless, efficient and functionally optimized’ whilst ‘visible’ goes hand in hand with ‘cumbersome, inefficient and functionally suboptimal’” (Wendt, 2013, quoted on p. 5). However, this comes at a price: what is invisible to us works on us in an “unconscious”, non-deliberate manner. As a case in point, consider the way 21st-century media bypasses conscious decision making, “capturing our ‘attention’ without any awareness on our part” (Hansen, 2015, quoted on p. 8). This is certainly a worrying development. The way media operates on machine-to-machine networks at scales and at a tempo beyond our apprehension means “consciousness necessarily lags behind the operational effects of such media” (ibid.).
The epigraph at the start of this post highlights the danger Wheeler sees in decision making being dependent on smart technologies that are transparent to us. If deep learning or other machine networks have potentially undesirable effects on our ability to make our own decisions, then what alternative approaches are available to us?
On this point, Wheeler brings up the role of ‘conversational’ smart technologies. The example provided is a smart building which engages its human inhabitants in constructive dialogue, offering structural and mood-enhancing alterations based on what a person expresses as her needs. Another example might be voice-operated digital assistants such as Alexa.
What is called for, then, according to Wheeler, is technology that is not hidden in the background of our experience but blended in, without being either disruptive or obtrusive.
“We do not always want our smart technology to be muzak—a transparent factor that manipulates our activities in ways that we do not realize; sometimes, we want our smart technology to be music—not for the most part challenging music that interrogates us and makes us feel uncomfortable (although there is certainly a place for such music and for such technology), but music that solicits listening from us, and so in that way shows up in our conscious experience. In other words, just as music is, for the most part, designed precisely so as to attract our conscious attention, but without its invasion of consciousness being evidence of any breakdown in our engagement with it, so, the proposal goes, the kind of smart technology in which we are interested should be designed with a similar aim in mind, that is, to disrupt transparency without disrupting the skilled use of that technology. That way we can check whether the discriminatory verdicts reached by such technology coincide with our current take on the world.”
References
Wheeler, M. (2018) ‘The reappearing tool: transparency, smart technology, and the extended mind’, AI & Society [Online]. DOI: 10.1007/s00146-018-0824-x.