Machinic Sensibility: A speculative venture into the sensorial horizon of self-tracking technologies
By Marius Senneville
A second-year PhD student in the Social and Cultural Analysis program at Concordia University, Marius Senneville focuses his research on the overlap of science and technology studies, political economy, and organization studies. His thesis project emphasizes the cultural, organizational, and financial factors influencing how ethical and socially responsible research in artificial intelligence and machine learning is conceptualized and operationalized in entrepreneurial spaces of knowledge production.
In its initial, early 1960s popularization, the idea of a co-extensivity of technology and the human sensorial apparatus—where, for instance, the book was to be seen as an extension of the eye, and electric circuitry as an extension of the central nervous system (McLuhan, 1994)—was mostly approached through the lens of the broader cultural and societal consequences these technological advances had at the scale of a species. Recent developments in the field of self-tracking technologies, in both their cultural and industrial prevalence and the sheer audacity of their integrative pretensions, now urge us to refocus on the more practical and phenomenological aspects of these systems. In effect, what Sun-ha Hong (2016) suggests through the notion of machinic sensibility is a reconfiguration of the reflexive layer through which humans attend to their technologically saturated environment, in favor of a more prominent role for algorithmically produced recommendations—with all that this implies in terms of the opacities and black-boxing dynamics individuals would increasingly be forced to contend with.
Self-tracking technologies, at their core, are systems intent on producing observational data that render both our physiological processes and our social behaviors representable, and on making this purportedly objective information available for users to adjust their behaviors in ways conducive to the “better version of themselves.” This most recent iteration of the transhumanist project of human-machine integration originates within the techno-utopian movement of the “Quantified Self,” with Wired founding executive editor and born-again Christian Kevin Kelly as one of the key figures of this community (Turner, 2008). If the Quantified Self’s initial emphasis was exclusively on “self-knowledge through numbers,” later iterations of the movement have extended this preoccupation with the quantifiable to more qualitative and experiential reporting, as a way to gather even richer streams of data (see for instance Humphrey, 2019). Moreover, as Kelly expresses it in the following quotation, the Quantified Self’s vision is not simply one of statistical or even qualitative appraisal of the physiological, but rather one of a rearticulation of the way humans are able to deal with this information:
“We’re … just not evolved to deal with numbers. Our brains aren’t really good with dealing with numbers, we don’t do statistics very well, we’re not really a number animal. […] But what I think the long term direction of this is, [we want] to use these sensors we’re talking about to give us new senses. To equip us with new ways to hear our body. […] Right now we have to see the data, the charts, the curves, but in the long term where we want to go is, we want to be able to feel, or see, or hear them”
(Kevin Kelly, cited in Hong, 2016: 21-22).
It is this very emphasis on circumventing the traditional channels of information integration that constitutes the core logic of machinic sensibility. Whereas the feedback produced by self-tracking technologies was previously taken in bit by bit and consciously acted upon by the user, in this new paradigm of human-machine integration the same feedback is to be progressively routinized, invisibilized, and eventually folded into the “normal,” pre-reflective sensorial background of the individual. Ultimately, the goal is to “extend” individuals’ sensoriums from their initial, “bounded” physiological apparatus to an ever-expanding panoply of (technological) sensors covering not only their biological markers, but also the indicators of their social, and even economic, well-being.
Notably, machinic sensibility as an end result of self-tracking technology remains very much in its promissory state: a sort of sensorial horizon from which even the most convinced Quantified-Selfers are still quite distant. Yet this state of phenomenological human-machine integration remains the objective that informs an entire industry and populates the dreams of many Silicon Valley billionaires and other transhumanist enthusiasts hell-bent on attaining what they conceive of as the next step in human evolution (Borup et al., 2006). Meanwhile, design experiments such as the Life Automation System (LIAM) and the Ambient Cycle (AC) offer an opportunity to better illustrate where self-tracking technologies intend to go, and how exactly they intend to get there.
Quantified Self enthusiast Tahl Milburn developed the LIAM as a house installation of USB-connected lights whose color frequency varies as a function of his weight, activity, sleep, age, the frequency of his interactions with family members, and even his net worth and the market performance of his investments (Hong, 2016; see also Quantified Self, 2015). Another example of this more “environmental” logic of communication is the Ambient Cycle, a menstrual tracking system developed by Sarah Homewood and Anna Vallgårda (2020), two scholars at the IT University of Copenhagen. Similar to the LIAM, it uses a system of softly glowing ambient lights to convey, based on self-reported data from the user, its predictions as to when the next phase of the menstrual cycle will occur.
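To make this “environmental” logic of communication more concrete, the following sketch imagines how a LIAM-style installation might translate several self-tracked metrics into a single light color. All metric names, bounds, and weights here are invented for illustration; the actual mappings used by the LIAM are not public.

```python
# Hypothetical sketch of a LIAM-style ambient feedback loop: several
# self-tracked metrics are normalized against personal bounds, blended
# into a single "well-being" score, and rendered as a light color.

def normalize(value, lo, hi):
    """Clamp a raw reading into the 0.0-1.0 range relative to personal bounds."""
    if hi == lo:
        return 0.0
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def wellbeing_score(readings, bounds, weights):
    """Blend normalized metrics into one weighted score in [0, 1]."""
    total = sum(weights.values())
    score = 0.0
    for name, raw in readings.items():
        lo, hi = bounds[name]
        score += weights[name] * normalize(raw, lo, hi)
    return score / total

def score_to_rgb(score):
    """Map the score onto a red (0.0) -> green (1.0) gradient for the lights."""
    return (int(255 * (1 - score)), int(255 * score), 0)

# Invented example readings: hours slept, step count, daily net-worth change.
readings = {"sleep_hours": 7.5, "steps": 9000, "net_worth_delta": 0.01}
bounds = {"sleep_hours": (4, 9), "steps": (0, 12000), "net_worth_delta": (-0.05, 0.05)}
weights = {"sleep_hours": 2.0, "steps": 1.0, "net_worth_delta": 1.0}

rgb = score_to_rgb(wellbeing_score(readings, bounds, weights))
```

Even in this toy version, the interpretive choices discussed below are already at work: someone had to decide what counts as “enough” sleep, how heavily finances should weigh against physiology, and which colors signify well-being.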
In both the LIAM and the AC, the systems’ designers have tried to leverage the visual sense to convey not so much statistical values or an explicit statement, but rather something more implicit, which we construe here as a more “mood-based” channel of communication. Through this more environmental, even “atmospheric” form of communication, the goal is in a way to hijack the users’ sensorial apparatus and directly “plug in” to what designers understand to be their “pre-reflective brain.” A fully realized machinic sensibility would thus promise to create a continuous, uninterrupted loop between the devices’ recommendations and the users’ centers of decision-making—in effect bypassing such issues as the lack of motivation users might suffer from (which would obviously impede their acting upon these recommendations), and thus realizing the proverbial “augmented” self.
This hijacking of the sensorial and the decision-making is presented by Hong as a redistribution of epistemic authority from the human to the machine. His point is not so much to argue for the disappearance or “algorithmic absorption” of a previously pure and Cartesian consciousness, somehow entirely bereft of “outside” (i.e., social) interference. Rather, the argument for machinic sensibility is a Foucauldian one, which sees in the emergence of self-tracking technologies a reconfiguration of the knowledge systems we entrust with the truth(s) of our individualities: “Machinic sensibility is not new, but each historical rendition of its promises reorganises the social distribution of epistemic authority across machines and humans, texts and bodies” (Hong, 2016: 17; see also Foucault, 2012). We have always appropriated outside influences in the constitution of our selves, and the appearance of self-tracking technologies is but one further instance of a redistribution of epistemic authority as to whose voices should be listened to in this process of finding our own.
In the case of self-tracking technologies, and the potential attainment of machinic sensibility, this redistribution is formulated as a general skepticism toward human intuition and experience, and a concomitant elevation of the inductive capacities of big data analytics combined with supposedly “raw” observational data. While some adherents stress the epistemic value of qualitative data, self-tracking as a sociotechnical horizon very much subscribes to the superiority of quantitative, “solid” empirical data. Such data are seen as confronting us with unassailable truths about the world—and about ourselves. But machinic sensibility involves more than the Modern hypervaluation of supposedly objective data. In the achievement of this horizon, self-tracking enthusiasts are in fact looking to overcome two key challenges of human-machine interaction—namely, the quantitative difficulty of processing an overwhelming amount of constantly renewed streams of information, and the qualitative challenge of the sheer unintelligible and/or uninterpretable complexity of this data (see Kitchin, 2014). While these issues were previously construed as inherent to communication technology itself, and to big data specifically, self-tracking discursively rearticulates these flaws so as to reassign them to the human users rather than to the data infrastructures per se—an operation which then allows it to present the potential of machinic sensibility as the solution to these human sensorial and interpretive deficiencies (Hong, 2016).
One key feature of the attainment of machinic sensibility that requires particular attention is the way it confronts individuals with its own specific kinds of opacities and black-boxing dynamics and, through them, ultimately enforces universalizing understandings of both “the better version of humankind” and the human sensorium. In effect, self-tracking technologies, like most other systems making use of machine learning and big data analytics, operate in an ambiguous space where opacity becomes, in a way, the precondition for intelligibility: the interpretive abilities of the system are directly attributed some kind of “magical quality” (see Elish & boyd, 2018). That is, users are asked to give full credence to the inductive capabilities of these systems precisely insofar as they originate from algorithmic architectures of such complexity and recursive sophistication that even the systems’ designers (much less their users) cannot really expect to comprehend the exact, step-by-step workings of their computations (Burrell, 2016). In machinic sensibility, this opacity works in two closely intertwined ways, analytically distinguishable but operationally conjoined and acting at one and the same time. On the one hand, self-tracking systems are often sensing and recording phenomena whose complexity or “modalities of existence” are simply incomprehensible to humans—for instance, the skin’s galvanic (electrodermal) activity or, in the case of the LIAM, the exact and real-time fluctuations of an individual’s financial worth (see Kyriakou et al., 2019; Hong, 2016; Quantified Self, 2015). To the extent that self-tracking technologies have to translate for their human users phenomena which those users cannot grasp in their original form, these systems are tasked with transforming and interpreting the empirical and “objective” data that they capture.
To achieve this translation from the imperceptible to the intelligible, the systems’ engineers necessarily have to make use of some kind of added, external interpretive framework: a knowledge system that also acts as a complex of biases, cultural tropes, and other representations which come to determine, for instance, what counts as “good sleep.” This complex is determined not only by the idiosyncrasies of the engineers, but also by more abstract conceptualizations of what is constitutive of “the better self,” which combine neoliberal understandings of adaptability, responsibility, and resiliency (Chandler & Reid, 2016) with the ever-shifting scientific consensus over the optimal solutions to various sorts of biomedical issues—for instance with regard to nutrition, or to the “healthy” amount of sleep one requires.
On the other hand, the opacity of machinic sensibility also operates through the ability of self-tracking systems like the LIAM or the Ambient Cycle to communicate with their users in this more “mood-based” fashion. Although these two specific devices currently leverage sight to communicate through a relatively clear-cut color code, we can easily imagine future devices that would make use of other sensorial channels and where the gradation would be more nuanced. We can think of Elon Musk’s venture in brain-machine interfaces, Neuralink, which purports to bypass the habitual sensorial bottleneck through the installation of electrodes directly within the brain (Musk, 2019); or, more prosaically, of music from a pre-determined genre generated in real time as a function of the received data. In effect, the more these devices are tuned to provide an atmospheric or “mood-based” channel of communication, and the more abstract the transmission of the system’s recommendations, the stronger the opacity users will confront when they try to clarify for themselves the exact content of the transmission, and what it is exactly that the system is trying to make them accomplish. Furthermore, and similarly to the way engineers import their own cultural specificities into their work, designers bring their own representations of the meanings or feelings associated with a certain color, music, or general ambience. That is, in the act of abstracting a recommendation into a certain “mood” transmitted through whatever combination of our five canonical senses, designers come to enforce and universalize a certain sensorial framework. For instance, what seems to be a soothing change of ambient lights to Western users could be interpreted in an entirely different way by people from somewhere else—or, for that matter, from a different epoch.
In this way, and according to currently prevailing gender ideologies, blue is a suitable color for little boys just as pink is for girls, and men are cool (meaning “rational”) while women are hot (meaning “sensual” and “emotional”). In premodernity, however, the associations were reversed: women were supposed to be cool and men hot, and little boys were clothed in pink frocks (pink being a diminutive form of red, and red being associated with men through its connotations of heat), while little girls were decked out in sky blue (due to the association of this color with the Madonna) (see Classen, 1998: ch. 3). As the historical and cultural contingency of this example suggests, the horizon of machinic sensibility clearly incorporates the potential for universalizing views over the definition of “the better version of ourselves,” but also over the human sensorium—two issues which should definitely be taken into consideration in the further development of self-tracking technologies.
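The opacity-through-abstraction dynamic described above can be illustrated with a minimal sketch (all names and thresholds are hypothetical): a fine-grained recommendation score is quantized into a small set of ambient “moods” before it reaches the user, so that many distinct internal states become indistinguishable at the sensory surface.

```python
# Hypothetical sketch: abstracting a recommendation into a coarse "mood"
# channel. The user perceives only the mood, not the underlying score,
# so the mapping cannot be inverted from the sensory signal alone.

MOODS = ["calm", "neutral", "alert"]

def to_mood(recommendation_score):
    """Quantize a recommendation score in [0, 1] into one of three moods."""
    index = min(int(recommendation_score * len(MOODS)), len(MOODS) - 1)
    return MOODS[index]

# Two very different internal states surface as the same ambient signal:
low, mid = to_mood(0.05), to_mood(0.30)
```

The lossiness is the point of the illustration: whatever culturally specific meanings the designer attaches to “calm” or “alert” travel with the signal, while the precise recommendation behind it remains out of the user’s reach.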
References
Borup, M., Brown, N., Konrad, K., & Van Lente, H. (2006). The Sociology of Expectations in Science and Technology. Technology Analysis & Strategic Management, 18(3–4), 285–298. https://doi.org/10.1080/09537320600777002
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512
Chandler, D., & Reid, J. (2016). The neoliberal subject: Resilience, adaptation and vulnerability. Rowman & Littlefield International.
Classen, C. (1998). The color of angels: Cosmology, gender, and the aesthetic imagination. Routledge.
Elish, M. C., & boyd, danah. (2018). Situating methods in the magic of Big Data and AI. Communication Monographs, 85(1), 57–80. https://doi.org/10.1080/03637751.2017.1375130
Foucault, M. (2012). The courage of truth. The government of self and others II: Lectures at the Collège de France, 1983–1984. Picador: Palgrave Macmillan.
Homewood, S., & Vallgårda, A. (2020). Putting Phenomenological Theories to Work in the Design of Self-Tracking Technologies. Proceedings of the 2020 ACM Designing Interactive Systems Conference, 1833–1846. https://doi.org/10.1145/3357236.3395550
Hong, S. (2016). Data’s Intimacy: Machinic Sensibility and the Quantified Self. communication +1, 5(1). https://doi.org/10.7275/R5CF9N15
Kitchin, R. (2014). Big Data, new epistemologies and paradigm shifts. Big Data & Society, 1(1). https://doi.org/10.1177/2053951714528481
Kyriakou, K., Resch, B., Sagl, G., Petutschnig, A., Werner, C., Niederseer, D., Liedlgruber, M., Wilhelm, F., Osborne, T., & Pykett, J. (2019). Detecting Moments of Stress from Measurements of Wearable Physiological Sensors. Sensors, 19(17), 3805. https://doi.org/10.3390/s19173805
McLuhan, M. (1994). Understanding media: The extensions of man. MIT Press.
Musk, E. (2019). An integrated brain-machine interface platform with thousands of channels. bioRxiv. https://doi.org/10.1101/703801
Quantified Self. (2015). Tahl Milburn: “How My Life Automation System Quantifies My Life.” Vimeo. https://vimeo.com/147799609
Turner, F. (2008). From counterculture to cyberculture: Stewart Brand, the Whole Earth Network, and the rise of digital utopianism. University of Chicago Press.