
Human Echolocation: FlashSonar and Seeing with Our Ears

By Jillian Nauss

Jillian Nauss is a first-year student in the Social and Cultural Analysis doctoral program at Concordia University, Montreal. The main focus of her research is on the April 2020 mass shooting in Nova Scotia as a form of collective trauma and its effects on emotions, identity, and community resilience.

In his classic essay, “What Is It Like to Be a Bat?”, philosopher Thomas Nagel (1974) contends that the bat, due in part to its reliance on echolocation, is “a fundamentally alien form of life” (p. 438, italics in original). He goes on to explain that “bat sonar, though clearly a form of perception, is not similar in its operation to any sense that we possess, and there is no reason to suppose that it is subjectively like anything we can experience or imagine” (Nagel, 1974, p. 438). But how valid is this claim? Are we, as humans, truly incapable of this ability? Or have we merely been ignorant of how to develop and hone it? In the paper that follows, I explore these questions and argue that echolocation is a natural human sense in the same way that touch, sight, smell, hearing, and taste are. My paper begins with a history of our knowledge of human echolocation and then explains how this capacity can be defined and employed today with the help of Daniel Kish, an expert human echolocator. Finally, I conclude with a brief description of the future implications of this sensory ability.

According to Thaler and Goodale (2016), the term ‘echolocation’ was first used by a Harvard physiologist by the name of Donald Griffin, who used the word to refer to bats’ ability to avoid obstacles in the dark. Since then, dolphins, whales, and other animals have, in addition to bats, been extensively studied for their echolocating capacities (Schwitzgebel & Gordon, 2000). Research into humans’ capacity to echolocate, however, is limited, especially within the field of anthropology (Stoffregen & Pittenger, 1995; Thaler & Goodale, 2016); as a result, most of our current knowledge about human echolocation comes from psychology and neuroscience. Yet this is not to say that echolocation in humans went unnoticed previously, only that it was addressed using different conceptions of the notion. Indeed, according to some researchers, the French philosopher Denis Diderot was one of the first to write about the phenomenon, in 1749 (Supa et al., 1944). Diderot relates that he had a blind friend who could perceive not only obstacles in his way but also their distance from him (Supa et al., 1944). Diderot attributed this ability to avoid obstacles to “the action of air on [the blind man’s] face – that is by the increased sensitivity of the facial nerves and end-organs” (Supa et al., 1944, p. 133). In other words, Diderot believed that changes in the air against the individual’s face could account for this perception. This theorization of air pressure and ‘the distance sense’ soon became not only accepted but widely supported by others in the field under the term “facial vision” (Supa et al., 1944; Kellogg, 1962; Schwitzgebel & Gordon, 2000).

As Diderot’s theory and explanation took hold, experimentation into the phenomenon began. A German psychologist named Heller is credited with beginning this scientific inquiry (Supa et al., 1944). Through his experiments, he concluded “that ‘the perception of changes in the sound of [the blind person’s] footsteps leads to careful attention for sensations of pressure in the forehead. If these characteristic sensations then arise, [the blind person] is sure that an obstacle is in his path and he turns aside in good time’” (as quoted in Supa et al., 1944, pp. 134-135). The American psychologist William James hypothesized that pressure sensations in the tympanic membrane might explain the sense of obstacles, a hypothesis then tested by his colleague Dresslar (Supa et al., 1944). From these experiments, Dresslar concluded that “the basis for judgment [that is, perception of the environment] was due to differences in sound” (Supa et al., 1944, p. 135).

Around this time – that is, at the beginning of the twentieth century – some researchers began to refer to facial vision as a sixth sense. The idea was first introduced by the French ophthalmologist Émile Javal, who contended that this sixth sense “was akin to touch and aroused by ether waves,” while the German psychologist Hauptvogel supposed it was the result of “stimulation of the ear drum [sic] by some mysterious substance in the ether” (Supa et al., 1944, p. 135). In short, these researchers could neither agree on which organ produced the sensation nor determine whether a single organ was responsible.

While some researchers continued to argue that facial vision derived from pressure sensations and temperature changes in the nerves of the face, others began focusing their attention on auditory stimulation and object perception. Through his own work with the blind, Pierre Villey, a French pedagogue, came to believe that facial vision was closely associated with sound (Supa et al., 1944). Specifically, he believed that “when there is some change in these sounds [that is, the sounds all around us in our environment] we interpret an obstacle between us and the source of the sound” (Supa et al., 1944, p. 137). Feelings of pressure on the face, he contended, were only an illusion (Supa et al., 1944). As a result, despite years of inquiry, “not only are blind who possess the ‘sense of obstacles’ unable to explain the basis of their performance, but, as this review shows, the investigators of the phenomenon are themselves unable to come to any agreement regarding it” (Supa et al., 1944, p. 138). For these reasons, psychologists at Cornell University, led by Dallenbach, began a three-part investigation into facial vision.

In a series of investigations known throughout the echolocation literature as the Cornell Studies, Dallenbach and his colleagues made significant discoveries regarding facial vision in the blind that inform our knowledge of human echolocation today. In their first study, comprising multiple experiments and published in 1944, Supa, Cotzin, and Dallenbach reached two major conclusions regarding facial vision. First, “the pressure theory of the ‘obstacle sense,’ insofar as it applies to the face and other exposed areas of the skin [as Diderot suggested], is untenable,” and second, “aural stimulation is both a necessary and a sufficient condition for the perception of obstacles by our [participants]” (Supa et al., 1944, p. 183, emphasis in original). From there, Worchel and Dallenbach (1947) examined facial vision in deaf-blind individuals. They found that deaf-blind participants neither had nor were capable of learning an ‘obstacle sense,’ and that the cutaneous surfaces of the external ears likewise lacked the ability to make this perception (Worchel & Dallenbach, 1947). As a result, they concluded that “the aural mechanism involved in the perception of obstacles by the blind is audition” and that, accordingly, “auditory theory, sustained by the results of this study, should no longer be regarded as theory but as established fact” (Worchel & Dallenbach, 1947, p. 553).

Spurred on by these findings, Cotzin and Dallenbach (1950) continued to investigate the ‘obstacle sense’ as it related to the pitch and loudness of sound. They determined that continuous sounds, such as a shhh, and intermittent sounds, such as tongue-clicks, were both adequate for gathering environmental information (Cotzin & Dallenbach, 1950). From this they claimed that pitch, and not loudness, was both a necessary and a sufficient condition for blind individuals to perceive objects (Cotzin & Dallenbach, 1950). Cotzin and Dallenbach (1950) attributed the importance of pitch in the ‘obstacle sense’ to the Doppler effect: as the perceiver moves toward an obstacle, the position of the listener relative to the reflecting surface changes, and with it the perceived pitch of the echo. These results were further supported by Winthrop Kellogg (1962), who suggested that auditory scanning helped improve echolocating capabilities. Put simply, Kellogg (1962) noticed that, unlike the blindfolded sighted people in his study, blind participants would move their heads in order to better gauge the echoes bouncing off an obstacle. In addition to this scanning movement, Kellogg (1962) discovered that, through echolocation, blind participants could gather knowledge about an object’s distance, size, texture, and density. In fact, blind participants could “perceive difference in distance better than a [sighted] person using only one eye” (Kellogg, 1962, p. 402). The results of Kellogg’s (1962) study indicated that “the echoes returned from many of these substances are sufficiently distinctive for skilled observers [i.e., blind persons with echolocating experience] to identify them” (p. 404). In other words, blind participants, more than their blindfolded sighted counterparts, could gather information about object size, texture, and density by emitting a sound and listening for its echoes.
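The Doppler relation that Cotzin and Dallenbach invoked can be made concrete with a little arithmetic. For a walker who both emits a sound and hears its reflection from a stationary surface, standard acoustics gives the echo frequency as f′ = f(v + u)/(v − u), where v is the speed of sound and u the walking speed. The sketch below is purely illustrative: the speed of sound, walking speed, and click frequency are assumed values, not figures from the studies cited above.

```python
# Illustrative Doppler shift of an echo heard by a walker approaching a wall.
# Assumed values: speed of sound ~343 m/s in air; walking speed 1.4 m/s;
# a 3 kHz component of a tongue-click. None of these come from the studies.
V_SOUND = 343.0  # metres per second

def echo_frequency(f_emitted: float, walker_speed: float) -> float:
    """Frequency of the echo from a stationary surface as heard by a
    moving emitter/listener. The surface receives a raised frequency
    (moving source), and the walker hears that reflection raised again
    (moving observer): f' = f * (v + u) / (v - u)."""
    v, u = V_SOUND, walker_speed
    return f_emitted * (v + u) / (v - u)

click = 3000.0  # Hz (assumed click component)
shift = echo_frequency(click, 1.4) - click
print(f"pitch rises by about {shift:.1f} Hz while walking toward the wall")
```

At walking speed the shift is only a few tens of hertz, which is consistent with the Cornell finding that it is pitch change, not loudness, that carries the information: the cue is subtle but audible.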

Today, echolocation is understood as operating through pulse-to-echo relations. A pulse refers to a sound that is generated by the perceiver and heard directly from its source (Stoffregen & Pittenger, 1995). An echo, on the other hand, refers to the same sound registering in the ear after reflecting off an object or surface (Stoffregen & Pittenger, 1995). When echolocators click their tongues, for example, they initially hear the sound as produced from their mouths; this is the pulse. Because of the pulse-to-echo gap, there is a time delay between the sound of a pulse and its echo off an object (Downey, 2020). Although this delay is often small enough to go unnoticed, our sense of hearing can distinguish between these sounds in our environment (Downey, 2020). In fact, Downey (2020) argues that, given this gap between sensation and consciousness, “we have to realize that we ‘sense’ many things that we do not consciously perceive” (n.p.). While this may be true for many sighted people, many people who are blind consciously perceive the world using echolocating abilities.
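The size of the pulse-to-echo gap follows from simple arithmetic: sound covers the distance to the object and back, so the delay is 2d/v. A minimal sketch, assuming a speed of sound of roughly 343 m/s in air (the example distances are likewise chosen only for illustration):

```python
# Round-trip delay between a self-generated pulse and its echo.
# Assumes sound travels ~343 m/s in air at room temperature.
SPEED_OF_SOUND = 343.0  # metres per second

def echo_delay_ms(distance_m: float) -> float:
    """Milliseconds between emitting a click and hearing its echo
    from a surface `distance_m` metres away (round trip: 2 * d)."""
    return 2 * distance_m / SPEED_OF_SOUND * 1000

for d in (0.5, 2.0, 10.0):
    print(f"object at {d} m -> echo after {echo_delay_ms(d):.1f} ms")
```

An object two metres away returns its echo in roughly twelve milliseconds, which helps explain why the gap usually escapes conscious notice even though the auditory system registers it.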

One of the leading world experts on human echolocation is Daniel Kish. Completely blind by the age of thirteen months, Kish is not only a self-taught echolocator but also the founder and president of the organization World Access for the Blind (Kish, 1997). Having completed a master’s degree in developmental psychology on the effects of echolocation in blind children, Kish has presented these findings and other accounts of human echolocation at several forums and conferences on movement and navigation (Kish, 1997). He has since earned a second master’s degree, in special education/orientation and mobility (Kish, 1997). Through this work, and due in part to his own childhood cultivation of echolocation, Kish has taught countless children and adults alike how to improve their echolocating capacities through his program at World Access for the Blind (Kish, 1997).

For Kish, there exists a difference between echolocation and the ability he uses and teaches, which he calls FlashSonar (Kish, 2013). Kish (2013) believes that echolocation, unlike FlashSonar, addresses “relatively rudimentary abilities” (n.p.), such as differentiating an object’s distance, size, texture, or density (e.g., Supa et al., 1944; Worchel & Dallenbach, 1947; Cotzin & Dallenbach, 1950; Kellogg, 1962). Kish (2013) contends that “the actual ability of echolocation ranges far beyond this” (n.p.), and proposes FlashSonar as its advanced alternative. In this way, unlike the passivity often associated with our traditional understanding of human echolocation, FlashSonar is a conscious, or active, strategy for interpreting the world (Kish, 2013). This use of active FlashSonar, Kish (2013) explains, is essential in activating the brain’s auditory imaging process (e.g., Thaler, Arnott, & Goodale, 2011). In other words, because the direction of our actions relies on our perception of the world, the quality of that perception matters (Kish, 2013). “The more information we can access,” Kish (2013) explains, “the more adaptive and more varied is our interaction with the world” (n.p.). By actively, and therefore consciously, probing the world using self-generated sounds and their echoes, we gain sensory information about our environment; for people who are blind, this means the ability to move through space both independently and purposefully (Kish, 2013). Through these means, with the help of FlashSonar, we can learn to visualize and navigate the world through sound.

Today, due in part to Kish’s work, people who are blind use echolocation and FlashSonar in their daily lives to navigate unfamiliar places, explore new environments, and even play sports (Downey, 2020). Despite its utility among the blind population, research into human echolocation has all but excluded sighted people (Teng & Whitney, 2011; Downey, 2020). Moreover, most of this research has been confined to psychology, neuroscience, and experimental settings with limited numbers of participants (Teng & Whitney, 2011; Downey, 2020). Nevertheless, Teng and Whitney (2011) have concluded from their work that echolocation is not as rare in humans as we may have imagined. In fact, they argue that sighted people can echolocate just as well as blind people, so long as they are trained properly (Teng & Whitney, 2011). As to why sighted people do not use this capacity, Greg Downey (2020) suggests two reasons: first, unlike blind echolocators, sighted people are often unaware of the experience; and second, sight often dominates other sensory experiences, including pulse-to-echo relations. For these reasons, researchers encourage blind and sighted people alike to develop this capacity alongside their other senses (Stoffregen & Pittenger, 1995).

While most of our knowledge of human echolocation stems from scientific inquiry, these results are still beneficial to our understanding of the sensation. In their study, for example, Thaler, Arnott, and Goodale (2011) discovered that, in the brains of echolocators, echoes were processed in areas that are fundamentally visual in sighted individuals. This not only speaks to the plasticity of the brain but also suggests that echolocation is a distinct feature of our sensory phenomenology (Schwitzgebel & Gordon, 2000). In other words, hearing and sight do not necessarily work in isolation; taken together, they can form our sense of echolocation as its own unique sensory experience.

With increased use, awareness, and support, some researchers argue that echolocation can become not only an accepted sense (whatever its number might be – sixth, seventh, eighth, etc.) but also more commonplace and visible in our society (Teng & Whitney, 2011; Downey, 2020). This is particularly important as technological advances have created new forms of wearable sonar for the blind (Thaler & Goodale, 2016). Compared with these devices, however, “natural echolocation has several clear advantages; it does not need batteries, it is cheap, it cannot be forgotten at home, it does not break – and importantly, it can be learned by children,” as the work of Daniel Kish’s program shows (Thaler & Goodale, 2016, p. 390).

References

Cotzin, M., & Dallenbach, K. M. (1950). “Facial vision”: The role of pitch and loudness in the perception of obstacles by the blind. The American Journal of Psychology, 63, 485-515.

Downey, G. (2020, September 16). Getting around by sound: Human echolocation (first published, 14 June 2011). Retrieved April 3, 2021, from https://neuroanthropology.net/2020/09/16/getting-around-by-sound-human-echolocation-first-published-14-june-2011/

Kellogg, W. N. (1962). Sonar system of the blind. Science, 137, 399-404.

Kish, D. (1997). When darkness lights the way: How the blind may function as specialists in movement and navigation. Los Angeles, CA: California State University.

Kish, D. (2013). FlashSonar program: Learning a new way to see. World Access for the Blind. https://waftb.net/sites/default/files/snr-pgm-rv1113.html

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83, 435-450. doi: 10.2307/2183914

Schwitzgebel, E., & Gordon, M. S. (2000). How well do we know our own conscious experience? The case of human echolocation. Philosophical Topics, 28, 235-246.

Stoffregen, T. A., & Pittenger, J. B. (1995). Human echolocation as a basic form of perception and action. Ecological Psychology, 7, 181-216.

Supa, M., Cotzin, M., & Dallenbach, K. M. (1944). “Facial vision”: The perception of obstacles by the blind. The American Journal of Psychology, 57, 133-183.

Teng, S., & Whitney, D. (2011). The acuity of echolocation: Spatial resolution in sighted persons compared to the performance of an expert who is blind. Journal of Visual Impairment & Blindness, 105, 20-32. doi: 10.1177/0145482X1110500103

Thaler, L., Arnott, S., & Goodale, M. (2011). Neural correlates of natural human echolocation in early and late blind echolocation experts. PLoS ONE, 6(5), 1-16. doi: 10.1371/journal.pone.0020162

Thaler, L., & Goodale, M. A. (2016). Echolocation in humans: An overview. WIREs Cognitive Science, 7, 382-393. doi: 10.1002/wcs.1408

Worchel, P., & Dallenbach, K. M. (1947). “Facial vision”: Perception of obstacles by the deaf-blind. The American Journal of Psychology, 60, 502-553.