Wednesday, May 20, 2009
May 2009 – Final Report
listening without ears [project codename]
by Juan Cantizzani and Pablo Sanz Almoguera
- a description of the extra sense you have made
It consists of a system based on a bone conduction interface for sound and the sonification of electromagnetic emissions.
- what does it sense ?
electromagnetic emissions (close-range VLF, or high frequency: approx. 100 MHz to 2.5 GHz)
- how is this information translated to something we can perceive ?
the electromagnetic emissions are sonified within the audible range and transmitted to the cochlea via bone conduction through the skull.
- what can be communicated through this sense ?
the presence of electromagnetic fields, devices, transmissions, etc. (wifi networks, electronic appliances, phone calls, antennas, etc.)
- how does it work technically ?
To pick up the EM emissions there are two different EM sniffers (one for VLF and one for high frequency), attached to the body with a bracelet. The close-range VLF sniffer has a coil that works as a detector. This detector is attached to a stick, operated by the user in order to scan electronic devices and perceive their EM emissions.
The audio output of these devices is fed through a wearable bone conduction/tactile sound system. The interface is based on three small solenoids sewn to an elastic headband, fed directly from an amplifier (not portable in this version).
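In software terms, the sonification step could look something like the sketch below. This is a hypothetical digital equivalent, not the actual circuit (the real sniffers are analog); the ADC resolution and frequency range are assumptions.

```python
import math

ADC_MAX = 1023               # assumed 10-bit reading from the detector coil
F_MIN, F_MAX = 100.0, 8000.0  # assumed audible output range in Hz

def field_to_pitch(reading):
    """Map EM field strength to a pitch, logarithmically (as hearing works)."""
    norm = max(0, min(reading, ADC_MAX)) / ADC_MAX
    # interpolate on a log-frequency scale so equal steps sound like equal steps
    return F_MIN * (F_MAX / F_MIN) ** norm

def sine_block(freq, n=64, rate=8000):
    """One block of samples to feed the amplifier driving the solenoids."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

quiet = field_to_pitch(0)      # weakest field -> 100.0 Hz
loud = field_to_pitch(ADC_MAX)  # strongest field -> 8000.0 Hz
```

A log mapping is just one choice; a linear mapping, or modulating amplitude instead of pitch, would give the noise a different character.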
- how did you expect this sense to change our everyday perception and behaviour ?
hypothetically it could increase our awareness of the pervasive electromagnetic activity in our everyday environments.
- how does the extra sense change the perception of the world around you ? - (how) does the extra sense change the way you behave in the world around you ?
It adds an extra layer of sound to our everyday perception that, besides the aesthetic pleasure (if you like noise), can provide information about the EM emissions of specific electronic devices and the EM fields present in some areas. The final outcome depends on how each person reacts to this: it could be addictive for noise enthusiasts, or scare other people about electromagnetic emissions.
- how did your project develop and change over time ?
We spent most of the time researching and building the bone conduction interface, which was our main curiosity for this project. We found different solutions and tried a couple of them. After identifying solenoids as a nice, doable solution within our time constraints, we focused on getting several of them and building the interface. Regarding the input, it worked quite straightforwardly, much as planned (also because we got an already-made circuit), although we think some improvements and changes could be tried, for example regarding the coil used to probe devices, limiting the frequency ranges, or doing a more complex sonification of the input.
- what happened as you expected, and what unexpected things took place?
We learned a lot researching bone conduction/tactile sound for this project, and also building the interface, a process in which some tests were more successful than others. Even though they were not suitable for this device, our first tests with piezo drivers were quite interesting and valuable for other projects. Finding the quite simple DIY solution of the solenoids for our interface was a surprise. Getting the EM sniffers is also nice, since they will be useful for other projects too.
Monday, May 11, 2009
here a link to an interesting project, with some links to the course:
this project is part of a larger undertaking:
Thursday, May 7, 2009
the first thing you need is 'Getting Started'; this page explains how to install the software, how to connect the board and how to get the first example working:
good for starters are the different examples on the arduino site:
another tutorial for beginners can be found here:
other references are:
here is a pdf-guide covering the arduino programming language:
and here the book by Massimo Banzi, the brain behind the arduino (I have it if anyone is interested to take a look at it)
a (much) earlier version of this, not all of it is still correct:
(this link will disappear after the course, since Massimo Banzi has stopped distributing this version)
ArtScience students are probably familiar with the well-known optical illusion depicted below. We see either a vase or the faces of two people. What we observe depends on patterns of neural activity, and it depends entirely on changes occurring in our brain, since the image itself always stays exactly the same. When viewing ambiguous images such as optical illusions, patterns of neural activity within specific brain regions systematically change as perception changes. More importantly, patterns of neural activity in some brain regions are very similar when observers are presented with comparable ambiguous and unambiguous images. The fact that some brain areas show the same pattern of activity when we view a real image and when we interpret an ambiguous image in the same way implicates these regions in creating the conscious experience of the object that is being viewed.
Findings from these studies may further contribute to scientists’ understanding of disorders such as dyslexia - a case in which individuals are thought to suffer from deficiencies in processing motion - by providing information about the functional role that specific brain regions play in motion perception.
Wednesday, May 6, 2009
Take a one-eyed film maker, an unemployed engineer, and a vision for something that's never been done before, and you have yourself the EyeBorg Project. Rob Spence and Kosta Grammatis are trying to make history by embedding a video camera and a transmitter in a prosthetic eye. That eye is going in Rob's eye socket, and will record the world from a perspective that's never been seen before.
NIW will investigate possibilities for the integrated and interchangeable use of the haptic and auditory modality in floor interfaces, and for the synergy of perception and action in capturing and guiding human walking. Its objective is to provide closed-loop interaction paradigms, negotiated with users and validated through experiments, enabling the transfer of skills that have been previously learned in everyday tasks associated with walking, and where multi-sensory feedback and sensory substitution can be exploited to create unitary multimodal percepts.
NIW will expose walkers to virtual scenes presenting grounds of different natures, populated with natural obstacles and human artefacts, in which to situate the sensing and display of haptic and acoustic information for interactive simulation, and where vision will play an integrative role. Experiments will measure the ecological validity of such scenarios, also investigating the cognitive aspects of the underlying perceptual processes. Floor-based interfaces will be designed and prototyped by making use of existing haptic and acoustic sensing and actuation devices, comprising interactive floor tiles and soles, with special attention to simplicity of technology. Their applicability to navigation aids such as land-marking, guidance to locations of interest, signalling, and warning about obstacles and restricted areas will be assessed. NIW will nurture floor and shoe designs which may impact the way we get information from the environment.
FET-Open will further benefit from the discovery of cross-modal psychophysical phenomena, the design of ecologically valid walking interaction paradigms, the modelling of motion analysis and multimodal display synthesis algorithms, the study of non visual floor-based navigation aids, and the development of guidelines for the use of existing sensing and actuation technologies to create virtual walking interaction scenarios.
Tuesday, May 5, 2009
There is a continuing need for a portable, practical, and highly functional navigation aid for people with vision loss. This includes temporary loss, such as firefighters in a smoke-filled building, and long term or permanent blindness. In either case, the user needs to move from place to place, avoid obstacles, and learn the details of the environment.
The core system is a small computer--either a lightweight laptop or an even smaller handheld device--with a variety of location and orientation tracking technologies including, among others, GPS, inertial sensors, a pedometer, RFID tags, RF sensors, and a compass. Sophisticated sensor fusion is used to determine the best estimate of the user's location and which way she is facing. See the SWAN architecture figure.
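SWAN's actual sensor fusion is not detailed here; a minimal illustration of the idea is a complementary filter that blends a smooth but drifting gyroscope with a noisy but absolute compass to estimate which way the user is facing. All numbers below are assumptions for the sketch.

```python
def fuse_heading(heading, gyro_rate, compass, dt, alpha=0.98):
    """Blend gyro integration (responsive, drifts) with a compass (absolute).

    heading, compass in degrees; gyro_rate in degrees/second.
    """
    predicted = (heading + gyro_rate * dt) % 360.0
    # steer the prediction toward the compass along the shortest angular path
    error = (compass - predicted + 180.0) % 360.0 - 180.0
    return (predicted + (1.0 - alpha) * error) % 360.0

# user turning steadily at 10 deg/s; here the compass happens to read the
# true heading, so the fused estimate tracks it exactly
heading = 0.0
for step in range(100):
    true_heading = 10.0 * 0.1 * (step + 1)
    heading = fuse_heading(heading, 10.0, true_heading, dt=0.1)
```

With noisy inputs, `alpha` trades smoothness (trust the gyro) against drift correction (trust the compass).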
Once the user's location and heading are determined, SWAN uses an audio-only interface (basically, a series of non-speech sounds called "beacons") to guide the listener along a path, while at the same time indicating the location of other important features in the environment (see below). SWAN includes sounds for the following purposes:
- Navigation Beacon sounds guide the listener along a predetermined path, from a start point, through several waypoints, and arriving at the listener's destination.
- Object Sounds indicate the location and type of objects around the listener, such as furniture, fountains, doorways, etc.
- Surface Transition sounds signify a change in the walking surface, such as sidewalk to grass, carpet to tile, level corridor to descending stairway, curb cuts, etc.
- Locations, such as offices, classrooms, shops, buildings, bus stops, are also indicated with sounds.
- Annotations are brief speech messages recorded by users that provide additional details about the environment. For example, "Deep puddle here when it rains."
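A navigation beacon of the kind listed above could be steered like this (a sketch, not SWAN's actual implementation): compute the bearing from the user to the next waypoint, then map the difference from the user's heading to a stereo pan so the beacon appears to come from the direction to walk in.

```python
import math

def bearing(user, waypoint):
    """Compass bearing in degrees from user (x, y) to waypoint; 0 = north."""
    dx = waypoint[0] - user[0]
    dy = waypoint[1] - user[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def beacon_pan(user, heading, waypoint):
    """Fold the relative angle into a pan value: -1 = hard left, +1 = hard right."""
    rel = (bearing(user, waypoint) - heading + 180.0) % 360.0 - 180.0
    return max(-1.0, min(1.0, rel / 90.0))

# waypoint due east of a user facing north: the beacon pans hard right
pan = beacon_pan((0.0, 0.0), 0.0, (10.0, 0.0))   # -> 1.0
```

In the same spirit, distance to the waypoint could modulate the beacon's repetition rate, and arrival could trigger the next waypoint's sound.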
Sunday, May 3, 2009
The initial idea is to attempt the development of some kind of device(s) based on the principles of bone conduction and tactile sound, in order to use them as interfaces for the perceptualization of data/sensory information.
CALL FOR COLLABORATION
we are completely open to anybody who would like to join us, even if just to collaborate on some aspect of the project, or to combine it with any of the other projects developed within the group. In addition, any help, suggestions and advice of any kind will be very welcome and appreciated.
Bone conduction is the conduction of sound to the inner ear directly through the bones of the skull, bypassing the eardrum. Tactile sound is the sensation of sound transmitted directly to the body by contact. We are interested in exploring the frontiers between acoustic and tactile perception, and in examining whether these principles could be used to receive sensory information that is not normally perceivable. Being quite unobtrusive yet complementary to the rest of the existing sensory modalities, we find it an interesting way of extending our perception, making use of this little-known possibility of 'listening without ears' that we all possess.
So far we have been researching a little about those concepts and compiling info about sound art projects, commercial products and existing technologies that make use of them. These are some of the links with further info that we have found:
Wired's article 'High-tech hearing bypasses ears'
forum thread on bone-earphones
bone-conduction speaker device paper
bone-phone (Sanyo) 
bone-phone (Finger-whisper) 
previous post in our 'extra-senses' blog with related art projects
WHAT TO PERCEPTUALISE / LIMITATIONS
In addition, we started to think about what would be interesting to feed through these devices, considering both the limitations and the possibilities we might eventually find. In any case, the device will receive sound, so it is likely that some kind of sonification or direct audification process will necessarily have to be applied to whichever input we intend to use.
We also found differences between direct bone conduction via transducers attached to the skull and transducers attached to other areas. These differences include the perceivable frequency range; on areas of the body other than the skull, the result is a mostly pure tactile feeling, similar to what we would get using small vibrators.
Another issue is the possibility of receiving spatial information. In the case of bone conduction through the skull, the source of the stimuli seems to be inside our head, with no perceived spatial cues. This would therefore not be suitable for any input whose crucial information requires precise localization (orientation, navigation, etc.), but it could work for sensing something relative, for example, to the state of an environment. Regarding this, we think a hybrid device could perhaps be built, combining bone conduction through the skull with tactile stimuli on other areas of the body.
So, possible things to be made perceivable through this device could be: any range of the non-perceivable electromagnetic spectrum, infra/ultrasound, magnetic/electric fields, movement, augmented reality marks... We are thinking about these possibilities and checking how to implement them, but so far the priority is to start working on the device itself, since different inputs can later be connected to it to try them out, and more accurate ideas about what could work will probably arise once we start experimenting.
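One simple way to make inputs like ultrasound audible is direct audification with a frequency shift: multiplying the signal by a local oscillator (heterodyning) moves its spectrum down by the oscillator frequency. A hypothetical sketch, shifting a 25 kHz ultrasonic tone down to 1 kHz (the sum band produced alongside it would then be low-pass filtered away):

```python
import math

RATE = 96000  # sample rate high enough to represent the ultrasonic input

def tone(freq, n):
    return [math.sin(2 * math.pi * freq * i / RATE) for i in range(n)]

def heterodyne(signal, shift_freq):
    """Multiply by a local oscillator; yields sum and difference frequency bands."""
    return [s * math.sin(2 * math.pi * shift_freq * i / RATE)
            for i, s in enumerate(signal)]

def magnitude_at(signal, freq):
    """Magnitude of one DFT bin, to check what ended up where."""
    re = sum(s * math.cos(2 * math.pi * freq * i / RATE)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / RATE)
             for i, s in enumerate(signal))
    return math.hypot(re, im) / len(signal)

# 0.1 s of a 25 kHz tone, shifted by a 24 kHz oscillator -> 1 kHz appears
out = heterodyne(tone(25000.0, 9600), 24000.0)
```

The same trick works in the other direction (shifting VLF up), and is essentially what analog "sniffer" circuits do when they mix a signal down into the audible band.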
Therefore, first plans include starting immediately this coming week to find out how to build small wearable bone-conduction/tactile transducers... which electronic components are most suitable, etc... in order to have something to experiment with in practice as soon as possible.
Any specific info about this that you might have would be very much appreciated. We found some possible solutions, but none of them really small, so this technical issue is still not completely clear for a pair of electro-dummies like us.
Frontispiece of John Bulwer's Philocophus, or The Deaf and Dumbe Mans Friend. Printed for Humphrey Moseley, London, 1648. Note the kneeling man who is “hearing” music through his teeth via bone conduction.