Wednesday, May 20, 2009
May 2009 – Final Report
listening without ears [project codename]
by Juan Cantizzani and Pablo Sanz Almoguera
- a description of the extra sense you have made
It is a system based on a bone conduction interface for sound, combined with the sonification of electromagnetic emissions.
- what does it sense ?
electromagnetic emissions (close-range VLF, or high frequency, approx. 100 MHz to 2.5 GHz)
- how is this information translated to something we can perceive ?
the electromagnetic emissions are sonified within the audible range and transmitted to the cochlea via bone conduction through the skull.
- what can be communicated through this sense ?
the presence of electromagnetic fields, devices, transmissions, etc. (wifi networks, electronic appliances, phone calls, antennas, etc.)
- how does it work technically ?
To pick up the EM emissions there are a couple of different EM sniffers (one for VLF and one for high frequency), attached to the body with a bracelet. The close-range VLF sniffer has a coil that works as a detector. This detector is attached to a stick, which the user moves around in order to scan electronic devices and perceive their EM emissions.
The audio output of these devices is fed into a bone conduction/tactile wearable sound system. The interface is based on three small solenoids sewn to an elastic headband, fed directly from an amplifier (not portable in this version).
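As a rough illustration of the sonification idea (our actual device does this with an analogue circuit, so this little Python sketch is only a hypothetical model, not our implementation): an EM field reading could be mapped onto an audible frequency with a logarithmic sweep, so weak and strong fields stay distinguishable.

```python
# Hypothetical sketch of the sonification step: mapping a raw EM field
# reading (imagined here as a 10-bit ADC sample, 0-1023) onto an audible
# frequency for the bone conduction transducers. The real device uses an
# analogue circuit; the ranges below are assumptions, not measurements.

F_MIN = 100.0    # lowest output frequency in Hz (assumed)
F_MAX = 5000.0   # highest output frequency in Hz (assumed)

def em_to_frequency(sample, adc_max=1023):
    """Map an EM amplitude sample to a frequency on a logarithmic scale."""
    level = max(0, min(sample, adc_max)) / adc_max   # normalise to 0.0-1.0
    return F_MIN * (F_MAX / F_MIN) ** level          # log sweep 100 Hz-5 kHz

print(em_to_frequency(0))      # 100.0 (no field: low hum)
print(em_to_frequency(1023))   # 5000.0 (strong field: high tone)
```

The logarithmic mapping matters because hearing is roughly logarithmic in pitch; a linear mapping would compress all the weak fields into a narrow, hard-to-distinguish band.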
- how did you expect this sense to change our everyday perception and behaviour ?
hypothetically, it could increase our awareness of the pervasive electromagnetic activity in our everyday environments.
- how does the extra sense change the perception of the world around you ? - (how) does the extra sense change the way you behave in the world around you ?
It adds an extra layer of sound to our everyday perception that, besides the aesthetic pleasure (if you like noise), can provide information about the EM emissions of specific electronic devices and the EM fields present in some areas. The final outcome depends on how each person reacts to this: it could be addictive for noise enthusiasts, or scare other people about electromagnetic emissions.
- how did your project develop and change over time ?
We focused most of the time on researching and building the bone conduction interface, which was our main curiosity for this project. We found different solutions and tried a couple of them. After identifying solenoids as a nice and doable solution within our time constraints, we focused on getting several of them and building the interface. Regarding the input, it worked pretty much as planned (also because we got an already-made circuit), even though we think some improvements and changes could be tried, for example in the coil used to probe devices, in limiting the frequency ranges, or in doing a more complex sonification of the input.
- what happend as you expected and what unexpected things took place?
We learned a lot researching bone conduction/tactile sound for this project and building the interface, a process in which some tests were more successful than others. Even if they were not suitable for this device, our first tests with piezo drivers seemed valuable for other projects. Finding the quite simple DIY solution of the solenoids for our interface was a surprise. Getting the EM sniffers is nice too, since they will be useful for other projects.
Monday, May 11, 2009
here a link to an interesting project, with some links to the course:
this project is part of a larger undertaking:
Thursday, May 7, 2009
first thing you need is 'getting started'; this page explains how to install software, how to connect and to get the first example working:
good for starters are the different examples on the arduino site:
another tutorial for starters you can find here:
other references are:
here is a pdf-guide covering the arduino programming language:
and here the book by Massimo Banzi, the brain behind the arduino (I have it if anyone is interested in taking a look at it)
a (much) earlier version of this, not all of it is still correct:
(this link will disappear after the course, since Massimo Banzi has stopped distributing this version)
ArtScience students are probably familiar with the well-known optical illusion depicted below. We see either a vase or the faces of two people. What we observe depends on the patterns of neural activity going on in our brains, and changes entirely within the brain, since the image itself always stays exactly the same. When viewing ambiguous images such as optical illusions, patterns of neural activity within specific brain regions systematically change as perception changes. More importantly, patterns of neural activity in some brain regions are very similar when observers are presented with comparable ambiguous and unambiguous images. The fact that some brain areas show the same pattern of activity when we view a real image and when we interpret an ambiguous image in the same way implicates these regions in creating the conscious experience of the object being viewed.
Findings from these studies may further contribute to scientists’ understanding of disorders such as dyslexia - a case in which individuals are thought to suffer from deficiencies in processing motion - by providing information about the functional role that specific brain regions play in motion perception.
Wednesday, May 6, 2009
Take a one-eyed film maker, an unemployed engineer, and a vision for something that's never been done before, and you have yourself the EyeBorg Project. Rob Spence and Kosta Grammatis are trying to make history by embedding a video camera and a transmitter in a prosthetic eye. That eye is going in Rob's eye socket, and will record the world from a perspective that's never been seen before.
NIW will investigate possibilities for the integrated and interchangeable use of the haptic and auditory modalities in floor interfaces, and for the synergy of perception and action in capturing and guiding human walking. Its objective is to provide closed-loop interaction paradigms, negotiated with users and validated through experiments, enabling the transfer of skills previously learned in everyday tasks associated with walking, where multi-sensory feedback and sensory substitution can be exploited to create unitary multimodal percepts.
NIW will expose walkers to virtual scenes presenting grounds of different natures, populated with natural obstacles and human artefacts, in which to situate the sensing and display of haptic and acoustic information for interactive simulation, and where vision will play an integrative role. Experiments will measure the ecological validity of such scenarios, also investigating the cognitive aspects of the underlying perceptual processes. Floor-based interfaces will be designed and prototyped using existing haptic and acoustic sensing and actuation devices, comprising interactive floor tiles and soles, with special attention to simplicity of technology. Their applicability to navigation aids such as land-marking, guidance to locations of interest, signalling, and warning about obstacles and restricted areas will be assessed. NIW will nurture floor and shoe designs which may impact the way we get information from the environment.
FET-Open will further benefit from the discovery of cross-modal psychophysical phenomena, the design of ecologically valid walking interaction paradigms, the modelling of motion analysis and multimodal display synthesis algorithms, the study of non visual floor-based navigation aids, and the development of guidelines for the use of existing sensing and actuation technologies to create virtual walking interaction scenarios.
Tuesday, May 5, 2009
There is a continuing need for a portable, practical, and highly functional navigation aid for people with vision loss. This includes temporary loss, such as firefighters in a smoke-filled building, and long term or permanent blindness. In either case, the user needs to move from place to place, avoid obstacles, and learn the details of the environment.
The core system is a small computer (either a lightweight laptop or an even smaller handheld device) with a variety of location and orientation tracking technologies including GPS, inertial sensors, a pedometer, RFID tags, RF sensors, and a compass. Sophisticated sensor fusion is used to determine the best estimate of the user's location and which way she is facing. See the SWAN architecture figure.
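SWAN's actual fusion algorithm is not spelled out here, but the basic idea of combining noisy sensors can be sketched as a toy inverse-variance weighting (all numbers below are made up for illustration, not taken from the SWAN system):

```python
# Toy illustration of the sensor-fusion idea (not SWAN's actual
# algorithm): combine two noisy position estimates, e.g. GPS and a
# pedometer-based dead-reckoning track, weighting each by the inverse
# of its variance, as in a one-dimensional Kalman-style update.

def fuse(estimate_a, var_a, estimate_b, var_b):
    """Return the minimum-variance combination of two estimates."""
    w_a = var_b / (var_a + var_b)   # the noisier sensor gets less weight
    w_b = var_a / (var_a + var_b)
    fused = w_a * estimate_a + w_b * estimate_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# GPS says 10.0 m with variance 4.0; dead reckoning says 12.0 m with
# variance 1.0, so the fused estimate leans towards dead reckoning and
# its variance is smaller than either input's.
pos, var = fuse(10.0, 4.0, 12.0, 1.0)
print(pos, var)
```

The point of the sketch is that fusing sensors does not just average them: the combined estimate is always at least as certain as the best individual sensor.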
Once the user's location and heading are determined, SWAN uses an audio-only interface (basically, a series of non-speech sounds called "beacons") to guide the listener along a path, while at the same time indicating the location of other important features in the environment (see below). SWAN includes sounds for the following purposes:
- Navigation Beacon sounds guide the listener along a predetermined path, from a start point, through several waypoints, and arriving at the listener's destination.
- Object Sounds indicate the location and type of objects around the listener, such as furniture, fountains, doorways, etc.
- Surface Transition sounds signify a change in the walking surface, such as sidewalk to grass, carpet to tile, level corridor to descending stairway, curb cuts, etc.
- Locations, such as offices, classrooms, shops, buildings, bus stops, are also indicated with sounds.
- Annotations are brief speech messages recorded by users that provide additional details about the environment. For example, "Deep puddle here when it rains."
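How such a beacon might actually steer a listener can be sketched roughly (this is a guessed simplification, not SWAN's published method): compute the waypoint's bearing relative to the user's heading, then turn that into a left/right stereo pan for the beacon sound.

```python
# Guessed simplification of beacon spatialisation (not SWAN's actual
# implementation): bearing to the next waypoint, relative to the user's
# current heading, mapped onto a stereo pan value.
import math

def relative_bearing(user_xy, waypoint_xy, heading_deg):
    """Bearing of the waypoint relative to where the user faces,
    in degrees, normalised to (-180, 180]. 0 means straight ahead."""
    dx = waypoint_xy[0] - user_xy[0]
    dy = waypoint_xy[1] - user_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))  # 0 deg = north (+y axis)
    return (bearing - heading_deg + 180) % 360 - 180

def stereo_pan(rel_deg):
    """Map relative bearing to a pan value: -1 = hard left, +1 = hard right."""
    return max(-1.0, min(1.0, rel_deg / 90.0))

# Waypoint due east of a user facing north: the beacon pans hard right.
rel = relative_bearing((0, 0), (10, 0), heading_deg=0)
print(rel, stereo_pan(rel))
```

In a real system the pan would drive the rendering of the beacon sound in headphones, and the beacon could also change tempo or timbre with distance to the waypoint.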
Sunday, May 3, 2009
The initial idea is to attempt the development of some kind of device(s) based on the principles of bone conduction and tactile sound, in order to use them as interfaces for the perceptualization of data/sensory information.
CALL FOR COLLABORATION
we are completely open to anybody who would like to join us, even if just to collaborate on some aspect of the project or to try to combine it with any of the other projects developed within the group. In addition, any help, suggestions and advice of any kind will be very welcome and appreciated.
Bone conduction is the conduction of sound to the inner ear directly through the bones of the skull, bypassing the eardrum. Tactile sound is the sensation of sound transmitted directly to the body by contact. We are interested in exploring the frontiers between acoustic and tactile perception, and in examining whether these principles could be used to receive sensory information that is not normally perceivable. Being quite unobtrusive yet complementary to the rest of the existing sensory modalities, we find it an interesting way of extending our perception, making use of this possibility of 'listening without ears' that we all possess, and which is not widely known.
So far we have been researching a little about those concepts and compiling info about sound art projects, commercial products and existing technologies that make use of them. These are some of the links with further info that we have found:
Wired's article 'High-tech hearing bypasses ears'
forum thread on bone-earphones
bone-conduction speaker device paper
bone-phone (Sanyo) 
bone-phone (Finger-whisper) 
previous post in our 'extra-senses' blog with related art projects
WHAT TO PERCEPTUALISE / LIMITATIONS
In addition, we started to think about what would be interesting to feed through these devices, considering both the limitations and possibilities we might eventually find. In any case, the device will receive sound, so it's likely that some kind of sonification or direct audification process will necessarily have to be applied to whichever input we intend to use.
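To make the distinction concrete: one of the simplest forms of direct audification is a playback-rate change, which scales every frequency in a recording by the same factor. A tiny sketch (the numbers are arbitrary examples, not our setup):

```python
# Hypothetical sketch of direct audification by playback-rate change:
# a signal recorded at one sample rate and played back at another has
# every frequency component scaled by the ratio of the two rates. This
# is one of the simplest ways to drag an inaudible band into hearing
# range. All rates and frequencies here are arbitrary examples.

def audified_frequency(original_hz, record_rate, playback_rate):
    """Frequency a component ends up at after rate-change playback."""
    return original_hz * (playback_rate / record_rate)

# A 25 kHz ultrasonic component, recorded at 192 kHz and played back
# ten times slower at 19.2 kHz, lands comfortably in the audible range.
print(audified_frequency(25000, 192000, 19200))
```

For VLF signals the trick runs the other way: a too-low signal is sped up rather than slowed down, raising its frequencies into hearing range.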
We also found differences between direct bone conduction via transducers attached to the skull and transducers attached to other areas. These differences include the perceivable frequency range: on areas of the body other than the skull, the result is a mostly pure tactile feeling, similar to what we would get using small vibrators.
Another issue is the possibility of receiving spatial information. In the case of bone conduction through the skull, the source of the stimuli seems to be inside our head, with no perceived spatial cues, so this would not be suitable for any input whose crucial information requires precise localization (orientation, navigation, etc.), but it could work to sense something like the state of an environment. Regarding this, we think a hybrid device could perhaps be built, combining bone conduction through the skull with tactile stimuli on other areas of the body.
So, possible things to be made perceivable through this device could be: any range of the non-perceivable electromagnetic spectrum, infra/ultrasound, magnetic/electric fields, movement, augmented reality marks... We are thinking about these possibilities and checking how to implement them, but so far the priority is to start to work on the device itself, since later on different inputs could be connected to it to try them out, and probably more accurate ideas about what could work will arise once we start to experiment.
Therefore, first plans include starting immediately this coming week to find out how to build small wearable bone-conduction/tactile transducers... which electronic components are more suitable, etc... in order to have something to experiment with in practice as soon as possible.
Any specific info you might have about this would be very much appreciated. We found some possible solutions, but none of them really small, so this technical issue is still not completely clear for a pair of electro-dummies like us.
Frontispiece of John Bulwer's Philocophus, or The Deaf and Dumbe Mans Friend. Printed for Humphrey Moseley, London, 1648. Note the kneeling man who is "hearing" music through his teeth via bone conduction.
Friday, May 1, 2009
Heiner Goebbels’ Stifter’s Dinge is a “performance without performers”, meticulously choreographed with a series of modified instruments and machines to create a meditative – and sometimes frenzied – piece on, among other things, the awe-inspiring power of nature. Experimental German composer Goebbels was inspired by the early 19th-century Romantic writings of Adalbert Stifter, and in particular his novel My Great Grandfather’s Portfolio – extracts of which are played over the speakers in the opening minutes of the show.
Poetic musings on nature fill the space. “The weight and splendour of the ice hanging from the trees was indescribable,” say the speakers. “The pine trees stood like the candelabra of innumerable and huge inverted candles.” A painting of a dense forest fills the far screen, which eventually rises to reveal a mass of musical instruments, trees and technology, gleaming gold and sharply reflected in the dark pools of water that fill the stage. Five pianos start to play themselves. The pipes chip in with their deep string-plucking sounds and a cacophony fills the space. Invisible raindrops pour from the ceiling into the pools of water below (or are they bubbling up from beneath the floor?), the pianos become quieter and more melodious, and a beautifully serene moment is created – like the rhythmic sounds of heavy rain beating down outside as you sit, warm, dry and protected inside your home, contemplating nature in its fury all around you.
As the piece progresses, the special effects become more elaborate. The suspended mass of instruments, machines and trees starts edging towards the audience, the pianos suddenly becoming more and more frenetic until they are almost upon us. The elements of the approaching mass break apart as the music gets faster and faster, looming over the audience like a giant music box about to envelop us, horror movie-style. Smoke begins to seep out from beneath them as they slowly retreat again, and dry ice fills the stage, bubbling up from the pools of water. The mesmerising steam bubbles then become the focus of the performance, bouncing and erupting as if dancing to the music.
As they finally disperse, the forms take on the feel of a Monet waterlilies painting, serene once more. As the space goes quiet, the audience tentatively clap the mechanised performers, which move forwards as if to take a bow, gratefully absorbing the praise. As the lights come up, they continue to clunk and flutter and jitter as the audience moves around them, examining them like exhibits in a museum, attempting to discover how they all work.
At 80 minutes long, there were times when the show pushed the patience a little. But overall, and particularly towards the crescendo of the ending, Stifter’s Dinge is an enchanting piece that allows you to focus on the purity of natural elements such as the sound of rain, while creating a childlike wonder at instruments that play themselves, with moments of horror, beauty and awe.
images Nick Cobbing
Wednesday, April 29, 2009
Lygia Clark (Belo Horizonte, October 23, 1920 – Rio de Janeiro, April 25, 1988) was a Brazilian artist best known for her painting and installation work. She was often associated with the Brazilian Constructivist movements of the mid-20th century and the Tropicalia movement. Even with the changes in how she approached her artwork, she did not stray far from her Constructivist roots. Along with Brazilian artists Hélio Oiticica, Ivan Serpa, and Lygia Pape, Clark co-founded the Neo-Concretist art movement. The Neo-Concretists believed that art ought to be subjective and organic. Throughout her career trajectory, Clark discovered ways for museum goers (who would later be referred to as "participants") to interact with her art works. She sought to redefine the relationship between art and society. Clark's works dealt with inner life and feelings.
Caminhando- Participants are each invited to take a pair of scissors, twist a strip of paper to form a Möbius loop, and continuously cut along its plane. This is the experiment about which Clark commented that its meaning lies in actually engaging in the activity.
1966: Air & Stone- A small plastic bag is filled with air and a stone placed on top. The participant squeezes the bag to experience the heft of the stone and the weightlessness of the air inside. The rock begins to show qualities of a living organism. In this experiment, Clark played with opposites such as emptiness and fullness, air versus solid.
1967: Sensorial Hoods- This experiment involved eye pieces, ear covers, and a small bag that would be affixed over the participant's nose. The participants would also have helmets with small mirrors affixed to them. The purpose of this experiment was to utilize all of the senses at one time. The outcome of this experiment might be that a participant would use his senses in a way he would never have thought possible.
Abyss-Masks- The participant's eyes were blindfolded; large bags of air weighed down with stones could be touched, giving the sensation of empty space within the body.
The I and the You: Clothing/Body/Clothing- A man and a woman wear hoods over their eyes and a full body suit; during this experiment, each would come to understand their own gender by feeling through their pockets.
Tuesday, April 28, 2009
in response to Alfredo's project idea, here is some info about one of the projects Cati Vaucelle did for her graduation at MIT:
here is a paper on it:
here some info from her site:
and here a documentation video of the project:
an interesting visual phenomenon is the Pulfrich effect, used in some of the films and performances by experimental filmmaker Ken Jacobs:
His patents are here:
and a reflection on this work:
Bela Julesz did a lot of very interesting research into depth perception without object cues, in random dot stereograms:
He wrote a classic book on the subject:
"Foundations of Cyclopean perception",
The University of Chicago Press, Chicago, 1971.
This book has recently been republished by MIT press:
and this brings us to the intriguing subject of 'Random Dot Cinematography':
On 13 June 2009, the whole city of Groningen will be devoted to the theme of arts and science during the Arts & Sciences Night, organized by the University of Groningen and the Groninger Museum. A wide range of activities will be organized between the Museum Bridge and the Broerplein and at the Harmonie complex, and there will be a lot to do, see and experience. How about a haunted house in the UB? A writers' workshop by Ronald Giphart in the Groninger Museum? A huge stage with well-known artists on the Vismarkt? The Arts & Sciences Night is a night filled with concerts, stand-up comedy, lectures, debates, music, dance, exhibitions, guided tours, workshops, night lectures and much, much more!
CREW is a Belgium-based performance group. With Eric Joris as its key figure, this production team has brought together people from different domains depending on the projects being made. On the whole, CREW has insisted on making performances at the melting point of live art and technology.
CREW's activities are situated in between art and science and are focused on creation and research. Both activities are strongly connected and mutually influential. But creation and research both also have their own logics and respond to different patterns. The rhythm of research hardly ever parallels the pace of creation. Budgets and working circumstances of creation and research are not the same, etc. Therefore, CREW, as a structurally subsidized theatre company, has set up a parallel structure called CREW_lab that mainly concentrates on research. CREW_lab enables us to look for extra (non-artistic) means to organize and finance research, research that is not only valuable in its own right but also supports and feeds the artistic production via an intensive reciprocity.
Finding in experimental theatre a laboratory where they can test the progress of their own work, researchers from different universities develop original technologies for CREW to use in the performances. Permanent dialogue with the developments in robotics and computer sciences triggers the theatrical imagination of design and production, text and sound.
The artistic outcome tends to be hybrid; technological live art troubles installed categories of theatricality. CREW wants to explore how these hybrid ties can be operated, both on a theoretical and on a practical level. What happens when digital technology really merges production and reflection within the context of the stage - insofar as one can still speak of a stage?
The birth of new technologies always provokes new questions and deeply influences our perception of man and reality. Our relationship with current technology is of a tense nature; we find ourselves attracted and repulsed at the same time. These tensions can be extrapolated to a relation between technology and theatre, in which man is traditionally at the center.
CREW is a collective wanting to face the new technological condition of man. This 'pool' of artists wants to be a pioneer in setting up experiments that blur the border between theatre and technology. This investigation resulted in a series of performances showing an evolution from a cautious exploration of possibilities to a radical symbiosis with sophisticated technology.
Monday, April 27, 2009
article at BPS Research Digest
In blind people, the part of the brain usually used for vision can be commandeered by other senses, resulting in improved hearing and touch. It’s an amazing testament to the brain’s ability to adapt. But now, Jorg Lewald reports that prolonged blindness isn’t needed for this kind of adaptation to occur – just ninety minutes blindfolded can enhance your hearing ability!
Twenty participants donned a blindfold and were surrounded by a semi-circle of 21 stereo speakers. Each time one of the speakers made a noise, the participants' task was to turn their head to face that speaker as accurately as possible. As has been shown before in tasks like this, the participants tended not to turn their head far enough, underestimating just how far around each noise had originated.
Next the participants spent 90 minutes sitting quietly with the blindfold on. Crucially, when they repeated the task afterwards, their accuracy had improved: they no longer underestimated the location of the sounds as much, especially when the sound came from a more central speaker. In fact, their performance had become more typical of a blind person performing this kind of task. However, the enhancement was easily reversible: 180 minutes without the blindfold returned their performance to normal.
A control group of twenty participants who were only blindfolded during testing showed no such improvements from one session to the next.
Lewald argues his finding is consistent with the idea that the visual cortex is actually a multi-sensory area, with short-term light deprivation serving to jump-start the auditory circuits found in this brain region.
“Processes of short-term crossmodal plasticity may thus be based on rapid enhancement of these pre-existing neural circuits that, possibly, play a role also in the development of long-term plastic changes with blindness”, he said. The current finding is consistent with earlier research showing enhanced touch after short-term sight deprivation.
Lewald, J. (2007). More accurate sound localisation induced by short-term light deprivation. Neuropsychologia, 45, 1215-1222.
a modified type of magic lantern used to project images onto the walls, smoke or semi-transparent screens.
Rosângela Rennó 'Experiencing Cinema'(2004)
Radiohead 'house of cards'
One note I have to make: I mistakenly talked about cascading, but the term is actually saccading.... sorry, kYra
One of the ideas I've mentioned has to do with the supposed reading of colors by the fingers. Below are two links. 1st link:
FINGER PERCEIVES THE LIGHT
Rosa feels the colour of the light penetrating through light filters and falling onto her fingers. Rosa says: "This ray is red, that ray is green; that one is orange, and the other one is blue". Moreover, she is able to identify not only a bright ray of light, but a weak one as well. She can even better identify coloured rays let through a lens filled with water and then reflected onto her hand with a mirror....
(a more critical) 2nd link:
....From 1960 to the present, research conducted in the USSR, United States, England and France has shown that the skin is sensitive to far-infrared invisible radiation of the electromagnetic spectrum.
Dermo-optical sensitivity refers to the human organism's capacity to respond to colored surfaces, hidden from sight by being placed under screens, even when the latter are held at some distance in the dark.
Dermo-optical perception refers to the ability of subjects to succeed in consciously differentiating these surfaces through their hands by non-visual subjective impressions. It is estimated this can only be done by one in six subjects. Controlled studies indicate support for the theory of dermo-optical sensitivity and perception. This finding provides a new potential confounding variable in color research. (Int. J. Biosocial Res., 7(2): 76-93, 1985.)....
Saturday, April 25, 2009
Topic: controlling music and sound using the recognition of physical gestures and emotional state
Time: Tuesday April 28, 2009, 14h00 - 15h30
Place: Snellius building, Niels Bohrweg 1, 2333 CA Leiden, room 413
The Sonic Arts Research Centre of Queen's University, Belfast, is dedicated to the research of music technology. This interdisciplinary institute unites internationally recognized experts in the areas of musical composition, signal processing, performance, internet technology and digital hardware.
He will speak about controlling music and sound using the recognition of physical gestures and emotional state. His talk explores the broad area of using kinematic and physiological sensors (e.g. EMG, EKG) for interacting with sound. The details of the measurement and recognition of these signals and the patterns within them during performance are discussed. The talk will focus on three areas:
1) Understanding gestures and emotion
2) Simple pattern recognition techniques
3) The SARC Eyesweb Toolkit
Read about SARC at http://www.sarc.qub.ac.uk/main.php
More about Ben Knapp at http://184.108.40.206/main.php?page=people&pID=53
Thursday, April 23, 2009
Laurie Anderson's Handphone Table (1978). Visitors were invited to perceive sound through the bones in their arms by placing their elbows on the table.
touched echo from Markus Kison on Vimeo.
Touched Echo (2007, by Markus Kison) is a minimal medial intervention in public space. Visitors to the Brühl's Terrace (Dresden, Germany) are taken back in time to the night of the terrible air raid of 13 February 1945. In their role as performers, they put themselves in the place of the people who shut their ears against the noise of the explosions. While they lean on the balustrade, the sound of airplanes and explosions is transmitted from the vibrating balustrade through their arms directly into the inner ear (bone conduction).
...experienced in the latest edition of Ars Electronica too.
music for bodies
<< terrestrial and virtual research into new ways of making and listening to music >>
(ongoing project since 2006) directed by Kaffe Matthews
music for bodies is a research project to make new 3D music and physical interfaces for enjoying it directly through your body rather than just your ears.
music for bodies is currently researching the effect of certain frequencies on specific areas of the human body, coming to an understanding of the human body’s response maps in this process. Combined with an exploration into mapping structures for scores through architectural perspectives, it is making music to feel rather than listen to.
STiMULiNE is an audio-tactile performance by the artists Lynn Pook and Julien Clauss in which a group of participants wears futuristic-seeming suits equipped with acoustic actuators that transmit sound as impulses to skin and bones. Sounds are thus not perceived through the outer membrane of the ear but are transmitted as the finest of vibrations through the entire body to the inner ear. It is a form of fictional concert without narrative structure, starting from the assumption that public sites for culture in real space will increasingly be replaced by virtual communication. STiMULiNE departs from the traditional concert situation and experiments with forms of perception of space and body. The participants lie relaxed on the floor while the two artists let the sounds move along and through their bodies. The presence of the public and the social interaction between the participants thereby become a central creative element.
....regarding his visual vs acoustic space categorization, I remember some doubts expressed in class about the meaning of his 'acoustic space' idea for our (well... this was a while ago) cultural and media environment, arguing a certain lack of validation in the real world.... As I understand it, it's not something to be taken literally (as a proliferation of sound-based media or 'real' sound), but more as a metaphor for the 'new mediascape' appearing during the last century, in which media and technologies surround us, being multisource, intangible, global and pervasive... characteristics pertaining to sound, the physiology of hearing and, according to him, to earlier preliterate oral cultures... as opposed to the previous model generated by the advent of printing, based for example on a limited number of sources of information, frontality, textual linearity... pertaining more to the eye and the geometry of vision... bla bla bla
...I think that the notion of 'acoustic space' presented by McLuhan, even if related, should be differentiated from the several branches of works, concepts and theories (also appearing in the second half of the 20th century) that call for the construction of alternative knowledge models based on sound, in reaction to the hegemony of vision in our Western culture.
Without going now into whether this hegemony of vision is problematic or not, or speculating about its possible causes, consequences and effects, I think the fact that it exists is evident, even if I still sometimes find people who doubt it. It's just obvious to me if we take into account the lack of sound-based studies and theories in lots of disciplines... note for example how we have maybe only four academic programs in the world devoted to 'Sound Studies' (all of very recent creation), in contrast with the widespread 'Visual Studies' discipline... in addition to the lack of sound education and the marginality of sound-based music and artworks, to name a few examples... the list could go on and on... of course this is relative and it is changing rapidly: there is an evident proliferation (small in scale, but definitely a good number of projects) of sound-based inquiries and works within many fields, and most importantly a great amount of interdisciplinary work dealing with sound, driven also by the increasing accessibility of the technologies for working with this medium, which have become indispensable for analysing aspects of it due to its time-based nature.
Anyway, I think in general we are currently very hard-wired by our culture, education, mediascape and environments to think in visual terms... somebody was pointing the other day to the pervasiveness of mobile music devices (iPods and so on) as a sign of the changing times... that's a fact, but I doubt whether it is a sign of increasing sound awareness; it may work in the opposite direction. I see the use of these devices more as another manifestation of power and identity, and actually, when used in public spaces, these technologies dramatically shut out the sounds of the environment, not to mention the hearing loss problems they sometimes cause. Still, these kinds of mobile sound technologies also have great potential for sound and media art and are actually being explored by several people... check the mobile sound blog for example..
On this issue of the hegemony of vision and the rediscovery of sound awareness, I would like to share the recording of a recent short lecture by R. Murray Schafer, whose life-long pioneering work within the fields of acoustic ecology, soundscape studies and sound education, and towards the promotion of aural culture at large, is very remarkable... his book 'The Soundscape: Our Sonic Environment and the Tuning of the World', published in 1977, is highly recommended, and I find listening to him (as in this recording) quite inspiring.
The lecture was given in Mexico last month and is titled 'I HAVE NEVER SEEN A SOUND'.
A couple of years ago he created an online project compiling info about sonic weapons and physiological effects of sound. The site is in Spanish but most of the sources are in English... there is also an automatic translator in the right sidebar if you want to read any of the Spanish texts.
One of the latest works by Chiu (in collaboration with Juan Gil) related to this topic was a radiophonic work commissioned by Kunstradio (AT) for the Art’s Birthday 09 celebrations last February.
LISTEN TO IT
The physical dimension of sound, its potential to become a source of corporal pleasure, its invisibility, its immateriality, its power to generate emotions or affect our bodies without passing through the barriers of reason and without leaving traces… All these aspects have become particularly important in the context of a society of control, where the different discourses of power need technologies to perform their dominance, or simply to attack and defend themselves. This is the role of sonic weapons.
The sound work proposed by Escoitar.org, a radiophonic short tale somewhere between a documentary and a radio-art piece, is based on the remix of audiovisual documents found in the course of long and deep research into certain uses, developments and technological implementations of sound. The work proposes both a conceptual approach to this subject matter and a revealing sonic experience, carried out through an analysis of the impact and effects of these technologies, their uses and their abuses.
Digitally processed sounds from interviews, recordings of sonic weapons, fragments of works by different artists and recordings of acoustic signals and control devices (such as bells, sirens…) made by Escoitar.org are mixed and intertwined in “Sonic Weapons”, a radiophonic work that pursues a double goal: on the one hand, to make the listener aware of a problem that affects sound and listening when they are conceived of as control devices, and, on the other, to create a passionate sonic gesture in order to keep sound safe and to keep working for its freedom.
This is a documentary film about McLuhan's theories and his biography. I think it works quite well as an introduction to his work.
It must be available in the torrent world; I got the DVD version released by Disinformation a while ago, which could be worthwhile because it contains a good amount of extras in the form of interactive tetrads, audio lectures, extra interviews, etc... so if someone is interested let me know, I can bring it to KABK.
As far as casual viewers go, it’s hard to go wrong with McLuhan’s Wake. It offers an impressive introduction to McLuhan’s four media laws, which are a bit out of date but still useful in developing a fundamental knowledge of media and cultural studies. If you are already well versed in cultural theory, it won’t be very useful unless you teach media studies, in which case it could prove to be a valuable resource. Some of McLuhan’s ideas and concerns have become even more relevant since his death, even though he hasn’t been able to update his own writings to incorporate these new mediums and technologies. Like anything, McLuhan’s Wake needs to be approached with caution and a critical eye. McLuhan would have wanted us to approach his own theories like that, I think. If only the creators of this set had done that a little more. Reviewed by Judge Joel Pearce
Saskia & Pim chose to research 'night vision', an invention of William Edward Spicer (engineering professor at Stanford University), first used by the U.S. Army during WW2.
Night vision is the ability to see in a dark environment. Whether by biological or technological means, night vision is made possible by a combination of two approaches: sufficient spectral range, and sufficient intensity range. Humans have poor night vision compared to many animals, in part because the human eye lacks a tapetum lucidum.
Following McLuhan’s tetrad: night vision devices do not really enhance; instead they add a visual quality. They make electric light obsolete, and, pushed to extremes, one’s biorhythm can be disturbed due to the absence of daylight. The medium probably retrieves the yearning for ‘normal’, polychrome perception with the bare eye.
This is a nice blog about new ways of sensing or enhancing our senses:
Wednesday, April 22, 2009
As Gary F. Marcus, professor of psychology at New York University, suggests, just by going through the experience of basic memory, decision making, language and happiness we can see the thousand and one flaws of evolution in all of the senses developed to survive.
One of the most dramatic and far-reaching innovations in the history of evolution, securing the survival of species when everything pointed to an end, was achieved by microorganisms that appeared two billion years after the formation of the Earth and the Solar System. It was a biomolecular feat: a pact between a bacterium and a host cell of the plant kingdom.
The chloroplasts with which plants make food for themselves are actually descended from cyanobacteria housed within plant cells.
Cyanobacteria image: universe-review.ca
So I think it is great to remember that we owe this greatest of discoveries to the microbes called cyanobacteria: the partnership between a bacterium and a plant made photosynthesis possible; 'living from air', literally, instead of preying on other organisms, simple or complex.
This is an example of how, in nature, cooperation can be a more powerful force than competition, even in the development of the senses. Success depends on knowing how to cooperate.
To be sure, if there is to be salvation in the future, we have to understand the possibilities of the molecular world.
Our ears are still adapting to human speech, says an anthropologist who has discovered that genes associated with hearing have changed within the most recent thousands of years.
“We’re still genetically adapting to language,” says John Hawks, an anthropologist and specialist in human evolution at the University of Wisconsin at Madison, Wisconsin, in the USA.
Hawks has discovered that eight genes associated with hearing show signs of having evolved over the most recent 40,000 years. Some of the gene changes took hold only two to three thousand years ago.
Speech a recent phenomenon
According to Hawks, the changes in the hearing genes indicate that our ears are still adapting to human speech, which evolution experts believe first developed about 50,000 years ago. At that time, humans had existed for more than two million years, making speech a relatively recent development.
Speech is worthless without ears able to sense and discriminate sounds at the frequencies of speech. Our ears are still improving these abilities, according to Hawks.
Hawks’s analyses of a database with gene information from different continents indicate that many human genes besides the hearing genes have changed in recent human evolution.
Tuesday, April 21, 2009
Kevin Warwick is the world’s first cyborg. The English scientist says that it’s time for us to overcome our human “limitations”. In the future, thanks to chips in our brains, we will be able to use more than our 5 senses, as the implants will stretch our ways of communicating with people and objects. Without any doubt, Warwick is a brave, charismatic borderline scientist, but he has also drawn a lot of criticism. In an interview he reveals why he thinks that humans will become a subspecies in a cyborg world.
The full interview can be found here:
here an overview of the types:
the Duemilanove: this is the basic board. € 22,-
the Arduino Mini: same number of connections as the basic board, but less memory. € 35,- (including USB-serial converter)
the Arduino Nano: even smaller than the Mini, but fewer connections too. € 44,-
the Arduino Mega: many more connections, larger memory. € 49,-
the Arduino Lilypad: an Arduino version made to be incorporated into clothing (more fragile, however). € 21,-
if you have no preference, get a Duemilanove or an Arduino Mega.
Prices mentioned exclude VAT (BTW) and shipping.
To be 100% sure about pricing, check the information given by the manufacturer. Please let me know if you are aware of cheaper options: there are much more providers now than there were last time I ordered. Also please let me know if I made a mistake in summarizing the specifications.
You can order by commenting on this post; please mention which type. I will order on Wednesday evening.
Marshall McLuhan, "Understanding Media, The Extensions of Man",
Routledge & Kegan Paul Ltd. London, 1964.
Marshall and Eric McLuhan, "Laws of Media, The New Science",
University of Toronto Press, 1988.
Marshall McLuhan & Bruce R. Powers, "The Global Village, Transformations in World Life and Media in the 21st Century", Oxford University Press, 1989.
there is a good wikipedia page about McLuhan's 'Tetrad':
and there are zillions of sites about him and his theories, of course:
the assignment for Thursday the 23rd is as follows:
In groups of two students, choose an existing project that tries to achieve an 'extra sense' in some way. It can be art, a scientific research project or something else.
Start a debate about this project with the two of you, where you reflect on the following questions:
- what does this extra sense communicate ? What is the content of this sense ? What is excluded by this sense ? (Think about the microphone example given by Edwin: the microphone captures air-pressure changes, but through that it is able to transmit speech, language and concepts)
- how would having this sense affect our sensorial balance ? What is the 'message of this medium' ? How would it affect our behaviour ? A good way to approach this question is by answering the four questions in McLuhan's 'Tetrad' (see link above).
Give a presentation together where you briefly present this project and your reflections on it. In total 10 minutes, so prepare what you want to say. A partial goal of this assignment is also to give you more experience in giving such a short presentation.
Trepanation is an antiquated medical intervention in which a hole is drilled or scraped into the human skull, thus exposing the dura mater in order to treat health problems related to intracranial diseases.
Although considered today to be pseudoscience, the practice of trepanation for other purported medical benefits continues. The most prominent explanation for these benefits is offered by Dutchman Bart Huges (alternatively spelled Bart Hughes). He is sometimes called Dr. Bart Hughes although he did not complete his medical degree. Hughes claims that trepanation increases "brain blood volume" and thereby enhances cerebral metabolism in a manner similar to cerebral vasodilators such as ginkgo biloba. No published results have supported these claims.
A primitive cyborg? He wanted to expand his awareness by drilling a hole in his own head, and he died at the age of 70 (heart failure).
As an efficient and cost-cutting tool we developed the smartband. This smartband 'senses' a lot of things about the body and its context: electrodermal activity, pulse volume, skin temperature, tri-axial acceleration, magnetometric direction, ambient light intensity, ambient sound intensity, infrared detection of nearby human bodies and ambient temperature... 10 bits of resolution and ASCII recording to a 1GB SD card... it's not clear what connectivity it has for realtime transmission of data to other devices...
The smartband is a wearable textile band with unobtrusively embedded microelectronics and sensors.
Created by Georgios Papastefanou, it seems quite an experimental device, but it is already being used in some projects... I found it some time ago through UrbanSync, an ongoing project of multimodal signal gathering within urban contexts which is using it... in this case the physiological + context data are recorded in sync with three other streams: sound, sonification of the GHz range and GPS trails. Further sonification, visualization and data mining of the collected data are still in development in a collaborative way... the synced streams of data collected during three weeks of sessions in Porto last October have been made available online for those who want to experiment with them.
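Since the smartband logs its 10-bit sensor readings as ASCII to an SD card, a tiny parsing sketch may help picture what working with such a log could look like. To be clear: the field layout, names and order below are invented for illustration; the smartband's actual file format is not documented in this post.

```python
# Hypothetical parser for an ASCII sensor log like the smartband's.
# Assumed (invented) line format: "timestamp,eda,pulse,skin_temp"
# where each reading is a raw 10-bit ADC value (0-1023).

from dataclasses import dataclass

@dataclass
class Sample:
    timestamp: float  # seconds since the session started
    eda: int          # electrodermal activity, raw 10-bit value
    pulse: int        # pulse volume, raw 10-bit value
    skin_temp: int    # skin temperature, raw 10-bit value

def parse_line(line: str) -> Sample:
    """Parse one ASCII log line, e.g. '12.5,512,730,498'."""
    t, eda, pulse, temp = line.strip().split(",")
    return Sample(float(t), int(eda), int(pulse), int(temp))

def to_volts(raw: int, vref: float = 3.3) -> float:
    """Convert a raw 10-bit ADC reading to volts (assumed 3.3 V reference)."""
    return raw * vref / 1023

sample = parse_line("12.5,512,730,498")
```

Real sensor values would of course still need calibration curves per sensor; the 10-bit-to-volts step is just the first, generic stage.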
from the project background:
Normally we perceive our surroundings using 5 senses: sight, sound, smell, touch and taste. What happens when we explore our environment without sight and sound?
The Newham Sensory Deprivation Map (2007) is the result of an intensive workshop with 34 students from Newham Sixth Form College in London. The students were divided into pairs, one of whom was blindfolded and given ear defenders so that they could not see or hear. The other student was given a Global Positioning System as well as pen and paper. Together the two explored the local area around the college for up to an hour. The idea being that the blindfolded and deafened student verbally relates their sensory experience to the other student who is taking notes and making sure they are safe during the journey. On their return the geographical data from the GPS is downloaded and all the sensory observations made during the walk are spatially recorded. The final map combines all the annotations of the students and forms an alternative sensory map of Newham.
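As a rough illustration of the mapping step described above, here is a small Python sketch that pairs timestamped sensory notes with the nearest GPS fix, which is essentially the operation needed to place the students' observations spatially. All data, coordinates and function names are invented; the workshop's actual tooling is not described in the post.

```python
# Sketch: attach each timestamped observation to the closest GPS fix,
# so the note can be drawn at that position on a map.

from bisect import bisect_left

# (timestamp_seconds, latitude, longitude) fixes from the GPS logger
gps_track = [
    (0,   51.507, 0.035),
    (60,  51.508, 0.036),
    (120, 51.509, 0.038),
]

# (timestamp_seconds, observation) noted down by the sighted partner
notes = [
    (55,  "smell of fried food"),
    (118, "traffic noise felt through the pavement"),
]

def nearest_fix(track, t):
    """Return the GPS fix whose timestamp is closest to t.

    Assumes the track is sorted by timestamp; binary search finds the
    insertion point, then we compare the neighbours on either side.
    """
    times = [fix[0] for fix in track]
    i = bisect_left(times, t)
    candidates = track[max(i - 1, 0):i + 1] or [track[-1]]
    return min(candidates, key=lambda fix: abs(fix[0] - t))

# Each note becomes (gps_fix, text), ready for plotting on a map.
placed = [(nearest_fix(gps_track, t), text) for t, text in notes]
```

With real data the track would have a fix every few seconds, so nearest-timestamp matching places a note within a few metres of where it was made.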