Audible mind

What happens when we mix a sense with the brain's communication? Will the brain then become part of a larger system, where consciousness can no longer be said to be as subjective?

Specification

There are several reasons to use electronic circuits communicating with sound to implement our neural network:

  1. The biological neural network (i.e. the brain and nervous system) is based on massively parallel processing that a conventional computer cannot match. By making autonomous neurons, each of which is a simple computer in itself, we get a far more efficient distribution of their communication, and a much greater chance of achieving the emergent effects we are searching for.
  2. By using sound we can simulate the effect of neurotransmitters by analyzing and responding differently to varying frequencies, unlike traditional software-based neural networks, which work strictly with ON/OFF firing (see the sketch after this list).
  3. The current from the direction-sensitive receivers can be sent through ionic polymer-metal composites, which will adjust the position of the receiver towards the most interesting transmitter.
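
As a rough illustration of point 2, here is a minimal Python sketch. This is not the actual circuit logic; the class, the Gaussian tuning curve and all numbers are my own assumptions. It only shows the idea of a neuron responding with graded strength to different frequencies instead of firing strictly ON/OFF:

<code python>
import math

class FrequencyTunedNeuron:
    """Illustrative only: a neuron with a preferred frequency and a bandwidth."""
    def __init__(self, preferred_hz, bandwidth_hz):
        self.preferred_hz = preferred_hz   # frequency the neuron responds to most strongly
        self.bandwidth_hz = bandwidth_hz   # how sharply tuned it is

    def response(self, input_hz, amplitude=1.0):
        """Graded response: strongest at the preferred frequency,
        falling off smoothly for neighbouring frequencies."""
        distance = (input_hz - self.preferred_hz) / self.bandwidth_hz
        return amplitude * math.exp(-0.5 * distance ** 2)

# Two neurons tuned to different "transmitter" frequencies (made-up values).
low = FrequencyTunedNeuron(preferred_hz=200, bandwidth_hz=50)
high = FrequencyTunedNeuron(preferred_hz=2000, bandwidth_hz=400)

for hz in (200, 400, 2000):
    print(hz, round(low.response(hz), 3), round(high.response(hz), 3))
</code>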

Implementation

The first brain cell will be built from parts purchased from Elfa. This is because they will supply me with the know-how I will need later to build the cells from leftover electronic equipment. I suppose I will always need some parts and help from them, but I want to get the bulk of the material from the debris of a capitalist consumer society.

So far I have collected the old door intercom system from my block when it was replaced. This gives me the sensory equipment (microphones) and the broadcasting equipment (loudspeakers), including some sort of amplifier which I have to figure out how to amplify again… This leaves me with the problem of logic: I will need some kind of trigger mechanism for when to broadcast and when to manipulate the sounds, probably some kind of measurement of frequency and/or amplitude which can control simple switches and timers. The model must be based on a neural-network model which weighs the input and releases sound according to weight.
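
A minimal sketch of that weighing logic, assuming a classic weighted-sum threshold neuron; the feature names, weights and threshold below are illustrative assumptions, not measured values:

<code python>
def should_broadcast(features, weights, threshold):
    """Weigh the measured inputs and decide whether to release sound."""
    weighted_sum = sum(weights[name] * value for name, value in features.items())
    return weighted_sum >= threshold

# Hypothetical measurements from the microphone front end.
features = {"amplitude": 0.7, "low_band_energy": 0.2, "high_band_energy": 0.5}
weights = {"amplitude": 1.0, "low_band_energy": 0.5, "high_band_energy": 1.5}

if should_broadcast(features, weights, threshold=1.2):
    print("fire: route sound to the loudspeaker")
else:
    print("stay silent")
</code>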

One idea for the logic governing the tempo at which a cell samples sound from the sound cells it is listening to: if I place the cells as a kind of grid in the ceiling, they can use some kind of distance sensor (ultrasound?) to measure the distance to the floor. A long distance gives long beats, and a short distance short beats. If there is no activity in the room, the pulse of the sampling will be slow and steady. If there is a lot of activity, the distance to the closest object will vary all the time and the tempo will no longer be a steady pulse, but an ever-changing beat. The intensity of the sound can likewise be scaled up when activity increases.
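
A minimal sketch of that distance-to-tempo idea, assuming a linear mapping from distance to beat length and using the spread of recent readings as the activity measure; the distance range and constants are made up:

<code python>
def beat_interval(distance_m, min_m=0.5, max_m=3.0, shortest_s=0.2, longest_s=2.0):
    """Map a distance reading (e.g. from an ultrasound sensor) to a beat length:
    long distance -> long beats, short distance -> short beats."""
    clamped = max(min_m, min(max_m, distance_m))
    fraction = (clamped - min_m) / (max_m - min_m)
    return shortest_s + fraction * (longest_s - shortest_s)

def intensity(recent_distances):
    """More variation in recent readings (more activity) -> louder output."""
    spread = max(recent_distances) - min(recent_distances)
    return min(1.0, 0.2 + spread)  # base level 0.2, capped at full volume

readings = [2.8, 2.7, 1.1, 0.9, 2.5]   # someone moving around under the cell
print(beat_interval(readings[-1]))     # tempo follows the latest reading
print(intensity(readings))             # loudness follows the activity
</code>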

But how do I distort the sound? What about the equalizers from old ghetto blasters?

The logic in the circuits can probably be realized by routing different frequencies to circuits with different clock rates. Large capacitors will supply a slow but powerful clock rate, while smaller capacitors will give faster but quieter rates. If the sound (frequency) is meant to be treated by a different set of neurons, it will simply die out. This is modelled loosely on the article in Science (Leutgeb et al., 2005), which proposes that the representation of space in the hippocampus gives different results depending on what kind of changes one experiences: if the space itself changes, the firing frequency is altered, while if the subject moves within the space, other neurons take over.
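
One concrete way to get "large capacitor = slow clock rate" would be an ordinary 555 timer in astable mode (my assumption; the text does not name a specific oscillator). Its standard frequency formula, f ≈ 1.44 / ((R1 + 2·R2)·C), shows the effect of the capacitor size:

<code python>
def astable_555_hz(r1_ohm, r2_ohm, c_farad):
    """Approximate output frequency of a 555 timer in astable mode."""
    return 1.44 / ((r1_ohm + 2 * r2_ohm) * c_farad)

R1, R2 = 10_000, 47_000          # 10 kΩ and 47 kΩ, illustrative values

for c in (100e-6, 10e-6, 1e-6):  # 100 µF, 10 µF, 1 µF
    print(f"C = {c*1e6:.0f} µF  ->  {astable_555_hz(R1, R2, c):.1f} Hz")
</code>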

Testing

Debugging and analysis
