
EFTA00669052.pdf

Source: DOJ_DS9  •  Size: 160.9 KB  •  OCR Confidence: 85.0%

Extracted Text (OCR)

From: Kevin Slavin
To: Joscha Bach
Cc: Jeffrey Epstein, Kevin Slavin, Takashi Ikegami, Greg Borenstein
Subject: Re: MDF
Date: Wed, 23 Oct 2013 15:40:50 +0000

> An experiment that I would like to see one day (and of which I am not aware if someone already tried it): equip a subject with an augmented reality display, for instance Google Glass, and continuously feed a visual depiction of auditory input into a corner of the display. The input should transform the result of a filtered Fourier analysis of the sounds around the subject into regular colors and patterns that can easily be discerned visually. At the same time, plug the ears of the subject (for instance, with noise-canceling earplugs and white noise). With a little training, subjects should be able to read typical patterns (for instance, many phonemes) consciously from their sound overlay. But after a few weeks: could a portion of the visual cortex adapt to the statistical properties of the sound overlay so completely that the subject could literally perceive sounds via their eyes? Could we see music? Could we make use of induced synesthesia to partially replace a lost modality?

It's not exactly what you're proposing, but are you familiar with Neil Harbisson's work/life: http://en.wikipedia.org/wiki/Neil_Harbisson

He's an artist, primarily -- and has been living this way for quite a while now, but I'm not aware of anyone (including Neil himself) who has looked into how it has actually altered his perception...

Also, I am just warming up to run a class next semester called -- for the moment -- Animal Superpowers (new name TK). Fifteen students, each one picks an animal, goes deep into how it perceives the world, and then builds the sensory apparatus to allow a human user to understand the world as that animal does. It draws from an old Area/Code project that was never built, called "Ant City" -- in which individual players were ants, physically situated in a real city, responding to digital pheromone trails. In the meantime the artist Chris Woebken did a series (called Animal Superpowers) that approximates this (images, video: ). While I love Chris' work, it's mostly about how the ant "sees" as opposed to how the ant perceives and understands the world (e.g., pheromones). I am interested in experimenting with human augmentation to provide (or augment?) these additional forms of perception.

Prior to Chris' work, I had wanted to run such a class at ITP in 2005, but the students didn't have the hardcore sensor background. I'm talking about it now with Joe Paradiso in the Responsive Environments group, about connecting this to his class and the work they are doing (which includes working with a sensor-instrumented cranberry bog here in Massachusetts).

If there are ways in which these kinds of experiments can fold into / draw from what we're discussing here, I'd welcome it.

On Wed, Oct 23, 2013 at 10:41 AM, Joscha Bach wrote:

On 22 Oct 2013 at 16:01, Jeffrey Epstein wrote:

> I would add the possibility that each differentiated input has its own encrypted algorithm, and looking at it from too high an altitude provides little info about each one -- i.e., optic nerve encryption different than nasal receptors, maybe even a one-time code that allows only the individual to access certain stored info.

Indeed! Each individual will form its own code, for each modality. On the other hand, these codes do not simply diverge, but they are the result of the individual's adaptation to its own (changing, developing, deteriorating) physiology.

The nervous system is designed to extract structure based on the statistical properties of the input, and to compensate for defects. For instance, replacing the fine-grained input provided by the many receptors of the cochlea with a crude implant (today's models sample only a handful of frequencies) will usually result in a subjective experience of continuous auditory perception; splicing the data of a few pixels into the optic nerve of a blind person may allocate those pixels their correct positions within the visual field.

An interesting question: what are the limits of the plasticity of the sensory modalities? For instance, could we switch modalities to some extent? More than a hundred years ago, Stratton did a famous experiment in which he wore glasses that turned the world upside down (using prisms). After a few days, his brain adapted and he perceived everything as being upright again.

An experiment that I would like to see one day (and of which I am not aware if someone already tried it): equip a subject with an augmented reality display, for instance Google Glass, and continuously feed a visual depiction of auditory input into a corner of the display. The input should transform the result of a filtered Fourier analysis of the sounds around the subject into regular colors and patterns that can easily be discerned visually. At the same time, plug the ears of the subject (for instance, with noise-canceling earplugs and white noise). With a little training, subjects should be able to read typical patterns (for instance, many phonemes) consciously from their sound overlay. But after a few weeks: could a portion of the visual cortex adapt to the statistical properties of the sound overlay so completely that the subject could literally perceive sounds via their eyes? Could we see music? Could we make use of induced synesthesia to partially replace a lost modality?

Cheers,
Joscha
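The overlay Joscha describes (a filtered Fourier analysis of ambient sound, rendered as regular colors and patterns in a corner of an AR display) can be prototyped in a few lines. Below is a minimal Python sketch under assumed parameters: a 16 kHz microphone feed, a 1024-sample analysis frame, and a handful of hand-picked frequency bands, none of which are specified in the email. It reduces each audio frame to per-band levels and maps them to RGB patches that a display such as Google Glass could redraw many times per second.

# A minimal sketch of the "sound overlay" described above: take a short frame of
# audio, run a filtered Fourier analysis (band-limited FFT magnitudes), and map
# each frequency band to a color patch for rendering in a corner of an AR display.
# The sample rate, band edges, color mapping, and the synthetic test signal are
# illustrative assumptions, not details from the original email.
import numpy as np

SAMPLE_RATE = 16_000   # Hz; assumed capture rate
FRAME_LEN = 1024       # samples per analysis frame (~64 ms at 16 kHz)
BANDS = np.array([100, 300, 600, 1200, 2400, 4800, 8000])  # assumed band edges, Hz

def frame_to_band_levels(frame: np.ndarray) -> np.ndarray:
    """Return one normalized level per frequency band (the 'filtered Fourier analysis')."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    levels = []
    for lo, hi in zip(BANDS[:-1], BANDS[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        levels.append(spectrum[mask].mean() if mask.any() else 0.0)
    levels = np.log1p(np.array(levels))   # compress dynamic range
    peak = levels.max()
    return levels / peak if peak > 0 else levels

def band_levels_to_colors(levels: np.ndarray) -> list:
    """Map each band level to an RGB patch: hue encodes the band, brightness the level."""
    colors = []
    for i, level in enumerate(levels):
        hue = i / max(len(levels) - 1, 1)          # 0 = lowest band, 1 = highest band
        r = int(255 * level * hue)
        b = int(255 * level * (1.0 - hue))
        g = int(255 * level * 0.3)
        colors.append((r, g, b))
    return colors

if __name__ == "__main__":
    # Synthetic stand-in for a microphone frame: a 440 Hz tone plus a little noise.
    t = np.arange(FRAME_LEN) / SAMPLE_RATE
    frame = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.randn(FRAME_LEN)
    patches = band_levels_to_colors(frame_to_band_levels(frame))
    print(patches)   # one RGB triple per band; a live overlay would redraw these ~15x/second

A real deployment would stream live microphone frames instead of the synthetic tone and keep the band-to-color mapping fixed across sessions, so that the wearer can learn stable pattern-to-phoneme correspondences through training, as the email proposes.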


Extracted Information


Document Details

Filename: EFTA00669052.pdf
File Size: 160.9 KB
OCR Confidence: 85.0%
Has Readable Text: Yes
Text Length: 5,548 characters
Indexed: 2026-02-11T23:25:32.100438