Friday, April 17, 2009

GPGPU in our Eyes

Been reading Beautiful Evidence. With all the emphasis on information density, it makes me think about our minds as parallel processing engines. But you still need to get the data in there. Eyes are a really great way to input high-resolution 5-dimensional data (2 spatial, 3 color) for most people. Way better than reading words. Well, unless the picture can be conveyed in a few words.

This also reminds me of systems like numpy or MATLAB, where the language itself is slow, but if you can get the data into the low-level primitives, you can crunch through it fast.
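A toy sketch of that idea with numpy (the array size and timing harness here are just illustrative choices, not from the original post): the same sum is computed once with a pure-Python loop, where every element crosses the interpreter boundary, and once by handing the whole array to numpy's compiled internals.

```python
import time
import numpy as np

data = np.random.rand(1_000_000)

# Pure-Python loop: each element is boxed and handled by the interpreter.
start = time.perf_counter()
total_loop = sum(float(x) for x in data)
loop_time = time.perf_counter() - start

# Vectorized: one call, and the whole array is processed in compiled code.
start = time.perf_counter()
total_vec = data.sum()
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.4f}s  vectorized: {vec_time:.4f}s")
```

On typical machines the vectorized call is orders of magnitude faster, which is the whole trick: keep the per-element work out of the slow language and inside the fast primitives.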

On a side note, I wonder how easy it is for people (especially those who are fully blind) to learn to make sense of bas-relief data by touch.


  1. Our eyes may be able to accept 5-dimensional data, but the very idea that you can usefully describe a scene in a few words, I think, means that the information density is pretty low. I also think that the fact that a blind person can successfully navigate the world without that input means that visual information density is probably pretty low. Any idea how visually information-dense a scene could be before a sighted individual couldn't process it? Something about psychedelics, maybe?

  2. I'm no expert on computer vision, but as a human, I can do more with a detailed map than if someone tried to describe the scene to me. Unless only certain key features mattered.

    Perhaps simple descriptions vs. detailed visuals relates vaguely to vector vs. raster graphics.

    A picture's worth a thousand words, but a sentence can be worth many pictures, depending on the issue.

    Also, I think that the most important parallel processing goes on in the brain (no neuroscience expert here, so I won't try to be more detailed than that), not in the eyes. The eyes are more of an input device. My title was misleading in that sense (and doesn't even fit my original text that well). It was just the easier thing to say.

    But I think you are also right that we do a lot of selective elimination of data. We know what to focus on. But even knowing what to focus on requires a lot of parallel (or super fast) processing. Again, I really meant to focus on eyes as an array input device to the machinery in the brain.

  3. Side note again: that's also why I hinted at touch. I can't even tell which dots are raised in a Braille letter, because I'm so untrained and unpracticed at that. But I imagine that touch could be an effective array input device for people who did practice that sort of thing. And I imagine the data could be fed through the same brain machinery just as easily as visual data.

    Again, I'm Mr. Ignorant speaking.