Mapping Objects and Spaces

Odds are, you don’t feel directly connected with the world.  You might perceive that there is a control room somewhere in your head, and you’re sitting in there watching a video display, barking out orders.  Like in Woody Allen’s “Everything You Always Wanted to Know About Sex,” or the Disney film “Inside Out,” which is adorable and weirdly accurate.  (Fair warning: if you’ve ever had a little girl, don’t let your buddies see you watching that movie.)

You are insulated from the world by the brain, which processes the information you get from your sense organs.  Your sense organs are connected with the world, but the raw data, in itself, would be meaningless to you, even if you had access to it.  The most your eyeballs can do for you is send data to your brain.  Your brain then makes a model of the world, and it uses that model to figure out what to do from minute to minute, and you get to hope that it made a decent model; one that, for example, will keep you from walking off a cliff.

All models have limits.  Let’s talk about that.  Since humans primarily use vision to explore the world, we will talk about how the visual centers in the brain form visual models.

 

Previously, we thought about the brain using a bottom-to-top approach.  For visual systems, it’s better to think of it as a back-to-front organization.  Raw sensory data enters the brain in the back.  The information is sent forward for processing, such as identification and localization in space.  Way up front, the brain decides whether or not to pay attention to what the eyes are looking at.  If the brain does decide to pay attention, it can use visual working memory to form three-dimensional models out of the two-dimensional information it receives.  The information is 2-D because the retina is flat, like the photo-sensor in your camera.  (Or “film.”  Back in my day, we didn’t have fancy stuff like… oh shut me up.)
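
If code is your thing, here is a minimal sketch of why a flat sensor throws away depth: a bare-bones pinhole projection in Python.  The focal length and the two sample points are numbers I made up for illustration; this is the geometry of a camera, not a model of the retina.

```python
# Minimal pinhole-projection sketch: a flat sensor records only two numbers
# per point, so depth has to be reconstructed later.

def project_to_flat_sensor(x, y, z, focal_length=1.0):
    """Project a 3-D point (x, y, z) onto a 2-D image plane."""
    return (focal_length * x / z, focal_length * y / z)

# Two different points in space (units are arbitrary)...
near_point = (1.0, 1.0, 2.0)
far_point = (2.0, 2.0, 4.0)   # twice as far away and twice as big

# ...land on exactly the same spot on the flat sensor.
print(project_to_flat_sensor(*near_point))  # (0.5, 0.5)
print(project_to_flat_sensor(*far_point))   # (0.5, 0.5)
# The z coordinate is gone; getting it back is the brain's 3-D modeling problem.
```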

When we are talking about modeling the world visually, we are talking about two things that are very different from a behavioral viewpoint, but very similar from a functional viewpoint.

One behavior has to do with spatial localization.  You hear something rustling around in the bushes, and you’re able to figure out where the sound is coming from.  Or, if you see light glinting off the canopy of an aircraft in the sky, you can point to it, or call it out on the radio.  “Contact, 10 o’clock high.”
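
For the curious, here is a toy version of that radio call in Python: turn a position relative to you into a clock bearing and a high/low call.  The little helper function and the numbers are mine, invented purely for illustration.

```python
import math

def clock_call(ahead, left, up):
    """Turn a relative position into a 'contact, X o'clock high/low' call."""
    # Bearing measured clockwise from straight ahead (12 o'clock).
    bearing_deg = math.degrees(math.atan2(-left, ahead)) % 360
    hour = round(bearing_deg / 30) % 12 or 12
    elevation = "high" if up > 0 else ("low" if up < 0 else "level")
    return f"Contact, {hour} o'clock {elevation}."

# Something ahead of you, off to your left, and above you (arbitrary units):
print(clock_call(ahead=0.5, left=1.0, up=0.3))  # Contact, 10 o'clock high.
```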

The other behavior has to do with forming an imaginary three-dimensional model of an object.  Your geometry teacher draws this on the blackboard:

[Figure: a flat, two-dimensional line drawing of a box]

and in your mind’s eye, you can see a box.  If you have a good imagination, you can mentally rotate the box and be able to read what’s written on the bottom.
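
If you like to see things in code, here is a minimal sketch of that mental rotation using NumPy: spin the eight corners of a cube 180 degrees about one axis, so the face you could not see ends up facing you.  The cube’s coordinates and the choice of axis are arbitrary; the point is only that rotating the model exposes a face you were never actually shown.

```python
import numpy as np

# The eight corners of a unit cube (an arbitrary stand-in for the box on the board).
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=float)

theta = np.pi  # rotate 180 degrees about the x-axis
rotate_about_x = np.array([
    [1.0, 0.0,            0.0           ],
    [0.0, np.cos(theta), -np.sin(theta)],
    [0.0, np.sin(theta),  np.cos(theta)],
])

rotated = cube @ rotate_about_x.T
print(np.round(rotated, 3))
# The face that started at z = 0 (the "bottom") now sits above the rest of the
# cube, so whatever was written on it is facing up.
```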

One way of looking at it is that we are doing two different things: modeling spaces, as opposed to modeling objects.

I get the feeling that, on an evolutionary basis, modeling objects has something to do with tool use, or “praxis.”  We have to be able to use our imagination to design tools, or to figure out how to use them.  The hand has something to do with it, of course.  Some people find it hard to form a visual model of an object without touching it.  Some have a hard time understanding how a tool might work without pantomiming the hand motions involved.

The way I see it, we’ve taken an evolutionarily ancient function – localizing things in space – and married it to the more advanced function of fine movement of the fingers and thumb, and we’ve come up with a state-of-the-art feature, as brains go.  Compared with spatial localization, 3-D modeling is probably a very recent function, evolutionarily speaking.  Not many creatures possess that capacity, even on a very rudimentary level.  In fact, the only other species known to have this capacity is the chimpanzee.

When Jane Goodall could take time out from hugging the chimps, and actually study them, she discovered something important: that ants are delicious.  This is good news if you’re an ant-eater, and you have a long skinny tongue that will fit into an ant-hole.  Chimps don’t have that, but they do have one thing the ant-eater doesn’t have.  They have the imagination to look at a twig, and realize that if they strip the leaves off it, it will wind up looking like something that will fit down an ant-hole.

Now, that’s pretty much it for the animal kingdom.  Humans can visualize stick-tools too.  We can also visualize screwdrivers, power drills, stereotactic frames, large-array radio telescopes, and Ducatis.  It’s not easy, and some do it better than others, but we have that capacity.

These two functions – visuospatial localization and object modeling – are both examples of mapping.


Future articles:

  • Do humans have instinct?
  • Intelligence as an emergent property of neural networks
  • Projection of information of higher dimension onto three-dimensional space, and what (if anything) this has to do with the brain
  • What we can and cannot learn from brain chemicals

 

My writing is speculative.  My plan is to keep the editorials on a separate page from the news.  But a wise reader will be a skeptic, and will build his or her own foundation of knowledge.

Dr. Ken Heilman is one of those rare scientists who can break down an extraordinarily complex topic into something a normal human can understand, and he does so with enthusiasm, warmth, and humor.  He is a brilliant professor and a good doctor; you should be happy if you get into a pickle and he shows up at your bedside.  And he has never lost that child-like sense of wonder at the world that we all had once.  If you want to get the real skinny on how the brain works, I can’t think of a better place to start.  As to the specific topic of spirituality and the brain, I recommend this.  Don’t neglect this, which should give you a good overview of his research interests and should cover everything I’ve written here.
