Memory

So far we have been talking about structure-function relationships in the brain, working from the idea that each part of the brain has its own work to do.  And we have been looking at the flow of information from one part of the brain to another.  This is not a bad way to look at the brain, provided we don’t go overboard.  It makes the brain sound like a desktop computer, where information is carried from one module to another through the bus.  That’s a helpful analogy, but it starts to break down once we get to memory.

Like a computer, the brain has both volatile and non-volatile memory, where volatile memory is information that goes away when you shut off your computer, and non-volatile memory is still there when you turn it back on again.  In humans, volatile memory is called “working memory,” and that refers to things that fly out of your head when you lose your concentration.  Working memory is stored temporarily in the frontal lobe.

Unlike in a computer, the brain’s non-volatile memory is not physically stored in any one place.  Instead, this information is programmed into a neural network.

A neural network is one way to build a computer, although it won’t look like any computer you’ve ever heard of, other than the one between your ears.

Your desktop computer has an input grid – the keyboard – and an output grid – the monitor.   A precise input is processed by one very complex chip, the Central Processing Unit, or CPU.   In response to a given input, it spits out a specific output.  For the most part, the  flow of information follows the same pathway each time.  Your CPU might allow for parallel processing, and how it divvies up the threads might vary.  But information follows certain well-defined pathways within the chip.   And there are specific information pathways between the chip and your hard drive, and between the chip and your monitor.   All this specificity in terms of information handling implies that your desktop computer is a “deterministic” system, in the sense that these pathways are pre-determined, or designed-in.   In the scheme of things, this is a relatively simple system.   Accordingly, there is no wiggle room for error.  If you type in the wrong command, you will get the wrong result, usually of the “that does not compute” variety.

The human brain does not have a CPU.  Instead, it has millions of very simple processing units.  These are your brain cells, or neurons.    These simple processing units are densely interconnected and massively parallel, meaning information can and does take several different pathways at once.   There is no pre-determined pathway.  The flow of information is therefore said to be “chaotic.”   This is most assuredly not a simple system, so maybe a better word than “chaotic” might be “complex.”   Outcomes are expressed in terms of probabilities rather than certainties.
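To make that concrete, here is a toy sketch of one such simple processing unit. Nothing about it is anatomically accurate; the inputs, gains, and the logistic squashing function are all invented for illustration. The point is only that a unit like this reports a probability, not a certainty:

```python
import math

def unit(inputs, gains):
    """One simple processing unit: a weighted sum of its inputs,
    squashed into a probability between 0 and 1."""
    total = sum(x * g for x, g in zip(inputs, gains))
    return 1.0 / (1.0 + math.exp(-total))

# Ambiguous evidence: two weak cues pointing one way, one pointing the other.
# All numbers here are made up.
evidence = [0.6, 0.4, -0.5]
gains = [1.0, 1.0, 1.0]

p = unit(evidence, gains)
print(round(p, 2))   # the answer comes back as a probability, not a yes/no
```

Wire millions of these together, each feeding the others, and you get the densely interconnected, massively parallel system described above.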

That’s a bit of a paradox.  The function of a neural network is to spit out an answer in the face of ambiguity.  Whether that’s a particularly good answer is another matter altogether.  The brain will tell you how satisfied it is with the result, and it communicates that confidence through emotion.  You’re aware of the intuitive satisfaction you feel when your brain comes up with a good answer, and the vague sense of unease you feel when the answer isn’t all that good.  I’ll give an example.

My wife and I were driving outside of Durango, Colorado early one summer evening.  It was not quite dark out, but dark enough to degrade visual cues in the environment.  After a period of contemplative silence, my wife announced, “I think we just passed a herd of elk back there.”

“Really?  Why didn’t you tell me?”

“Well, I looked out the window, saw a bunch of animals, and thought, ‘cows.’  But something just didn’t seem right.  I was all like, how come all those people pulled off the road to look at a bunch of cows?  And how come those cows are way up there on the side of the hill?  And how come they have such long… well, antlers.”

I teased her perhaps a bit more than I should have.   As punishment, to this day, when she spots an elk, she says, “Look honey.  There’s a Colorado Longhorn Cow.  An exceptionally big one.  With antlers.”

For there is a rational explanation.  Her visual association cortex – the part of the brain that connects images with concepts – didn’t have much to work with in the fading light.  Evidently it resolved that ambiguity on the subconscious level by going with the percentages, because there are a lot more cows alongside the road than elk.  But it rendered this result with little in the way of satisfaction.  She was under no pressure to act, except to the extent that I am one of those people who like to pull off the side of the road and look, and to pout about it if I don’t get to.  She had a chance to take note of her unease, to disengage from the stimulus, and to wash the data at a level of processing that is slower, but usually more accurate; namely, the conscious level.  I say “usually more accurate” to account for those times we are not mindful of our emotional state, and ignore the satisfaction we feel with a good answer, and then go on to second-guess ourselves.
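Going with the percentages can be sketched as a simple Bayesian calculation. The numbers below are pure invention, chosen only to show how a strong prior (lots of roadside cows, few elk) can outvote visual evidence that weakly favors elk:

```python
# Hypothetical base rates and likelihoods; all numbers are made up.
prior = {"cow": 0.95, "elk": 0.05}        # roadside animals are mostly cows
likelihood = {"cow": 0.3, "elk": 0.7}     # the dim-light image fits elk a bit better

# Unnormalized posterior: prior belief times how well the evidence fits.
posterior = {a: prior[a] * likelihood[a] for a in prior}
total = sum(posterior.values())
posterior = {a: posterior[a] / total for a in posterior}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 2))
```

The verdict comes back "cow," but not with full confidence, which is one way to picture the lingering unease that made her take a second look.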

The dense web of interconnected processing units – each of them a brain cell, or neuron – is called a neural network.  The study of such networks in the brain is called Connectionist Theory, because information is stored in the neural network by regulating the connections between these neurons.  Usually the connections are regulated by gain, or in other words, by their sensitivity to signals transmitted from one neuron to another.  Programming the neural network – for example, burning in a memory – is the process of adjusting gain in such a way that the network can spit out a useful answer.  For each new memory, you may have to adjust the gain for hundreds or thousands of connections before it works like it’s supposed to.
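As a rough illustration of that gain-adjusting process, here is a classic single-unit learning rule (the perceptron rule) applied to a made-up two-cue problem. Real neural programming is vastly more complicated, but the loop captures the idea: nudge each connection's gain a little after every mistake, and repeat until the answers come out right.

```python
# Training data: two binary cues -> desired answer (a tiny AND problem).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

gains = [0.0, 0.0]   # connection strengths, all starting flat
bias = 0.0
rate = 0.1           # how much to nudge a gain after each mistake

for _ in range(20):  # keep cycling through the examples
    for inputs, target in examples:
        fired = 1 if sum(g * x for g, x in zip(gains, inputs)) + bias > 0 else 0
        error = target - fired
        # Adjust each active connection's gain in the direction of the error.
        gains = [g + rate * error * x for g, x in zip(gains, inputs)]
        bias += rate * error

answers = [1 if sum(g * x for g, x in zip(gains, inputs)) + bias > 0 else 0
           for inputs, _ in examples]
print(answers)
```

With only two connections this settles quickly; with the hundreds or thousands of connections mentioned above, the same trial-and-error process simply takes a lot more repetition.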

This programming process is partially empirical – wifey only made the elk/cow mistake once.  But it’s partially behavioral; in other words, it can be a punishment/reward thing.  Good programming means you frequently get that fine feeling of satisfaction from good answers.    That’s reinforcing.  Bad programming means you can’t get no satisfaction.  That’s a form of punishment, and your brain will change its behavior if it can.

That’s a big “if” for most people.  Problem is, the network can only hold so much information.  Just depends on how many neurons you have, and how densely they are interconnected.  In other words, how smart you are.  And it also depends on how your network is set up.  Sometimes you find yourself painted into a corner, realizing after the fact that your approach had limitations.  There are techniques you can use to shake things up connection-wise, to reorganize the way information is stored so you can cram more stuff in there.  We will talk about disruptive learning in a future article.

Every brain looks more or less alike from the outside, but that doesn’t mean they all work the same way.  Precisely how your own brain works depends in part on how you’re wired up, which in turn is determined by genetics, and how your brain grew.  But it depends quite a bit more on how your brain programmed itself, and every brain does that in its own way.

In a computer, a memory consists of a physical, tangible string of 0’s and 1’s written on magnetic or optical media.  In the brain, a memory consists of an intangible concept, where a concept is a collection of attributes that are linked to one another.  Calling up one attribute tends to call up all the others.

Here’s how my old professor Dr. Stephen Nadeau put it.  The memory of a “dog” consists of a number of attributes.  What a dog looks like, and what that has in common with other dogs.  What a dog’s coat feels like.   The sound a dog makes when it barks.  What dogs smell like.  How the word “dog” is spelled, how it sounds, and how it looks written on the page.  As soon as any item on this list of attributes scrolls across your television screen to announce an ad for dog food,  all the other items bloom into consciousness.  That’s what memory is.  If the first items you see are the letters D-O-G, the same thing happens; that’s what reading is.  If you can’t read that word, and you have to sound it out, and can only recall the concept once you hear yourself say the word “dog” out loud, that’s what reading is, if you went to Florida State.
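Dr. Nadeau's linked-attribute idea can be sketched as a lookup that works from any direction: any one attribute serves as a cue that brings back the whole concept. The attribute list here is invented for illustration.

```python
# A concept as a linked collection of attributes (all invented here).
concepts = {
    "dog": {"bark", "furry coat", "wet-dog smell", "spelled D-O-G"},
}

def recall(cue):
    """Given any single attribute, return every concept it belongs to,
    along with all of that concept's other attributes."""
    return {name: attrs for name, attrs in concepts.items() if cue in attrs}

# Hearing a bark on TV calls up everything else linked to "dog."
memory = recall("bark")
print(sorted(memory["dog"]))
```

The design point is that there is no master index: the cue can be the sound, the smell, or the spelling, and the rest of the bundle comes along with it.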

Emotion can be an attribute, and the ability of emotion to connect itself with memory speaks to the richness of the human emotional experience overall.  When I was a new father, I was a bit surprised to learn that babies only really have three emotions: happy, sad, and crabby.  The rest come with life experience, and quickly develop a range of subtle nuance that far surpasses the capabilities of the words we use to describe them.

Think of the range of emotions that might get called up when you hear that dog on TV barking.  The laughter that came from running around with your dog in the backyard, and watching him do goofy things.  The sensation of tension giving way to contentment, when your dog greeted you at the front door after a long day of work.  The sense of sorrow, rage, and emptiness you felt when your childhood buddy died.  You could recall any of these things, in any mixture.  Laughter with crying.  Contentment with emptiness.  Just depends on the context. You’re probably feeling some kind of emotion just reading this.  Some might feel fear or disgust.  My wife would be crying.

Emotional memory also speaks to the fundamental question of whether or not emotion has anything to do with reality.  Some people think not.  They think their emotions are little more than noise that needs to be squelched out, in order to clear the mind for the task of real thinking.  In fact, as Dr. Ken Heilman is fond of pointing out, the emotional content of a memory is the only part that manifests in reality.  Remembering what your childhood buddy looked like does not bring him back from the grave.  But the tears are real, every time.

The brain is limited in its ability to perceive reality, but not because of emotion.  Quite the contrary, in my opinion.   Among other things, emotion is a form of communication.  You can even use it to communicate with yourself.

But the brain does have limitations, and one relates to a core function of the neural network; namely, to discriminate.  Specifically, to classify things in the face of ambiguity.

As we saw with the Antlered Cow example, resolution of ambiguity can be reflexive; in other words, subconscious, involuntary, and rapid.  Or it can be conscious and deliberate, and in that case we can take as much time as we want.

Bringing things into the conscious mind allows us to potentially consider more data, and it allows us to make judgments about how to weigh data.  One factor might be quite a bit more relevant than another, depending on the situation.  Likewise, depending on the situation, we might choose to discard certain data.  In that respect – as long as we are mindful of our emotional response – deliberating has the potential to be more accurate.  If we can spare the time.    Some people have elevated “snap judgment” to an art form.  If you’re a fighter pilot, that’s good.  If you’re an airline pilot, not so much.
The closer the stimulus, and the higher the level of arousal, the more inclined we are to work on the subconscious level.  The number of possible responses also has an influence.  If there are a number of possibilities, with no clear leader, it’s hard to decide reflexively.  If your response set boils down to two options that seem mutually incompatible – e.g., predator vs. prey – it becomes difficult to inhibit a reflexive choice.  Such things appear to defy reason; although that can be a self-fulfilling prophecy, if one is disposed to heuristic thinking.

F. Scott Fitzgerald once said that the test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time, and still retain the ability to function.  That’s why. 
