Switch Statement

076: Gödel, Escher, Bach - Ch. 11 - And Then You Experience a Chunk of Grandma

Jon:

Hello everyone, and welcome to the Switch Statement podcast. It's a podcast for investigations into miscellaneous tech topics.

This is our 14th episode on Gödel, Escher, Bach by Douglas Hofstadter.

Matt:

Hey, Jon. How are you doing?

Jon:

Hey, Matt, I'm doing well. My Matt cell is firing.

Matt:

Oh, that's nice. Yeah. Um, I don't just have a Jon cell, I have a Jon subsystem in my brain.

Jon:

Aw, damn, you skipped to the next chapter.

Matt:

Oh, nah, well, um, yeah. So, uh, we're talking about brains,

Jon:

Yeah,

Matt:

not in like the zombie brain eating way.

Jon:

right, yeah,

Matt:

Although he does talk about neuron eaters in these chapters.

Jon:

He also talks about cutting out chunks of brains.

Matt:

My gosh, this guy. Like, they talked about that so, like, sterilely. Is that a word? Um, yeah. Clinically, that's a perfect word for it. But do you want to just dive right into that?

Jon:

Yeah, sure. And this was kind of a theme within these two chapters: these grotesque experiments that scientists have run to try to understand the brain a little better. Basically this guy, Lashley, I think his name was, was trying to understand the part of a rat's brain that holds the ability to solve a maze.

Matt:

yeah.

Jon:

So, if I understood the experiment correctly, he would, like, have these rats learn this maze, and then he would slowly cut out parts of their brain to see when they stopped being able to solve the maze.

Matt:

Yeah, the way that they describe that is he "trained rats to run mazes and then removed various cortical regions." No, you chopped out a bit of that rat's brain. Don't try to hide that under "cortical regions," because it makes it sound like, um, I don't know, it just sanitizes any sort of grossness about it.

Jon:

Yeah, and the worst part about it is he didn't even, well, actually, he found out something incredibly remarkable, which is that it's not stored in any one part of the brain. He would cut out these various parts of the brain and the rats could still solve the maze, although some of them would take longer. But what that basically means is the knowledge to solve the maze is somehow spread across the entire brain, which was the incredible finding that he did, uh, come up with.

Matt:

Right. And when he removed more, the rat struggled more with the maze. So it was kind of like, yeah, it's just spread everywhere. And Hofstadter makes this joke, which is like, I guess it's kind of a defense mechanism, so that if your brain gets damaged through fights or through malicious neuroscientists, you know, it has a defense against that.

Jon:

Yeah. And this Lashley experiment was pivotal in the world of understanding human brains, because there were all these theories around, and this is what we were talking about at the very beginning of the episode. There was a theory called the grandmother cell theory, which I think was sort of meant to be a ridiculous conjecture. I don't know if it was ever a serious theory or anything, but the idea is there's some cell in your brain that's like your grandma. And so when you see your grandma, or when you think about your grandma, this cell is kind of firing.

Matt:

Yeah. Um, no, exactly. And, I mean, maybe they picked kind of a silly handle for it, but I think it was a real hypothesis: that you could isolate that one neuron, or series of neurons, where your brain is storing that information. But that just doesn't seem to be the case. It seems like these concepts arise out of all of these neurons together. And this is one thing that kind of breaks my brain: when you think of your grandma, there's neurons all over your head that are lighting up,

Jon:

Yeah,

Matt:

but it's actually the pattern of neurons that is your grandmother. It's not that there's one specific place where it's stored; you've just learned that, okay, this pattern means grandma, as opposed to there being one neuron that represents that.
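
A toy way to see the difference Matt is describing (a made-up sketch, not anything from the book or the episode): in a "grandmother cell" scheme, one unit stands for the whole concept; in a distributed scheme, the concept is a pattern across many units, so related concepts overlap and the code survives damage, which is roughly what Lashley saw with his rats.

```python
# Toy illustration (invented numbers): "grandmother cell" coding vs.
# distributed coding, with each concept as activations over 8 "neurons".

def overlap(a, b):
    """Dot product: how much two activity patterns have in common."""
    return sum(x * y for x, y in zip(a, b))

# Localist ("grandmother cell"): one dedicated unit per concept.
grandma_local = [1, 0, 0, 0, 0, 0, 0, 0]
grandpa_local = [0, 1, 0, 0, 0, 0, 0, 0]

# Distributed: each concept is a pattern; related concepts share units.
grandma_dist = [1, 1, 0, 1, 0, 1, 1, 0]
grandpa_dist = [1, 1, 0, 1, 0, 0, 1, 1]

print(overlap(grandma_local, grandpa_local))  # 0 -- no shared structure
print(overlap(grandma_dist, grandpa_dist))    # 4 -- mostly shared

# Knock out two "neurons" and the distributed pattern still reads as
# grandma -- it degrades gracefully, like Lashley's maze-running rats.
damaged = [0, 0] + grandma_dist[2:]
print(overlap(damaged, grandma_dist), overlap(damaged, grandpa_dist))  # 3 2
```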

Jon:

And when I say grandma, it's not like you're perfectly imagining an image of your grandma in your mind. You're thinking of, like, food she made for you. My grandma used to make me crepes and blueberry sorbet, so I loved hanging out with her. We also used to play Euchre together. So when you think of your grandma, these dozens and dozens of things, hundreds of things probably, flit through your mind instantaneously. He mentions this other experiment by this guy, Penfield, and yeah, this is sticking with the theme of weird experiments that seem somewhat inhumane, but apparently this guy would just shock random parts of people's brains. And when he did that, they would see random memories. And it was just kind of this evidence that information in your brain is sort of stored across your entire brain. So he would shock one part and you'd see a memory of your family, and he'd shock another part and you'd see a different memory of your family. And he wasn't finding some clear pattern. It was more just like everything was spread around everywhere.

Matt:

But isn't that just, uh, isn't that what dreams basically are? It's like he was basically simulating a dream.

Jon:

He was creating dreams. Yeah. And one of the important things they talk about with this grandma thing, like thinking about your grandma, is this notion of funneling, where, and I don't know if I can give a great definition of this, but basically there's some process in your brain where you take the stimulus of hearing the word grandma, a bunch of random shit happens in your brain, and the end result is that you experience this chunk of grandma, you know,

Matt:

Please don't ever say...

Jon:

That sounded much weirder than I wanted it to, but it's like you experience this kind of grandma-ness, you know. Yeah, I'm doing a terrible job, but it's 'cause he's talked about this concept of chunking throughout these chapters.

Matt:

Right, but so if I'm understanding what you're saying, it's kind of like you have this direct neural stimulus, which is your ear hairs, like your inner ear hairs, not, you know, the ones that have grown out of your ears, because, um, I don't want to make you sound older than you actually are. But so your inner ear hairs, they're oscillating.

Jon:

Uh huh.

Matt:

And they indicate a frequency. And then your brain has to, like, translate those jiggling ear hair neurons into, okay, well, this is the word grandma. And he talks about this: is it going directly from the jiggling ear hair neurons to lighting up all of the grandma neurons, or is there this central place where it's being translated into the concept of grandma, and then that's kind of lighting up all the other associations? Um, and I don't think we know, right?

Jon:

Yeah, I don't think we know, but it did sound like when he was describing funneling, he was describing something more akin to the latter, your latter description, where, you know, you hear grandma and then there's some process that takes place in your brain where you're left with this symbolic representation of your grandma. Yes,
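
To make that "latter" picture concrete, here is a purely hypothetical sketch of funneling as a pipeline; every stage name and lookup table is invented for illustration, and the associations are just the ones Jon mentioned earlier.

```python
# Hypothetical sketch of "funneling": wide noisy signal -> one symbol.

def detect_frequencies(ear_hair_jiggles):
    """Inner-ear stage: raw vibration samples -> the distinct frequencies."""
    return tuple(sorted(set(ear_hair_jiggles)))

def recognize_word(frequencies):
    """Auditory stage: a frequency pattern funnels down to one word token."""
    toy_lexicon = {(220, 440, 880): "grandma"}
    return toy_lexicon.get(frequencies, "<unknown>")

def activate_symbol(word):
    """Conceptual stage: the word wakes a whole cluster of associations."""
    associations = {"grandma": ["crepes", "blueberry sorbet", "euchre"]}
    return {"symbol": word, "associations": associations.get(word, [])}

signal = [440, 220, 880, 440, 220]  # the jiggling inner-ear hairs
print(activate_symbol(recognize_word(detect_frequencies(signal))))
# {'symbol': 'grandma', 'associations': ['crepes', 'blueberry sorbet', 'euchre']}
```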

Matt:

Arguably the whole core of this chapter is about symbols, and how your brain is doing this translation from "signals," quote unquote, to symbols. And we don't know when that process happens. We're trying to trace it along, and back then they didn't seem to be able to keep track of that, and even today I'm not sure we have a really clear idea of when we arrive at a symbol.

Jon:

I wonder if our ability to think of things symbolically is a simple consequence of our brains being as complex as they are, because he talks about behaviors, and yeah, let's stick with this grotesque experiment theme. He talks about these wasps that grab a cricket, paralyze it, take it back to their home, seal it in, bury it alive next to a bunch of eggs, you know, wasp baby eggs, and then the eggs hatch and consume this poor paralyzed cricket, who's been sitting there waiting to die, basically. And they talk about this experiment where evidently the wasp takes the cricket back to its nest, and then it goes in, in order to sort of set things up, and then it comes back out and grabs the cricket and pulls it in. And these scientists, and this is kind of funny, but when the wasp goes in the nest, the scientists would drag the cricket away, which is just amazing. And then the wasp would go grab the cricket, take it to the nest, go back in the nest, clean it up, and basically reset the entire process: grab the cricket, put it at the front doorstep, clean up inside. And the scientists could evidently do this repeatedly. They were talking about doing it like 40-plus times.

Matt:

And like, you know, the point being that there wasn't this higher concept of why. The cricket didn't know, or sorry, the wasp didn't know why it was doing that. It just did it, you know, because if it understood why it was doing that, and remembered that it had done it, it clearly would not have done it 40 times. But it just had some process, and it's like, well, if the cricket, did I say cricket? I mean the wasp. If the wasp remembered why it had done it, and that it had done it,

Jon:

Yeah,

Matt:

it wouldn't have kept on dragging the cricket to the threshold. Um,

Jon:

exactly.
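
The wasp's routine is easy to caricature in code (a made-up sketch of the behavior described above, not a model from the book): each step is triggered afresh by the immediate signal, and because nothing is remembered between passes, the meddling scientist can reset the loop indefinitely.

```python
# Toy sketch of the wasp's fixed-action pattern: pure signal-response,
# with no memory of previous passes.

def provision_nest(meddle):
    """Run the wasp's routine. `meddle(n)` plays the scientist: it returns
    True if they drag the cricket away while the wasp is inside the nest."""
    passes = 0
    cricket_inside = False
    while not cricket_inside:
        passes += 1
        # The fixed sequence, re-triggered from scratch every single time:
        # 1. drag the cricket to the threshold
        # 2. go inside to inspect the nest (the scientist may meddle now)
        if meddle(passes):
            continue  # cricket gone -> signal gone -> restart the whole ritual
        # 3. pull the cricket in
        cricket_inside = True
    return passes

# The scientists reportedly kept this up 40-plus times before giving up.
print(provision_nest(lambda n: n <= 40))  # -> 41 passes
```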

Matt:

Yeah, but honestly, I feel like sometimes humans do that too, where, you know, they were taught something and they're really not processing why, not breaking it down into its more component parts.

Jon:

Right. Cow paths.

Matt:

They're just following the cow path. And I mean, honestly, sometimes that's the right thing to do, you know, because maybe they don't fully understand why you do something, but it's actually important that they do it. So, um, I feel like there's weights on both sides.

Jon:

Exactly. But yeah. And like you're saying, he mentions that animals generally don't generalize. There's plenty of animals that do extremely complex things, but it's all signal, you know; they haven't reached that point where the signals graduate to symbols. Um, and he mentions how humans have this ability of understanding classes and instances.

Matt:

Oh, I'm glad you're bringing this up. Yeah. Right.

Jon:

Which is very much a programming concept. Any programmer understands classes and instances, where you define a class, which is kind of like the definition of an object, but then you can instantiate individuals of that object. And yeah, he mentions how humans have the unique ability to both understand the concept of, say, a president, and also understand the concept of, like, Barack Obama the president. Barack Obama's an instance of the class president.
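
In code, the distinction looks something like this (a minimal sketch, with Python standing in for any language with classes); it also previews the point Matt makes in a moment, that one instance belongs to a whole stack of classes at once.

```python
# Minimal sketch of classes vs. instances, using the book's example.

class Person:
    """A class: the general concept, not any particular individual."""
    def __init__(self, name):
        self.name = name

class President(Person):
    """A more specific class; every President is also a Person."""
    def __init__(self, name, number):
        super().__init__(name)
        self.number = number

# An instance: one concrete individual of the class.
obama = President("Barack Obama", 44)

# Bringing the instance to mind "lights up" its whole chain of classes.
print(isinstance(obama, President))  # True
print(isinstance(obama, Person))     # True
```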

Matt:

Yeah, sometimes I wonder: were classes and instances already a thing at this point? Or is he just influencing all of these downstream programming languages, because these language designers read his book and they're like, oh, those are good names, I'll call this a class, I'll call this an instance, you know?

Jon:

Dude, it's funny you mention that, 'cause literally the next bullet point in my notes after classes and instances is: was Hofstadter just imagining all these things, or was this known research? Because he's just throwing out so many huge ideas, these utterly fundamental principles of all software and, you know, understanding human consciousness, blah, blah, blah. And I'm sure he was drawing from a lot of modern research at the time, but it was 1979. This stuff was barely understood.

Matt:

Yeah. Like, I don't even know. I would need to go back and look, but I mean, C++ didn't exist yet. So the oldest programming language I know of that had the concept of a class didn't exist yet. I would need to go back and see if this influenced those language designers. Um, and it is funny, 'cause it's just a perfect example of one of those times when I never really thought about the word class before. It's just, you know, class. That's just the word you use to define, you know, an interface or whatever, or, like, define a class,

Jon:

Yeah.

Matt:

but it's like, no, it's a grouping of things that share attributes

Jon:

Yeah. Right.

Matt:

And I don't know, it's just cool to have that greater realization now, after having used them for, like, 15 years. But, um, then the other cool thing that he says is, so yeah, you talk about a president, and then you talk about Barack Obama, but when you have an instance of Barack Obama, that alludes to, yeah, the class of presidents, but also the class of men and the class of people. And you can't have an instance without also, you know, instantiating a bunch of, or not instantiating, but kind of bringing to mind

Jon:

Having the qualities.

Matt:

additional classes.

Jon:

Yeah, exactly. And this is this whole symbolic way of thinking, where our brains are able to understand, oh, you know, this is Barack Obama, and then all of these important ideas about Barack Obama. We have access to all of those ideas because our brains are symbolic and can understand these classes and instances.

Matt:

And this is what's super powerful about the concept of a class: when you're thinking about the class of president, you can actually divorce it from all of the specific attributes of Barack Obama. It doesn't need to be a man. It doesn't even need to be a person. Like, could you have an AI president?

Jon:

Dog mayors. There's plenty of dog mayors.

Matt:

A dog mayor, right? Could there be a dog president?

Jon:

I...

Matt:

There's no real, no rule that a dog can't play president.

Jon:

Deep...

Matt:

Like a reference to, uh, um, Air Bud,

Jon:

Yeah

Matt:

No, "ain't no rule a dog can't play basketball," or whatever it is. So anyway, that's all I'll say about classes. Oh, well, actually, sorry, I did have another point about this. Listeners may remember I'm going to law school right now, and this is constantly what you're doing in law school: you have this general rule, and then you read this very specific case, and there's this back and forth between cases affecting the rules and rules affecting the cases. So you constantly need to read this hyper-specific series of facts and then distill it into this abstract class of rules.

Jon:

So your programming background, understanding classes and instances, is hopefully going to carry over to law school in some sense.

Matt:

Yeah, in a lot of ways it does, I think.

Jon:

Nice. Well, the last note I had for this chapter, and I did not take very good notes, so we can sort of discuss this, but I thought it was an interesting point: humans don't understand physics. It's not like when you see something happen, like a ball rolling down a hill, your brain is calculating mgh for the potential energy and then one-half mv squared or whatever. You've just observed that type of thing a ton of times, and so your brain is able to build this probabilistic cloud of what's probably going to happen based on all these observations. Your brain is basically a big predictive machine: it's taking in sensory input and just kind of predicting what's going to happen next. And at some level, that's sort of all it is. And I know I say this so much that I'm probably sounding like a broken record, but that's exactly what ChatGPT is. It's taking in input and it's predicting the next thing. And a lot of people say, oh, it's just regurgitating its training data. But on some level, that's exactly what the human brain does.
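
A toy version of Jon's claim (a bigram counter, nowhere near how GPT is actually built, but the same take-input-and-predict-the-next-thing loop):

```python
import random

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then generate by repeatedly sampling a likely next word.
corpus = ("the brain is a predictive machine and "
          "the brain predicts the next thing").split()

following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, []).append(nxt)

def generate(start, length=6):
    out = [start]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break  # nothing observed after this word -> nothing to predict
        out.append(random.choice(candidates))  # sample the predicted next word
    return " ".join(out)

print(generate("the"))  # e.g. "the brain is a predictive machine"
```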

Matt:

Oh, yeah.

Jon:

And so I don't think there's as much distinction between what GPT is doing and what our brain is doing as a lot of people think there is.

Matt:

Um, this ties directly to a section in the last chapter, which says, in quotes, "computers can only do what you tell them to do." And he's kind of calling that common wisdom into question.

Jon:

Yeah.

Matt:

And I think this is exactly what you're saying, where he basically says: as this thing gets more complicated, you're going to be able to predict what it's going to do with worse and worse probability. And now, today, you basically can't predict it at all. I mean, we're not quite at the point where it's acting on its own behalf, basically, but I don't know. And especially in a world where you have a system that can interact with humans and gather information about that interaction, I still firmly believe there's your feedback loop. It's just going to evolve.

Jon:

Right.

Matt:

So I just don't buy the whole "oh, it's just regurgitating information" thing.

Jon:

Yeah, neither do I. Well, should we seal off this chapter, or should we try to squeeze in the next chapter?

Matt:

Let's, um, okay, there was one other thing that I wanted to talk about. He talked about neurons and glial cells, and there was this very interesting point that he made, which was like, oh, there's ten times as many glial cells as there are neurons, but they just perform this structural function, so we're not going to talk about those.

Jon:

Yeah.

Matt:

That just stood out to me as one of those times like when we have all this quote-unquote junk DNA that, quote unquote, doesn't do anything. I just am always very suspicious when things like that happen. So I looked more into glial cells, and into whether there was anything we've learned about them since this came out, and they have learned something very interesting: as a baby, you have a ton of these synapses, way more synapses than you will have as an adult. And these microglial cells go around and they trim synapses; they will, like, eat synapses, basically. And so that kind of guides how your brain works as you're growing up. And they were saying that in people with autism, the way their glial cells work is different; they're not trimming as many synapses as kind of normies.

Jon:

Wow.

Matt:

So these glial cells do seem to have this very important function. Now, they're not doing the processing themselves, as far as we're aware, but they are performing a really important role in cognition, because they're kind of guiding the

Jon:

The structure. Yeah. That's crazy. Dude, I'm glad you looked that up, because I had a similar thought when I read that part.

Matt:

All right. But I think that was all I had, and yeah, we're kind of at the end. So maybe we just make it this one chapter, and then we'll wrap up this section with chapter 12.

Jon:

Hell yeah. Alright, cool. Well, I will see you next time, Matt.

Matt:

See you next time.