Switch Statement
077: Gödel, Escher, Bach - Ch. 12 - Antagonizing Wasps for Good of Humanity
Hello everyone, and welcome to the Switch Statement podcast. It's a podcast for investigations into miscellaneous tech topics.
Jon:This is our 15th episode on Gödel, Escher, Bach by Douglas Hofstadter.
Matt:Hey, Jon, how are you doing?
Jon:Hey Matt, how are you?
Matt:I am doing pretty well. I'm feeling, feeling mindless today. No, there's nothing, no processing going on.
Jon:well,
Matt:between the two ears.
Jon:you could easily map your mind to my mind then, because I'm also feeling mindless.
Matt:Dude. We have a perfect, was it isomorphism?
Jon:Perfect isomorphism which is just
Matt:our
Jon:zero equals zero.
Matt:The empty set. Yes. Um,
Jon:I loved the different, uh, Jabberwocky translations in this chapter.
Matt:Did you commit this to memory? I think there was a guy in college who had committed this to memory. And he would just recite it at, uh, social engagements. And that's the kind of guy that I want to grow up to be.
Jon:absolutely. Yeah, I was,
Matt:But yeah. Do you want to, yeah. So what's going on here with his, with his Jabberwocky?
Jon:Well, yeah. And this was, this is so fascinating to me. This is like another, uh, topic that this book just, like, throws out randomly that I think is just ridiculously, uh, fascinating. He, he basically, so Jabberwocky in the first place is, like, an extremely strange poem, you know, like, um, uh, what is it, Lewis Carroll? Is he the guy?
Matt:Lewis Carroll. Yeah.
Jon:He uses all of these words that aren't real words, but you can sort of, like, figure out what they mean. Uh, they're, they're kind of like, um, what's it called when you mash two words together in order...
Matt:Like portmanteaus maybe.
Jon:portmanteau. They're kind of like portmanteaus, where he'll say, like, slithy or something. And it's like, slithy isn't actually a word, but you can tell it's kind of like slitheringly or something, you know, it's like a version of that. Anyway, this poem is, in the first place, extremely difficult to, like, represent in English, which is how it was originally written, I think. But then translating it presents this, like, new, crazy issue, because so much of the poem is based on how words sound and, like, how they fit within the rhyme scheme. So it's like, if you're translating this poem, not only are you contending with all of these words that aren't actually words, but you have to find other words in your language that may not actually be words and also fit them into the scheme. And it's just kind of this, you know, very interesting process.
Matt:Well, yeah, the translation actually has to go down to a lower level than words and start to kind of take word fragments, prefixes, and suffixes. You know, because it's not a real word, there's no dictionary you can look up, uh, slithy in, with a translation of slithy into French. So, uh, so yeah, and what it maps onto, and why he calls this out, is the French word he uses is lubricilleux, or whatever it is, I'm not sure if I'm pronouncing that correctly. And it's obviously completely different from the word that he actually uses. Uh, but that's the, that's kind of the mapping of that prefix. Cause I guess S-L-I doesn't have that same, like, slippery, slidey, snaky sound or, like, connotation in French.
Jon:Yeah. It's like, whatever, you know, neurons are lighting up in your brain when you hear the word slithy, you want to find that same word, or you want to find a thing in French that makes those same things light up. Cause, cause like you're saying, you've broken it down to something far deeper than language. It's like meaning and feel, and you're trying to map, in your language, the same meaning and feel, which is really, really cool. He discusses another example of this in Dostoevsky, which is another one that I find very interesting, because I think, like, Dostoevsky, Russian author, obviously, you know, his writing is very, like, deep. Like, it's almost like, I think in order to truly understand Dostoevsky, you have to be Russian. And you have to have experienced some of what he experienced. And I'm sure it's still, you know, I've read, um, what was the book? I think it was called The Brothers Karamazov. And I remember it being, like, just difficult to understand. And I felt, you know, a little bit like there was this barrier of, you know, my life experience does not enable me to, like, truly get what's going on here. Um, and I think, and he talked about how the very first sentence of, uh, this Dostoevsky book. Was it Crime and Punishment?
Matt:Uh, I'm not, I don't remember exactly which book it was. Yeah.
Jon:Okay, maybe it doesn't matter. But in the very first sentence of this book, he mentions this place. And Dostoevsky, I think, just calls it S Place. But he's referring to an actual street in St. Petersburg. And if you're a reader of the book and you're living in Russia, you probably realize that he's referring to that. And just by mentioning that street, it's like sort of conjuring all of these ideas and thoughts and, you know, maybe you know that that street is where like the lower, lower class lives or something. So you're immediately like bathed in all of these concepts and ideas.
Matt:Right.
Jon:He talks about the difficulty of translating that to other languages, because, you know, for one thing, I guess he figured out that this S Place, uh, refers to the word carpenter. Like, the thing that S stands for is the word for carpenter in Russian. So there was this one translation that just translated it to, like, Carpenter Street, and he was talking about how, like, that almost makes it sound like a Dickens novel or something. Like, it just sort of completely takes you out of the, like, Russianness of it. And it's sort of like, is that a valid translation? Because in one way it is something more meaningful to an English person. And it maybe allows an English person to, like, experience some of those same emotions, you know, cause a carpenter is, like, a, uh, you know, like a tradesman. So maybe it conjures some of the same feelings, but it also, like, completely removes the sort of, like, Russianness of it.
Matt:Right.
Jon:And yeah, I just thought it was a very interesting section where, you know, it's like, what is language? And I think this sort of gets to what you're saying, where it's like, you know, none of the words or sentence structure mean anything. It's just about what's sort of happening in your brain when you're, like, perceiving these things.
Matt:Yeah. And it's almost like, it's like you have a map from words to, like, a series of lived experiences. And, like, the author, like, indexes into that map, and you're like, boom, you have this, like, rich series of experiences. And this is exactly what you're saying. But for an English reader, it's like, there's no element, there's no name you could call that street that's gonna evoke all of those, like, same series of lived experiences. Uh, so it's like the job of the translator is impossible, really. Uh, um,
Jon:There was a really interesting example of this recently. Uh, there's a show called Shogun, which is based on an old book by James Clavell. Uh, but in Shogun, there's this English guy, or maybe he's Dutch or something, I can't remember, but he lands in Japan, and this is, like, ancient feudal Japan, basically. So the show is in multiple languages. Like, it's a lot of ancient Japanese. It's also, like, some Portuguese, cause there were, like, Portuguese folks there. And then there's, like, English when the main character is speaking. But in reality, he's, like, not actually speaking English. Like, I think he might be speaking Portuguese, but they, like, translated it to English for English viewers. Anyway, the only reason I'm bringing up the show is because the creators of the show wanted to create this very authentic ancient Japanese feel. So what they did is they went through several layers of translation, where they had, like, experts in ancient Japanese writing and poetry write these initial drafts of, like, what the characters would be saying. And then they had people translate that into, I think it was, into English. And then they would have separate people translate that English back into, like, conversational Japanese.
Matt:Hm.
Jon:So what you're, what you're left with is this dialogue that's, like, and the craziest part is, like, it's in Japanese. So me as, like, an American viewer, like, I'm not even experiencing most of this greatness. But what you're left with is this dialogue that feels very, like, authentically ancient Japanese, you know, like, could be spoken by a feudal Japanese lord or something, but it also kind of works within the, you know, dialogue of the show, basically. You know, as a viewer, you're, like, reading the subtitles of what these people are saying. And, you know, I think if it was totally in this ancient Japanese style, it would be, like, a little obtuse. Um, but it still feels very authentic. And I guess they went to great, great pains. You know, they, like I said, they had these layers of translators to sort of go from ancient Japanese to modern Japanese back to, like, conversational English. Uh, and I think that, I don't know, I think what you're saying, like, the job of a translator is extremely difficult, and, you know, sometimes going through those great pains can make your work more important and better.
Matt:Yeah, no, I think that's, I think that's true. Um, I wanted to, uh, talk about the ASU. Uh, um, so Hofstadter talks a lot about, or he, he gives this kind of, like, uh, I don't know what you would call it, it's kind of like a mental exercise. It's like, imagine that you're a person who needs to map out all of the United States. And you just have a map of all, like, the rivers and lakes and, like, geographical features, and, like, you're responsible for drawing in the borders of all the states and where the cities are and what have you. The way I'm taking it is, this is you growing up and learning about the world. And it's like we all have this same ability to perceive the world the same way, but then it's our responsibility to build up, uh, this more, like, constructed understanding of the way the world actually works. Was that your understanding of the section?
Jon:Yeah. I, I think that's a huge part of it. I also took it to mean, like, you know, cause this chapter is a lot about mapping one human's brain onto another human's. And I think that's why he's talking about, like, translating, because that's sort of, like we're saying, this act of, like, trying to get a human brain from a different culture, a different time period, to experience the same experience that the author intended, and how that's a very difficult process. Um, so I think this ASU, USA, uh, sort of game, I don't know how to describe it, is an act of, like, taking information in your brain and, you know, kind of putting it down, uh, or, you know, representing it in a way that, you know, could be shared. But also, like, if multiple people are going through the same operation, you're going to wind up with these things that are, like, fairly similar, but also, like, different in very important ways.
Matt:This actually dovetails with a thought I've had for a really long time, which is, like, how you arrive at, like, morality. Like, what is right? Like, you know, if you imagine two children, right? Like, they are caught lying by their parents. You could imagine that, like, one of them arrives at the conclusion, like, I should stop lying. And then the other one arrives at the conclusion, and I'm not trying to, like, vilify this other person, but, like, they arrive at the conclusion, like, oh, I just need to become a better liar. You know what I mean? I just need to do a better job of lying. Um, and it's like, you know, the external, kind of, influences or, like, circumstances are the same, but you arrive at these different rule systems about, like, how you should live your life. There's nothing to say that, you know, like, people can arrive at very different, like, conclusions about how they're supposed to live their lives, uh, even from the same series of influencing structures, I guess, in their life. That process, it interests me a lot. Like, why you do see, like, very different, you know, different kinds of moral frameworks.
Jon:Yeah, man, this is a topic of great interest to me. I think a lot about workplace culture, uh, because, you know, in my opinion, I've observed certain workplace cultures almost, like, devolve, you know, become worse over time. And I've always wondered why that is. And I think it's very, very similar to what you're talking about. Like, in any given system, there's incentives, right? You know, it's like, why do kids lie? They lie because they can get something, you know, they lie because they can get out of trouble, or they lie because, I don't know, they can earn something. Very similar things exist in a workplace culture, where, based on how the incentives are designed, there might be some behaviors that, you know, don't actually help the bottom line, don't actually help your team, don't help grow your teammates, or don't even help you, like, ultimately contribute, but those behaviors help you get promoted or help you get more money. Um, and I think that unless those incentives are very, very carefully designed, you can have these, uh, like, they call it degenerate gameplay in video games, where it's like, if your mechanics aren't perfectly designed, users can just kind of exploit them and, uh, you know, do crazy things. The same thing can happen in a workplace, where, unless those incentives are very carefully designed, people can kind of abuse them. But going back to your earlier thing about morality, there's different types of people, and some people will sort of always stick to, like, this very deep underlying, you know, their own sort of deep underlying morality, and they sort of won't, like, abuse those systems or exploit various things. And I've thought it's funny, because those types of people can sometimes be punished, even though in a way they're, like, operating how you would want them to operate as, like, the CEO of the company. Um, but anyway, just something I've, I've thought about a lot. I think it's very interesting.
Matt:This is something I think a lot about in law school, because you need to get good grades in law school, but also the point of law school is to learn the law. And, like, those incentives are not exactly aligned. And it's also not clear, maybe, how best to align those incentives. But, like, obviously there's, like, a trivial misalignment, which is, like, you cheat on the test, right? It's like, that's how I'm going to get a good grade. I'm going to somehow find a way to, like, get the answers in advance and then cheat on the test. Like, that's a very clear...
Jon:That's a, that's a soundbite right there.
Matt:Uh, don't, don't call it that. Um, and, but then there's this other one where there's a little bit of gamesmanship, where it's like, you really try to just, you know, learn the minimum amount, just so that you can barely get by on the test. And then there's, like, okay, well, let me just try to learn as expansively as possible about, like, everything. Because, you know, any of it could be useful. And, like, as long as I have a thorough understanding of kind of the whole domain, it doesn't matter what they throw at me. Like, the point is to understand it all. Um, and I've always gravitated more towards that last one. And I'm not saying that, in a lot of ways, that's the worst approach, because it is more, like, diffuse. You're spending time reading things that are not going to be on the test. So it's not something I would advocate to people, but I'm just trying to figure out for myself, like, okay, where, or actually, bouncing between approaches two and three, basically. Trying not to, you know, give in, because you do need to focus on what the professor cares about, even if you think it might not align perfectly with what's going to eventually go on to make you a better lawyer.
Jon:Yeah, no, exactly. And that's the, cause I think there's a lot of folks like you who are kind of self motivated, and if there's material to learn, they kind of want to, like, learn all the material backwards and forwards. And I think, like you're saying, it's like, you got to do both. You have to be really focused on what, you know, the professor wants, but also satisfy your own way of doing things. And that's definitely something I've also struggled with at times. Going back to the ASU USA thing, like, I think that throughout these three chapters, Douglas Hofstadter has been driving towards this idea of symbols, where, like our conversation about written language sort of igniting parts of the brain, those parts of the brain that are ignited are, like, symbols. You know, when I say California, you're experiencing a symbol. You know, you have this abstract representation of California. You know, maybe it's liberal, maybe it's really long north to south, maybe you have SF with the Tenderloin, it's kind of sketch. So this is a uniquely human ability, where we can, like, conjure up these vast abstractions that allow us to sort of assign a tremendous amount of, like, meaning, and useful information can sort of immediately be conjured out of these abstractions.
Matt:Yeah. I mean, Hofstadter, you know, he basically says, like, this is kind of a defining characteristic of us, that we do have the ability to have symbols in a way that the rest of the animal kingdom, to our knowledge, doesn't. Like, we go back to the wasp and the cricket.
Jon:Yeah, all of our Josef Mengele experiments.
Matt:Yeah, like, that wasp doesn't have a, doesn't have a symbol for, you know, the concept of safety. It's just the signals. It has a signal about the cricket, and it has this very rote procedure that it's just following. Um, whereas we understand, or sometimes we understand, like, the larger context of why we're doing something, and could do this processing, like, oh, okay, there's a reason why I'm pausing at the threshold here, and I've already done that, so, um, I'm going to skip that when a sadistic researcher pulls my cricket out of my, uh, uh...
Jon:Yeah, exactly. And he even mentions consciousness as an emergent property of symbols obeying triggering patterns. So basically, from this whole entire mechanism of our brain taking in stimulus and conjuring these symbols, based on that mechanism, consciousness emerges. That's the point Hofstadter is making.
Matt:Dude, and, like, I know we keep on getting back to, uh, large language models. But, like, that's gotta be what large language models are doing, right? Like, they must have these, these symbols that, you know, they're getting triggered by. Like, there could be no other way that it does what it does.
Jon:It is so hard to not mention large language models. Like, I'm trying to mention them less, because I feel like I've literally talked about them in every single episode. But, like, going back to the Jabberwocky thing, those, those, like, meaning and vibe, like, objects that are being invoked by the Jabberwocky phrases, those are embeddings. That's what they are. It's like, you can take an embedding and you can convert it from one language to another, because it's just, like, raw meaning, you know, it's a raw feel. It's raw vibe. Like, that's exactly what an embedding is. And also, he mentions the I, you know, like, the notion of me, like, myself, when I think about myself, he mentions that being a subsystem. And a subsystem is, like, this constellation of symbols. Like, it's this extremely, uh, sophisticated object in our brain that, you know, basically is just, um, conjuring all sorts of historical and informational stuff. I just saw an article this morning about how Claude has almost, like, sub large language models within itself, and, you know, Claude is Anthropic's large language model. And so when you ask Claude a query, these sort of, like, sub large language models ignite and determine, like, which actual part of the larger model, like...
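[Editor's note: a minimal sketch of the embedding idea Jon is describing here, assuming a hypothetical multilingual encoder has already produced the vectors. The vector values, and the pairing of "slithy" with the French "lubricilleux", are purely illustrative, not taken from the episode.]

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two embeddings by the angle between them, ignoring magnitude."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings from some multilingual encoder (values are made up).
# The idea: "slithy" in its English context and "lubricilleux" in the French
# translation should land near each other in a shared meaning space, even
# though neither is a dictionary word, while an unrelated word lands far away.
slithy = np.array([0.61, -0.12, 0.33, 0.74])
lubricilleux = np.array([0.58, -0.09, 0.41, 0.69])
carpenter = np.array([-0.40, 0.88, 0.02, -0.15])

print(cosine_similarity(slithy, lubricilleux))  # high: similar "raw vibe"
print(cosine_similarity(slithy, carpenter))     # low: unrelated meaning
```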
Matt:But these are, are, you're saying these are emergent.
Jon:Yeah, this is, like, this emergent property of these models getting bigger and bigger, where, you know, you almost have this entire subsystem, to use the words of Douglas Hofstadter, that ignites, and then that ignites some other subsystem, and those together end up igniting some symbol, which is an embedding, and then, based on that embedding, there's your answer. And the similarities are just too incredible to not mention this.
Matt:Oh, yeah. No. And, um, this is interesting. I heard that they think that large language models kind of have an understanding of, like, what is true or not. They have looked at the networks, and they have noticed that there's, like, a part of the network that triggers when a statement is true, and it doesn't trigger when the statement is false. And I don't know if that rises to the level of, like, a subsystem, that's a symbol itself, like truth. And, uh, I dunno, I think it's a really interesting area, because everyone wants to solve the hallucination problem. If they can start to understand when the model thinks something is true or not, or, like, has no truth value, then maybe they can make it more reliable. But anyway, I feel like I'm getting a little bit farther away from your point.
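[Editor's note: a minimal sketch of the "truth signal" probing idea Matt mentions, using synthetic activations rather than a real model. The linear-probe setup is an assumption about how this kind of analysis is commonly done, not a description of any specific result.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: hidden_states[i] is a model's internal activation for
# statement i, and labels[i] marks whether that statement was true (1) or
# false (0). The activations here are synthetic stand-ins with a planted
# "truth direction" so the example is self-contained.
dim = 16
hidden_states = rng.normal(size=(400, dim))
planted_truth_direction = rng.normal(size=dim)
labels = (hidden_states @ planted_truth_direction > 0).astype(float)

# A linear probe: logistic regression trained to read truth off the activations.
w = np.zeros(dim)
for _ in range(500):
    preds = 1 / (1 + np.exp(-(hidden_states @ w)))
    w -= 0.1 * hidden_states.T @ (preds - labels) / len(labels)

accuracy = np.mean(((hidden_states @ w) > 0) == (labels == 1))
print(f"probe accuracy: {accuracy:.2f}")  # high accuracy = a recoverable truth signal
```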
Jon:Well, no, no, I feel like that's a good addendum, because, like, that part of the model is like a subsystem, you know, it's telling whether or not the output is going to be true or not. And, you know, as these models get larger, there's a great, great, uh, Turing quote: complexity often introduces qualitative differences. We've mentioned this a couple of times, but, like, as these models get bigger and bigger, these subsystems arise, and, you know, who knows, GPT-5 is going to be an order of magnitude larger than GPT-4. Maybe it will have this, like, truth subsystem that you're talking about. And before it outputs an answer that needs a high veracity score, it will invoke that subsystem.
Matt:Yeah.
Jon:And that may just be a simple by product of the model being larger.
Matt:Yeah. Um, no, I think, I think that's true. And this actually dovetails back to what we were talking about before, where it's like, these models are responding to external influences, they're responding to feedback. And so if its training seriously penalizes things that are not true, which, that's its own can of worms, cause how are you arriving at the truth value, uh, of, like, the external truth value? Um, so, you know, in a certain sense it's turtles all the way down. But, um, that's actually another interesting question, because it's like, I feel like our resolution in the outside world is that you have to just look at the process by which someone arrived at a conclusion. Like, you can't know a priori whether or not the conclusion is right or not, but you have to look at, like, okay, well, they did this, they did that, they did this other thing, and we have a pretty solid sense that that process leads to high quality results, so I'm going to believe what they said, you know? So maybe that kind of has to be, like, as opposed to trying to solve truth externally and, like, reinforce it on the system, you have to ask it to show its work, and then...
Jon:Right.
Matt:you know,
Jon:It's like reading the, uh, reading the actual, uh, sources in a
Matt:Just the sources. It's like, yeah, if you can point to a real source and do a good job of translating that into English, or, like, into kind of a synthesized fact, like, that's really when, you know, it should get a really high score. But...
Jon:he ends this chapter with like this big
Matt:quote
Jon:from this author, uh, named Lucas. And he basically, like, trashes this guy. Like, he's like, Oh, this Lucas guy, he's so fleeting, odd, and confusing. And then he gives, like, a 14 paragraph quote from Lucas. And it's like, what is happening here? Like, has he ever mentioned this Lucas guy? Or did he just, like, randomly?
Matt:I don't think so, cause he says our first encounter with Lucas, so it sounds like it's not going to be the last. But, um, the other thing that was so funny is he's like, oh, he arrives at the direct opposite conclusion as I do. But I actually don't know where they, where they differ in opinion. I couldn't even figure it out.
Jon:I had that exact thought. I read the Lucas thing and I was like, oh, this seems kind of cogent and...
Matt:Yeah. Interesting. It wasn't any more bewildering than any of the writings of, of Hofstadter, uh, in any appreciable way, I felt.
Jon:It's like a rap battle. Like, we got to go read Lucas's stuff, and he'll be like, man, this Hofstadter guy, he's so, uh, capricious and arbitrary.
Matt:Yeah. Yeah, no, that is funny. I had, I had that same thought. I was like, wow, he's really, uh, he's really digging at this guy, in a way that I couldn't, when I read his stuff, like, I couldn't figure out where his animosity comes from. Maybe they had, like, some, maybe Lucas slept with his wife, like, I don't know, he had some personal vendetta against him or something. I did have one more thing. I know we're kind of getting a little bit long on time, but, um, the thing that occurred to me was, like, say you had two software programs that did the same thing. Like, imagine trying to, like, compare the source code. It's like, you look at them and they look very similar, but I guarantee you the source code for them is going to be completely different, like, organized in a completely different way, in a way that, like, would make it really hard... Like, they're both going to be structured in some way that is somewhat reasonable, but there's nothing that would indicate that, like, they're organized in the same way. Like, you might be able to draw some parallels of certain functional modules. I think it's a really good parallel for two brains, where it's like, superficially, they can look like they're doing the same thing, but then you look at the internals and they're actually organized in completely different ways.
Jon:Yeah. No, that's a, that's an insanely good parallel. Cause it also is, like, um, you know, cause he discussed this before, where, like, uh, high level programming languages get compiled into assembly, and then that gets compiled into machine code, and the machine code is the thing that actually runs on your CPU. This introduces yet another level of, like, okay, that thing's running on your CPU, but it's then painting these pixels to your screen. And, like, those end painted pixels might be the exact same pixels, but then you have all these translation layers. Like, it could have been written in a completely different framework. One could be running in a browser, and one is, like, a client application. So yeah, it's, it's really, actually, that's a super good analogy.
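[Editor's note: a toy illustration of the point in this exchange. The two functions below are hypothetical examples written so their observable behavior is identical while their internal organization is completely different.]

```python
from collections import Counter
from functools import reduce

def word_counts_imperative(text: str) -> dict[str, int]:
    """Loop-and-mutate style: build the dictionary one word at a time."""
    counts: dict[str, int] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def word_counts_functional(text: str) -> dict[str, int]:
    """Fold style: same observable behavior, completely different structure."""
    return dict(reduce(lambda acc, w: acc + Counter([w]), text.lower().split(), Counter()))

sample = "the map is not the territory"
assert word_counts_imperative(sample) == word_counts_functional(sample)
print(word_counts_imperative(sample))  # {'the': 2, 'map': 1, 'is': 1, 'not': 1, 'territory': 1}
```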
Matt:Um, but yeah, so that's, that was my last note. Um, but interesting chapter. Minds and Thoughts, that's what this one was called.
Jon:I loved these three chapters. Like, I really feel like the book is, it's sort of coming together. Like, I feel like I'm getting it. Like, I'm starting to understand, like, why he wrote all this other stuff about, like, formal languages, but...
Matt:I am interested to see how he's tying this back to, like, if the core point is to have you understand Gödel's theory of incompleteness, like, I'm not seeing how he's going to tie it in. I guess that's what I would say.
Jon:Yeah, no, I'm not exactly seeing that, but I think it'll be kind of like that analogy you just made, where it's like, you know, machine code is this utterly formal thing, but then it produces, like, a video game that has all these emergent and crazy and unpredictable probabilistic things. And so, I don't know.
Matt:Okay, well, we'll have to, we'll have to stick around for the next chapter. It's called BlooP and FlooP and GlooP. So that should be, uh, should be interesting.
Jon:Exciting.
Matt:All right, I'll see you next time, Jon.
Jon:See you, Matt.