Switch Statement
075: Gödel, Escher, Bach - Ch. 10 - Bootstrapping Complete
Hello everyone, and welcome to the Switch Statement podcast. It's a podcast for investigations into miscellaneous tech topics.
This is our 13th episode on Gödel, Escher, Bach by Douglas Hofstadter.
matt_ch10:Hey, Jon. How are you doing?
ch_10_jon_raw:Hello, Matt. How are you?
matt_ch10:I am doing all right. I'm excited to get into these chapters. A big, expansive topic here.
ch_10_jon_raw:I loved these chapters. Like, I have spent a lot of time in this book feeling like I'm in a forest. Like, I'm confused. I'm not sure where I'm going. And, you know, there's no markings anywhere. These chapters actually sort of feel like he's circling.
matt_ch10:a clearing. Something is...
ch_10_jon_raw:Yeah. Like I feel like it's opening up. I'm starting to sort of understand what he's doing. I don't, I mean, when I say sort of, I mean, like, I'm just at the very beginning stages of understanding what he's doing.
matt_ch10:Right. Whereas previously it was zero. Now the amount of understanding is non-zero.
ch_10_jon_raw:Well, it was like previously, I just felt like I had this smattering of very interesting material. He's talking about all of these rigorous, formal languages and how they map to each other, and it just seemed like the book was an investigation of, you know, the history of formal languages and math and, I don't know, art. And it was just interesting. But now it's like, this is how the human brain works.
matt_ch10:Right. And yeah, that's a good callout. These chapters are kind of smooshing together how computers work, how brains work, you know, what a mind is, and then trying to draw parallels between all of these different constructs.
ch_10_jon_raw:And I feel like he's arriving at this sort of theory. Or I don't know if it's a theory or just a thought that human thought and human sort of consciousness arises out of a vastly complex set of like very understandable processes. And when I say understandable, I don't mean in the sense that like, we understand how electricity is moving through the brain. What I mean is like. rudimentary, you know, it's like these, these processes of, well, we'll get into it later, but he discusses like chunking and subsystems and sort of how through this arrangement of the brain and these much, much simpler units, this much, much more complex idea of consciousness can arise.
matt_ch10:Yeah. And he starts this journey, I mean, he talks about chess a little bit, but for these three chapters he starts by talking about computers and the CPU. If this is our metaphor for what you're talking about, the simplest layer here is the CPU, and it is just rotely performing these tiny little operations in your computer. I mean, I guess with a complex instruction set architecture, each instruction can do relatively big things, but just for purposes of simplification, it's like: okay, the computer can add, it can store things, it can subtract, it can divide, and that's basically it. And then you have to create something like a video game out of those core, very simple atomic elements, and you build up from there.
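A minimal sketch of the layering Matt describes, as a toy machine in Python. The instruction set and register names here are invented for illustration; no real ISA looks like this, but the idea is the same: a handful of primitive operations, with everything more complicated built out of them.

```python
# A toy "CPU" whose only primitives are load, add, and subtract.
# Anything fancier (multiply, a video game) has to be built on top.
def run(program):
    regs = {}
    for op, *args in program:
        if op == "LOAD":      # LOAD reg, constant
            regs[args[0]] = args[1]
        elif op == "ADD":     # ADD dst, src  ->  dst = dst + src
            regs[args[0]] += regs[args[1]]
        elif op == "SUB":     # SUB dst, src  ->  dst = dst - src
            regs[args[0]] -= regs[args[1]]
        else:
            raise ValueError(f"unknown instruction: {op}")
    return regs

# "Multiply" doesn't exist on this machine, so 3 * 4 becomes 4 + 4 + 4.
program = [("LOAD", "a", 0), ("LOAD", "b", 4)] + [("ADD", "a", "b")] * 3
print(run(program)["a"])  # 12
```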
ch_10_jon_raw:Right. Yeah. And he touches on a couple of concepts. I guess he's discussed these concepts before, but I just think they're really, really cool. He talks about how machine language is the thing that the computer understands. That's literally the set of instructions that your CPU is running. But then there's this higher-level language called assembly, and between assembly and machine language there's a very clear mapping. You have a line of assembly that's like, add the value in this register to the value in this register. As a human, you can understand that, and it maps almost exactly to one machine language instruction, which is a series of bits that you basically can't understand. So there you've created an isomorphism between two things that are structurally very similar, but one is easy for a human to understand and one is very hard. And then he introduces this other concept of a higher-level language, basically something like C or C++, where that language is compiled down into assembly. And there's still an isomorphism between C and assembly, and even between C and machine language, but you've basically completely changed the structure. C compiles down into assembly in ways that no one understands unless you're literally working on the compiler and are like an LLVM god or something. So anyway, it's just this interesting concept where you're expressing the same information, the same program, in three vastly different ways, and you're trading off between things like ease of understanding by a human and brevity. Generally you have more expressive power in higher-level languages. I just find that concept fascinating, and it's part of the reason I'm super duper interested in different programming languages.
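The level-mapping Jon describes is easy to poke at in Python itself, which compiles source to an internal bytecode. The standard-library `dis` module prints the mapping, and it has the same flavor as the assembly-to-machine-language isomorphism: structurally close to the source, much harder for a human to read. (This is an analogy; CPython bytecode is not assembly.)

```python
import dis

def add(x, y):
    return x + y

# Prints the bytecode the interpreter actually runs for this function.
# Each source line maps to a short run of instructions, e.g. on recent
# CPython: LOAD_FAST x, LOAD_FAST y, BINARY_OP (+), RETURN_VALUE.
# Exact opcodes vary by Python version.
dis.dis(add)
```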
matt_ch10:This kind of goes back to what he was talking about in an earlier chapter with these mappings, where, I forget exactly the terms he used to refer to them, but he had these trivial, almost superficial mappings, and then you have these incredibly complex mappings. And obviously this comes up in a bunch of different ways, like the mapping between your concept of, I don't know, self-reference and the neurons in your head. It's pretty unclear what the process between those two things is. But yeah, that also applies to computer code. There was one fact that I looked up. So, this book came out in 1979.
ch_10_jon_raw:Right. Which continues to be incredibly surprising, because this book is so prescient.
matt_ch10:Well, it is so funny, because we were like, oh, this is going to be pretty out of date. But it is funny how fresh it still feels. He talks about compiled languages, and then he talks about interpreted languages, and that's kind of still where we're at. I mean, maybe you could argue that LLMs are just barely starting to be another layer of indirection on top of that. But so: Java and JavaScript both came out in 1995. Python came out in 1991. C++ came out in 1985. So literally none of the most popular languages, the core ones that undergird all of software today, existed at the time when this guy was writing this. And it is just so funny how some of the superficial aspects have changed, but the fundamentals have really not changed that much.
ch_10_jon_raw:Yeah, no, that's a super interesting point, knowing that some of the most popular interpreted languages came out like 20 years after the publication of the book. That's insane. One of the funniest parts: he was describing this thing he called an AI architecture, basically a godly architecture of an AI computing system. At the lowest level it had machine language, it had compilers. And one of the rows in the AI architecture was Lisp.
matt_ch10:Yeah.
ch_10_jon_raw:I just thought that was hilarious. Not least because we've had experience with Lisp; we designed our own Lisp for a project.
matt_ch10:Dude, I cannot take any of the credit for that. That was all you. That was all you.
ch_10_jon_raw:Well, I don't know if I would call it credit, but it's just funny to me that, at the time this book was written, Lisp was evidently thought to be this vital part of an AI architecture, whereas today it's somewhat of an antiquated thing, I would say.
matt_ch10:I mean, not as far as matchups is concerned, not antiquated, you know, it's, it's a vital part of our, of our AI infrastructure.
ch_10_jon_raw:Yeah, I am a big fan of Lisps, for what it's worth, but I just thought that part was funny.
matt_ch10:Yeah. So aside from that, a lot of this first chapter is really all about the abstract idea of layers of abstraction, using computers as a concrete example of that.
ch_10_jon_raw:There were a few other things I wanted to mention, which seemed important in retrospect. As I was reading the chapter, I had the same feeling, like, oh, this is just about computers, and we've both spent a lot of time with computers, so it was a lot of review. But then in the subsequent chapters, I got the feeling that the reason he was talking about computers is because they are a very good analogy for human thought. So he was sort of driving toward that. He mentions these two other concepts. One is bootstrapping, and bootstrapping means a ton of different things. I remember at my first job, we used to say bootstrapping just to mean our server starting up,
matt_ch10:Yeah.
ch_10_jon_raw:but in the context where he's referring to bootstrapping, he's talking about a state of a language where the language is advanced enough, and the compiler for the language is advanced enough, that you can actually rewrite the compiler in that language.
matt_ch10:Right.
ch_10_jon_raw:Which is kind of a cool milestone for any language. Like Rust, you know, reached the point where they rewrote the Rust compiler in Rust. It's a testament to the language being powerful enough to actually do that, because generally speaking, compilers are incredibly complicated, and if your language can handle that, it's evidence that it's a solid language. But it also makes the language this completely self-contained thing that isn't reliant on any other processes or languages or other conceptual overhead. So it has value along a lot of different dimensions.
matt_ch10:Yeah. It's just another one of, I mean, obviously self-reference is core to this whole book, but it's just really cool to think about. Okay, you have a concept of a language that would allow you to make a better compiler, so first you need to make a bad compiler that can actually compile the language that lets you make the better compiler. Now you have the bad compiler, and it allows you to write the better one. And then you can do that again and again and again, until you have, I guess, this godly compiler,
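A toy model of that staged bootstrap, with string-transforming "compilers" standing in for real ones. Everything here is invented for illustration, but the shape matches what self-hosting compilers like GCC and rustc actually do: build stage 1 with a pre-existing compiler, rebuild with stage 1 to get stage 2, and check that another round produces identical output.

```python
# A "compiler" here is just a function from source text to "binary" text.

def stage0_compile(source):
    # The crude, hand-written bootstrap compiler. It's naive, but it can
    # compile the good compiler's source correctly.
    return "binary-of:" + source

def run_compiler(compiler_binary, source):
    # Running any correct build of the good compiler yields the same
    # output for the same input (a deliberate simplification).
    return "binary-of:" + source

compiler_source = "the-good-compiler"

stage1 = stage0_compile(compiler_source)        # bad compiler builds the good one
stage2 = run_compiler(stage1, compiler_source)  # good compiler rebuilds itself
stage3 = run_compiler(stage2, compiler_source)  # ...and once more

# The classic bootstrap sanity check: once self-hosted, recompiling
# yourself is a fixed point.
assert stage2 == stage3
print("bootstrapped:", stage2)
```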
ch_10_jon_raw:Right, right. Yeah. And here we are today writing these very high-level, highly expressive, incredibly type-safe languages. Rust is famously a memory-safe language, and these are basically massive innovations in language design and authoring. And at first, like you're saying, they're written on top of some C compiler, which itself was originally written in assembly. So it's a progression of software innovation, but it's also this amazing conceptual thing that's been growing over time,
matt_ch10:Just one little superficial point about that section. There was this very funny moment where he was like, hopefully it's obvious why it's called bootstrapping. And I was like, are you kidding? Why would anyone find that obvious, unless they specifically knew the phrase "pull oneself up by one's own bootstraps," which feels like a pretty...
ch_10_jon_raw:Even then.
matt_ch10:Like, even once you know that phrase, it still feels like kind of a stretch to use it as the term for this. So, I don't know. I feel like sometimes he loses sight of what normal people think.
ch_10_jon_raw:I thought he was trolling. Like, I genuinely thought he was trolling the reader.
matt_ch10:Oh, maybe,
ch_10_jon_raw:But I don't know. I might've just read it that way because it sounded ridiculous to me. Knowing Douglas Hofstadter, it might've been a joke, it might not have been a joke. I don't know. I mean, two paragraphs later he was making a joke about the spelling of the word insight. He was like, computers don't have the insight to realize that insight is spelled wrong. And it was like, haha, Douglas.
matt_ch10:Oh, that's funny. Because I saw that it was misspelled, but I didn't read closely enough to realize he was making a joke. I just thought it was a typo and kept on skimming past. Yeah.
ch_10_jon_raw:You know, if you have a function named insight, like you literally have a function called insight, and you call it 500 times in your program, and then the 501st time you call it, you spell it wrong, you switch the G and the H, the program isn't smart enough to realize, oh, he meant to call this other function, even though that would be obvious to any human reading the code. There's no program that's smart enough to realize that. That was the point he was making.
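The joke ports directly to Python. A quick sketch (the function name is hypothetical): the interpreter raises a NameError rather than inferring which function you meant, although CPython 3.10 and later will at least suggest the close match in the error message.

```python
def insight(data):
    return sum(data) / len(data)

print(insight([1, 2, 3]))  # fine, just like the previous 500 calls

# The 501st call, with the G and H switched:
insihgt([1, 2, 3])
# NameError: name 'insihgt' is not defined.
# On CPython 3.10+ the traceback adds: "Did you mean: 'insight'?"
# but it still won't silently call the function you obviously meant.
```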
matt_ch10:Yeah.
ch_10_jon_raw:But the other thing I did want to chat about, and now that I'm looking at my notes, I'm realizing there's a few things, so you can...
matt_ch10:Yeah. Let's keep, let's keep going.
ch_10_jon_raw:But he talked about the point where intermediate levels lose their value. I really feel like he's grappling with extremely powerful ideas here, and what he means by this, I think, and I wanted to chat with you about this: one of our goals as a human civilization is to predict the weather, like predict a hurricane and make sure we evacuate people if they're about to get hit by one. And the way we've come to do that is with, you know, Doppler radar, I mean, I don't really understand these technologies, but radar in order to detect the movements of cold fronts and warm fronts. And he was mentioning how that's a very high-level way of looking at the weather. But then when the weather reaches your town, you see all these very local effects, like a little gust of wind in an alleyway. Those are basically different levels of looking at the weather. A very low level would be to understand atoms and how atoms interact, and you could actually use that knowledge to understand weather, but it would be this vast computation that no one is capable of doing. Anyway, his point was that as you move up these levels, looking at the same problem in different intermediate ways, some of those levels are basically worthless. You can't look at leaves blowing around in an alleyway and say, oh, there's going to be a hurricane tomorrow. That intermediate level has no value in solving the problem of predicting the weather, even though, in a way, you're looking at the same information. So I just thought...
matt_ch10:You're just describing, excuse me, you're just describing all of software engineering, where your whole job is working on one of those useless, worthless intermediary systems.
ch_10_jon_raw:Yeah. Oh, totally. And so many software problems can be solved by pulling back the curtain and gaining an understanding of some black box you're using. There might be a solution that's so much simpler if you just make the change in that black box. But yeah, like you're saying, so many engineers literally never pull back that curtain, so they don't know that that's the case.
matt_ch10:And I think this is the wisdom behind resisting abstraction. Some people say duplicated code is better than the wrong abstraction, or whatever it is, and people will resist that: oh, I don't know, duplicated code is really bad, et cetera. And I feel like the longer I program, the more I appreciate using the dumb, flat system for as long as you possibly can. Something should be really painful before you create an abstraction around it, I feel like.
ch_10_jon_raw:Oh man, I agree so hard. We should print that on t-shirts. Yeah, I think there's this dogma in programming that, as I get older and program more and more, strikes me as almost dangerous. That whole "don't repeat yourself" thing. Obviously it's a good rule in so many cases. But if you're forcing yourself to do contortions in order to not repeat yourself, you're often just making your software harder to...
matt_ch10:You're making it worse. Yeah, exactly. Because each one of those layers of abstraction has its own risk. Every time you have an interface, there's a risk of a mismatch, or what have you. So, yeah, there's a benefit to not having them. I think people tend to assume there's no cost to creating a new interface, but there's actually a pretty high cost.
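A small illustration of the trade-off Matt is pointing at, with hypothetical names. The shared helper below is the "wrong abstraction": it accretes a flag for each caller's needs, so every caller risks breaking every other one. The flat versions duplicate a little code but can change independently.

```python
# The wrong abstraction: one helper serving every caller via flags.
def format_name(user, *, last_first=False, upper=False):
    name = (f"{user['last']}, {user['first']}" if last_first
            else f"{user['first']} {user['last']}")
    return name.upper() if upper else name

# The flat alternative: mild duplication, but each function is obvious
# and can evolve without a mismatch rippling through an interface.
def badge_name(user):
    return f"{user['first']} {user['last']}".upper()

def roster_name(user):
    return f"{user['last']}, {user['first']}"
```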
ch_10_jon_raw:Yeah, no, totally. And that actually segues nicely into the very last thing I wanted to chat about, which was sealing off. This is basically black boxes: things communicating with each other across some interface, but the internals of those things never interacting. There's some boundary. And this is common across disciplines. We're software engineers, and we usually build software for other industries, but it's not like we understand the internals of those industries. Sometimes we have to learn some of it, but we're by no means even remotely close to someone who actually works in that industry. So you get this sealing-off effect, where there's a boundary line where information passes from industry professionals to us, and then we try to design software, and they don't understand software, so we have to pass that back across the boundary line. I think he wanted to discuss this concept in the sense of informational modules talking to one another, but I think it's also an extremely interesting concept today from the standpoint of large language models. Today it's almost impossible to have a single individual who truly understands all the little micro-details of mathematics and also truly understands all the micro-details of, you know, Confucianism or something, because those are two utterly different disciplines that would take decades to master. But with something like a large language model, you can theoretically have that entire context available, and maybe find novel solutions to problems that are very interdisciplinary. I just think that's an incredibly powerful concept. It's almost like this whole sealing-off concept dissolves away, and you're left with one vast, incredibly agile, maneuverable knowledge base that can be wholly understood, always, and you can form solutions based off of it.
matt_ch10:that's interesting. Can be wholly understood. Are you saying that the, the knowledge base itself understands everything? Or are you saying that you as a consumer of the knowledge base can understand it?
ch_10_jon_raw:I guess what I'm saying is, and I think this is a lot of people's argument as to why AGI is coming down the pike: once you add enough parameters to a large language model, you know, these parameters are almost directly proportional to how deeply the model can, quote unquote, understand things. So if you have a model with a hundred trillion parameters, which, by the way, would take the entire energy of the universe to train today, so...
matt_ch10:I mean, they're gonna do it. Like, the companies are gonna pay it, apparently.
ch_10_jon_raw:I mean, once Bill Gates sets up his sodium reactors, I think we'll be okay. But anyway, once we create these huge models, there's a chance that the model can deeply understand all of these disciplines and be able to come up with new knowledge based off of that. To me, that's very exciting.
matt_ch10:I'll give you a perfect example of this from just the other day: I wanted to write a little Blender script. I've scripted in Blender before, but there's this arcane object hierarchy or whatever that I have to look up every single time I want to write even a very simple Blender script. And just in case anyone's unfamiliar, Blender is a 3D modeling program. So,
ch_10_jon_raw:Best program ever?
matt_ch10:Best program ever written, hands down. But so, I asked ChatGPT to write me this little script, and it got it. There was one little bug, which is almost certainly a bug I would have written on my first attempt, because it was dealing with frames, and it said, oh, you have to reference frame_start, and actually you needed frame_final_start, for whatever reason. Anyway. It's a perfect example of this hyper-specific knowledge. Can you imagine how many people in the world know how to do that? It's got to be on the order of maybe 10,000 people who could possibly even know what I'm talking about, scripting in Blender. But it just whipped it out. I don't know, it just lends credence to your point. And if I asked it some nuanced detail about Confucianism, it would probably also be able to answer that at the level of, like, an intermediate historian, basically.
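For context, a sketch of the kind of thing Matt is describing. We don't know his exact script, and this only runs inside Blender's embedded Python, where the `bpy` module exists, but the frame_start versus frame_final_start distinction he hit is a real one on video-sequencer strips: one is where the strip's content is anchored, the other is where the strip actually begins after trimming.

```python
import bpy  # only available inside Blender

scene = bpy.context.scene

# Assumes the scene has a sequence editor with some strips in it.
for strip in scene.sequence_editor.sequences_all:
    # frame_start: where the underlying content is anchored.
    # frame_final_start: where the trimmed strip really begins.
    print(strip.name, strip.frame_start, strip.frame_final_start)
```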
ch_10_jon_raw:Yeah. Yeah, exactly. And I'm hopeful that will only improve, and I'm also hopeful that we'll be able to improve human society based on this. I guess there are obviously possible negative outcomes, but,
matt_ch10:Yeah. Let's see. He did talk about epiphenomena, and there's not much there, but I did think it was an interesting idea: you're working on a system, and there are these emergent thresholds, essentially, where, just as a function of these complex interactions, whenever a hundred people start using your system, the whole system stops working for anyone, basically. And that's always a very interesting thing to find: where are those threshold points in your system? He talks about this in terms of the way a system scales, which I feel is where it's most relevant for software engineers.
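One well-studied version of the threshold effect Matt describes is queueing delay. In the textbook M/M/1 model (our choice of example, not Hofstadter's), average time in the system is 1 / (mu - lambda), so latency barely moves for a long time and then explodes as utilization approaches 1, which is exactly the "hundredth user breaks everything" feeling.

```python
# M/M/1 queue: avg time in system = 1 / (mu - lam).
# mu = service rate, lam = arrival rate, utilization = lam / mu.
mu = 100.0  # requests per second the system can handle

for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    lam = utilization * mu
    latency_ms = 1000.0 / (mu - lam)
    print(f"load {utilization:4.0%}: avg latency {latency_ms:7.1f} ms")
```

At 50% load the average latency is 20 ms; at 99% it is 1000 ms, even though the system is "working" the whole time.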
ch_10_jon_raw:And I'm glad you're mentioning this because I think this leads to a point that he's going to make in the next chapters, which is like, as you add complexity to a system, there's just these new emergent properties. And that's important for understanding how the human brain works.
matt_ch10:Okay. So maybe that's a perfect segue. We can wrap part one of this series, and then we can dive into part two: brains and minds. What do you think?
ch_10_jon_raw:I love that idea.
matt_ch10:All right. Well, I'll see you next time.
ch_10_jon_raw:See you next time, Matt.