Switch Statement

070: Gödel, Escher, Bach - Ch. 5: All Real Trees are Upside Down

Jon Bedard
Matt:

Hello everyone, and welcome to the Switch Statement podcast. It's a podcast for investigations into miscellaneous tech topics. Hey, Jon, how you doing?

Jon:

Matt, I'm doing really well. How are you?

Matt:

I am doing all right. Um, I think this chapter might be a little rough for people. It's funny, because I think there's going to be this bimodal distribution of enjoyment of this chapter.

Jon:

Yeah. I mean, for me at least, I think this is going to be something that's difficult, because I'm a programmer. I've programmed all my life, and there's a lot in this book that is programming adjacent. And I struggle to break out of thinking about these concepts through anything but a programming lens. You know, I think there's a lot of value in thinking of them that way, but I think they're also larger concepts than just that.

Matt:

And in this one, I will say, he is the most direct about it. He's basically saying, I'm trying to give you a programmer's intuition of recursion.

Jon:

Yeah, yeah, yeah. There was a point in the chapter that straight up had, you know, that diagram that SQL documentation will have. Maybe we'll get to this, but there's a point in the chapter that literally had that diagram, and I never realized, well, I guess this is probably so obvious it's dumb to say, but that has its roots in linguistics. You're kind of saying, oh, you have this part of speech, what could possibly come next? It sort of branches from there, and then what could possibly come next from that branches. I don't know, the minute I saw that diagram, I got flashbacks from learning SQL.

Matt:

This is one thing that is really cool about taking a computer science course and then going back and looking at linguistics, because you do realize there's a reason they call them programming languages. I feel like you don't think about it that way, but when you're doing this consumption of tokens, or this conceptual hierarchy of expressions and sentences and clauses and what have you, you realize how much DNA they share, so to speak.

Jon:

Yeah, totally.

Matt:

But yeah, so this is chapter five of the book, and it is called Recursive Structures and Processes. So, as we kind of alluded to, we're talking about recursion.

Jon:

As with the other chapters, it kicks off with this dialogue between our friends, Achilles and the Tortoise. And there's this additional character named Hexachlorophene J. Goodfortune, which I like. But anyway, I actually did not read this whole dialogue because it was very long and I really wanted to get into the content of the chapter. But I read a little bit of it, and structurally it was recursive. Like, the dialogue itself, you know, Hexachlorophene kind of threw them into this torment. That's such a weird way of describing that. But anyway, they just kept going into sub-worlds. Um, and there were some interesting things about the dialogue that he actually discusses a little bit later in the chapter.

Matt:

Well, the thing I was going to say about this dialogue is, I think I know where Christopher Nolan got the idea for Inception, because literally in this book they have a diagram of this dialogue, and it looks very similar to those explainer diagrams about Inception. And it's funny, because I think if a person has any awareness of recursion at all, it's the Droste effect. I don't know if you've heard of that. It's D-R-O-S-T-E. So the product is hot chocolate, I think, but the product picture is a woman holding a box with a picture of the hot chocolate, and she's on it. So it's this recursively nested image. I think people are maybe familiar with this kind of infinitely self-similar recursion, where it just goes on forever. And that's not actually what Douglas Hofstadter is talking about. He's talking more about recursion that ends, because you need to have a base case in order to do something useful with recursion,

Jon:

Yeah. Bottoming out.

Matt:

bottoming out. Um, so I guess all of this is a long way of saying that the stories are not infinitely self-similar, but they do talk about pushing and popping. I don't know if you got to this, but there are these two serums or whatever, a pushing-potion and a popping-tonic, or something like that. And they gave you the ability to push yourself into a picture. And obviously he's using those terms very intentionally to refer to pushing stack frames onto the stack.

Jon:

Very, you know, alluding to the programming concept of a stack frame, which gets pushed every time you call a function. He mentioned an interesting thing about resolving recursive structures that I found fascinating, just because of my personality, and probably because I'm a programmer and I've been doing it for so long. If there's some sort of recursive structure and it doesn't pop all the way back to the top, let's say Inception had gone down into three sub-worlds but had only popped back up two sub-worlds by the end of the film, that would have been very infuriating to me. But he talks about how some people aren't as aware of that type of thing. Again, I didn't read the dialogue, but he mentions how in the dialogue one of the characters is super concerned that they haven't popped all the way back to the beginning, but the other one is just kind of blissfully unaware. I thought that was funny.
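A minimal sketch of that push/pop picture, in Scala (the function name and the sub-world framing are made up for illustration): each recursive call pushes a stack frame, like drinking the pushing-potion, and each return pops one, like the popping-tonic.

```scala
// Each call to descend "pushes" a new stack frame (one level deeper into the
// picture); each return "pops" back out to the world above.
def descend(depth: Int, maxDepth: Int): Unit = {
  println("  " * depth + s"push: entering sub-world $depth")
  if (depth < maxDepth) descend(depth + 1, maxDepth) // go one world deeper
  println("  " * depth + s"pop: back out of sub-world $depth")
}

descend(0, 3) // pushes several levels deep, then pops all the way back up
```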

Matt:

yeah, yeah, yeah. Um, there was one other very small, or actually it was like a major reference. So in this story, they're on this Ferris wheel, Achilles and the Tortoise, these classic characters that we keep on seeing. They're on a Ferris wheel, and there's a guy in a helicopter with a hook that is flying over them, and they grab onto it. And I think with that story he's referencing this term. There's this book called Darwin's Dangerous Idea by Daniel Dennett, which, man, there's a

Jon:

It's a lot of alliteration there.

Matt:

This is a real book, by the way, it's not made up. It sounds fake, but he says, uh, evolutionary biologists who are not strict Darwinians hold out some hope for a metaphysical skyhook to explain the existence of things like human consciousness. And I think that's basically it, the idea being that you just want some sort of magical thing that's going to come along and solve this problem for you. So, uh, that might be a tenuous relationship, or maybe I'm grasping at straws there, but,

Jon:

I don't know. That's very interesting though.

Matt:

but I've heard this in psychology or, uh, philosophy, where you refer to a skyhook as something that's kind of like, you know, the question mark and then profit. Basically, if you say step one, collect all the underwear, step two, question mark, step three, profit, that question mark is the skyhook, where you're just trying to paper over everything.

Jon:

I just thought he was referring to Kareem Abdul-Jabbar's cheat code in basketball, just skyhooking all the time.

Matt:

Yeah, yeah, yeah, I mean, it was hard to Google for because of that.

Jon:

He mentions an interesting thing about music that I actually was not really aware of. Like, I'm pretty familiar with music and music structure and some music theory, but he mentions how a lot of times in music there's this notion that you're pushing keys, but then you have to sort of pop back through the keys that you've pushed. I'm much more familiar with progressions, where the progression itself might repeat, you know, like a two-five-seven, which is like a jazz, blues, rock progression. And those represent degrees of a scale, by the way. But it's not like you go 2 5 7 5 2. I mean, sometimes you might, but I don't think there's some hard and fast rule that you have to do that. But he mentions Little Harmonic Labyrinth, which is a pretty cool Bach piece that I guess exemplifies this. I listened to it, and I could hear key changes, but I couldn't really hear that it was, like, popping back through the keys. But I thought this was interesting. I feel like I'm going to learn a little bit about music theory reading this book, because he keeps mentioning these things,

Matt:

Oh, interesting. So it was also the name. So Little Harmonic Labyrinth is both the name of the...

Jon:

the Bach piece

Matt:

The Bach piece and the dialogue. Oh, funny. Yeah, I guess I'm not surprised that he did that.

Jon:

It's a cool name. It's, I don't know, it's a fun little piece of music, and I can just imagine Bach titling it that.

Matt:

yeah, yeah, yeah. This is so funny, because, like you're saying, maybe I'm thinking more of chord progressions. Obviously, for chord progressions, I think of them as a cycle, where you're just kind of looping through. Key changes I don't have as good a framework for, because my conception of a key change is you just change the key once and then that's it. You don't necessarily change back, and there aren't necessarily even two key changes.

Jon:

Yeah, well, right, exactly. I just am not really aware of any rules with key changes. Sometimes you'll key change to a dramatically different key in order to completely change the tone of the music. Sometimes you'll key change, like, one key up, which sort of gives it this lifting feeling. Sometimes you'll key change from minor to major, you know, blah, blah, blah. So there's all these different things you can do with key changes. But it sounds like he's mentioning a specific form of music where it's almost recursive in the way that it key changes, which I thought was pretty cool. Yeah,

Matt:

You have a key change, like, if we're just talking about the simplest case, where you have a key change for a little while and then that resolves back.

Jon:

totally. I can totally imagine this being like a return, you know, because sometimes key changes can make you feel a little off balance at first. So it's sort of this way of ratcheting up the tension, making you feel a little bit more off balance, but then sort of bringing you back home. And I can imagine that being very satisfying.

Matt:

Recursion in music, you know. I don't make a lot of music, but I got into a little rabbit hole a while back trying to make some music. And one of the things that I tried was this kind of recursive sub-sectioning of the song, and I actually used the golden ratio. So there's the first part of the song and then the last part of the song, and their lengths are roughly in the golden ratio. And then I split both of those up into little chunks that are in the golden ratio. And then each one of them has its own little subdivision. So maybe you would have, like, a key change on the second bit of the whole song.

Jon:

interesting.

Matt:

but, you know, different things are changing at these separation points. There's always, like, changes in the music kind of falling on these golden ratio, uh,

Jon:

Yeah.

Matt:

uh, separations.

Jon:

Hey dude, maybe you found some mystical, you know, secret code of music.

Matt:

Maybe. Um, that one was Miami. So, um,

Jon:

Well, that was a cool song. We should, like, play it in this episode.

Matt:

Yeah, I'll put, like, a five-second clip in. Right. Or maybe I'll just have it go for the outro of this episode. I'll do Miami.

Jon:

Let's do that, dude. I really liked that piece, so we should definitely throw it on. He mentions a very funny thing. So I have a limited knowledge of German. I took German in high school, again, very limited knowledge, but he mentions this interesting quirk about the German language, and I think a lot of languages have this, where a lot of times in German you'll take the main verb of the sentence and sort of put it at the end. And he gives a joke. I didn't write down the exact contents of the joke, but it's like, some German professor is speaking, and the professor goes on and on and on, and you have to wait until the very end, and then he just spits out a bunch of verbs, and then you finally understand what he's saying. And I have definitely observed this in German. Like, I remember there's a sentence like, I have slept, or something, which would be like, Ich habe geschlafen. I'm probably getting the conjugation wrong. But you can put all these phrases in there. Like, I have slept in my bed would be like, Ich habe im Bett geschlafen. So if you add all of these clauses to the sentence, you can have this arbitrarily long sentence, you know, I slept in bed with my dogs comfortably or something, where sleep goes at the very end of the sentence. So you don't even know what's happening until they say geschlafen. And, uh, yeah, I just thought that was a funny example to throw out.

Matt:

I'm so glad you're bringing this up. I looked around for a good example of how these could stack recursively. Maybe the way that it works is, if your dog is doing something, you know what I mean? Like, if your dog is eating, it's like, I slept with my eating dog. Or, I, dog eating, slept. I'm grasping at straws here, but I guess that's maybe how you could stack those.

Jon:

Maybe. Yeah, it's interesting, because I wasn't as familiar with stacking a bunch of verbs at the end. I'm much more familiar with the thing where you have a main sentence, but you can just kind of arbitrarily throw all of these clauses into the sentence. And the verb, you know, a verb is pretty important to a sentence. Until you've heard the verb, you don't really know what it means. And Germans put that verb at the end in a lot of cases. But yeah, I wonder, I should figure out how to say, I slept in bed with my dog who was eating.

Matt:

Yeah, yeah, yeah. And then you could go even further, like, eating a chicken who was singing, or, you know. And so maybe that's how you stack the, um,

Jon:

But the whole reason he was throwing out this joke was because he was talking about recursive transition networks, which actually goes back to me bringing up SQL diagrams. So recursive transition networks are basically these diagrams that show how various sentences can be created given some grammatical structure. So, like, with SQL, you start with SELECT, meaning you're getting data from some source, and then you mention what you're actually selecting. Let's say you have a table of employees; you might say SELECT name FROM employees. But also in SQL, you can do things like selecting from a query itself. So you could say SELECT name FROM, and then have another SELECT. So you're literally selecting from the results of some other SELECT. There are some cool recursive transition network diagrams for SQL where, after the FROM, it sort of loops back to SELECT. And that diagram shows that you can represent these arbitrarily complex SELECT statements.
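A sketch of that loops-back-on-itself idea in code, using a made-up toy grammar rather than real SQL: expr ::= NUMBER | "(" expr "+" expr ")". The function for expr calls itself, exactly like the diagram looping back to SELECT.

```scala
// A tiny recursive-descent parser: the definition of parseExpr refers to
// parseExpr, just like a recursive transition network loops back on itself.
def parseExpr(tokens: List[String]): (Int, List[String]) = tokens match {
  case "(" :: rest =>
    val (left, afterLeft) = parseExpr(rest)                // expr refers to expr...
    val (right, afterRight) = parseExpr(afterLeft.drop(1)) // ...and again, after the "+"
    (left + right, afterRight.drop(1))                     // skip the closing ")"
  case number :: rest => (number.toInt, rest)              // bottoming out: a plain number
  case Nil => sys.error("unexpected end of input")
}

parseExpr(List("(", "1", "+", "(", "2", "+", "3", ")", ")"))._1 // 6
```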

Matt:

I feel like once, once that clicks, for someone who's learning computer science, that's really when it starts to feel like a magic trick.

Jon:

Oh yeah, yeah. As any programmer who's listening to this show knows, recursion is truly powerful. As a concept in programming, it's so much better at representing certain types of problems, you know, like anything that's a tree problem, or a graph traversal. Obviously you can do these algorithms iteratively, you can put everything in a stack manually, but if you get the right representation of the problem, and sometimes that's recursive, the solution can feel so much more elegant, so much easier to maintain and read. And those things are very, very important when you're talking about large code bases with lots of engineers working on them. Yeah.

Matt:

It's like, if we're thinking about the example of the SQL query, what feels so confusing about it is, you'll be reading, like, okay, this is the structure of a SQL query, and then in the middle of it, it'll say, oh, you can have a query in here. In the definition of how you can structure a query, you will have a query itself. And that's when your brain is breaking. You're like, wait a minute, we haven't even finished defining what a query is. How can you be referencing it in the definition?

Jon:

Oh yeah.

Matt:

And, you know, I don't know, I might cut this and put it back earlier, but I'm not sure that we took a step back and clearly defined what recursion is. I feel like we've been coming at it from a couple of different sides, but at its core, I think of recursion as self-reference, where, for programming, you are referencing something within itself. If we're thinking about the SQL query, you can actually reference what a query is within the definition of a query. So just as a little aside, if you've never heard of recursion before, it's basically that: you're using the definition of the structure itself within itself. And that's when you get to do all these powerful things. Because otherwise, I'm thinking of something like JSON. JSON is obviously this recursively defined structure, and JSON is just a way to store data for the web. It's very similar to the SQL case, where you can have JSON objects that are nested within other JSON objects. Without self-reference, you would just have to manually define the structure the whole way down, and you would have some depth limit. And that's just fundamentally limiting, I think, in a way that recursion kind of frees you from.
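As a minimal sketch of that self-referencing definition, here's a toy JSON-like type in Scala (the names are made up for illustration, this isn't a real JSON library): the type JsonValue appears inside its own definition, and arbitrary nesting falls out for free, with no depth limit baked in.

```scala
// A self-referencing ("recursive") data definition, loosely modeled on JSON.
sealed trait JsonValue
case class JsonNumber(value: Double) extends JsonValue
case class JsonString(value: String) extends JsonValue
case class JsonArray(items: List[JsonValue]) extends JsonValue        // JsonValue inside JsonValue
case class JsonObject(fields: Map[String, JsonValue]) extends JsonValue

// Nesting to any depth comes directly from the self-reference.
val doc: JsonValue = JsonObject(Map(
  "podcast"  -> JsonString("Switch Statement"),
  "chapters" -> JsonArray(List(JsonNumber(5), JsonObject(Map(
    "nested" -> JsonString("as deep as you like")
  ))))
))
```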

Jon:

Yeah, yeah. No, I would even go so far as to say some problems, you know, they're not impossible, but they just become extremely difficult without recursion, and with recursion they become trivial. One that I really like: I remember in college, I was so dumb at programming at first, and I remember programming a binary tree, which is basically a data structure where you have a node, and the node has a left and right pointer that both point to nodes. So it's a recursive data structure, where a node points to two sub-nodes. And our teacher was like, okay, how do you find the height of a binary tree? And, you know, you think to yourself, if you just see a tree in your mind, and it's got a node at the top that splits into two nodes, and those both split into two nodes, you can sort of see the height. It seems like this super duper simple concept. But when you're writing that in code, and all you have is a node, what code do you write in order to figure out the height of a tree? And it turns out it is an amazing recursive function. Still one of my favorite recursive functions, because of this college experience where I felt like it was finally clicking for me: in order to calculate the height of any given node, you just get the tallest height out of your children, and then basically you add... well, okay, actually let's back up just one second and talk about bottoming out, which I'm realizing is the next note I have. So in any recursive structure, there's this notion of bottoming out, where, like we were talking about before, in SQL you can select from a select itself, but you can't infinitely do that. You can't select from a select from a select from a select from a select from a select. There has to be a point where you're selecting from some root source. Otherwise it just goes on infinitely and breaks, right? So that's kind of

Matt:

Your computer runs out of memory. The reason why you can't do it forever is kind of this mundane reason that there's a limited number of atoms in the universe, you know? At some point you've got to end the query. That being said, there's nothing in the recursive structure that's going to stop you from doing that. You could go on all day long.

Jon:

Oh yeah, you can have all sorts of stack overflows in your code.

Matt:

Oh yeah.

Jon:

Generally speaking, if you have a recursive algorithm in code, you might have a little bit of code at the very top of your recursive function that says, if some condition is true, then just return something immediately. Like, don't even recurse. So in this example of the height of a binary tree, basically, if you have no children, if your left pointer is null and your right pointer is null, then you just return one, because that's the height of a tree with no children. So then the recursive function becomes: oh, just give me the two heights of my children, pick the larger one, and add one. And so it's this utterly trivial function once you write it down. But I think it just exemplifies this elegance of recursion, because you've done this hard thing, but you've written, like, two lines of code.
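Here's that function written out as a sketch in Scala (the Node shape is an assumption, just a left and right pointer like the college exercise): the base case bottoms out at a leaf, and everything else is "taller child plus one".

```scala
// Height of a binary tree: the recursive function described above.
case class Node(left: Option[Node], right: Option[Node])

def height(node: Node): Int = (node.left, node.right) match {
  case (None, None) => 1 // bottoming out: a node with no children has height 1
  case (l, r) =>
    val leftHeight  = l.map(height).getOrElse(0) // recurse into the left child, if any
    val rightHeight = r.map(height).getOrElse(0) // recurse into the right child, if any
    1 + math.max(leftHeight, rightHeight)        // pick the taller child, add myself
}

val leaf = Node(None, None)
height(Node(Some(Node(Some(leaf), None)), Some(leaf))) // 3
```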

Matt:

Right, right. Um, yeah. And actually, just very quickly, getting back to, we said, okay, you can't write an infinite SQL statement, it has to end somewhere. The problem when you're writing a recursive function is that there is nothing but time to stop you from writing a function that is going to recurse infinitely. If you don't have that bottoming out, let's say you forget to check for the case when both of your children are null, then you might just keep on looping forever.

Jon:

Exactly. Yeah. And often in a program you can have these scenarios where function A calls function B, but then in the running of function B, you're also calling function A, which is this other form of recursion where things call each other. And those cycles can be arbitrarily complicated. You can have function A calling B calling C calling D, which calls back into A. And so this is why, as a programmer, you need to be very careful about representing those bottoming-out scenarios, because you can have arbitrarily complicated code, and you need to make sure that there's not some weird edge case that forms an infinite loop.
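The classic toy example of that mutual recursion, sketched in Scala: isEven calls isOdd, which calls isEven, and both bottom out at zero. Drop either base case and the cycle never ends, or rather, it ends with a stack overflow.

```scala
// Mutual recursion: two functions that call each other until one bottoms out.
def isEven(n: Int): Boolean = if (n == 0) true else isOdd(n - 1)
def isOdd(n: Int): Boolean = if (n == 0) false else isEven(n - 1)

isEven(10) // true, after bouncing between the two functions ten times
```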

Matt:

This is why I never understand how our question parsing works in Matchups.

Jon:

Yeah, I am so amazed that that has not broken in a million ways.

Matt:

Yeah.

Jon:

He talks about Diagram G, which I thought was very interesting.

Matt:

Ooh, this one. I don't know if I read so closely about Diagram G.

Jon:

Yeah, I mean, I should have written down a picture of the diagram itself, but it's basically, you can think of it as sort of an upside-down binary tree, with a left and a right. And

Matt:

in a way, it's kind of like a correctly oriented tree, you know what I mean? It looks like a real tree.

Jon:

yeah, that's actually true. I forget that most trees are right side up

Matt:

branch up.

Jon:

as a programmer, I think of all of my trees as being upside down. Um, but anyway, the left node has just a single node, but the right node has a copy of the whole tree itself, which is, again, a tree where the left node has a single node and the right node has a copy of itself. So it's this recursive tree structure. And you realize that if you're building this tree structure by iterating, like, let's say you start with just one single rendition of this, and then at each node you're adding itself, this is very hard to describe without a diagram, but in any case, I'm just going to not bury the lede and say that it creates the Fibonacci sequence. If you're basically counting all of the leaf nodes, you're going to get 1, 1, 2, 3, 5, 8, 13, which is kind of cool. And you can see it very easily in the diagram, and I'm not doing a very good job of describing the diagram.
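The code version of that picture is just the usual recursive Fibonacci, sketched here in Scala: two base cases to bottom out, and then every value is built from two smaller copies of the same structure, like the tree that contains itself.

```scala
// Fibonacci, defined the same way the self-embedding tree grows.
def fib(n: Int): Int =
  if (n <= 2) 1                // bottoming out: fib(1) = fib(2) = 1
  else fib(n - 1) + fib(n - 2) // the structure contains two smaller copies of itself

(1 to 7).map(fib) // Vector(1, 1, 2, 3, 5, 8, 13)
```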

Matt:

I mean, this gets back to what he says about isomorphism. It's just one of those places where you kind of learn that two things are actually the same thing, even though their trappings are different.

Jon:

Yeah, one thing that I thought about when I was reading this is, you start to see why certain things arise in nature. Like, you just talked about the golden ratio, and people talk about how the golden ratio happens in nature all the time. And I think when you first learn that fact, it's just completely mind blowing. And it is still sort of mind blowing, but that is a number that's intrinsically linked to recursion. And the Fibonacci sequence, again, is intrinsically linked to recursion. So if you're seeing the Fibonacci sequence in nature, it probably means that you have some sort of recursive structure happening. Like, two entities form another entity, and then those three together form two more, you know? So once you see the pattern, it becomes a lot more rational that these numbers would just appear in nature all the time.

Matt:

Did you make that pun on purpose?

Jon:

No, but I'm happy with it.

Matt:

Um, yeah. And so isn't there some estimation of the nth Fibonacci number that uses the golden ratio? Isn't it something like,

Jon:

Yeah.

Matt:

phi to the n over root five, or something, is like the, you know, approximation of the nth Fibonacci number, or something like that,

Jon:

Yeah. Something like that. I feel like there might be an E in there. I feel like there's always an E in there.
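The closed form they're reaching for here is Binet's formula, and there's no e in it, just the golden ratio and a square root of five:

```latex
F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}} \approx \frac{\varphi^n}{\sqrt{5}},
\qquad \varphi = \frac{1 + \sqrt{5}}{2}, \quad \psi = \frac{1 - \sqrt{5}}{2}
```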

Matt:

Yeah, yeah, yeah. So yeah, he gives us this very beautiful but bewildering graph. He defines a function called INT(x), and the outcome of this function is a chart. To produce it, you're doing this kind of conventional thing: you go from zero to one and you just plot the value of INT(x) at every point along the way. But what's so remarkable about it is you get this almost seesawing effect. It's going to be impossible for me to describe; it really doesn't look like anything else I've ever seen. I guess the closest way to say it is, it's these alternating parentheses back and forth along a perfectly inversely correlated line. So, like, a line starting from the top left down to the bottom right that's made out of recursive parentheses.

Jon:

Yeah, yeah. And this was really cool. I mean, people should just look up the graph, because it looks very beautiful. And I think it's also an example of these cool things that you learn of in programming, or at least that's where I learned of them, I'm sure they talk about them in a lot of other disciplines, called fractals, where basically there's a mathematical representation of an image or a graph, and the more you zoom in on any part of that image or graph, it just keeps creating itself. So you can kind of infinitely zoom in on any position in this image, and it will just keep creating itself. And it's really mind boggling, you know, when you see one. But also, I don't want to say typically, but a lot of times they're just very easy to describe. Like, it'll just be a formula with, like, two variables. And so it's just this cool thing that can arise with these recursive structures.

Matt:

Yeah, I think what's funny about recursive structures is they're this mix of novelty and similarity at the same time. I think I watched a Vsauce video where he talked about music, and about how human brains kind of like this sweet spot of complexity. If you have just white noise, which is, like, infinite complexity, it's so complex that you can't find any patterns in it at all. Your brain is like, I'm just discarding this, there's nothing here. And then if it's just a line, you're like, okay, well, whatever, I don't care. But recursion is kind of this beautiful balance. I feel like some people almost get entranced by recursion, because you can just keep on going, and it's never exactly the same, but it does continue to be very similar, which I think is very visually satisfying.

Jon:

Yeah.

Matt:

And then, you know, that's kind of evidenced by how many ten-hour Mandelbrot set zooms there are on YouTube.

Jon:

Yeah. Have you ever just watched an entire one of those?

Matt:

Oh, yeah

Jon:

Um, he talks about Feynman diagrams, which are these diagrams that I guess describe the behavior of subatomic particles. I'm going to be honest, this part really went over my head, but it was very beautiful. I guess, you know, quantum mechanics describes there being just 25 particles or so, and four forces, and basically everything in our universe emerges from just those particles and forces. So I guess these Feynman diagrams are ways of representing those interactions, and they just look really cool. This feels like a theme with the stuff Feynman worked on. I feel like he was the guy that would always just work on the fun, cool thing, you know, when everyone else was getting into the nitty gritty.

Matt:

Yeah, yeah, yeah, um, he had this very cool, like, shorthand for, uh, sine, cosine, and, uh, tangent of something where he would basically just take each one of those letters and then just extend the, like, the top bar over what they, uh, over the rest of the expression, which I always thought was like this very beautiful, like, he, he kind of had an artistic eye, I think.

Jon:

Yeah. Oh yeah, he was a guy who saw the beauty. I think a lot of what drew him to science was the beauty that was there. There's a great book, I think it's called Surely You're Joking, Mr. Feynman! or something. It's got a weird title. It's sort of like a biography, autobiography. Really, really good book, really fun to read. You can sort of tell that he's attracted to the beauty of things. He's not this hyper analytical, I mean, he is, he's obviously hyper analytical, he was a genius. But I think he would think of himself as a creative person, with a lot of artistry in what he does.

Matt:

Yeah. And I think he also just had a lot of respect for, or appreciation of, where students were coming from when he gave his explanations. I think it's way too easy to just get caught up in all the details, but the way he taught was always, I don't know, meeting the student where they're at.

Jon:

Yeah. Great educator, great science communicator. There are a lot of YouTube videos, actually, of him describing fairly complex things. And, like you're saying, he approaches his scientific descriptions very well.

Matt:

Um, so I didn't fully understand how this tied back to recursion, but I guess the idea is that, similar to what we were saying before, you can have a node, or a structure, in this Feynman diagram, and then there's some set of possible transitions that node can make to other nodes.

Jon:

Yeah,

Matt:

That can contain the original node. Is that kind of the idea?

Jon:

Yeah, I think so. I think it was kind of like those recursive transition networks that we talked about earlier. And, feel free to correct me if I'm wrong, but I feel like Hofstadter mentioned how there was only a finite number of these Feynman diagrams, which makes sense if you think about it: there's only so many subatomic particles, and they can only do so many things. And I think a lot of people who study quantum mechanics will just know all of these Feynman diagrams by heart, and they'll basically be able to take an atomic system, think about those Feynman diagrams, and apply them to that system.

Matt:

Yeah, yeah, yeah. Well, I'll have to, I want to learn the rules of this grammar, because he does have one here which is just very hard to understand, but

Jon:

Yeah, dude, I looked at that one for a while and I was just like, nope. Uh, shortly after this, he mentions Rachmaninoff's second symphony, which I just wanted to shout out, because I think Rachmaninoff is in my top five musicians of all time. There was this period of music, easily one of my favorite periods, when the romantic period was in full swing. This is sort of the Chopin period, where music just got very melodramatic. The classical period was characterized by, you know, sort of Beethoven, these beautiful, simple, powerful melodies and orchestral arrangements. And I feel like it led into this impressionistic, romantic period, which had these very dramatic pieces of music, where you'd have these cadenzas that were just, you know... and a cadenza is a part where a soloist sort of takes over and just does something completely insane. And Rachmaninoff has these piano concertos that I feel are squarely in this historical period in music, where the pianist is playing something incredibly elaborate and very technically difficult. His pieces are famous for being incredibly technically difficult, but very beautiful, very over the top, almost schmaltzy. So anyway, I love Rachmaninoff's music, and it was cool to see him mention the second symphony, which I guess has some sort of recursive structure. I've heard the second symphony dozens of times, and I wasn't aware that it was recursive.

Matt:

Yeah. I mean, I think he's being a little bit loose with the term recursion here. He's talking about modularity and breaking things down into sub-components, and I think he's almost viewing it through that lens, where there are these chunks that can be kind of recomposed.

Jon:

I think you're right. And I think a lot of folks who might be reading this, who don't know music theory, are probably surprised to learn that there's a significant amount of structure and rules that govern music. Or I shouldn't say govern music, but, you know, most music that we're aware of is written within these structures: keys, well-known chord progressions, and so on. So I think, like you were saying, he's just alluding to a lot of those musical structures and using them as yet another lens through which to view these concepts.

Matt:

Well, that's a really good point. Like, for example, the canon. I never realized the canon is a form of music with a tremendous amount of structure to it and rules governing it. And maybe this ties back to the language stuff: once you start writing one, you become quite constrained; there's only a certain number of possible, almost, solutions to the constraints laid out. He even references in earlier chapters how they would leave some of it out, because it was just defined by the prior part of the canon. Um, there was one point, just going back very quickly: for anyone who has done a little bit of programming, you'll very quickly run into loops. It's, like, the first thing, or maybe after variables you go right into loops. And it's easy to think of loops as just this innate thing. But we actually worked on a programming language together, and what's funny is you realize that implementing looping via recursion is actually easier, and just happens earlier, than writing a loop. And it's funny because we discovered that accidentally, but this is actually how he structures the chapter: he talks about recursion first, and then he talks about loops kind of downstream from that. And I think, at a conceptual level, that's actually how it works. A loop is actually this more complex construct than recursion.
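A sketch of what "looping via recursion" can look like, in Scala (this isn't the language we built, just an illustration): the while loop is nothing but a function that calls itself while the condition holds.

```scala
import scala.annotation.tailrec

// A while loop built out of recursion: no loop construct needed.
@tailrec
def whileLoop(condition: => Boolean)(body: => Unit): Unit =
  if (condition) { body; whileLoop(condition)(body) } // "loop" = call yourself again

var i = 0
whileLoop(i < 3) { println(i); i += 1 } // prints 0, 1, 2
```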

Jon:

Yeah. Oh man, I'm so glad you're bringing this up, because there's so much of programming with loops that I think is completely fascinating. There's a language that I really love called Scala, and they represent their lists as a head and a tail, where the head is just the first item in the list, and the tail is the rest of the list. When you first see this, and there are other languages that represent their lists that way too, but if you're coming from, like, Java or C, you're like, what the hell? This is crazy. But you quickly realize that representing your lists that way enables you to do these recursive comprehensions with your list, where you can say, okay, I want to do this thing with my head, and then do this other thing with the rest of the list, which might be to just do that same thing over and over again with the rest of the list. So you can write this code very easily where, let's say you have a list of numbers and you want to square all of them. In Scala, you can just say something like, take this list and map this function over the whole thing. And because their lists are represented in that way, they can just take the operation, apply it to the head, and then pass that operation down the tail.

Matt:

Right.

Jon:

And you just get this new list back, you know, with all your numbers squared. And yeah, like you're saying, that way of representing your list facilitates these recursive algorithms, like maps and folds and stuff. It enables you to do a lot of very cool, elegant things in your code that you really can't do otherwise.
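Written out by hand, the head/tail recursion being described looks something like this (a sketch; Scala's built-in List#map does essentially this for you):

```scala
// map, hand-rolled over Scala's head :: tail list representation.
def mapList[A, B](list: List[A])(f: A => B): List[B] = list match {
  case Nil          => Nil                         // bottoming out: the empty list
  case head :: tail => f(head) :: mapList(tail)(f) // apply f to the head, recurse on the tail
}

mapList(List(1, 2, 3, 4))(n => n * n) // List(1, 4, 9, 16)
```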

Matt:

Yeah, it is funny. I guess if you're looking more at the structure of that: you have something that takes a list, and once you can grab the first thing and split it off from the rest, it just lends itself very neatly. And I'm kind of repeating what you're saying, but now you have the rest of the list, and you can obviously just pass that back into the original function, because it took a list in the first place.

Jon:

Yes, exactly. Yeah.

Matt:

And so, yeah, that feels like a very powerful, powerful paradigm.

Jon:

Yeah. He mentions one of my favorite algorithms of all time, which is minimax, this famous artificial intelligence algorithm. And this is more old-school artificial intelligence, where the entirety of the intelligence is encoded within the lines of code. But maybe we can give a quick description of how minimax works. Let's say you're playing a game like chess. If you're trying to think of the best move that you can make, one way of thinking about that is to pick the move that gives your opponent the worst options. So, for example, let's say I pick a move that gives you the worst options. I know that you're going to pick the best option out of those options. And then you know that if you pick one of those options, I'm going to pick the best option that I'm left with. So minimax is this algorithm that assumes that your opponent is always going to pick the best option available to them, so you're basically picking the option that minimizes their best option, which is kind of where the algorithm gets its name. Like, say we represent a chess game as an integer score. In chess, there's a concept called material, which is basically the pieces that you have on the board, and one naive way of calculating the value of a position is to take the quantity of material that I have and subtract the quantity of material that you have. So if you have more material than I have, it'll be a negative number, which means you're doing better. If I have more material than you, it'll be a positive number, which means I'm doing better. So my goal is to leave you with options that are higher value, and your goal is to leave me with options that are lower value.
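A hedged sketch of that shape in Scala. GameState, children, and the material numbers are hypothetical stand-ins, not a real chess engine; the point is the recursion, where my best move assumes your best reply.

```scala
// Minimax over a toy game tree, scored from "my" perspective.
case class GameState(myMaterial: Int, theirMaterial: Int, children: List[GameState])

// The naive material heuristic described above: positive means I'm doing better.
def materialScore(state: GameState): Int = state.myMaterial - state.theirMaterial

def minimax(state: GameState, depth: Int, myTurn: Boolean): Int =
  if (depth == 0 || state.children.isEmpty)
    materialScore(state)                                    // bottom out at the heuristic
  else if (myTurn)
    state.children.map(minimax(_, depth - 1, false)).max    // I pick my best option
  else
    state.children.map(minimax(_, depth - 1, true)).min     // you pick what's worst for me
```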

Matt:

And then you're saying you're using those assumptions to explore the space. Because even if you've computed my best possible move given your options, you don't know that that's true. So I guess you need to do some searching, where you're like, okay, well, these are the five moves that I think he might possibly do.

Jon:

Yeah.

Matt:

And

Jon:

that's,

Matt:

yeah, go for it.

Jon:

Well, that gets into why chess AIs are so interesting, because chess, as most people know, has this explosive number of permutations of possible moves you can make. I think by, like, the fifth chess move, there's something like 10 billion combinations. So you obviously can't calculate every single option in a computer algorithm, which leads you to what is referred to as a heuristic function

Matt:

Yeah. Mm

Jon:

of a chessboard, where you try to calculate this score, like I was saying earlier, you know, an integer score. A dumb way to calculate it would be to use material. A smarter way might take into account what's commonly referred to as tactics in chess. Like, let's say you have a knight, and you know a knight moves in an L shape. If you can move a knight into a position where it's attacking two pieces at the same time, that's called a fork, meaning your opponent can only move one of those pieces away, meaning you take the other one. So a fork is a tactic, and another way that you might score a chessboard in your heuristic function is to take into account forks. And so chess AIs are actually a really fun thing to write if you're a programmer, because you have to be smart about this heuristic function and think about forks and pins and alignments and things like that, and assign values to these things that might change as the game progresses.

Matt:

Now, the new techniques, and maybe this is getting too deep into chess, but the new techniques like AlphaZero: are they minimaxing and just using a better evaluation function that's learned, or are they not even minimaxing at all? Do they just take the board state and directly translate that?

Jon:

That's a good question. I mean, I know they're using a machine learning model, where presumably they're plugging in the board state and getting some sort of score. I would imagine that there would still be value in minimaxing, even with that. But yeah, I actually don't know how those newer chess algorithms work. It's pretty cool.

Matt:

Well, this is one thing that's really cool about minimax, at least for chess: you have a black box function which consumes the board state and tells you... well, I guess, is that true, though? Because it's funny, the quality of the board is almost, and this is where the recursion comes in, a downstream effect. Because I guess at some point you need to have something that says how good this board position is for each player, right?

Jon:

But yeah, I think, like you're getting at, in chess that's often wrong. There's a thing in chess called a sacrifice. Let's say I give you my queen, and the queen is the most valuable piece in the game, but the reason I'm giving you my queen is because there's this combination that I'm about to do that has a bunch of forcing moves. You take my queen, and then I put you in check seven times, and then I win the game with checkmate. So you'll actually see, if you look at a Stockfish evaluation of a position, that the evaluation will go up and down depending on how deep the Stockfish algorithm has run. Stockfish, by the way, is a world-famous chess AI; it's the most powerful chess AI. So you'll see that the heuristic function alone does not tell the whole story. You require this depth to be able to know, is this a short-term loss of value in service of some longer-term strategic gain? And that's what the depth of the minimax will tell you. And that's why there's a lot of research and work done to try to minimize the number of nodes that you're actually looking at, so that you can increase that depth. Every node that you can eliminate from contention has a huge amount of value, because nodes that are high up in the tree represent thousands of nodes lower in the tree that you would need to look at. So if you can prune those nodes early, that's very advantageous.
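The standard version of that pruning is alpha-beta, sketched here on top of the hypothetical GameState and materialScore from the minimax sketch above: once a branch is provably no better than something already found, its remaining children are never visited, which is what buys the extra depth.

```scala
// Minimax with alpha-beta pruning: same answer as plain minimax, fewer nodes visited.
def alphaBeta(state: GameState, depth: Int, a: Int, b: Int, myTurn: Boolean): Int =
  if (depth == 0 || state.children.isEmpty) materialScore(state)
  else {
    var alpha = a
    var beta = b
    var best = if (myTurn) Int.MinValue else Int.MaxValue
    val it = state.children.iterator
    var done = false
    while (it.hasNext && !done) {
      val value = alphaBeta(it.next(), depth - 1, alpha, beta, !myTurn)
      if (myTurn) { best = best.max(value); alpha = alpha.max(best) }
      else        { best = best.min(value); beta  = beta.min(best) }
      if (beta <= alpha) done = true // this subtree can't change the answer: prune it
    }
    best
  }

// Called like: alphaBeta(root, 6, Int.MinValue, Int.MaxValue, myTurn = true)
```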

Matt:

Yeah, but it seems like you have the risk that you're potentially cutting yourself off from some amazing move that opens up downstream. So, I don't know, that's why it's an interesting problem.

Jon:

It's totally interesting. I actually watch these chess videos sometimes that'll talk about AIs playing against AIs, and there are often these moves that make no sense to a human being. They'll just, like, give away a bishop early in the game or something, and it's like, what the hell is going on? But then, twenty moves later, they'll have the opponent in this position where they just can't move any of their pieces. And it's this example of, you know, you're making a move that looks really terrible, but in actuality there's this very long-term strategy where that move puts you in a better position. And it's interesting to see the evolution of chess AIs, because it's something where you can very easily pit the new version against the old version and make sure it wins every time. So it's this type of thing that is just getting better and better. It's very easy to not get worse. So, um, yeah, it's kind of crazy.

Matt:

Man, um, well, there was one more thing that I did want to talk about, just as an amusement. Have you heard of Hofstadter's Law before?

Jon:

No, no,

Matt:

Um, Hofstadter's Law is: it always takes longer than you expect, even when you take into account Hofstadter's Law.

Jon:

Yeah. This is a very programmery thing.

Matt:

And it's so funny, because I feel like I've heard this forever. I don't know when I first heard it, probably ten years ago. But I did not realize that this is the Hofstadter referenced in Hofstadter's Law. So,

Jon:

I've heard that before, but it doesn't always reference Hofstadter. It often just says, you know, things take longer than you think they will, even when you take this into account. So I feel like he's not getting his due credit.

Matt:

yeah, yeah, yeah. Um, but that's all I had. Man, this one was dense.

Jon:

Little chonky. Um, there was actually one last concept that I did want to mention.

Matt:

yeah.

Jon:

He briefly discussed programs that can modify themselves at the end of this chapter, which, you know, is one of those concepts that is very, very powerful. You can imagine if a program could improve itself. Going back to these chess AI situations, if a chess AI could somehow tell that there was a weakness in its heuristic function and actually fix that weakness itself, obviously that would be tremendously powerful, because the thing would just get better and better over time. And he mentions this ability, which he also refers to as tangled recursion, which I thought was a cool way to reference it, as the heart of intelligence. Which is yet another thing in this book where, you know, it's 2024, we have ChatGPT-4o, where you can literally talk to a computer and it talks back to you like a human being, and this book was, what, written in the seventies? And he's talking about programs that can modify themselves, and that being the heart of intelligence. I just think there are going to be a lot of references like this in the book, where he's talking about these things that are very applicable to today.

Matt:

Yeah. Man, this book feels so prescient. I mean, this is something that I had always thought about, though my conception of it is not what it actually ended up looking like. I was writing C, and I was like, okay, well, can a program output C, and compile that, and then run that? And it's funny, because the first person that I talked to about that was like, well, that sounds like malware. That's kind of the main reason why people do self-modifying code. And I was like, oh, okay. But obviously it looks completely different now.

Jon:

Well, it's funny too, because that's almost more complicated than backpropagation, which is, you know, one of the primary principles that make LLMs so powerful.

Matt:

than that, like, it's just much harder and not as good. But, um, yeah. I mean, someone's going to come along and stick program compilation into the loop of one of these AIs. It's probably already happening. Um,

Jon:

doing it for sure.

Matt:

Um, and yeah, Skynet's going to be here before long.

Jon:

Yep. Maybe we can end on that pleasant note.

Matt:

Yeah, um, all right. No, I'm a techno-optimist. I think it's going to be, uh, utopia.

Jon:

looks bright.

Matt:

All right. Well, I will see you next time, Jon.

Jon:

See you next time.