Switch Statement
084: Gödel, Escher, Bach - Ch. 17 & 18 - Cheese Burger or Pineapple Burger... Still Haven't Decided
Hello everyone, and welcome to the Switch Statement podcast. It's a podcast for investigations into miscellaneous tech topics.
matt_ch_17_18:Hey Jon, how are you doing?
jon:How is it going?
matt_ch_17_18:I am doing all right. We've got kind of a special couple of episodes to end our saga.
jon:Definitely a saga. Yeah. This book has been absolutely epic, but we are wrapping it up. Uh, we're kind of tired of recording episodes about this book, but it was an awesome book. Incredible book.
matt_ch_17_18:Yeah, we didn't really, we didn't really look before we leaped, I guess I would say. Uh,
jon:Yep.
matt_ch_17_18:that's life. But we stuck through it.
jon:We
matt_ch_17_18:much to the chagrin of our listeners, I'm sure.
jon:Yes. This is more a testament to our stick-to-itiveness than a testament to the material being good,
matt_ch_17_18:Right? No, we never let quality get in the way of pigheadedness, I guess I would say.
jon:Could not be more true. But this book did it. The last few chapters were very interesting. He got into a discussion of state-of-the-art AI at the time, which is the late seventies, and I thought it was fascinating. We'll talk about that a little later. But this chapter was interesting; it was a little bit philosophical. My take is that it's about how thinking can be viewed as a high-level description of a system that, at a low level, is governed by simple, almost obvious rules. The brain does all these computations: physical chemistry, electricity moving through the brain, DNA using chemistry to do the programming of the human body. Those are all very low-level, understandable rules. Thinking itself, abstract thinking, is a high-level description of those vast low-level rules.
matt_ch_17_18:Right. And in particular, the reason we're talking all about this is that he talks a bunch about the Church-Turing thesis, which, my understanding is, is fundamentally the Turing-complete equivalency: anything that is Turing complete is isomorphic with anything else that's Turing complete.
jon:Right, like they all have the halting problem and
matt_ch_17_18:They all have the halting problem. And I think what Hofstadter is saying here is that human thought processes are also Turing complete, and therefore could be isomorphic with a computer that comes up with the same result that a human does.
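A quick aside on the halting problem they just mentioned: the classic proof that no program can decide halting is a short self-referential argument, and it fits in a few lines of Python. The halts() oracle below is hypothetical, which is the whole point; this is a sketch of the contradiction, not something you could ever fill in:

```python
# Sketch of the classic diagonalization argument for why the
# halting problem is undecidable. halts() is hypothetical: the
# argument shows no real implementation can exist.
def halts(program, argument):
    """Pretend oracle: True iff program(argument) would halt."""
    ...

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts the
    # program does when fed its own source.
    if halts(program, program):
        while True:      # oracle says "halts", so loop forever
            pass
    return               # oracle says "loops", so halt immediately

# Ask: does troublemaker(troublemaker) halt? If the oracle says
# yes, it loops; if the oracle says no, it halts. Either answer
# is wrong, so no such oracle can exist.
```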
jon:yes. Yeah. And he has a hilarious example later in the chapter where he talks about deciding between a cheeseburger and a pineapple burger.
matt_ch_17_18:Yeah.
jon:And I actually really like this example, because he mentions how, when that type of indecision happens, it's not as though your brain is running slowly. Your neurons are firing at the same rate at all times, no matter what. It's some higher-level thing that's happening. Throughout your life you've established these weird neuroses, which really speaks to me, by the way. And those weird neuroses, developed in your brain's structure over years of glial pruning and the shaping of your brain's larger architecture, have caused you to have this indecision between cheeseburgers and pineapple burgers.
matt_ch_17_18:Oh, I have experienced this so acutely, because it's funny: from your subjective experience, your brain's not working. There's no processing going on in your brain, and that's why it feels like, oh, the program stopped. The most acute, visceral experience I've had recently is being called on by a professor. In law school, one core element is the cold call, where they just randomly pick your name from a list and say: you, what's the answer to this question? And man, talk about your brain just going completely blank, even though, if the stakes weren't so high, you would be able to answer it easily. Your mind just goes blank and it's like, I don't know. And that's almost a more unusual situation than what he's describing. Or maybe not unusual, but there's a question of what's different about that scenario. Why is your brain deciding to do something different there than if your friend asked you the same question?
jon:Yeah. Yep. And it's strange. Well, I don't want to go on a massive tangent, but I think about this a lot: we put so much of a premium on that in-the-moment execution. Sports is a perfect example, where sports is all about whether you can do the thing in the moment. And I just feel like humans are so much better at doing the thing not in the moment. It's always been very interesting to me that we put such a premium on doing the thing in the moment.
matt_ch_17_18:I don't think most people would call this out, but I think it's kind of related to, have you heard of p-hacking?
jon:Oh, yeah. Yeah. I have a friend who is kind of part of the science-industrial machine, and he complains a lot about p-hacking.
matt_ch_17_18:Right. My understanding of p-hacking is you just do an experiment enough times until you get what seems like a statistically significant result. But you did it so many times that, eventually, you know: oh, this is 95% confidence, but you did the experiment 20 times.
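Matt's 20-experiments point checks out numerically: with a 5% false-positive rate per test, the chance that at least one of 20 null experiments looks "significant" is about 64%. A quick simulation, just a minimal sketch to make the arithmetic concrete:

```python
import random

# Simulate p-hacking: run 20 experiments on pure noise and claim
# a discovery if any single one clears p < 0.05.
ALPHA = 0.05
EXPERIMENTS = 20
TRIALS = 100_000

hits = sum(
    any(random.random() < ALPHA for _ in range(EXPERIMENTS))
    for _ in range(TRIALS)
)
print(f"false 'discoveries': {hits / TRIALS:.1%}")
# Analytically: 1 - (1 - 0.05)**20 ≈ 64.2%
```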
jon:Yeah, I think that is explicitly what p-hacking is, but I've heard people use it almost to mean just being disingenuous with data. I feel like the term has taken on a more abstract meaning, because I've also heard it used to describe a lot of what sociological science does, where they take a specific demographic and run an experiment, and it's like: oh, this thing worked really well in this very specific demographic. Well, that's clearly not that useful to the entire human race.
matt_ch_17_18:Right, but they try to spin it so that it's this massive discovery.
jon:Exactly. Yeah.
matt_ch_17_18:But anyway, getting back to sports: I think the idea is, if you're just doing it by yourself, you could try 10,000 times and tell people about the one time it actually worked. But anyway, getting back to the chapter,
jon:Yeah, there was an interesting part. I love whenever anything brings up Ramanujan. I feel like Ramanujan is like a meme, where whenever he's mentioned, it's always in the context of someone talking to him about a thing, like, oh, I was in a taxi and the number was whatever, and then Ramanujan will immediately give this extremely in-depth intuition slash insight into this math thing that he just comes up with on the spot. Anyway, he gives an example of this in this chapter. Evidently there was this problem about billeted officers, and you had to solve this endless continued fraction. They didn't give enough detail for me to determine what the actual problem was, but it's an anecdote where Ramanujan came up with this amazing solution to the problem and really impressed another mathematician. But I'm really rambling here. The whole idea was to describe how the way Ramanujan thought was, at least according to observers who knew him well, not different from how other mathematicians think. It's not like he was using some other set of rules in his brain to come up with new mathematical insight or to perform computations. He may have been much better than other mathematicians at those things, but he wasn't using novel processes. And I guess the whole point is that there's an isomorphism between him and other mathematicians. The argument is that we're all using the same underlying set of formal rules at the lowest level, but sometimes those ladder up to different, more amazing higher-level abilities.
matt_ch_17_18:Yeah, this is interesting, because if you're achieving a different result, surely something must be different. Is he just saying they're using the same system? Because those feel like two different things.
jon:Yeah, I think he's saying they're using the same system. He mentions that there's no mystical thought process behind the crazy, astounding results Ramanujan had, results that can't be put down to fluke. Meaning he was using the same set of underlying reasoning tools as every other mathematician. He was finding new insights and doing crazy stuff, so there was clearly something special he was doing, but it wasn't as though he had some unique tool chest.
matt_ch_17_18:One point that's unclear to me: they talk about Turing, and I don't know if it's in this chapter that they get into the Turing test. Maybe I'm just blending it all together.
jon:Yeah, I think that might have been the next chapter.
matt_ch_17_18:Yeah. So in the next chapter they talk about the Turing test. Turing himself referred to it as the imitation game: if a human isn't able to determine whether he's talking with a human or a machine, then that machine is intelligent. Or at least, that's one test.
jon:They passed the test.
matt_ch_17_18:What it sounds like Hofstadter is saying is that those two things are isomorphic because they behave in the exact same way. But he's also attempting to argue that the actual underlying processes themselves are isomorphic as well. For math, that feels like, okay, fair: you're just using numerical operations, we're all using the same language to operate in. But for language, that seems like a harder case to make. Or maybe not.
jon:Well, yeah, and I think that's one of the more interesting themes in this book. He discusses strange loops all the time, where you have these low-level processes that ladder up to a higher level: consciousness, the sense of self, all these super high-level concepts. And I think maybe Hofstadter would argue that a computer, let's take a large language model of today, can seem human. Even though on some level we're performing biological chemistry processes and computers are performing electrical silicon processes, there is some level at which we're very similar, where the same abstractions apply. And I think that's part of what I was taking away here: in spite of Ramanujan's magic-seeming leaps and intuitions, on some level he's just like every other mathematician. And actually, I'm glad you started going into the next chapter, because I wanted to say a couple of things about it. Maybe we can transition to the next chapter.
matt_ch_17_18:Let me mention just one more thing, one point he raised that I thought was a good shower thought. He talks about beauty and whether or not it's intrinsic to the object itself. When you talk about a beautiful object or a beautiful person, it sounds like the beauty attaches to the person or the object themselves. But you kind of realize, and it's so trite, that beauty is in the eye of the beholder.
jon:yep.
matt_ch_17_18:That's really true. If a woman is beautiful in the forest and there's no one around to see her, is she actually beautiful?
matt_ch_17_18:No, I would argue there needs to be some other conscious being there to perceive the beauty in order for something to be beautiful.
jon:Yeah. There's another book called Zen and the Art of Motorcycle Maintenance, by Robert Pirsig, where he discusses beauty. He refers to it as quality, often, and he describes it as this culmination of deep skill, talent, symmetry. It felt like this thing that could almost be put into words. Even though beauty is this totally ineffable thing, there are elements to it. Like an artist: you look at the Sistine Chapel, and it's clear it required such deep skill, skill that can only be learned through a lifetime of doing this. And I think when you're seeing the beauty of the Sistine Chapel, you are seeing that skill. You're perceiving that only a lifetime of work and talent could yield something like this. I think that's at least part of it.
matt_ch_17_18:So it almost sounds like you're taking the other side of that argument: that there is something intrinsic to the object.
jon:I think so. When you're seeing a beautiful person, you're seeing hard work in the gym, or really good genes, or symmetry, or something. Although I don't think I'm fully taking the other side, because I do think there's definitely an element of beauty that can just arise out of nowhere, and it is very cultural. But I also think there's another interesting element to beauty that is very quantitative.
matt_ch_17_18:It's like, why is a sunrise beautiful?
jon:I think humans have an innate connection to nature and things in nature. I mean, without sounding silly, I'm definitely gonna sound silly here, but
matt_ch_17_18:silly it up.
jon:I think, you know, humans like rainbows, because rainbows are this crazy visual phenomenon. I was at Niagara Falls recently, and you can see all these circular rainbows, and it was just like, this is crazy, this is amazing. But yeah, it's just physics, right? It's just light passing through little droplets in the air.
matt_ch_17_18:Dude, I have a theory, which is that, from an evolutionary perspective, beauty is a way to encourage us to explore.
jon:oh,
matt_ch_17_18:I think we are rewarded for novelty, like we are rewarded when we see things we've never seen before. And that takes a form like: if you're at the top of a mountain, you're like, damn, I never see this, and this specific combination just sets my neurons going crazy.
jon:Yeah.
matt_ch_17_18:And that made you want to climb up to the top of that mountain, or get over that hill, or this, that, or the other thing. That's my current working theory for why we would develop that subjective reward for nice-looking things.
jon:Dude, I think you're hitting on something. I think this is why humans have achieved what we've achieved: we strive for beauty and quality. We find those things to have value, and things that aren't beautiful don't have value, so we move away from them and discard them. And I think that's been such an important part of our ability to progress.
matt_ch_17_18:Yeah. All right. I kind of halted your transition to chapter 18. Let's soldier on.
jon:No, that was a really good interlude. But yeah, chapter 18. The dialogue in this chapter was super interesting. He's talking about this thing called SHRDLU. Did I just skip over what SHRDLU means? I never figured it out.
matt_ch_17_18:No, he talks about it at the end of this chapter.
matt_ch_17_18:ETAOIN SHRDLU is a phrase that comes from typesetters, who would keep all of their letters in rough order of frequency. ETAOIN SHRDLU is roughly the order of letter frequency, and they kept the type in that order so it was easier and quicker to actually typeset something.
jon:nice. Hilarious.
matt_ch_17_18:But anyway, he took SHRDLU from the second part of that phrase.
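For the curious: ETAOIN SHRDLU is just the twelve most common letters of English in rough frequency order, which is how Linotype keyboards were laid out. You can approximately recover it from any large English text; a minimal sketch, where corpus.txt stands in for whatever text file you have handy:

```python
from collections import Counter

# Rank letters by frequency; on a big English corpus the top
# twelve come out close to "etaoinshrdlu".
text = open("corpus.txt", encoding="utf-8").read().lower()
counts = Counter(ch for ch in text if ch.isalpha())
print("".join(letter for letter, _ in counts.most_common(12)))
```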
jon:Okay. I had written down in my notes, "is this an acronym?", so I guess I never figured that out. But yeah, the dialogue was all about this. It was a computer program where you had this set of geometric blocks, like cones, cubes, spheres, and you could tell the program, move the sphere on top of the cube, and it would perform that action for you. And then you could query stuff about the state of the objects, and it would tell you if you were, you know, taking crazy pills. If you said, is the pyramid on the cube, and it wasn't on the cube, it would say no. Or if you asked it a nonsense question, it could sometimes tell that it was a nonsense question. And I was kind of laughing while reading this dialogue, because I was like, is this what was considered state-of-the-art AI at the time of this writing? It just seemed like such a quaint little utility. These days you can pass a hyper-detailed image to an AI, and it will describe it in a huge paragraph.
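The program Jon is describing is Terry Winograd's SHRDLU. The real thing parsed fairly rich English, but the flavor of the dialogue, commands, queries, and pushback on nonsense, can be caricatured in a few lines. This is our own toy illustration, not Winograd's code:

```python
# Toy blocks world: each object rests on the table or on another
# object; commands mutate the state, queries read it back.
on = {"cube": "table", "sphere": "table", "pyramid": "cube"}

def clear(obj):
    """True if nothing is resting on top of obj."""
    return all(support != obj for support in on.values())

def put(obj, dest):
    if obj not in on or (dest != "table" and dest not in on):
        print("I don't know what you mean.")   # nonsense command
    elif not clear(obj):
        print(f"I can't move the {obj}; something is on it.")
    elif dest != "table" and not clear(dest):
        print(f"There is already something on the {dest}.")
    else:
        on[obj] = dest
        print("OK.")

def is_on(obj, dest):
    print("Yes." if on.get(obj) == dest else "No.")

put("sphere", "cube")     # There is already something on the cube.
put("pyramid", "table")   # OK.
put("sphere", "cube")     # OK.
is_on("sphere", "cube")   # Yes.
is_on("pyramid", "cube")  # No.
```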
matt_ch_17_18:Dude, I cannot wait until we have house robots.
jon:Oh dude, it's gonna be huge.
matt_ch_17_18:I think we're right there. I would say by the end of 2026 there will be a reputable company from which you can buy a house robot that can fold the laundry, wash the dishes, tidy up.
jon:Yep. My theory, and I would buy robotics ETFs on it, is that the robot is going to become the next big purchase. People have a house, car, phone, TV, a couple of other appliances; the robot is gonna be right there under car.
matt_ch_17_18:It will just be the last appliance that you buy.
jon:Yeah, exactly. It's kind of the appliance to end all appliances.
matt_ch_17_18:It's like, first you buy the dishwasher, but once you have the dishwasher, if you have more disposable income, it's like, of course I'm gonna buy a robot that can load it so I don't need to. Although it's interesting, because do you even need a dishwasher?
jon:That's what I was gonna say. I think you actually buy the robot first, because the robot can just manually wash dishes, and you don't care if it takes it six hours.
matt_ch_17_18:Yeah, yeah. I have heard that dishwashers are more efficient than washing by hand, though.
jon:Yeah. But in any case, my takeaway from this dialogue was just how quaint it was. He starts the chapter with a brief description of Alan Turing, right before his description of the imitation game. And he mentions that Alan Turing died from, quote, an "accident with chemicals,"
matt_ch_17_18:Oh God,
jon:which I was just like, are you kidding me, Douglas? Alan Turing committed suicide because he was chemically castrated, right? Douglas makes no mention of Alan Turing being gay, or of him undergoing extreme, I mean, torture, basically, by the, by the US...
matt_ch_17_18:By the government. Well, it was the UK government, right?
jon:Yeah, by the UK government, after he had contributed so greatly to the UK and the war effort. This is one of the great tragedies of the modern age, Alan Turing. And not that I need every mention of Alan Turing to be this full story or whatnot, but I just felt like Douglas Hofstadter's description of Alan Turing was so, I don't know if whitewashed is the right term, but it was just very light reading.
matt_ch_17_18:It's almost like it would be better just not to reference his death at all, rather than... yeah. I mean, they do say, uh, some say suicide, but...
jon:Yeah, it just felt like a massive cop-out to me. Like he was specifically trying to avoid mentioning it.
matt_ch_17_18:There's such important context there that it's a dereliction of his responsibility as an author to leave it out, if you're gonna bring it up at all. And he was only 41, that's the other mind-blowing thing. This is really a crime against humanity, because you have one of the most absolutely genius people of all time...
jon:Yeah, working in the most important technology of our era, furthering our knowledge of...
matt_ch_17_18:He could still be alive today.
jon:Yeah.
matt_ch_17_18:Like, this book was only written in... wait, what was it, the... well, no, wait. No, he couldn't. He probably couldn't be alive today.
jon:Yeah, he would have been middle-aged around World War II, so yeah, probably dead by now.
matt_ch_17_18:Yeah, no, I was thinking he was 41 at the time this book was written, and that's not right. But anyway, if you've never seen The Imitation Game: I cried at that movie, because it talks about all that stuff and just the injustice of the whole thing.
jon:I just like Keira Knightley. I'll watch anything with Keira Knightley.
matt_ch_17_18:Oh, nice. I just remember being demolished by it. You know when you watch a movie and they have those "where are they now" title cards at the end, the ones that describe what actually happened after the contents of the movie? Dude, those are my kryptonite. Those always get me.
jon:Dude, have you ever seen Stand by Me?
matt_ch_17_18:No, no. But now it sounds like I'm gonna have a new way to just, I don't know,
jon:Dude. By the way, my dog is freaking out in the background, it's probably audible. But if those "where are they now" things are your kryptonite, you should check out Stand by Me.
matt_ch_17_18:Nice.
jon:It's also a great movie. It had an early, I wanna say, River Phoenix. I think Kiefer Sutherland might have played one of the kids; all these super famous actors basically play these little kids in the film. And it's based on a Stephen King book. A really good movie.
matt_ch_17_18:But anyway, so yeah, he did Turing dirty in his description.
jon:Yep. But anyway, this chapter felt to me like a description of state-of-the-art AI, so it felt extremely dated. He discussed chess AIs, and he gave kind of a discussion of the minimax algorithm, which...
matt_ch_17_18:I'm so glad you brought this up, 'cause I have minimax in my notes.
jon:Yeah. He described how minimax works; I don't think he ever named minimax.
matt_ch_17_18:It might not have even been referred to as minimax at that point.
jon:Yeah. But minimax was a massive discovery in AI. Well, discovery, I don't know if that's the right word for it. Essentially, it's the algorithm where you build a massive state space. In chess, you can only make so many legal moves from any given board state, so you build a huge tree out of every possible move you could make, and then you use a heuristic function to score those positions. Each player assumes the other player will always make their best move. That's why it's called minimax: on one layer of the tree you're minimizing, because if it's black's move, you want to pick the lowest-scored move, and if it's white's move, you want to pick the highest-scored move. This is chess, by the way; that's why I'm saying black and white. But anyway, that's why it's called minimax.
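Jon's description translates almost directly into code. On a toy game tree where leaves are heuristic board scores, minimax is a one-function recursion; a minimal sketch, with a made-up tree for illustration:

```python
# Plain minimax over a hand-built game tree: interior nodes are
# lists of child positions, leaves are heuristic scores.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    # Each side assumes the other plays its best reply: white
    # (maximizing) takes the max, black (minimizing) the min.
    return max(scores) if maximizing else min(scores)

# Two plies deep: white to move, then black replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, maximizing=True))    # -> 3
```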
matt_ch_17_18:Right. And the point he calls out with minimax is that some approaches literally just walk down the whole tree and try to find good end states without attempting to evaluate intermediate positions, and other ones just look right at the board and do static evaluations.
jon:Yes.
matt_ch_17_18:The problem with the first one is that in chess the explosion of possible future states is just too large to search in a reasonable amount of time. That's when you use that heuristic to prune the tree.
jon:Right. Yeah. And there are a ton of variations of minimax. I don't think any real AI uses plain minimax today. Something like Stockfish, which is probably the most advanced chess AI, uses something more like alpha-beta search, which is a version of minimax that performs significant pruning, like you were saying, based on that heuristic function. And the more advanced that heuristic function is, basically, the better your AI is. Although one issue is that pruning can prevent you from going deeper on certain subtrees, which can also be a problem.
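The pruning being described here is classically alpha-beta: track the best score each side is already guaranteed, and stop searching a branch as soon as it can't change the decision. Extending the minimax sketch above, same toy tree, same answer, fewer nodes visited:

```python
# Alpha-beta pruning: same result as plain minimax, but branches
# that provably cannot affect the root's choice are skipped.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            alpha = max(alpha, score)
        else:
            beta = min(beta, score)
        if beta <= alpha:
            break   # the opponent would never let play get here
    return alpha if maximizing else beta

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, maximizing=True))  # -> 3, skipping leaves 4 and 6
```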
matt_ch_17_18:Oh,
jon:There's a trade-off. But anyway, another thing he mentioned that I thought was interesting: this guy who wrote a tool that basically transitioned from one piece of music to another piece of music. It interpolated between the two, I guess.
matt_ch_17_18:Yeah, I didn't know at what level it did the interpolation. I'm assuming it wasn't just a fade; I'm sure even before that time they could just fade between pieces of music. I guess they did it at the note level, though.
jon:Yeah, it struck me as some sort of MIDI-style interpolation. And it was just a funny section, because he was like, it sounded like crap, but it was very interesting. One of his key questions is, who's actually composing the music? If you pick a slice out of the center of that thing, it doesn't sound like either of the pieces; it sounds like a wholly unique, bad piece of music. And his question is, who's composing that? Is the computer composing that?
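The book doesn't say how that program actually worked, but the naive note-level version is easy to sketch: pair up notes from the two pieces and blend pitch and duration as the transition progresses. Everything below, the melodies and the approach, is our own guess at an illustration (pitches are MIDI-style note numbers):

```python
# Naive note-level interpolation between two short melodies.
# Pitches are MIDI-style note numbers, durations are in beats.
piece_a = [(60, 1.0), (64, 1.0), (67, 2.0)]   # C, E, G
piece_b = [(62, 0.5), (65, 0.5), (69, 1.0)]   # D, F, A

def blend(a, b, t):
    """Melody partway from a to b: t=0 is pure a, t=1 is pure b."""
    return [(round(pa + t * (pb - pa)), da + t * (db - da))
            for (pa, da), (pb, db) in zip(a, b)]

# A slice from the middle of the transition sounds like neither
# source piece, which is exactly the authorship puzzle he raises.
print(blend(piece_a, piece_b, 0.5))
# -> [(61, 0.75), (64, 0.75), (68, 1.5)]
```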
matt_ch_17_18:Right, and this is so much more relevant today, because if you had to ask who wrote a piece of code today... You hear these estimates, like, 50% of code is gonna be written by machines, X, Y, Z. It's probably already there.
jon:Yeah,
matt_ch_17_18:Because when I'm coding, so much of it, especially all of the kind of ministerial stuff... It's like, okay, there's a function, there are very clear parameters in the local context, and I just need to construct a call to that function. That's the sort of thing that's just so easy for AI to do. Of course, I am constantly burned by it guessing wrong about which parameter should go where. I never chose which parameter should go into that function; it just picked something that shape-matched the function call. So of course I go back and it's not working, and I'm like, oh, shit, that's completely the wrong thing to pass in there. But anyway, the larger point is there's this question about authorship in that context, because so much of it was done by AI, but the human is also in the loop there. So it's really a collaboration.
jon:Yeah, no, totally. People have been talking a lot about vibe coding, and I almost wanna say the myth of vibe coding, because, you know, I've been programming my whole life. I would say AIs are extremely good at programming, almost insanely good, but they can't do the whole thing, not even close. It still requires human intervention to prevent mistakes, but also to do very structural things. And I think this will go away; I think very soon AIs will be able to do the whole thing. But there's a little ways to go before that happens.
matt_ch_17_18:The thing that has blown my mind, and I know we've talked about this offline, is that now you can have an AI take screenshots of a web app. You ask it to perform a task, and it can actually interact with the web app. That feels like the sort of thing where now it's closing the loop; it's able to validate its own work. So who knows. Something I need to try: if you have an existing simple app and you ask it to add some unit of functionality, can it do that? Well, we'll see.
jon:Yeah. And I don't know how much we want to discuss how capable AIs are at programming, but I think, not always, but for the relatively near future, there's going to be this effect where AI can get like 95% of the way there, but that additional 5% is insanely hard, because you have to interpret what the AI did and then do the human element. The AI can probably code up a perfect app where you click on buttons and very specific things happen. But making that app feel good: do you add animations? Do you make the flash on the button click take a certain amount of time? There's just going to be this kind of long tail of figuring stuff out.
matt_ch_17_18:Yeah.
jon:And I don't know, I'm a huge AI believer. I feel like I'm sounding like an AI hater, a skeptic, but I just see these things in the media that sound so sweeping, and having dealt with it on an almost daily basis, it's not there yet.
matt_ch_17_18:Yeah. So, all you programmers out there, don't get too worried that you're just gonna lose your job immediately.
jon:Yeah. You might lose half your pay
matt_ch_17_18:Yes,
jon:and you might get replaced by the guy sitting next to you who's better,
matt_ch_17_18:The vibe coder next door.
jon:exactly.
matt_ch_17_18:So one of the themes he talks a lot about in this chapter is creativity: when can a machine be said to have been creative? And I know you were talking about this, but now the question becomes, is the computer itself being creative, and to what extent is that evidence of intelligence? That got me thinking: have you heard of move 37 in the Lee Sedol Go games?
jon:no, I don't think so.
matt_ch_17_18:So this is a famous move. Lee Sedol was a famous Go player, like the top-ranked Go player at the time. This was in 2016. Google's DeepMind had AlphaGo, and they played five matches, and AlphaGo won four out of the five. But there was one move in the second game, move 37, where all of the commentators were like, no human would have ever thought to play that move. But it was beautiful. And apparently Lee Sedol took an unusually long time to respond. So we're really there, I mean, obviously with all the stuff that's happened since, but it feels like the kind of thing where there's a flash of intuition that...
jon:yeah.
matt_ch_17_18:we're seeing.
jon:Yeah. You hear about this all the time in chess. Commentators actually refer to them as human moves versus computer moves, where Stockfish will recommend these moves, and commentators will almost dismiss them. They'll be like, oh, that's a computer move; no human would ever come up with that. And it's usually a certain type of move. So in chess there's a concept of forcing moves, where for a long string of moves there's only one option. Like if your king is in check, you have to get your king out of check, so generally speaking there's a limited number of moves, often only one, that you can make. And so even though the tree can be really deep, if it's all forcing moves, then a very good chess player can see deep into the tree, because they're just calculating single moves. But there are also positions, called sharp positions, where there are a lot of different moves, very few of them good, and many of them, even though they may look good, actually really bad. And I feel like these computer moves arise out of those mysterious positions, where there are a lot of possible moves, a lot of them might look good to a human, but very deep in the tree some of those moves are proven wrong. Even the best, even Magnus Carlsen, can't calculate, you know, ten moves into the future if the set of moves is very wide. He can only calculate that deep if it's a lot of forcing moves.
matt_ch_17_18:In a way, those seem like the ideal moves to make. Say one move starts a chain of only-one-option moves, and another opens up this large expanse of available pitfall moves. It seems like that second one would be preferable.
jon:Oh, yeah. That's what a lot of players do; Mikhail Tal is famous for not necessarily making the best moves, but making moves that caused his opponent to be completely thrown off. And that's part of the reason I think chess is still very interesting to people: there is a strong psychological component. A lot of times, a move that's demonstrably worse according to a chess AI is actually the better move to play, because of how the human reacts to it.
matt_ch_17_18:Yes, yes. I was just looking up the games that have been quote-unquote solved, because now, I guess, chess would be considered solved, right? Or is it?
jon:I don't think it's solved in the sense of everything being fully calculated. There's a thing called a tablebase in chess: I think if there are only seven pieces on the board, every possible seven-piece combination has been solved. But in the early game there are 32 pieces on the board, so I don't think it's been quote-unquote solved. Every chess AI is like ten times better than any human, though, so it's solved in that regard: no human could possibly beat a chess AI.
matt_ch_17_18:Right. Okay. Ultimately, I was trying to find the frontier of games, and a lot of people were saying games where there's human interaction, games that rely on psychology, are the ones that still remain to be solved.
jon:Yeah,
matt_ch_17_18:But I think that's pretty much all I had. Did you have anything else you wanted to cover?
jon:Yeah, I had one more point, and I just wanna mention this because it really segues into the rest of the book. He mentions how a primary problem in AI is: how do you represent knowledge?
matt_ch_17_18:Yes.
jon:And he discusses a few different ways, at a high level. Is knowledge just a list of facts? Is it a list of relationships between things, like "high" being conceptually above "low," for instance? Often it's literally above low, but it can also be metaphorically above low. So he does a brief discussion of how we represent knowledge, how we represent the relationships between nodes of knowledge. And it's interesting; maybe we'll do this in the next episode, but it would be fun to compare and contrast it with how large language models actually represent knowledge.
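One classic pre-LLM answer to that question is the semantic network: knowledge stored as explicit (subject, relation, object) links you can walk. A minimal sketch of the idea, with made-up facts:

```python
# Knowledge as explicit (subject, relation, object) triples.
facts = [
    ("high", "opposite_of", "low"),
    ("high", "above", "low"),
    ("penguin", "is_a", "bird"),
    ("bird", "can", "fly"),
]

def related(subject, relation):
    """Everything linked to subject by relation."""
    return [o for s, r, o in facts if s == subject and r == relation]

print(related("high", "above"))    # ['low']

# A naive inheritance rule shows the brittleness immediately:
# penguin is_a bird and bird can fly, so this concludes penguins
# fly; plain fact lists have no clean way to express exceptions.
def can(subject, ability):
    if ability in related(subject, "can"):
        return True
    return any(can(parent, ability) for parent in related(subject, "is_a"))

print(can("penguin", "fly"))       # True (wrongly!)
```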
matt_ch_17_18:Exactly.
jon:Because, again, Douglas Hofstadter makes these very prescient predictions. Sometimes he sounds like he's making cave paintings, but other times he sounds like he's from the 28th century. It's kind of wild. But anyway, that's to come.
matt_ch_17_18:Yeah. I mean, this is still a big problem today. But the one thing I'll say, and I don't want to get off onto a big tangent, is that this is a problem for humans too, right? Sometimes it sounds like people are, in my opinion, holding AI to a higher standard than you would hold a human to. You want this thing to be completely infallible, and it's not clear to me why we would expect the AI to arrive at a purer version of the truth than a human would.
jon:Yeah, no, that's a good point. I see that a lot as well, and it cracks me up. AIs are just so utterly advanced, we are already living science fiction, and everyone's like, meh, DeepSeek is better. It's just funny, all this quibbling, when we're dealing with such extraordinary things.
matt_ch_17_18:Yeah. People have so quickly started to take it for granted, and it's like, well, what have you done for me lately?
jon:Yep. So,
matt_ch_17_18:All right, let's leave it there. This is a fascinating topic, but we'll have to explore it in the next episode, when we cover chapters 19 and 20.
jon:Heck yeah.
matt_ch_17_18:All right. See you next time.
jon:See you, Matt.