Switch Statement
085: Gödel, Escher, Bach - Ch. 19 - AI Predictions of Futures Past
Hello everyone, and welcome to the Switch Statement podcast. It's a podcast for investigations into miscellaneous tech topics.
Jon:This is our 22nd episode on Gödel, Escher, Bach by Douglas Hofstadter.
Matthew:Oh, hey Jon. How are you doing?
Jon:I am doing really good. I don't know, I just had a nice couple of weeks, been relaxing, been, uh, working on random things here and there.
Matthew:What do you do to relax?
Jon:I like to cook, I like to run. I've been playing my piano a little more, which is good because, I don't know, I've just been struggling lately to do that. So it's nice to, like, set aside time to do it. Uh, but yeah, I don't know, that's how I relax.
Matthew:Working on any particular pieces right now?
Jon:Well, there's a piece that I've always wanted to learn: the Chopin Ballade No. 1. I think it's like one of the greatest solo piano pieces, but it's very difficult. So it's, like, a little bit outside of my ability, but that's good. It's something to shoot for.
Matthew:That's the way to do it, I feel like. Yeah, you gotta expand the domain of possibilities.
Jon:What about you? How do you relax?
Matthew:How do I relax? You know, I avoid it if at all possible.
Jon:You don't seem to, you're busy.
Matthew:It is interesting, because I think I do the same things when I am stressed and when I relax. But the distinguishing factor is, like, I don't have a deadline that's nearby. Same things: I like coding, and I will do that, and that feels like rejuvenation. It rejuvenates
Jon:rejuvenative.
Matthew:Rejuvenative. Yes, exactly.
Jon:You have just made that word up.
Matthew:Last time we were talking about the history of, well, the past history of AI,
Jon:right. ancient history. Yeah.
Matthew:and now we're talking about the future history of AI,
Jon:Yes,
Matthew:the former
Jon:former future, what was thought to be the future of AI in the year 1978, which is simultaneously hilarious, but also amazing, just what he did sort of predict.
Matthew:This is the thing, like if you were gonna talk to anyone, if anyone was gonna come up with anything approximating a correct answer about this, it would be Douglas Hofstadter. So, like, the fact that he gets some of it wrong is not, you know, I don't think we should ding him too much for that.
Jon:Yeah, although we will talk about them, 'cause some of them were just utterly hilarious. Well, I thought they were hilarious; they're really not that funny. Uh, but another thing I wanted to mention about this book, and I feel like now is a good time to mention it: we had a great conversation with a friend of the podcast recently, I think that was the last episode that went out, and we sort of discussed some critical aspects of Douglas Hofstadter. Like, he did get a lot of things wrong. That whole biology chapter on DNA was basically like a big hand wave, and it turned out to not be accurate at all, and, you know, someone published a white paper completely debunking it. Which I think is good, that people sort of critically examine these ideas and tear them down where possible. But I also love that Douglas Hofstadter is just kind of wildly tossing out ideas in this book. And some of them are just completely brilliant.
Matthew:Well, this book is just a series of these, like, enormous ideas. It's just one after another. He will just have a section where he introduces this unbelievably vast construct and then just be like, well, that's a topic for another time.
Jon:Yeah.
Matthew:Uh, and.
Jon:Well, right, and actually, that segues into exactly the point I wanted to make, which is: I think this book was probably very inspiring to a lot of people. I'm inspired by this book, and it's 2025, like 50 years after he wrote this book. And I just wonder how much of today's AI advancement, like engineers working in AI, uh, you know, scientists being interested in AI, I wonder how much of that was at least partially inspired by this book. And I would venture to say.
Matthew:a ton
Jon:Yeah, a ton of it. So anyway, this is just me praising Douglas Hofstadter in spite of his obvious and many flaws.
Matthew:One of my takeaways, I'm glad you're bringing this up, because one of my takeaways while reading this book is I wanna be the kind of person who discovers Magritte's The Human Condition or what have you. You know, my primary takeaway is: Douglas Hofstadter just did, like, a ton of reading and looking around and consuming, and he's, I don't know, he's an idea collector. Um,
Jon:multi-disciplinarian.
Matthew:and this book is all about, like, sharing all of those ideas with the world.
Jon:Uh, it's a testament, and this really goes to your point, but it's just a testament to how the more well-rounded you are, the more ability you have to sort of come up with ideas that lie at the intersection of multiple disciplines. Like, a lot of these ideas that he's coming up with have been informed by his study of art and the hard sciences, you know, and I feel like without the confluence of those two things in his brain, these ideas might not exist.
Matthew:I always go back to Jesse Schell's comment about this really advanced juggler, do you remember?
Jon:Yes. Yeah, it's a good part.
Matthew:People were like, how do you juggle so well? Like, who are your juggling influences? And he's like, I don't look to other jugglers to become a better juggler. Like, I look to a swan, or I go to the ballet, you know, and I derive inspiration from all these sources. And that's how you become this much more varied and complex and nuanced juggler and human, I think.
Jon:Yes. And I also think this bears repeating, because I always felt a pressure to be as specialized as possible in my life, and, you know, I'm old now so I can be a little reflective. Obviously specialization is very, very valuable. You know, you have some expertise that you're just very good at, and it's valuable, whatever. But I also think: don't ignore being well-rounded. Don't ignore learning about the humanities if you're interested in science like I was, um, because it will make you a better person, and I think it will make you better at whatever you choose to specialize in as well.
Matthew:So I know we introduced this, uh, this is about prospects of AI. He spends a long time talking about these Bongard, what do you think of, uh, these Bongard problems?
Jon:I thought this was very interesting and, you know, somewhat dated, uh, because basically these Bongard problems, and I don't even know if I'm gonna describe this super accurately, but it's almost like a series of images.
Matthew:This is the latest chapter in our, well, it's a terrible topic for a podcast 'cause these are highly, highly visual problems.
Jon:Yeah, they're highly visual. It's like these cards with, you know, drawings on them, and some of the drawings will be, like, you know, three squares, and some of them will be like a big squiggly line. And I guess the issue is attempting to, like, discuss similarities and differences between these Bongard cards. Um, and sort of a discussion of, like, how might an AI discover differences between these things?
Matthew:Right.
Jon:Because, you know, for a human it's fairly simple. You might look at something and you might say, like, oh, everything has rounded edges, even though the images are quite different. You know, you could have an image of, like, a silhouette of a person, and obviously that's a lot of curves, and then an image of, like, circles. And one commonality you can draw from that is that there's a lot of curved edges, there's not any straight lines. Whereas a computer would potentially have extreme difficulty determining that. I think that's sort of the point of these Bongard problems: to gain a better understanding of, like, how do computers differentiate between these things that are visually similar in some ways but visually very different in other ways.
Matthew:Right. He uses this term, he had this phrase, which was: these are tiny science. Like, you do science in this tiny little context. Because it's exactly true, where it's like, you have all this data, which is this raw, like, pixel data essentially. But then you basically have to come up with a hypothesis. You look at a couple of these images, 'cause I think there's like six images on each side, um, and you kind of have to be like, okay, well, I looked at two of them, so now maybe it's this relationship. Then you have to look at the other side to make sure that they don't have that attribute, and then confirm with all the other ones. Which, you know, it's just another way to think about what the act of science is. And actually I've never really thought about it this way, but it's like an AI is kind of doing science all the time, I guess. Right? Like, just with its visual and auditory input.
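Here's a minimal sketch of that tiny-science loop; the cards and candidate hypotheses below are invented for illustration, but the shape of the algorithm is the one Matthew describes: propose a rule, then check that it holds for every card on one side and fails for every card on the other.

```python
# Hypothetical Bongard-style cards: each card is a list of shapes.
left = [
    [{"kind": "circle"}, {"kind": "blob"}],
    [{"kind": "ellipse"}],
    [{"kind": "squiggle"}, {"kind": "circle"}],
]
right = [
    [{"kind": "square"}, {"kind": "triangle"}],
    [{"kind": "rectangle"}],
    [{"kind": "triangle"}, {"kind": "square"}],
]

CURVED = {"circle", "ellipse", "blob", "squiggle"}

# Candidate hypotheses ("frames"): each maps a card to True/False.
hypotheses = {
    "all shapes are curved": lambda card: all(s["kind"] in CURVED for s in card),
    "exactly one shape": lambda card: len(card) == 1,
}

# The tiny-science loop: keep a hypothesis only if it separates the two sides.
for name, holds in hypotheses.items():
    if all(holds(c) for c in left) and not any(holds(c) for c in right):
        print("distinguishing rule:", name)  # -> all shapes are curved
```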
Jon:Yeah. He mentions, I think this was a phrase from the book, representing knowledge with nested frames, which is kind of like, it's an idea we've discussed a million times, where there's so many different abstraction layers through which you can look at a problem. Um, like the example I like to give is that famous one: you're firing a cannon off of a rooftop, and, you know, how do you calculate where it's positioned after n seconds? And there's the famous formula, you know, like negative 4.9 t squared, plus vertical velocity times t, plus some constant, the height of the building. But you could also use general relativity to solve that problem. And those are basically two drastically different abstraction layers to look at that through. And he applies the same concept to Bongard problems, where, you know, you can look at things like: count the number of things on each card. And that's one frame through which to look at this. Or you could do what I was saying earlier, like, look at the roundedness of the shapes. And these are just all kind of different frames which nest together. And it's almost like the summation of all of that is sort of what happens in a human brain; it's sort of like using all of these nested frames at once, sort of instantaneously. And it's also similar, I think, to what happens in a neural network, like in a large language model.
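For reference, the formula Jon is reciting is just h(t) = h0 + v0*t - 4.9*t^2, the Newtonian frame for the cannon problem. A quick sketch, where the building height and launch velocity are made-up numbers:

```python
def height(t: float, h0: float = 30.0, v0: float = 15.0) -> float:
    """Height in meters after t seconds: h0 + v0*t - 4.9*t**2.
    h0 (building height) and v0 (vertical launch velocity) are made up."""
    return h0 + v0 * t - 4.9 * t ** 2

for t in (0, 1, 2, 3):
    print(t, "s ->", round(height(t), 1), "m")  # 30.0, 40.1, 40.4, 30.9
```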
Matthew:Right. Well, and he talks about this as well with, I think he used two terms, I think it's like focusing and filtering. At every layer you have to do this discarding of irrelevant information and, like, a heightened focus on specific things.
Jon:Yeah.
Matthew:And you know, we probably talked about this before, but this dovetails, like, very closely with attention, right? That's this large language model mechanism that is kind of backing all of these latest advancements
Jon:advancements in AI.
Matthew:And I think, my understanding with attention is, it actually learns, like, part of what it learns is what to focus on in a particular context. When it's being trained, it's like, okay, well, in this context, these are the relevant things I should be looking at.
Jon:Right, and it gets better.
Matthew:mechanism. Yeah.
Jon:It becomes a skill, basically, that it has: to, like, know what to focus on, even in a completely novel sentence.
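A bare-bones sketch of the scaled dot-product attention they're gesturing at: every token scores every other token, the scores are softmaxed into weights, and the output is a weighted mix of the values. Real models learn query/key/value projections during training; the random vectors here are just stand-ins.

```python
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # how much each token attends to each other token
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                     # weighted mix of the value vectors

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))           # 4 tokens, 8-dim embeddings (stand-ins)
print(attention(tokens, tokens, tokens).shape)  # self-attention -> (4, 8)
```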
Matthew:Which, this kind of goes to, he talks about, like, meta-descriptions. It feels like this is kind of that meta move, where it's like, okay, first you're operating at this lower level, which is, like, the words. But there's actually this meta skill, which is, like, the relationships between the words. And that is, you have to set up your system in such a way that it learns to do that, not just accept the words directly.
Jon:And I'm sure GPT users have had this experience where you ask it a question, but you ask it really, really poorly, and then it answers the actual question you were trying to ask. And that's an even more meta thing: sort of reading between the lines and, like, discerning what the user actually wants. And I'm super impressed by this when large language models get that right.
Matthew:Yeah, really early on it was funny, because someone who was really into it was like: you don't even need to worry about writing your prompts particularly well. Like, it's just so good at understanding what you mean. Because I have this thing where I'm just like, I will wordsmith even when I'm asking, uh, ChatGPT a question. But it's like, I don't need to do that. It's gonna figure it out. I could probably write it in completely broken English and it would still be able to figure out what I mean.
Jon:It's hilarious, 'cause when AI, you know, first, when GPT-3.5 was released, like, I joined an AI team at the company I worked for, and I just remember all of us being obsessed with prompt engineering. Like, we were all writing these, like you're saying, just multi-paragraph prompts. We were sharing them with each other. We were saying, like, oh, if you yell at it, it will do better, or if you threaten it, it will yield better results. And looking back on that, that was just ridiculous. I mean, don't get me wrong, there's a lot of prompt engineering still going on, but just for a lot of the use cases that we were doing, like, you could just, you know, word vomit, and these latest models are gonna get it right every single time.
Matthew:It's kind of hilarious to think that prompt engineering is the newest profession created by AI, but also the
Jon:Yeah.
Matthew:the first profession to be replaced by AI. I guess it's fitting.
Jon:Yeah, I think I said the same phrase before, but I had written a note to myself about this chapter: this is like cave paintings of AI. Like, he mentions at one point, words having spatial relationships with one another. You know, a simple one being, like, the word "high" is above the word "below"; things that are high are generally higher than things that are labeled with "below". And this is so similar to a concept called an embedding space, where in large language models things, you know, literally have an embedding associated with them, which is this vector of, like, a billion floating point numbers. And there are literal spatial relationships between these concepts. Like, if you look at the word "high" in an embedding space, and then look at the word "below" in an embedding space, they will have a spatial relationship with one another. And if you apply that same spatial relationship to a different concept, you can make that concept higher or lower, which is very strange. I don't know if I'm explaining that very well, but I just thought that section was interesting, 'cause again, he's sort of touching on this idea that today is very important, and, you know, it's something that's an active area of research.
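A toy version of the spatial relationship Jon is describing: the offset between two word vectors is a direction you can add to a third concept to shift it along that axis (the same trick, at much higher dimension, is behind the style-direction idea that comes up below). These 3-D vectors are invented for illustration; real embeddings come from a trained model and have hundreds or thousands of dimensions.

```python
import numpy as np

# Made-up 3-D "embeddings"; real ones come from a trained model.
vec = {
    "high":    np.array([0.9, 0.1, 0.0]),
    "low":     np.array([0.1, 0.1, 0.0]),
    "ceiling": np.array([0.8, 0.7, 0.2]),
    "floor":   np.array([0.0, 0.7, 0.2]),
}

up = vec["high"] - vec["low"]   # the "height" direction in this toy space

# Applying that direction to "floor" lands nearest to "ceiling".
shifted = vec["floor"] + up
nearest = min(vec, key=lambda w: np.linalg.norm(vec[w] - shifted))
print(nearest)  # -> ceiling
```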
Matthew:Yeah. And embeddings are the kind of thing where, until you have one and have, like, poked at one, it's, I feel like, very hard to wrap your mind around what that means. Not to mention that it's also confusing because it's got more than three dimensions,
Jon:Yeah,
Matthew:which is like obviously immediately making it somewhat inapproachable.
Jon:Yeah. I feel as though embeddings are one of those things where we still haven't even scratched the surface. Like, they're so powerful. You know, there's so much you can do with these relationships between concepts, but to your point, they're just hard to reason about, and as an engineer, it's hard to understand what's going on, and there's just not a lot of ergonomic tools for working with them. I mean, like, there was a very recent thing that happened where, uh, OpenAI released its new version of DALL-E and everyone was creating Studio Ghibli pictures
Matthew:Yes.
Jon:and all that is, well, I shouldn't say all that is, that's definitely removing a lot of nuance, but a lot of what that is is applying, like, a Ghibli embedding to some other image. You know, you give it an image of a pot, and you sort of subject that pot to this Ghibli directional embedding, and boom, you have a Studio Ghibli pot.
Matthew:Yeah, no, it's incredible. If listeners have not worked with embeddings before: I went through an AI bootcamp, and they started off by showing movies, and it was a one-dimensional embedding, on a spectrum between horror and comedy, you know. So basically you could imagine giving every movie a score from zero to one to determine, like, zero is the most extreme horror movie, and one is, like, pure comedy. So you could conceive of, like, okay, well, if you make a new movie, it's gonna be somewhere in the middle; you can place it somewhere on this line. Or you could create a new point on that line, and you could conceive of, like, oh, it's something that's a perfect split between a comedy and a horror movie.
Jon:Uh, I was just saying yes. I think this is a really powerful idea.
Matthew:Right. So, but that's kind of a way to think about it on one line, where it's like, yeah, okay, I'm trying to come up with a movie that's, like, halfway between horror and comedy. There was that movie with Willem Dafoe, and, uh, it just came out a little while ago. Anyway, uh,
Jon:Like Pretty, uh, Pretty Little Things or something? Pretty Things. It's interesting, because in a lot of ways that is between comedy and horror, but it's also, like, very extreme on other axes. Which is another strength of multidimensionality: everything has, you know, thousands of axes that it can be imagined on, and multidimensionality allows you to achieve that.
Matthew:Right, exactly. Because you would wanna have that off in some random corner of the embedding space by itself.
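Matthew's bootcamp example as a sketch: one score per movie on a horror-to-comedy axis, plus a second axis to show why the weird outliers push you toward more dimensions. All the scores, and the stand-in label for the film they're reaching for, are invented.

```python
# One-dimensional embedding: 0.0 = pure horror, 1.0 = pure comedy (scores invented).
movies_1d = {"The Shining": 0.05, "Shaun of the Dead": 0.55, "Airplane!": 0.95}

# A "perfect split" between the two extremes is just the midpoint on the line.
midpoint = (movies_1d["The Shining"] + movies_1d["Airplane!"]) / 2
print(midpoint)  # 0.5, roughly where Shaun of the Dead already sits

# Add a second axis (say, surrealism) and an outlier can sit mid-line on
# horror-vs-comedy while being extreme somewhere else entirely.
movies_2d = {
    "The Shining":       (0.05, 0.30),
    "Shaun of the Dead": (0.55, 0.20),
    "Weird Dafoe Movie": (0.45, 0.95),  # hypothetical stand-in for the film they mean
}
```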
Jon:Yeah, vaguely pornographic: maximize the vaguely pornographic axis. Anyway, there were a bunch of other concepts he discussed in this chapter. Like, he was basically predicting a lot of these concepts that are really important today. He talked about message passing: having individual, he called them actors, that have their own sort of idiosyncrasies, their own abilities, and passing messages between those actors. This is agentic AI. He's basically describing agentic AI, which is one of the key principles of modern AI development. Um, and he's also discussing a lot of other things, like tool use. This message passing is kind of like, uh, there's a buzzword going around today, MCP, the Model Context Protocol. And so this is like a high-level interface that enables models with completely disparate abilities to talk to one another. It's sort of like a contract; it's like an API, basically. And so anyway, once again, this is Douglas Hofstadter basically predicting, or, you know, stating that this concept is gonna be important, and here we are: everyone's funneling billions of dollars into this very concept.
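A minimal sketch of the actor-style message passing Jon connects to agentic AI: each actor has one narrow ability behind a shared handle() contract, so a router can pass messages between them without knowing their internals. The names and the Message shape are invented for illustration; this isn't the actual MCP spec or any real framework.

```python
from dataclasses import dataclass

@dataclass
class Message:
    kind: str
    payload: str

class CalculatorActor:
    """One idiosyncratic ability: arithmetic on a payload expression."""
    def handle(self, msg: Message) -> Message:
        # eval is fine for a toy; a real system would parse the expression.
        return Message("result", str(eval(msg.payload, {"__builtins__": {}})))

class ShoutActor:
    """A different ability behind the same contract."""
    def handle(self, msg: Message) -> Message:
        return Message("result", msg.payload.upper())

# The shared contract is the point: the router only knows about handle().
actors = {"math": CalculatorActor(), "shout": ShoutActor()}

def route(msg: Message) -> Message:
    return actors[msg.kind].handle(msg)

print(route(Message("math", "2 + 2")).payload)   # -> 4
print(route(Message("shout", "hello")).payload)  # -> HELLO
```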
Matthew:That whole section has this really powerful idea, which is, like: you need to consider who's receiving a message before you can really understand what it means. You know, because if you have a message written in French and you send it to me, I'm gonna do something very different with it than someone who is able to read French.
Jon:Yeah.
Matthew:Outside of the context of the actor, like you're saying, like, knowing who's gonna get it, you can't really know what the message means. And what does he call this, um, a symbol? Which, I don't love that. Like, I feel like that doesn't really capture the,
Jon:Yeah.
Matthew:the, like, dynamic-ness of what he's describing
Jon:Right. Yeah. Symbol seems like a discrete, singular, like, atom. Uh, man, I'm using even worse words now.
Matthew:Oh, like a very, like, very tight concept.
Jon:Yeah, exactly. Whereas this is almost like you're passing like a cloud of concepts.
Matthew:Yes. Yeah, exactly. Should we get to these speculations?
Jon:Sure. Yeah, we can definitely get to the speculations. The first one I had written, uh, I had written "the AIs can't add", I think. And yeah, unfortunately, I'm forgetting the context of why I wrote this, but I think maybe he had said something along the lines of, like, AIs will be extremely good at highly analytical tasks. And I just wonder what Douglas Hofstadter would think about the fact that AIs have trouble doing basic arithmetic. I had also written: he said something along the lines of, like, AIs will never solve chess.
Matthew:That one threw me. I think the message was more like, we're not gonna have something that can beat anyone at chess until we have AGI, basically, uh,
Jon:Which, he wasn't even close on that one. I mean, Kasparov.
Matthew:whiffed on that.
Jon:Yeah. Although it's interesting, 'cause I had to look up when the Kasparov Deep Blue thing happened, which is basically the first time a computer beat, like, a grandmaster. You know, Garry Kasparov, one of the best chess players to ever live. This was in the mid-nineties, so I guess he, you know, he was right for about 15 years.
Matthew:Yeah.
Jon:Uh, but now we have AIs that are just so incredible at chess, like, no human would ever come close to beating them. Yeah. But those were the only two notes I took.
Matthew:So yeah, in terms of the other predictions he made, there were two other ones that I thought were worth mentioning. Will a computer program ever write beautiful music? This was one where, I don't know, it's funny, because I think there's kind of split opinions about this. I think there's a contingent of people who are in the camp that there's something essential about human experience that needs to go into music in order for it to be beautiful, and it actually can't be beautiful without a human having intention behind the music. I'm sympathetic to that view. I don't hold it, but I can see where they're coming from.
Jon:So I have always believed that music is sort of a sacred thing and that it is a part of human expression. But I also think that there's different types of music, you know. I think the same thing about film, where it's like, sometimes you want a film that's just utterly formulaic, you know, it doesn't really do anything new, Gerard Butler shoots a bunch of people, and that's great. But other times you want something that's more like a personal experience, or just, like, a personal vision that's completely original. And yeah, the same thing exists in music, and so I kind of agree and disagree with the purists.
Matthew:Yeah, no, that makes sense. The last one, just super quickly: I thought it was hilarious that he said that people would never explicitly program emotions into the machines. He would probably not expect that people are using AI for their girlfriends, uh, and boyfriends in 2025.
Jon:Yeah. Yeah, no, that one was very amusing.
Matthew:All right. I think that's all I got. Do you have anything else you wanted to mention?
Jon:That was all I had. I mean, another good chapter: hilarious discussions on the state of AI at the time, predictions, some of which were very prescient, others of which were hilarious. But yeah, just a fun read.
Matthew:All right. Well, I will see you next time for chapter 20.
Jon:See you next time, Matt.