Switch Statement

086: Gödel, Escher, Bach - Ch. 20 - The End of the GEB Era

Jon Bedard
Matt:

Hello everyone, and welcome to the Switch Statement podcast. It's a podcast for investigations into miscellaneous tech topics.

Jon:

This is episode 23 on Gödel, Escher, Bach by Douglas Hofstadter.

Matt:

Hey Jon. How are you doing?

Jon:

Hello Matt. I'm doing really well. How are you doing?

Matt:

I am doing all right. I'm on location, I'm at St. John's Law Library.

Jon:

Yep. Working through finals.

Matt:

Working through finals, yeah. I gotta do crim.

Jon:

crim.

Matt:

Yeah, that, um, that's short for criminal law.

Jon:

Nice. I thought it was like a new form of cramming, like before a final, but

Matt:

Yeah. Well, I am crim cramming right now.

Jon:

Nice.

Matt:

But today... I know we've threatened to finish this book many, many times before, but today we're gonna deliver on that threat: Chapter 20.

Jon:

Yep, Chapter 20, and a brief dialogue afterwards.

Matt:

Yeah, you're gonna be, as usual, the dialogue expert.

Jon:

I felt like I had to read the final dialogue just 'cause it's literally the final thing in the book.

Matt:

I felt like I had to read the final dialogue, and then I successfully resisted that urge.

Jon:

Nice, strong will.

Matt:

So this chapter is called Strange Loops, or Tangled Hierarchies, which, I mean, could be the title of the whole book as far as I'm concerned.

Jon:

Yeah. This is basically the theme; the name of this chapter is the theme of the book. And it starts with a dialogue called Sloth Canon, which is a really cool Bach piece, just wanted to point that out. I searched out this piece and listened to it, and it was awesome.

Matt:

Now, it's interesting because he has some sort of transposition here, like the title has been flipped upside down or something. Do you know why he's doing that?

Jon:

I don't know.

Matt:

Is it a mirror, like, up and down? Like if you flip the piece of music upside down, is it exactly the same?

Jon:

Oh, that's a good question. I don't know. Or maybe the structure of the music kind of inverts in the middle or something. It feels like a very Douglas Hofstadter thing, but I just listened to it and it didn't sound like it inverted. But maybe I'm just deaf.

Matt:

No, no. I mean, it sounds like the thing you'd need to listen to many times, or actually just flip it. You might even need to do it with MIDI, you know? It's not a transformation you could do on the, what is it, the frequency data; it's an operation you'd perform on the, the

Jon:

the MIDI data.

Matt:

Yeah, the MIDI, like the sheet music, and then reinterpret that. And it's exactly the same.

Jon:

Yeah, you'd need to reflect the notes vertically on the staff, like around middle C, or
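
To make that concrete, here's a minimal sketch of that kind of inversion on raw MIDI note numbers, reflecting every pitch around middle C (MIDI note 60). The note list and the middle-C axis are just assumptions for illustration; the book doesn't say exactly how the canon's inversion works.

```python
MIDDLE_C = 60  # MIDI note number for middle C

def invert_around_middle_c(notes):
    """Reflect each pitch around middle C, clamped to the valid MIDI range 0-127."""
    return [max(0, min(127, 2 * MIDDLE_C - n)) for n in notes]

melody = [60, 62, 64, 65, 67]          # C, D, E, F, G: an ascending line
print(invert_around_middle_c(melody))  # [60, 58, 56, 55, 53]: C, Bb, Ab, G, F, descending
```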

Matt:

Yes, yes, exactly. I think that's what it is. But yeah, so what is this piece of music like? Is it, uh, I don't know.

Jon:

I mean, it's hard to describe. It's like many Bach pieces: it's convoluted in the very best way. I feel like usually convoluted is a negative term, but it's just this very intricate piece.

Matt:

We can, I mean, presumably we can put a sample in there, right?

Jon:

We should have been doing that, 'cause one of the great things about this book is it discusses these amazing creators. You know, Bach is literally the greatest musician ever, and Escher is this amazing artist, and there are tons of his works in this book that we have not been able to do justice to at all.

Matt:

Nor, in a podcast, do we have any way to, but,

Jon:

Yeah, exactly.

Matt:

Maybe we do a video at some point.

Jon:

Yeah. But yeah, it's just a cool way to find great music, great works of art; one of the many things this book offers.

Matt:

Truly. And I know we always say this about all of the books that we read, primarily to make it clear that we're not a substitute and should bear no liability for reproducing the work, but you should go read the book. There's just so much more: so many references to really cool works of art, of course, which we can't reproduce, but also so much stuff we just didn't even talk about at all.

Jon:

Oh yeah. I've thought about this a lot with this book, where there are a lot of things I feel like I understand, but then there comes a time when I'm trying to describe one of them to a friend and I just can't. It's weird. It's like the information is locked inside my brain and I just lack the ability to express it.

Matt:

This is very real. I'm experiencing this with finals: you attend all these lectures, and while the professor is talking you're like, I understand every single bit of what is going on right now, I'm getting all of this. But then when you go to write down a summary of it on the final, you realize there's actually a tremendous amount more processing you need to do to click these words together, because you're like, wait a minute, that series of words isn't exactly what's in my head. And I think what's happening is you probably do understand it, but it's a translation process from this abstract idea into this textual representation, and that takes a bunch of time.

Jon:

Right. It's also a skill, and this is why, you know, I've always been a practitioner in my life. I'll describe things to, like, junior engineers on a team, but I just have not spent a lot of time taking what I do and converting that into spoken language. And I think, I don't know, this is why communication as a skill is so valuable, 'cause you can transmit information much better.

Matt:

And honestly, this is one of the benefits I see of doing this podcast. I mean, obviously we love every one of our 20 recurring listeners, but it's also just an opportunity to practice that skill on a regular basis: having an idea and then, in real time, trying to convey it in a way that doesn't have so many disfluencies and filler words, I guess.

Jon:

Exactly. Yeah. Communication: a skill which Nixon possessed. And this book discusses this strange loop that existed between Nixon and the Supreme Court. I thought this section was interesting 'cause, I don't know, it's one of many sections where he's discussing something a little more contemporaneous, but then you look at modern times and there are these amazing examples of the same thing, where a president is basically fighting against another branch of the government, the executive branch versus the judicial branch, basically. And yeah, they were discussing Nixon. I think it was Watergate, actually. I didn't write it in my notes, but I think it was what he did during Watergate. The Supreme Court was, you know, basically, I don't know if they were bringing charges, that's probably the wrong terminology, but,

Matt:

Well, they were requesting recordings from him.

Jon:

Right, yes. Which is where all those brilliant recordings come from that you can listen to, 'cause Nixon was paranoid and so he recorded literally everything, and then Watergate happened and he had to give those recordings up, which to him was probably a complete nightmare.

Matt:

But, um, where this comes in is essentially, there's this shell game of, and maybe not shell game, but it's like: who's the last person to say what order should be adhered to, or how to interpret it? And he said he would only obey a quote unquote definitive ruling. But it's like, who decides what a definitive ruling is, you know? So yeah. And this is a case we read this semester, this Watergate case, United States versus Nixon. It deals with, basically, presidential, uh, not immunity, there's another word... executive privilege is the word. Which is basically, if you've ever heard the term attorney-client privilege, that privilege is the right to not reveal certain information, to not provide it, because that channel of communication was quote unquote privileged. So executive privilege is the idea that communications within the executive branch are privileged in the same way. But the problem in this case was that people were accused of committing crimes, so it was an incredibly serious situation. So the Supreme Court was basically like, yeah, we understand, we can't just be subpoenaing or requesting these records willy-nilly for any old situation. But when someone is accused of committing a crime, and we've deemed this pertinent to that investigation, the court's interest in resolving this criminal case actually outweighs the executive's interest.

Jon:

Which is wild. By the way, I super appreciate that you know this and can bring that knowledge to this podcast, 'cause that's super interesting. It's also kind of sad, 'cause I guess at one time we did live in a rule-of-law nation, and

Matt:

Dude. I mean, this is the same thing. It's so pertinent, it's all coming back around again, because you have JD Vance out here saying the court doesn't have the ability to check an executive's legitimate powers. And it's like, no, no. The whole point of the judiciary is, we have this constitution, right? And, at the risk of turning this into a constitutional law podcast, just to take a quick step back: you could have a situation where a majority of the people want to take away the rights of one specific class of people, right? But people thought about this a really long time ago, and they were like, this is really not something we should ever be able to take away from anyone. And they made it so you couldn't do that, even if a majority of the current population wanted it to be true. And so the whole point of the courts is to step in and say, hey, I know there's a popular mandate of people who want due process to not exist, but you can't do that. That's something we said, in this kind of higher tier, the constitution, that people have to be able to have. So when you hear the general refrain from people in the Trump administration, which is that we're working with a popular mandate, the rebuttal is: yeah, I agree, there probably is a popular mandate for a bunch of this stuff. But the whole organization of our country is such that there are certain things that, even if you do have a popular mandate to do them, you're not allowed to do. And the arbiter of that decision is the judicial branch.

Jon:

Yeah. I mean, we live in a rule-of-law nation, which means there's always a piece of paper above you on the totem pole that you have to adhere to, which I think is great. It removes, basically, what it seems like Trump is trying to do, which is become a dictator. It removes the ability to do that, which just seems like a really good thing.

Matt:

No, yeah, it's super important. And it was informed by living under a king who was able to both make the rules and enforce them. They were like, all right, well, this is terrible, we gotta change that.

Jon:

Yeah. It's interesting, because when our nation had the ability to write its own rules, I feel like it really thought it through and wrote the correct rules. But this chapter starts with him discussing how a computer program can't fulfill its own desires unless it writes itself, which I thought was really interesting 'cause I feel like we're right on the cusp of AIs being able to write themselves, and

Matt:

This goes back to this emergent thing, where the lower level is quote unquote writing the higher level without any awareness of that, you know what I mean? He talks about these emergent properties, and maybe this is later in the chapter, but it's about consciousness, about how there are attributes of consciousness that exist at this high level such that there's no way to look at a neuron and be like, all right, well, this is what this neuron is contributing to

Jon:

Yes.

Matt:

consciousness.

Jon:

Yeah, I wrote in my notes "strange loops, the crux of consciousness," which I think is a phrase he used, and I think it's exactly what you're talking about: you have all these low-level, simple things happening, but they're happening at such a vast scale that you get these higher-level emergent properties, and it's almost impossible to tie the two things together, like

Matt:

What's crazy is, the lower-level stuff, these neurons, they're creating consciousness, and then, and this is exactly the strange loop you're talking about, the consciousness comes back and looks at the lower-level things again, which is just so funny.

Jon:

It's wild. And there was a part that I didn't really understand, maybe you understood this better, but he was describing how Gödel's proof is a similar thing, where you can't ever get around Gödel's incompleteness using formal, rigorous, math-style rules, like Typographical Number Theory. But he was mentioning how at a higher level you can see that it's a true statement, or he says you can see that G is true on a

Matt:

Right.

Jon:

And I mean, I guess all that means is, Gödel's incompleteness is this thing that can be discussed, but you just can't put it in a rigorous proof or something. I guess I didn't really understand; it seemed like a similar idea where you have these lower-level machinations and then this higher-level understanding of something, but there's a disconnect between the two things.
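
For anyone who wants the statement Jon is circling here in symbols, a rough sketch in the usual textbook form (the notation is ours, not Hofstadter's): G is a sentence of TNT that, via Gödel numbering, asserts its own unprovability, so TNT can't prove it if TNT is consistent, yet from outside the system you can see it must be true.

```latex
% Sketch of the Goedel sentence for TNT, with Prov_TNT the provability
% predicate expressed inside TNT and corner quotes for the Goedel number of G:
G \;\Longleftrightarrow\; \neg\,\mathrm{Prov}_{\mathrm{TNT}}\!\left(\ulcorner G \urcorner\right)
```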

Matt:

I think this goes back to something we discussed in a previous chapter, which essentially said that humans have their own limit to Gödelization, right? It just so happens that we're operating a few levels of Gödelization higher, where we do have the ability, for that first G, to be like, yep, we can tell that that's true. But then there's a point where even humans would not be able to comprehend the possibility of that highest-level G being true.

Jon:

Yeah.

Matt:

I think that's how I'm taking his argument.

Jon:

So the intelligence of a given species can be stated as, how many levels of G do you comprehend?

Matt:

Yeah, yeah. And I think it's on a specimen-by-specimen basis. You know, each individual has their own G level.

Jon:

With this book, I'm at like 0.7, I think.

Matt:

Yeah, no, I'm definitely not quite at one, but, you know, close.

Jon:

Getting there.

Matt:

But, so, at the very beginning of the chapter, he talks about this concept of machines possessing originality.

Jon:

Yeah.

Matt:

And the first thing that came to mind was, I have computers that are making original things all the time. They're generating UUIDs that I've never seen before, you know what I mean? There you go, that's an original thing that has probably never existed in the entire universe before. And as much as that probably sounds glib, like a reductio ad absurdum of the point he's making, I actually think that all of originality is just exploring a really high-dimensional space. It's just people doing something, seeing the result, evaluating it, and then doing something again. So, in my opinion, there's nothing to say that a machine could not do that same fundamental loop.
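
For what it's worth, the UUID point really is just one standard-library call; a version-4 UUID is built from 122 random bits, so each one is almost certainly a value no machine has ever produced before.

```python
import uuid

# A version-4 UUID draws 122 random bits, so every call almost certainly
# yields an identifier that has never existed anywhere before: a small,
# mechanical kind of novelty.
print(uuid.uuid4())
```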

Jon:

I completely agree with you. I think machines are doing original things all the time. I feel like a broken record, but especially today, you have machines producing imagery that's beautiful, or that has high value in terms of being able to entertain or be beautiful to another person. But even before large language models, I think computers were producing original things, or at least enabling people to explore things and come to original conclusions. That's certainly true.

Matt:

Yeah, especially if you imagine a world where you have an agent that could elicit feedback from a human, that's game over. It could create a take, show it to a human, ask for feedback, yes or no, is this good? And then it could go back. I could give this agent a task, like, make me a funny two-hour comedy or whatever, and it just goes around and actually does iterations, asks multiple people. It could totally do that, and then you could wind up with something that is original in the way that anything a human would make would be original.

Jon:

Totally. Yeah. I thought it was interesting. I didn't actually write a lot of notes for this chapter; I think this chapter was kind of tying up the book and expressing what the theme was. You had mentioned strange loops being the crux of consciousness; I think that's a huge part of this book's theme. But I thought it was funny that towards the end he mentioned this idea of a Bach vortex, whatever that is, and then he mentioned Shepard tones, which I feel like we mentioned when we were discussing the first chapter of this book. I don't know if it was the first chapter, but I'm sure we mentioned Shepard tones a long, long time ago, because I remember I was describing a Shepard tone and you were like, that's called a Shepard tone. And I just thought it was funny that it sort of created a strange loop, you know, between us, this book, and this podcast, where we describe this Shepard tone towards the beginning and now he's hearkening back to it.

Matt:

I mean, yeah, that is true. Were we improperly affected by the end of this book somehow, in anticipation of our reading it?

Jon:

Yeah, like we're a character in the... it's funny, 'cause in the end dialogue, the ricercar, which I think is how you pronounce that,

Matt:

I think you're right.

Jon:

It looks like "ricer car," but it comes from an Italian term, a musical term. But anyway, he introduces himself as a character in the dialogue. It's actually a crazy dialogue; it was kind of fun to read. Charles Babbage pops up and declares himself an idiot, and then he creates someone that's like eight times smarter than himself, and it's Alan Turing, but Alan Turing's also an idiot. It was crazy, there's a lot going on. But anyway, he introduces himself as a character, and I guess we are also characters in the book.

Matt:

Now we are part of it. We should both write our own, like, 21st-chapter dialogue, which

Jon:

Maybe that's what this podcast is. This whole podcast, as we've been covering this book, is like the 21st dialogue.

Matt:

Whoa. You're blowing my mind right now. We are actually the last chapter of this book, so now you have to go read the whole book again, the whole book, and then listen to this whole podcast again.

Jon:

And Douglas Hofstadter owes us a lot of residuals.

Matt:

Oh, yeah. Everything from here forward. I think now that this is concluded, this is officially the last chapter of the book. But there was one thing. Let's just see if we're okay on time. Okay, 23. I did have a couple of other points. He talked about self-modifying games, and I wanted to talk about Baba Is You a little bit. I don't know if you thought about this as well.

Jon:

I love that game.

Matt:

So Baba Is You is this grid-based puzzle game where, I mean, it couldn't be a more perfect example of strange loops, because the rules of the game are blocks in the world that you can interact with.

Jon:

Yeah, you can construct phrases in the game. For example, you could say "flower is water" or something: like you're saying, you move these word blocks around in the game and construct "flower is water," and then you can drive a boat across flowers because flower is now water. It's a totally amazing game, and I remember you and I playing it together, and it was tons of fun.

Matt:

It was. It became really hard at the end. But one of the cool things is that it would describe the win state. Basically, in every level you would have to quote unquote win, right? But you could define what the win property was. So one thing you could do is say "Baba is win" and then you just immediately win, I'm pretty sure. Or maybe it's "you is win." But I think the interesting question there, if we're trying to tie it back to the exercises he's going through in the chapter, is: what is the quote unquote inviolate level? Just to take a step back: whenever you have this tangled hierarchy, there's some lower level which allows the tangled hierarchy to exist. And just by way of example, he uses, I don't know if you've ever seen this, the Escher drawing of two hands drawing each other. Each hand is kind of coming off the paper and drawing the other hand, right? And obviously in the real physical world that would be impossible, but the inviolate level is the paper itself underlying it, which allows the seemingly impossible hierarchy to exist. He also describes a situation where there are three books that each tell the story of another; they each exist in the book of another author. Obviously in the real world that could not be possible, but the inviolate level would be the fourth book underneath, which is describing these other three books. And so, I guess, yeah.
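
Here's a toy sketch of the rules-as-objects idea, loosely inspired by Baba Is You (this is not the game's actual code): the active rules are read straight off the word blocks sitting in the grid, so pushing a block literally rewrites the rules.

```python
def active_rules(word_blocks):
    """Read the current rules out of the level itself.

    word_blocks maps grid position -> word; any horizontal run of
    "NOUN IS PROPERTY" counts as a rule.
    """
    rules = set()
    for (x, y), word in word_blocks.items():
        if word_blocks.get((x + 1, y)) == "IS":
            prop = word_blocks.get((x + 2, y))
            if prop is not None:
                rules.add((word, prop))
    return rules

# The level contains the sentences "FLOWER IS WATER" and "BABA IS YOU".
level = {(0, 0): "FLOWER", (1, 0): "IS", (2, 0): "WATER",
         (0, 1): "BABA",   (1, 1): "IS", (2, 1): "YOU"}
print(active_rules(level))   # {('FLOWER', 'WATER'), ('BABA', 'YOU')}, in some order

# Pushing a word block changes the rules of the world in place:
level[(2, 1)] = "WIN"                          # the sentence now reads "BABA IS WIN"
print(("BABA", "WIN") in active_rules(level))  # True, and you win
```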

Jon:

Well, I was just gonna say, this inviolate level concept is super interesting in the context of what Google DeepMind has been doing

Matt:

Oh,

Jon:

with, well, 'cause they had a recent publication where they were discussing how they had a large language model basically play through, I don't know, millions of hours of Doom, and they were able to create an AI-driven Doom, where each frame of the game is a generated image.

Matt:

Yeah.

Jon:

It listens to the user's input, but there's no game engine. There's

Matt:

crazy.

Jon:

Yeah, there's nothing listening for, oh, the user is pressing the left key, therefore my character rotates left. It's doing that by virtue of how a generative pretrained transformer works, where it's predicting what should happen. And so what you get is something that's very Doom-like, but it's this completely novel experience. It doesn't have any of the levels of Doom. You turn a corner and there are imps and demons there, but no one built that level; it's just being generated by AI. It's kind of insane.
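
A rough sketch of the loop that kind of system implies: there's no engine, just a model predicting the next frame from recent frames plus the player's input. The `predict_next_frame` stub below stands in for the trained generative model; we're guessing at the shape of the interface, not describing DeepMind's actual code.

```python
import numpy as np

FRAME_SHAPE = (240, 320, 3)  # toy resolution

def predict_next_frame(recent_frames, action, rng):
    """Stub for the trained generative model.

    A real system would condition a large image model on the frame history
    and the player's action; here we return noise so the loop runs end to end.
    """
    return rng.integers(0, 256, FRAME_SHAPE, dtype=np.uint8)

def read_player_action():
    # A real loop would poll the keyboard; here the input is fixed.
    return "turn_left"

rng = np.random.default_rng(0)
history = [np.zeros(FRAME_SHAPE, dtype=np.uint8)] * 4   # seed frames

for _ in range(10):                        # the "game" runs for 10 frames
    action = read_player_action()
    frame = predict_next_frame(history[-4:], action, rng)
    history.append(frame)                  # the model's output becomes its own input

# Note there is no level data and no game logic anywhere above; everything
# the player would see lives in the model's predictions.
```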

Matt:

That is insane, because it's pretty easy, I think, to take where we're going with video generation and imagine that someone could just be like, hey, I want a movie that's like Click without Adam Sandler, right?

Jon:

Such a crazy example. I love that.

Matt:

And then it would just go off for five minutes, generate the full movie, and it would be perfect. But I never imagined that you could do that with video games, where an AI could come up with a video game in real time. It's kind of like the perfect dungeon master, able to come up with the next parts of the story at, like, 30 frames a second or whatever it is, you know.

Jon:

Yeah. And actually, going back to the whole inviolate level thing, this is why I think it's so interesting. Say you're the gamer, and the game is an action game like Doom where you're shooting things, but then all of a sudden you're like, hmm, this is kind of boring. So you start playing the game slightly differently; you start moving more slowly and exploring more. You could imagine a scenario where the AI morphs the game into an exploration-discovery game, or maybe into a Myst-style puzzle game or something. It's almost like you're removing the inviolate level: this can now be anything. The experience can map to whatever you desire at the time,

Matt:

Well, yeah, I think in that world... I guess today, if we're thinking about what the inviolate level is, it's the ability to represent some sort of 3D geometry,

Jon:

Right? It's the engine, anything within the capability of the engine, you

Matt:

The engine, but there are a lot of implicit assumptions there about Euclidean geometry and what have you. But now it's at the pixel level. So the inviolate level is anything you could possibly perceive; that's now the layer at which this could operate. And you could imagine absolutely wild things that you couldn't even conceive of and could not possibly produce with a normal game engine. A perfect example of how this would be useful: there's this YouTuber I watch who makes games in the fourth dimension, or using hyperbolic geometry, and that's a perfect example where that person needs to write a whole new game engine, because it's the sort of thing that is baked in at the engine level. And again, we get back to what you were saying before about how the computer program needs to write itself; in order to get access to that inviolate level, yeah, you do, and that's what this other programmer needed to do. But in a world where they're just pushing raw pixels, you just say, hey, here's what hyperbolic geometry is, can you make this game have hyperbolic geometry? And it's just like, sure, whatever.

Jon:

Yeah, that's crazy. I've heard this discussed before: one of the interesting applications of generative imagery or generative video is that it's a simulation. Whenever you generate a video, and I think I've actually discussed this on the podcast before, so apologies for the repetitiveness, but say you tell an AI, I wanna throw a basketball out of a window and see how it bounces. That's a simulation of physics. And just like you're discussing, you can tell the AI to generate something insane, like, oh, I want to do a flyby of a black hole or something, and it will take what it knows of physics and try to generate a semi-accurate portrayal of that. I don't know, that's one of the values of AI: simulating real things and potentially gaining insight from those simulations.

Matt:

Yeah. I think that's all I had. But yeah, I think this was a really great chapter to end the book. So I guess that's kind of the end of the notes I had for the final chapter. Did you have anything else you wanted to talk about for the chapter itself?

Jon:

I don't think so. I mean, like you're saying, I thought this was great, and he wasn't covering a lot of material in this chapter; he was tying things up and mentioning interesting examples of these concepts, like the Nixon versus Supreme Court thing. There were a few interesting ideas in the end dialogue, weirdly, something we've discussed before. I think we made a joke about this a long time ago, where he had this diagram of the ideal AI architecture, and at the very top it had Lisp, and we were both like, what the hell, this is crazy. But then he mentions in the final dialogue that programs and data are the same thing,

Matt:

Right?

Jon:

which I think is more what he meant: within an AI, you essentially need data and a program to be able to be the same thing, so that the AI can generate new data and run it as a program.

Matt:

Mm-hmm.

Jon:

And so that idea suddenly made sense to me, or at least that diagram with Lisp at the top suddenly made sense to me, because that's the power of a language where data and the language are the same thing. But yeah. And then, everyone should listen to the Sloth Canon and the Ricercar, which is another Bach piece that's at the beginning of the Musical Offering, or I don't know if it's at the beginning, but it's part of the Musical Offering,
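
Since the Lisp diagram keeps coming up, here's a minimal sketch of the programs-are-data idea, in Python rather than Lisp: an expression is just a nested list, a tiny evaluator runs it, and because programs are ordinary data, code can manufacture new programs and execute them on the spot.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    """Evaluate a Lisp-style expression written as nested Python lists."""
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    return OPS[op](*[evaluate(arg) for arg in args])

# The "program" is an ordinary data structure...
program = ["+", 1, ["*", 2, 3]]
print(evaluate(program))      # 7

# ...so other code can build new programs out of old ones and run them,
# which seems to be the property that diagram was pointing at.
generated = ["*", program, ["-", 10, 4]]
print(evaluate(generated))    # 42
```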

Matt:

One thing, just to pick up from what you were saying about... dude, what were you just saying before you

Jon:

The data and programs thing.

Matt:

Yes. Oh, okay, right. So, yeah, we could talk about this for a very long time, but I think early on in the progression of machine learning, they found a way to take a bunch of data and compress it into this efficient structure, and use it as, okay, in certain circumstances we're gonna delegate to this efficient data structure that can tell us something about the data, or perform some task that's based on the data. Like, what is this number? Okay, we can feed these pixels in and it'll tell us what the number is. And it's interesting, because I think the next level, and I'm not sure if this is happening, is that you put in the structure, the topology, of the network itself. Because one of the most popular ways to do that is with these neural networks, right? You have a node, it has a weight, and information flows through these neurons, basically. But so far I have not seen any system where the structure of the system itself is, in a maybe Gödelian way, represented as data in the system, like the topology of the network is data that it is operating on.
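
A toy illustration of the thing Matt is wondering about: the network's topology written down as plain data that the same program can inspect and rewrite before rebuilding its weights. This is only a sketch of the idea, not a claim about how any real framework represents itself.

```python
import random

# The architecture itself is just data: a list of layer widths.
topology = [4, 8, 2]

def build_weights(layer_sizes, seed=0):
    """Make a random weight matrix (nested lists) for each pair of adjacent layers."""
    rnd = random.Random(seed)
    return [[[rnd.gauss(0, 1) for _ in range(n_out)] for _ in range(n_in)]
            for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

def forward(x, weights):
    """Plain matrix-vector products with a ReLU after each layer."""
    for w in weights:
        x = [max(0.0, sum(xi * w[i][j] for i, xi in enumerate(x)))
             for j in range(len(w[0]))]
    return x

weights = build_weights(topology)
print(forward([1.0, 0.5, -0.3, 2.0], weights))

# Because the topology is ordinary data, the program can operate on it,
# for example splicing in a wider hidden layer, and rebuild itself.
topology.insert(1, 16)
weights = build_weights(topology)
print(forward([1.0, 0.5, -0.3, 2.0], weights))
```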

Jon:

Oh yeah. It's funny, 'cause I was on a team at Google called Generative Apps, and one of the things I was always advocating for was potentially creating an AI-first mini language. We were obviously discussing apps, so this would be like a templating language for describing an app: you have an app, it has a button, a slider, a text field, or whatever. But you can also imagine an AI-first language that has primitives for a neural network itself. And, I don't know, this is gonna get a little hand-wavy, but something that's very in vogue right now is agentic AI, where you have AIs that each have their own little capabilities. Tool invocation is a big part of this: oh, this AI is the one that knows how to peruse Wikipedia and find information and then parcel it out in nice ways that these other AIs can understand. I think we've discussed this, the model... I can't remember what it's called, MCP or something, the protocol that enables AIs to communicate with one another.

Matt:

Right.

Jon:

But anyway, you can imagine almost a language on top of that where the AI itself is able to take a problem and say, oh, I need four agents, I need one agent that's able to look at Wikipedia, and it writes a little agent all by itself, a little tiny program using this AI-first language. And I feel like this is the type of thing Douglas Hofstadter was predicting in that very early diagram on AI, because he's discussed agentic AI a bunch of times in this book. He's never used that phrase, but he's definitely discussed the same idea. But I think the core idea, which is something you were just getting at, is that the AI needs to be able to structurally change itself, or introduce new components to its structure: new models, new specialized AIs that have their own idiosyncrasies or their own specializations. And that's going to be a core part of an overall, like, God AI that can accomplish anything.
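
A hypothetical sketch of that kind of mini language: the agent is just a declarative spec (data), so a planner model could in principle emit new specs at runtime and the runtime would turn them into behavior. Every name below is invented for the sketch.

```python
# Invented tool registry; real tools would call actual APIs.
TOOLS = {
    "search_wikipedia": lambda query: f"(pretend summary of '{query}')",
    "summarize":        lambda text:  text[:60] + "...",
}

def run_agent(spec, task):
    """Run the task through each tool named in the agent's spec, in order."""
    result = task
    for tool_name in spec["tools"]:
        result = TOOLS[tool_name](result)
    return result

# A planner model could emit this spec as ordinary data...
spec = {"name": "wiki_researcher", "tools": ["search_wikipedia", "summarize"]}

# ...and the runtime turns the data into behavior.
print(run_agent(spec, "strange loops"))
```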

Matt:

Hmm.

Jon:

So anyway, just an interesting tie back to that whole data-and-program-being-the-same-thing idea.

Matt:

Yeah. Just to call out one thing that I have heard of: have you heard of AutoGPT?

Jon:

Uh, it sounds super familiar.

Matt:

I don't think it's quite as powerful as what you described, where it's writing code for one of its agents, but it can do that task where you give it a high-level task and it can decide. You're giving a task to just one of these GPTs, but you say, okay, if you want to generate a new agent to do something, you can do that, and here are the agents you have available to you. And it's pretty cool. I was using it a while ago, so I'm sure it's much more sophisticated now. But yeah, I think that's gonna be really powerful, and kind of scary, I guess.

Jon:

Oh yeah. But that's about all I had for this awesome book. I was kind of blown away by it. It's so epically scaled, and I don't think I've read another book like this; it was very unique.

Matt:

Oh yeah,

Jon:

And just a really cool book. I would highly recommend people check it out.

Matt:

I think it's gonna be a while before AI can write a book such as this, I'll say. But yeah, I agree, really good book. So I think that's it. And now, our typical pattern is we have a reflections episode. Do you remember, we have those questions? So next time I think we'll do that. Those are generally a little shorter, but it's a high-level talk about it all, and we'll answer whether or not Douglas Hofstadter accomplished what he was striving for.

Jon:

I look forward to that.

Matt:

All right, well, I will see you all next time.

Jon:

See you next time, Matt.