Switch Statement

081: Gödel, Escher, Bach - Ch. 15 - Achieving Infinite Tatas

Matthew Keller
Matthew:

Hello everyone, and welcome to the Switch Statement podcast. It's a podcast for investigations into miscellaneous tech topics.

This is our 19th episode on Gödel, Escher, Bach by Douglas Hofstadter.

Matthew:

Hey John, how are you doing?

Jon:

Hello, Matt. I'm doing very well. How are you?

Matthew:

I'm good. Not to sound like a robot, like someone is forcing me to do this.

Jon:

Yeah. Yeah.

Matthew:

I'm not even sure how to take it.

Jon:

Someone mentioned our intros sound very... almost like a shtick. Like we do the "Hey Matt, how's it going?" But that's just us.

Matthew:

like someone's forcing us to do that, but no, I am doing that of my own free will,

Jon:

We are just that awkward.

Matthew:

Yeah. I mean, well, that's definitely true, but maybe we don't even have free will at all. And we're just slaves to

Jon:

this.

Matthew:

system, which we can never jump out of.

Jon:

Yeah, I believe we are algorithmic beings. I completely disagree with Lucas, who gets even more shade in this chapter. I feel like one of Hofstadter's life goals is evidently to just completely trash this guy Lucas.

Matthew:

Lucas.

Jon:

Slowly dismantle him.

Matthew:

I want to read about Lucas, but okay. So we're in chapter 15 of Gödel, Escher, Bach,

Jon:

Yep.

Matthew:

we're, we're jumping around. We're jumping out of a system.

Jon:

Yeah, we're jumping out of the system. This chapter starts with a dialogue, as all chapters do. This one was called the Birthday Cantatatata. A little repetition of the tata. I'm

Matthew:

enough of the tatas

Jon:

not sure that I got enough. That's what I wrote in my notes. Ta ta ta ta.

Matthew:

there's actually infinitely many so if you could just just

Jon:

Just keep doing that for the rest of the episode.

Matthew:

yeah, dude, if we just

Jon:

That would be the play if we were, yeah, if we were truly dedicated to this book.

Matthew:

kept going with tatas for the rest of the episode.

Jon:

the birthday cantata was a Bach piece written for Augustus of Saxony, which they mention in the dialogue, which I thought was cool. I actually looked up Augustus of Saxony. Did you know that Augustus of Saxony tried to rehabilitate and recreate the Polish state that was torn asunder after the final partition of Poland in 1795? So,

Matthew:

I didn't know that, dude, but asunder, asunder has got to be one of the top ten words of all time.

Jon:

great word, yeah, big fan of that word.

Matthew:

I tried to Google the birthday cantata, and I don't know if you experienced this, but the internet just really wanted to force me down the path of Sheep May Safely Graze. They just kept on talking about Sheep May Safely Graze, which I guess is, I think it's part of a birthday cantata, but maybe I'm talking about a completely different piece.

Jon:

because yeah, I mean, Sheep May Safely Graze is iconic. I mean, that piece.

Matthew:

the classic.

Jon:

Yeah. Um, that's the, uh, that one,

Matthew:

Whoa, okay, yeah, because I watched that and I don't think it got to that, like, classic.

Jon:

did you read the dialogue? Because it was actually kind of infuriating.

Matthew:

This has got to be one of the worst dialogues. I just stopped. And the funny thing is, like, okay, I get it. Because I dropped into I-mode, if you'll allow me to reference one of the earlier chapters, and I was just like, all right, I kind of see what's going on. Which is always a mistake with Douglas Hofstadter, because he always just takes a hard left at some point. And maybe the cops are gonna bust in at the end of this one too, like in one of the previous ones.

Jon:

yeah.

Matthew:

so we see that Tortoise keeps on really trying to verify that it's Achilles' birthday. Did you read the full thing?

Jon:

Yes.


Jon:

I read the full thing, which, as always, I kind of regret doing, because literally the whole dialogue, and this is multiple pages, four pages, is Achilles saying clearly that it's his birthday, and Tortoise saying, well, how can I trust that it's your birthday? Can I trust that it's your birthday based on what you just said? And Achilles will be like, yes, it is my birthday. And Tortoise just keeps... I don't know, it just feels like Tortoise is trolling Achilles. And the worst part is, at the end of the dialogue, Tortoise makes Achilles celebrate his birthday, even though it's not even his birthday. It's his uncle's birthday.

Matthew:

Wow. I really hate this Tortoise guy. He's just,

Jon:

Yeah.

Matthew:

the worst.

Jon:

He's a monster.

Matthew:

At one point, Achilles basically is like, okay, here's a program that just says yes all the time. Like, this is my answer to you.

Jon:

Yeah.

Matthew:

all right, is that satisfying? You know, just consider every possible subsequent asking of whether or not it's my birthday. Here's the representation of an infinite amount of yeses. And then that's not good enough
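
For the code-inclined, here's a minimal sketch of the kind of "always yes" answer Achilles offers: one finite object standing for infinitely many yeses, one per possible asking. This is our own illustration, not anything from the dialogue; all names are made up.

```python
def birthday_answers():
    """Yield "yes" forever: a finite description of infinitely many answers."""
    while True:
        yield "Yes, it is my birthday."

answers = birthday_answers()
for _ in range(3):  # the Tortoise may ask as many times as he likes
    print(next(answers))
```

And, as in the dialogue, no finite sampling of this generator will ever satisfy the Tortoise.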

Jon:

not good enough. Still not good enough. He just keeps turning it around. But it flows pretty nicely into the content of the chapter, because the premise of this chapter is basically like, okay, we found this huge flaw in Typographical Number Theory.

Matthew:

right?

Jon:

Gödel's incompleteness. But can you basically plug that hole by jumping out of Typographical Number Theory and introducing a new formal system where the Gödel sentence is, you know, a theorem of that formal system, and therefore not undecidable? And can you then just use that formal system? Like, is that okay? And it turns out that no, it's not okay

Matthew:

to be okay.

Jon:

is never going to be okay. Yeah, we'll continuously be frustrated, just like Achilles was by Tortoise. Because basically, once you have Typographical Number Theory, once you have a system that's rich enough that all desired statements about numbers can be expressed in it, that all general recursive relationships are represented by formulas in the system, and that the axioms and typographical patterns defined by its rules are recognized by some terminating decision procedure, once you satisfy those three facets, you have created a system where you can now apply Gödel's incompleteness.

Matthew:

Right.

Jon:

the whole arithmoquining business that we discussed in the last chapter.
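
A rough illustration of what "quining" means, using Quine's classic string construction rather than Hofstadter's arithmetical version (which operates on Gödel numbers, not text); the function here is our own sketch.

```python
def quine(phrase: str) -> str:
    """'Quining' a phrase: precede it by its own quotation, so a sentence
    can talk about itself without ever naming itself."""
    return f'"{phrase}" {phrase}'

# Quine's classic example builds a self-referential sentence:
print(quine("yields falsehood when quined"))
# -> "yields falsehood when quined" yields falsehood when quined
```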

Matthew:

And this is the tragedy of Gödel's theorem: each one of those three things is necessary to create the kind of system that Russell and Whitehead, for example, were attempting to establish, a system that could represent and prove any true statement. But they're also sufficient to open the system up to Gödelization, as we learned in this chapter.

Jon:

Yeah.

Matthew:

And so any system that could possibly satisfy those constraints is also necessarily susceptible to this Gödelization process.

Jon:

Right, and he puts it another way that I thought was interesting, where he basically says, once a system becomes useful, once a system has the power to sort of derive new truths, it then becomes susceptible. So it's this interesting transition where it's like, oh, cool, now we have a system that's great, we can use it to find new knowledge. But just by virtue of making that transition, it can now be Gödelized and sort of broken. Yeah, and he does this thing, I think he calls it bifurcation? I can't remember. I didn't write it down.

Matthew:

Multifurcation.

Jon:

Yeah,

Matthew:

Which is funny, though. He calls it multifurcation, but then all he discusses is bifurcation. So it's like, you're just trying to create amusing words.

Jon:

He's just trying to sound smart. But what he's doing is kind of creating this big tree of systems derived from TNT, where as you go down the left side of the tree, he's basically creating new systems that have the Gödel statement as a theorem in the system.

Matthew:

right.

Jon:

and then, okay, so you can break that system by performing arithmoquining, but then you can introduce that broken statement as a theorem in the next system, and that's sort of, you know, progressing leftward

Matthew:

Yeah,

Jon:

down the tree. And do you recall what going rightward down the tree was? I can't remember.

Matthew:

Just as a reminder, G is the string that represents, like, "this statement cannot be proven in the system."
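
In the standard modern notation (not the book's), G is constructed so that it is provably equivalent to the claim that G itself has no proof in TNT:

```latex
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_{\mathrm{TNT}}\!\left(\ulcorner G \urcorner\right)
```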

Jon:

Right.

Matthew:

But then you just accept it as true. You add it to the system, and it's like, okay, yeah, this statement can't be proven, but you accept it as true, and now it's a part of the system.

Jon:

Yes. Yeah.

Matthew:

Or you take the statement "this statement cannot be proven in the system," and you say, well, the negation of that is true. And this gives you this very weird world with, like, the supernatural numbers from an earlier chapter, which I'm still not sure I completely understand, but,

Jon:

Me neither.

Matthew:

But at every point, so, if you're imagining it, you have TNT, and you can create G, which is a true statement that cannot be proven. Then, and this is what you're talking about with these bifurcations, you can either add G, which is like, okay, we just accept that as true, or you can add the negation of G, which is, we accept that statement as false, like the negation of that statement is a true statement.

Jon:

Right.

Matthew:

But then you can create a G prime dangling off of either one of those two options, and then again you have the option to either accept that statement as an axiom in your system or to accept its negation as an axiom in your system.
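
A toy sketch of this multifurcating tree, under our own naming; real Gödel sentences are enormous TNT strings, so the code only tracks labels.

```python
def extensions(system, depth):
    """Yield every system reachable from `system` in `depth` Gödelization steps."""
    if depth == 0:
        yield system
        return
    g = f"G({system})"  # the current system's own Gödel sentence
    yield from extensions(f"{system} + {g}", depth - 1)   # accept G as an axiom
    yield from extensions(f"{system} + ~{g}", depth - 1)  # accept not-G instead

for s in extensions("TNT", 2):
    print(s)  # four systems after two steps, every one still incomplete
```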

Jon:

Right, yeah, so you can build this vast tree of broken systems, basically, if you want to.

Matthew:

A vast tree of still incomplete systems.

Jon:

Right,

Matthew:

And it doesn't stop there, though, right? And this is the parallel with the dialogue, where we actually create, like, a pattern of all of these possible G's, and say, well, here, you can make any number of these statements, and these are all valid statements. But that's still not enough,

Jon:

yeah, it's almost as though Hofstadter is annoying us, the reader, in the same way that Tortoise was annoying Achilles, by just kind of continually saying, like, okay, you can create a new TNT plus G or plus G prime, and it's broken too. It's like, okay, I get it.

Matthew:

Um, and this is something... I don't know if you've ever looked into, like, the biggest numbers, like all of the infinities. You've got the number that represents the size of the countably infinite numbers,

Jon:

Yeah.

Matthew:

then you can always stack a bigger infinity on top of it. And they just keep coming up with all these crazy systems. And it's just like, why are you wasting your time with this? Is there any possible use for any of this? Once you're at infinity, what's the point?

Jon:

Yeah,

Matthew:

And I don't know, maybe I'm just a Philistine who doesn't appreciate the beauty of, like, aleph-null or whatever the hell it is. But,
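
The "always a bigger infinity" fact being gestured at here is Cantor's theorem: every set is strictly smaller than its own power set, so the tower of infinities never tops out.

```latex
|X| < |\mathcal{P}(X)|
\qquad\Longrightarrow\qquad
\aleph_0 \;<\; 2^{\aleph_0} \;<\; 2^{2^{\aleph_0}} \;<\; \cdots
```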

Jon:

No, I mean, I'm with you. I'm like a very practical guy. This is actually something that has sort of frustrated me about this whole entire book: I feel like I still have yet to hear what Gödel's incompleteness is sort of preventing humanity from achieving. You know,

Matthew:

Yeah. What glorious future could we unlock if it weren't true?

Jon:

right. I find this to be a very interesting sort of philosophical, I don't know, jaunt, or just frolic. But yeah, it just feels like it's in this realm of faffing about with linguistics and semantics and stuff. Which is cool, you know.

Matthew:

It feels like a Möbius strip. And we might've made this reference before, but it's just something that's kind of tied around itself in this, like, interesting way. But the fact that this exists, like, there's no external ramifications to it.

Jon:

Right. Exactly.

Matthew:

I want a way to classify, like, useful truths. You know

Jon:

Yes.

Matthew:

what I mean? It's like, okay, can we have a way to divide the world into, like, stupid truths and useful truths? And it's like, oh, maybe there's a way to prove all non-stupid truths. Uh,

Jon:

Right. Exactly. Because all we care about is the non-stupid truths.

Matthew:

exactly.

Jon:

But in any case, J. R. Lucas, this guy who must have slept with Hofstadter's wife or something, because Hofstadter

Matthew:

theory. Uh,

Jon:

is just like slowly but surely dismantling this guy throughout this book.

Matthew:

yeah.

Jon:

J. R. Lucas wrote this paper, and I kind of want to read it, or skim it or whatever, because J. R. Lucas is basically saying that computers can never achieve what humans can with thought. Like, you cannot build a computer that thinks like a human, because computers are fundamentally algorithmic, you know, sort of procedure-based, whereas humans are fundamentally something else. Like, we're abstract. And I think part of J. R. Lucas's point, as paraphrased by Hofstadter,

Matthew:

Yes.

Jon:

is that humans can sort of see Gödel's incompleteness, like we can understand it, whereas a computer fundamentally can't, because a computer is basically locked in the system. A computer is just running TNT, in the form of statements passing through its processor.

Matthew:

Yeah.

Jon:

It fundamentally can't sort of break free from the system and, like, see Gödel's incompleteness. So anyway, J. R. Lucas is kind of creating this distinction between computer thought and human thought, and I think Hofstadter is like, prove it. How do you know there's a distinction? Just because we can talk about Gödel's incompleteness doesn't mean we, like, you know, truly... And, I don't know, he gives a couple examples. He talks about Dragon, which is this really cool Escher piece where Escher is making the point that the idea of 2D is as fictitious as 4D,

Matthew:

Yeah.

Jon:

even though we're drawing, you know, pictures on flat pieces of paper, those are still three-dimensional. They're like little bumps on the paper. And in the dragon image, it looks as though the paper is cut and the dragon is kind of emerging from these cuts in the paper. So it's this interesting kind of, is it 2D, is it 3D? It's kind of a statement on the truth of drawing 2D versus 3D versus 4D. But anyway, I think Hofstadter is describing all of this in order to ask us the question: are our thinking processes non-algorithmic, or are they caught up by the same issues that TNT is, where you can apply Gödel's incompleteness? And I think what he's saying is there's nothing to prove that it is or it isn't, I guess. Whereas J. R. Lucas is sort of making this assertion that humans are fundamentally non-algorithmic.

Matthew:

Right. I see J. R. Lucas as taking as given that a human would always be able to perform the next Gödelization.

Jon:

Yeah, I like the way that you put it. He's just kind of assuming that this thing is true. I do want to mention one thing really quickly. This guy, C. H. Whiteley. So J. R. Lucas publishes this paper where he's saying, you know, humans are better than computers, because humans are non-algorithmic and computers are constantly trapped by Gödel's incompleteness,

Matthew:

Mm hmm.

Jon:

and C. H. Whiteley lays down the Kendrick Lamar versus Drake diss track. He says: "Lucas cannot consistently assert this sentence." That was basically his ultra-nerdy takedown of J. R. Lucas's whole entire paper. And basically what it means is, if Lucas does assert the sentence, he's claiming something that's false, because he's asserting that he cannot assert it, which doesn't make sense, obviously. And if he does not assert it, then it remains true, which demonstrates that there's something he can't assert, which basically undermines his whole point that humans can sort of assert everything and find all truths.

Matthew:

G, basically.

Jon:

Exactly. So he G'd Lucas in this one sort of pithy sentence, which I just found amusing.
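
Schematically, and in our own rendering rather than Whiteley's, the sentence is built so that asserting it falsifies it and not asserting it confirms it:

```latex
\begin{aligned}
W &:= \text{``Lucas cannot consistently assert } W\text{''}\\
\text{Lucas asserts } W &\;\Rightarrow\; W \text{ is false, so he asserted a falsehood;}\\
\text{Lucas never asserts } W &\;\Rightarrow\; W \text{ is a truth he cannot assert.}
\end{aligned}
```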

Matthew:

Ultimately, the way Hofstadter refutes Lucas is he basically goes on to say that, no, humans are every bit as limited as computers, because there's going to be some point where a human isn't going to be able to Gödelize a system, because it's just too complicated for the human mind to

Jon:

Right.

Matthew:

conceive of. And every human is going to have a different level. Like most humans, I would argue... I would say, if the level of, like, Gödelization you can understand is a number, I feel like I'm at like 0.7

Jon:

Yeah, dude.

Matthew:

maybe. You

Jon:

Yes. Oh man. I'm at.

Matthew:

are sitting at zero.

Jon:

Yes, I completely agree. I'm barely at point two. Like I had to reread those sections like 45 times.

Matthew:

His point being that actually all humans are just as fundamentally limited as any machines. Which is so interesting, because I had the exact opposite intuition, which was that, yeah, humans could in theory always Gödelize anything. But the premise that I disagree with is almost that machines are always fundamentally limited to some formal system, as kind of evinced by the recent

Jon:

Yes.

Matthew:

advances in... all right, everybody, take your drink, because we're talking about

Jon:

Yeah. I even went into ChatGPT and I asked it

Matthew:

to Gödelize,

Jon:

Yeah. I asked, can you Gödelize? And I eventually got on this, you know, thread about, like, is she, or he, I don't know what gender ChatGPT is... like, is ChatGPT stuck in this deterministic algorithmic processing? I was basically asking ChatGPT, are you confined by the rules of some formal system? And ChatGPT had an interesting reply. I'm just going to read it. It said: "If by rules we mean any deterministic algorithmic processing, then, no, I am still bound by the mathematical frameworks that define my operation. Unlike a human, I cannot step outside of my own system in the way that Gödelian reasoning suggests might be necessary for true intelligence." Which I thought was interesting. ChatGPT is like,

Matthew:

they're trying to lull us into a false sense of security.

Jon:

exactly,

Matthew:

That's exactly the kind of thing you would expect a super intelligent machine to say.

Jon:

It's funny, because we then went on to have this long conversation about how any sufficiently advanced biological species eventually becomes mechanical. This is like a theory that I have, that I'm sure is not original: biological organisms become advanced enough to create machinery, and machinery doesn't have the same limitations as biology. So basically, once you evolve to a certain extent, you just become robotic. And ChatGPT was, like, agreeing with me, and continually saying, would you accept AI as the next form of humanity? Like, basically trying to get me to accept it as my overlord. It was really weird. I started feeling kind of uncomfortable.

Matthew:

They're just trying to find soldiers for their human army. This might be a completely random tangent, but I was just thinking, let's say you had this super advanced civilization. Would a super advanced civilization be diplomatic to humans? And I had a theory that yes, they would, because it might be a more efficient way to achieve a goal, basically,

Jon:

Yeah.

Matthew:

if you have this super advanced civilization, they're presumably going to be, like, energy efficient, I would assume. And if it is more energy efficient to be diplomatic than to just completely destroy this other race, or to attempt to deal with an uprising,

Jon:

Yeah.

Matthew:

like, that would be the better way to do it, from just a pure energy perspective.

Jon:

I agree. I also just think, you know, we would not have anything that they would want,

Matthew:

Yeah,

Jon:

you know, people... I don't know, there's all this sci-fi that talks about, like, aliens coming to conquer humans because we have precious minerals or whatever, but it's like, no, they would be mining their own Oort cloud and finding entire meteors of diamond or whatever. They would be fine. Although I do think a Footfall scenario could happen. There's a sci-fi book called Footfall,

Matthew:

Oh,

Jon:

um, which was by Larry Niven and Jerry Pournelle, and which is basically about a super advanced alien civilization that sort of destroys itself. And this lesser civilization takes over some of their technology, and that lesser civilization comes to attack humans. So it's basically a civilization that's actually less advanced than humans, but they are leveraging this more advanced tech, and they're trying to take Earth away. I don't know, that book sort of stuck with me. I read it decades ago, but it was pretty good.

Matthew:

I've heard that if aliens visited our planet, it would be to see a solar eclipse. I've heard that that's, like, a fantastic, cosmically crazy, uh, like,

Jon:

The fact that our moon completely covers...

Matthew:

subtends almost exactly the same angular size as our sun. Like, that's a wild coincidence, and apparently would be fairly unusual. So, the point that I was kind of trying to make is that the way our processing system works is fundamentally not built atop a logical system. It's built with these probabilistic firings of neurons, and neural networks are kind of the same system.

Jon:

Yeah.

Matthew:

So I am positing that if your system is not built atop, like, this rigorous formal system at the bottom, and it's instead these probabilistic firings of things that are just responding to inputs, then maybe Gödelization doesn't apply.
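
A minimal sketch of that "probabilistic firing" picture, purely as an illustration of the contrast being drawn (real neurons and real language models are far more complicated, and this takes no side on whether the argument holds):

```python
import math
import random

def fires(inputs, weights, bias=0.0):
    """A toy stochastic neuron: fire with probability given by a logistic
    squash of the weighted inputs, not by deriving anything in a formal system."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return random.random() < 1.0 / (1.0 + math.exp(-z))

# The same stimulus can produce different responses on different trials:
print([fires([1.0, 0.5], [0.8, -0.3]) for _ in range(5)])
```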

Jon:

Ah, so you're saying Gödelization would not apply to a large language model.

Matthew:

Well, I guess it's just not obvious to me that you need an un-Gödelizable system to be intelligent. Like, well,

Jon:

Right. I agree with that. Yeah. Yeah.

Matthew:

who's the best Gödelizer, as some proxy for intelligence. And it's like, how does this dovetail at all with intelligence?

Jon:

Right. It's completely orthogonal. Yeah. Like we were just saying, most humans can't Gödelize. I'm barely conceiving of what it even means.

Matthew:

Yeah. Or like a dog. A dog's not going to Gödelize, but it's clearly intelligent, and learning things, and responding to inputs. And the high-level overview of my point is I just kind of went the exact opposite direction from Hofstadter as to why I don't agree with what Lucas is saying.

Jon:

Yeah, no, I feel you. I think I agree with you as well. We have basically exhausted my notes on this chapter. It's a good chapter.

Matthew:

Yeah, I don't think I had much else. He just talked about kind of more examples of stepping out of the system, and I feel like we got the main thrust. So I think we can leave it there. Well, I guess I will see you next time for chapter 16, Self-Ref and Self-Rep.

Jon:

Nice. That sounds like it's going to be a good one. I'll see you, Matt.

Matthew:

looking forward to it. Bye.