Switch Statement

079: Gödel, Escher, Bach - Ch. 13 - The GlooP is a Lie

Matthew Keller
Matt:

Hello everyone, and welcome to the Switch Statement podcast! It's a podcast for investigations into miscellaneous tech topics.

Jon:

This is our 17th episode on Gödel, Escher, Bach by Douglas Hofstadter.

Matt:

Hello, Jon, how you doing?

Jon:

Hello Matt, I'm doing pretty good. how are you doing?

Matt:

I'm just, uh, yeah, just hanging out. You have any good revelations recently?

Jon:

Uh, well My wife and I went trick or treating yesterday and we were Wayne and Garth from Wayne's World

Matt:

I love it.

Jon:

Which I feel like was this sweet spot of, you know, having fallen out of the zeitgeist, because no one knows who Wayne and Garth are anymore. Like, that's almost completely eradicated.

Matt:

It's lost to memory.

Jon:

Yeah, it's lost to memory. I mean, obviously if you're a certain age, it's, it's still in your memory. Uh, but, but that was the whole point is like all the adults were like, hell yeah, because they, they all,

Matt:

They got it.

Jon:

They got it. They also knew that it was, like, you know, I don't want to say it's obscure, cause, like, you know, it's Wayne's World. Like, it's not necessarily obscure, but I think it was at that right point, that sweet spot of obscurity where, you know, a certain age group really appreciated our costumes.

Matt:

I feel like our, our podcast is kind of like that, where... most people have no idea what we're talking about.

Jon:

First and foremost, us.

Matt:

Yeah. Um, but there's a very small, small set of people who knows what we're saying and appreciates it.

Jon:

Yeah.

Matt:

maybe, it's zero. Maybe it's zero, but I don't know.

Jon:

Yeah, you could define the number of people who listen to our program with a primitive recursive function.

Matt:

Oh, yeah. Oh, it's bounded. It's it's

Jon:

it's bounded.

Matt:

Um, so this is the second half of chapter 13 of Gödel, Escher, Bach. I'm always jealous of the way you say Gödel, because when we use our transcription software, it knows what you're saying. And when I say it like just an unsophisticated American, like "girdle," it thinks I'm saying the word, like, G-I-R-D-L-E. But no, it puts the umlaut and everything, yeah, when you say it.

Jon:

Man, I'm like really proud of that. Well, I think it also helps if you say his whole name. Kurt Gödel. Because it kind of like flows. Like, his name is kind of nice. Like, it sounds very pleasing to me.

Matt:

Good.

Jon:

Kurt Gödel.

Matt:

Last time I talked about looping, talked about boundedness, talked about BlooP. We're just at the BlooP phase. This chapter is called "BlooP and FlooP and GlooP," but we only know of BlooP.
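The "boundedness" idea can be sketched quickly. Here's a minimal Python stand-in for a BlooP-style computation (Python is my choice, not the book's, and the function name is illustrative): the key property is that the loop's iteration count is fixed before the loop even starts.

```python
def bloop_factorial(n):
    """Factorial, BlooP-style: the loop bound is fixed before we enter."""
    result = 1
    for i in range(1, n + 1):  # bounded loop: at most n passes, known up front
        result *= i
    return result
```

Because every loop has a precomputed ceiling like this, a BlooP-style program is guaranteed to terminate, which is exactly the property the chapter later removes.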

Jon:

We only know BlooP. But! We are about to learn about FlooP.

Matt:

Hofstadter does this, does this funny trick, which we've, we've alluded to before, about, like, attaching a number to every single operation in this thing.

Jon:

Yep.

Matt:

Um, and now we're doing this to, to computer programs, where we're, we're basically just taking the length of them, and then that is kind of like the first number, and then we're alphabetizing them, and then that is kind of the second number. And we use these two numbers together to come up with this overall position in a list of every single possible program. Every—

Jon:

single BlooP program that receives one parameter and returns a parameter, a numeric parameter.
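The length-then-alphabetical ordering the two are describing is sometimes called shortlex order, and it's easy to sketch in Python (a toy two-letter alphabet stands in for BlooP's real character set; in the book the "programs" being enumerated are actual BlooP texts):

```python
from itertools import count, product

ALPHABET = "AB"  # toy alphabet standing in for BlooP's character set

def shortlex():
    """Yield every string over ALPHABET: shortest first, then alphabetical."""
    for length in count(1):                       # lengths 1, 2, 3, ...
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)                  # alphabetical within a length

gen = shortlex()
first_five = [next(gen) for _ in range(5)]
# first_five == ["A", "B", "AA", "AB", "BA"]
```

This gives every possible string exactly one index, which is what lets Hofstadter talk about "the n-th BlooP program" later in the chapter.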

Matt:

That's a good, that's a really good point. And so we're just, we're not doing any tests, which would return, like, a Boolean value, true or false or yes or no. It's only numeric. And yeah, so exactly what you said. Um, but so why are we doing that? Why are we numbering all of the—

Jon:

Well, so we are trying to answer the question that we had in our last podcast, sort of the provocation that we ended with, which is: are all number functions primitive recursive? And what we can do by indexing all of these BlooP functions... and, by the way, this number is huge. I mean, the, the number of BlooP programs is—

Matt:

It's infinite.

Jon:

It's infinite. Yeah, I mean, this is basically every permutation of, like, correct syntax that you can possibly perform. But just by defining, uh, this sort of huge list of, of, uh, BlooP programs, we can go down this thought exercise that, uh, Georg Cantor, I don't know how to say his name, uh, he sort of came up with this diagonal, uh, methodology, or I don't know if there's, like, an official name for this, but it's, it's, like, a really cool methodology where you've created this huge table of, like, indexed items. And, and by the way, we're referring to these as, as a BlooP number, where, like, you know, BlooP of n, where n is one, is, like, the smallest possible BlooP program with, like, letters near the beginning of the alphabet. And, you know, BlooP where n is two would be, like, probably an extremely similar program, except where one of the letters was, like, a B instead of an A. Um, so anyway, we have all these BlooP functions. And what Cantor is able to do is put all of them on this massive table and basically look at all of their outputs and use this, this method called the diagonal method, where you diagonally go through all of the outputs of all of these BlooP functions, and you create this tremendous number based on all of those values on the, on the diagonal, and then you increment each of the digits by one.

Matt:

Yeah.

Jon:

And what you've basically done in doing that is you've created an output that none of the other programs can create. Because you've basically proven that, like, okay, the first digit is different from the first one, the second digit is different from the second one. So you've created this new output that none of the other programs were able to create. And so you've therefore proven, basically, that even if you create a set of all BlooP programs, there's some other program, or there's some other result, that can't be created.

Matt:

I think there was a subtle, a subtle difference. My understanding was: they're stealing the output. Like, if you put, if you put the number X into this, like, diagonal function, it will look up the X-th function, and then it will pass that number, it'll pass X, into that function, and then it'll add one to that. And that's the output of the diagonal function for that input.

Jon:

Yeah, that's, that is more correct. My, the way I was describing it was, there

Matt:

That's like the original that's the original diagonal. Uh, yeah.

Jon:

Exactly. The original diagonal I don't know what to call it. Diagonal

Matt:

Argument. A diagonal argument, I think, is how he referred to it.

Jon:

Yeah, so the original, and just to give a little bit of additional texture to why I was describing it that way earlier: the original diagonal argument had to do with differently sized infinities, which is kind of amazing, just an amazing concept. But basically, what he was showing is that if you have a list of all the integers, so all the, you know, integer numbers, and you sort of assign each of those to a decimal number, and then, like I was saying earlier, you diagonally go through all those decimal numbers and take the digit at so-and-so place, and then increment it. You've basically created this new decimal number that isn't yet on the table, and so therefore you've shown that, you know, the infinity of decimal numbers is actually larger than the infinity of counting numbers. It's a completely crazy concept.

Matt:

Right. I think the thing that's confusing is you, you have to do this infinitely many times, you know, which I think is one of the more unintuitive aspects of it. So it's not something you could actually physically do, but you can kind of imagine that, let's say, you know, you've done this operation infinitely many times. You're at the end, and now you've gone through every single integer, cause you're at infinity. But now you have one additional real number that is unaccounted for, right? Because it does not match any number in the whole list. And it's kind of, it's hard to reason about, cause it's like, does it matter? Cause you had to do something an infinite number of times, like, doesn't that...? Um, but anyway, I guess we're treating that as, uh, as a valid, uh, thing to do.
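For a concrete feel of the "vanilla" diagonal argument being described here, a finite Python toy helps (a real proof needs the full infinite table, and the digits below are made up for illustration): build a number whose i-th digit differs from the i-th digit of the i-th listed number.

```python
def diagonal_digit(d):
    """Change one decimal digit so it's guaranteed different."""
    return (d + 1) % 10

def cantor_diagonal(table, n):
    """From the first n digits of the first n listed decimals, build
    digits that differ from entry i at position i."""
    return [diagonal_digit(table[i][i]) for i in range(n)]

# Toy table: row i holds the first digits of the i-th listed real number.
table = [
    [0, 0, 0, 0],
    [1, 4, 1, 5],
    [3, 3, 3, 3],
    [2, 7, 1, 8],
]
new_digits = cantor_diagonal(table, 4)
# new_digits differs from row 0 at digit 0, row 1 at digit 1, and so on,
# so the number it spells can't be any row of the table.
```

In the actual argument the table is infinite, which is exactly the "do it infinitely many times" step that makes it feel strange.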

Jon:

Yeah. No, this is very much in the territory of like, let's add all positive numbers and get negative one twelfth. Like, it's just this kind of like, you really need to be jumping, doing some like, mathematical gymnastics to like, really see the result. But it is, it is fascinating, like when it, when the diagonal methodology like really clicks. And you can sort of see that, like, oh yeah, this, this infinity is larger than this other infinity, like, it's a pretty, um, yeah, it's just a cool sort of facet of math.

Matt:

Right, but, so, so if we're going back to the diagonal argument in the context of functions: instead of just coming out with one big, long number, we come out with a function that is kind of a mapping of all of, of all of these numbers. And what's, what's unique about it is, so, if you think about a function, you can think about a function as being the mapping between its inputs and its outputs, right? Like, and that kind of is its unique identity, right? So, what we're trying to do is come up with, in the, in the same way that we were trying to come up with a number that is not in the set, uh, with the, with the kind of vanilla diagonal argument, we're trying to come up with a function that is not in the set.

Jon:

Yeah.

Matt:

And so basically, instead of attacking, like, each digit and trying to make sure one digit is different, we're kind of making it different on the, like, input-mapping side. So we take one input from every single function, and then we just make sure that it's different from the corresponding function in the list of all functions.

Jon:

Well, right, and, and this gets to what you were talking about earlier, where he defines this function called Bluediag.

Matt:

Right.

Jon:

And it takes its input and passes it to the BlooP function at that same index and then adds one. So you've basically created this function that, you know, has a different result than any of the other functions. And it sort of, you know, provably has a different result.
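Here's a hedged Python sketch of the Bluediag idea just described. The three lambdas are a made-up, finite stand-in for the infinite enumeration of BlooP programs (Hofstadter's actual construction indexes all of them), but the diagonal move is the same:

```python
# Toy stand-in for the enumeration of BlooP programs:
# bloop[n] is the function computed by the n-th program in the list.
bloop = [
    lambda n: 0,       # a constant function
    lambda n: n,       # the identity
    lambda n: n * n,   # squaring
]

def bluediag(n):
    """Run the n-th function on its own index, then add 1 — so the
    result differs from program n at input n."""
    return bloop[n](n) + 1

# bluediag(i) != bloop[i](i) for every i, so bluediag can't be any
# entry in the list: it disagrees with each one somewhere.
```

The same one-line trick applied to the (infinite) real list is what shows Bluediag is a perfectly well-defined number function that no BlooP program computes.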

Matt:

Right. Exactly. So it's, like, it's basically stealing all of the output from every other function, uh, you know, based on its index. And this is, this is why that index is important: now we start to get into the, you know, the mapping, because that, that input has to be a number, and then we have to be able to look up the corresponding function, and make sure that we're not missing any of the possible functions when we do that. Um—

Jon:

Yeah. So it's like, in a world where there is that strict mapping of, like, you know, function to defined set of outputs, then it will always be possible to kind of use this diagonal argument to prove that the, the system is limited.

Matt:

Right. Right, exactly. And, and so, but... and this starts to get into maybe my limits of, of understanding, but basically what they're saying is: this proves that there is at least one function that cannot be represented in, in BlooP.

Jon:

right? Yeah.

Matt:

Because we go back to that infinite argument. We've, we've infinitely gone through every single BlooP program, and we've created a function. Now, that function is meaningless. Like, it's absolutely worthless, it doesn't do anything useful. Um, but that doesn't matter, because we did what we were trying to do, which is we've created a function that does not match the output of any of the existing BlooP functions, or BlooP programs, I guess he refers to them.

Jon:

Yeah. So we're kind of proving that BlooP is just fundamentally restricted. And we even mentioned the restriction in our, our last episode, which was that, you know, BlooP is primitively recursive, meaning all of its loops have fixed bounds. So what we can do is we can liberate BlooP from these shackles, or we can free BlooP, and create a new language called FlooP, which basically stands for "free loop" instead of "bounded loop." And in FlooP, you can have unbounded loops.

Matt:

Right. We get the while loop.

Jon:

Exactly, yes, you have a while-true loop, and, and presumably you can also perform recursion, where you can kind of call yourself within yourself.

Matt:

This is interesting. He doesn't, he does not say that. Well, at least if, if I recall correctly, he doesn't explicitly say that you can now call yourself. But that also doesn't matter, though, because if you do have a while loop, you can turn any recursive function into an iterative function by basically maintaining a stack of operations yourself. Uh, and now maybe I'm getting a little bit technical here, but, um, so even if we don't allow that in FlooP, self-referential procedures, that's still okay, because the while loop is enough. Uh, and we will see that, you know, that is generally enough to, to kind of represent... well, we'll see. I dunno, there's also GlooP. Uh—
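The claim that a while loop plus an explicit stack can replace self-calls can be sketched in Python. This is an illustrative example, not from the book: both functions compute Fibonacci numbers, but the second one never calls itself.

```python
def fib_recursive(n):
    """The obviously self-referential version."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_with_stack(n):
    """Same computation, no self-calls: an explicit stack of pending
    subproblems inside an unbounded (FlooP-style) while loop."""
    stack = [n]
    total = 0
    while stack:          # runs until the work is done — no fixed bound
        k = stack.pop()
        if k < 2:
            total += k    # base case contributes directly
        else:
            stack.append(k - 1)  # defer the two subproblems
            stack.append(k - 2)
    return total
```

This is the standard trick for simulating recursion with iteration, which is why adding while loops alone is enough to get full power.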

Jon:

Yes. Yeah. So, but so FlooP removes this massive restriction from BlooP, but it introduces this new property, which is: there are FlooP functions that, you call them, and they never finish.

Matt:

Right.

Jon:

And so this property that BlooP had, where you could create a strict mapping between, like, a BlooP function and its inputs and its outputs — you can't do that with FlooP, because a lot of times in FlooP it has no output, or it just never stops, or you can't prove that it will stop or not. Uh, and so FlooP, kind of, you know, sure, you've created this more powerful thing, you've sort of unshackled BlooP, but you've also eliminated, uh, this ability to kind of define what FlooP is even going to do.

Matt:

Do you see this as being, like... is BlooP the record player that's so simple that it can't, like, that it can't play a record with high enough fidelity to break itself?

Jon:

Ooh.

Matt:

I'm trying to do the mapping between the examples, and I wasn't sure if FlooP is now the thing that kind of is, is sharp enough to cut itself, uh, basically.

Jon:

I think so. Yeah, I think that's... I didn't really get that when I was reading this, but now that you're saying that, that sounds, uh, like the analogy he was going for. Like, FlooP is finally this really mega powerful tool, but it can also destroy itself. Right.

Matt:

Terminate. Um, and if anyone has ever studied computer programming, like, the alarm bells should be going off: the halting problem, basically, you know. Um, and this is a very famous, uh, problem. Was it, was it Alan Turing that, that originally talked about—

Jon:

I believe so, yeah. I believe

Matt:

And Hofstadter talks about this, but the common understanding is you can't have a program that will determine if another program will terminate. And now, there's a trivial, uh, proof of, of this. Let's say you do have a program that can determine whether or not a program will halt, right? Well, you pass, you know, you pass the program's own code into itself. And if it says it would halt, then you just do a, an infinite loop and you never halt. And, like, that's, you know, that's a contradiction, obviously. So, um, but so the, the, the point of it is just to say that you can't, in general, know if a program will, will halt. Hopefully that wasn't, like—
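That proof sketch can be written down as a few lines of hedged Python (`halts` here is the hypothetical termination tester, which is exactly the thing that can't exist; nothing in this sketch actually runs forever):

```python
def paradox(halts):
    """Given a claimed termination tester halts(program, input),
    build the program that defeats it."""
    def trouble(program):
        if halts(program, program):  # the tester predicts halting...
            while True:              # ...so loop forever instead
                pass
        return "halted"              # the tester predicts looping, so halt
    return trouble

# Feed `trouble` its own code: whichever answer the tester gives,
# `trouble` does the opposite, so no correct `halts` can exist.
```

For instance, a fake tester that always answers "never halts" is immediately refuted, because `trouble` then halts on itself; a tester that answers "halts" sends it into the infinite loop, which is the contradiction described above.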

Jon:

No, I, I think that actually made sense. And, yeah, this definitely gets into, uh, you know, Alan Turing, I think. He, he proposed this idea of Turing completeness, which basically means that within whatever system, you know, typically a programming language or some logical language, there's these primitive operations that you can perform. And in being able to perform all of those operations, you are Turing complete. But one of the side effects of being Turing complete is that there's this halting problem, where you can't possibly, and yeah, just like you're saying, like, you can't possibly know if a program will terminate, and you can't possibly write another program that will determine whether something will terminate or not. Um, and Douglas Hofstadter has another amazing sentence in this, where he's talking about how, you know, FlooP is more powerful than GlooP, or, well, not GlooP, but BlooP. FlooP is more powerful than BlooP. And he says, "in one swell foop," he's talking about introducing this new ability to have, like, unbounded loops. And I just thought that was amazing. But, like, typical Douglas Hofstadter, like, so incredibly nerdy that I just rolled my eyes, but, uh—

Matt:

Yeah. Also earlier, um, because in a previous episode, we, he had something called the Contracrostipunctus. And then he, he describes that, uh, you know, because of Cantor — we were just talking about Cantor's trick with the, the diagonal argument — and then he was like, oh, the Contracrostipunctus could well have been named the Cantorcrostipunctus, which of course, like, Cantor is, you know, is an anagram of Contra. So, uh, so it's just another one of those things where it's like, man, you thought about all of this for way too long. I'm impressed, but I'm also like—

Jon:

you took too much time. Yeah,

Matt:

Yeah. Yeah. How long did it, uh, take you to come up with that, that joke?

Jon:

Actually, speaking of that, I have to cut back to the dialogue, uh, just because there was this crazy part in the dialogue. So at the very end of the dialogue, they introduced this, like, I think it was a music box or something. It was some kind of box, and it had this letter in it, you know, where it had, like, the list of greatest mathematicians. Um, and so it had, like, Boole or Newton or whatever, just a bunch of mathematicians, and diagonally the letters were highlighted, and if you took those letters and, like, incremented them, so, like, turn a B to a C, turn a Z to an A or whatever, it would spell Cantor.

Matt:

Oh,

Jon:

And so this was, like, the throwback to the dialogue. But then immediately after this happened in the dialogue, a bunch of, like, cops burst into the room, and they're like, "Oh, Achilles, you stole this music box." And it had the coins that they had given Bach for writing the Goldberg Variations, and Achilles was like, "No, it was Tortoise," because Tortoise was, like, holding the music box in his hands at the time. So, basically, the whole entire dialogue was just so that Achilles could get Tortoise to, like, pick up this music box, and then he could incriminate him for having stolen it. And that's, like, how the dialogue, that's how the dialogue ended, and I was just like, oh my god, this is... it's pretty rough. Like, Achilles just completely, like, framed this guy.

Matt:

Man, maybe I should start, reading the dialogues. That sounds pretty exciting. Uh,

Jon:

I don't know, I mean it was mostly like Yeah, hard to get through, but that one part at the end was just like, dang, like he just suddenly, uh, got real.

Matt:

Yeah, that was cruel. Achilles framed Tortoise.

Jon:

He framed Tortoise, yeah, and

Matt:

That doesn't seem like something Achilles would do.

Jon:

He's been fairly nice this whole book, and now all of a sudden he's like revealing this dark side. It's very strange. Yeah.

Matt:

We just introduced this FlooP, which, it solved some of our problems, but it introduced, uh, some new ones. But then Cantor comes along again with his diagonal argument, and it happens again, right? We're able to do that same trick, where we put every red program in a list, and we create the Reddiag, and now we have some functions that can't be produced by FlooP programs. Is that right?

Jon:

Yeah, this is where stuff got a little fuzzy for me. We went through this exercise, very similar to BlooP, where we created this, this sort of table. But my understanding was also that for FlooP, since not every program actually produces a value, it kind of, like, breaks the table. So it's like you, you can sort of perform the same diagonal exercise, but you also sort of fundamentally can't perform the diagonal exercise.

Matt:

Right. Right, exactly. So that's, that's a good point, because we need to filter out all of the... he refers to them as green programs, which are programs that, for at least one of their inputs, never terminate.

Jon:

Right.

Matt:

And then you get these red programs, which always terminate, but the problem is we don't, we don't even have a real way to, you know, to, to evaluate that. Anyway, why do we even do this exercise? You know what I mean? Like, I guess it's unclear to me why he even has us do this, because it's predicated on the existence of a termination tester, which we know can't exist.

Jon:

Right, well, and I think what he's, you know, he's trying to posit is that BlooP has these, these issues; we were able to fix it with FlooP by making unbounded loops. Is there this third thing, GlooP, which fixes the issues of FlooP, which is, you know, basically the halting problem? And what this chapter, I think, is aiming to prove, or at least to, you know, create a solid argument for, is that GlooP is not possible. FlooP is, like, the maximum achievable logical tool. And, and in, in admitting that, that FlooP is the, the best logical tool we have, you're also admitting that there is this vast territory of green programs that we will never really know, you know, where they are, and there's also this sort of new set of Reddiag truths that we can never express. So it's sort of a restatement of Gödel's incompleteness, sort of showing incompleteness with logic itself, which I guess is what Gödel's incompleteness originally shows as well, uh, but you're showing it with these formal systems for, you know, performing logical operations. You're basically showing that those formal systems, like, reach this fundamental limitation that we can't figure out a way to get past.

Matt:

Right. Right. And, and I guess when we're thinking about those red, red programs... so red programs are the ones that terminate.

Jon:

Right.

Matt:

And yes, we don't have a way to, to figure out which ones those are, but those exist, right? Even though we can't, we can't write that termination-tester program, like, technically speaking, it's, it's some property that all programs have. But so, if you could hypothetically assemble them, then even then, you could still do this Reddiag trick, which would come out with a function that you couldn't express with FlooP. And so that's, but that's what's, that's what's weird to me, though, is that's saying that there is a function that can't be expressed with FlooP.

Jon:

Yeah. Yeah, exactly.

Matt:

Man, but, but even if you toss aside the halting problem, cause we've kind of, like, hand-waved away the set of programs that don't halt.

Jon:

Yeah, you're still left with sort of unrepresentable truths.

Matt:

And now, so you're, so you started to use this, this word, truths. Does he describe, describe them as truths?

Jon:

I don't know if he does in this chapter. I guess I'm sort of, you know... my take, when Douglas Hofstadter is talking about these formal systems, is, like, these are human utilities for working with truth. You know, like, humans have these scientific, uh, tools. Like, math itself is, is simply a tool. Like, math is this totally abstract thing, like, there's no realness to it whatsoever, but it's a tool for working with real things and, like, coming up with new real things.

Matt:

Right.

Jon:

And so when I hear him say that these formal systems have these limitations, what I am hearing is that our ability to discern truth is limited.

Matt:

I think you're right, because we are going to get there. Cause that's the whole point of Gödel's incompleteness theorem, is that there's some things that are true that can never be proven in, you know, in these formal mathematical systems.

Jon:

Yeah.

Matt:

But I guess I just don't know if we need one more, like, step, too. Because right now, we're just talking about functions, and I don't know how we map... how do we get from functions, which feel like they're just these atomic objects, which, like, there's not, like, a truth statement to them, to the function itself, you know what I mean?

Jon:

Yeah, yeah. Well, and, I'm totally, like, editorializing, so it's, so I am definitely sort of creating my own, like, meaning from, from this chapter. I mean, the chapter is much more, it's much more of just the discussion of, like, the capabilities of these languages. It's not really, I don't think he ever does that leap of, like, you know. Talking about truth or talking about new scientific discovery or whatnot.

Matt:

I think he's going to get there. Like, I think he has to, because that's the whole point: that there are true statements that can never be proven. I guess we'll just have to keep reading to see how, how he finally gets there.

Jon:

Yeah, I'm sad that there's no gloop.

Matt:

Yeah, it would be, it would be cool. I, I, I almost don't accept it. I'm like, there's gotta be a GlooP, right? Like, uh, just, just make a big lookup table. All right, well, I will see you next time for chapter 14.

Jon:

See you next time Matt.