Switch Statement

038: The Design of Everyday Things Ch 7: Welcoming Our Robot Overlords

July 07, 2023 Jon Bedard Season 3 Episode 13
Transcript
Matt:

Hello everyone, and welcome to the Switch Statement podcast. It's a podcast for investigations into miscellaneous tech topics.

This is episode 13 in our series on The Design of Everyday Things by Don Norman.

Matt:

Hey Jon, how's it going?

Jon:

Hey Matt. How are you?

Matt:

I'm doing all right.

Jon:

Do you think that we will turn into robots soon?

Matt:

I am very excited for the technical fusing of humans and technology.

Jon:

I am stoked. I would put metal into my brain right now today.

Matt:

Call up Elon, I think. Yeah, I think he'll, I think he'll do it.

Jon:

Yeah, just not Elon. I feel like I would rather have, like, Peter Stormare from Minority Report put something in my brain than, uh, sorry, that was a really weird reference. The reason I'm mentioning this is because there was a really interesting part of this chapter, which is design in the world of business, uh, where, you know, we've already sort of covered some of the interactions between running a real business and trying to design things. And now we're gonna talk about, um, you know, kind of the responsibility of design, um, how humans and culture interact with design. And something he talks about is the pace of technology versus the pace of culture, and generally just the pace of human evolution: the pace of technology is very, very fast; the pace of culture, and human evolution generally, very, very slow. But I thought this was interesting because I think we might be entering an age, maybe in the next few decades, where the actual pace of human evolution is going to almost be, like, industrialized, where we're gonna be putting little chips in our brain and kind of, yeah, we're gonna be like in the

Matt:

We're going to be able to upgrade, uh, you know, human cognition at some point.

Jon:

Exactly.

Matt:

Um, this is, this is very interesting. Uh, I wanted to get back to a point that you said, ah, and now I'm forgetting what that point was. Um...

Jon:

Human evolution, culture, slow moving culture.

Matt:

Oh yeah, okay. So actually, the point, maybe this is a point for later, cause it's about his book. Maybe, yeah, I'll say it if it comes up.

Jon:

I found that section to be hilarious, by the way.

Matt:

One of the things that was very interesting to me about this section is that he wrote this book, originally The Psychology of Everyday Things, in 1988.

Jon:

Mm-hmm.

Matt:

And I'm sure if you wanted to, you could buy the original book and read it, and I'm sure there's, like, a lot of value to that book. In the early two thousands, or I don't know exactly what year it was, he spent a tremendous amount of time to create a virtual copy of his book. He paid, like, a ton of people; it was incredibly expensive. And now it's impossible to consume. Like, there's literally no computer in the world that could run it right now. I mean, maybe you could boot up an old version of, like, Mac OS or something to run it. It was just crazy, because it just makes me think about the fragility of technology.

Jon:

Yeah.

Matt:

Because if you're building something that's based off of something that hasn't existed for very long, there's a much higher likelihood that, like, it's not going to exist for very long. And, you know, so it just makes me think about, like, the durability of anything, you know?

Jon:

No, absolutely. I mean, we need to be burning everything to Blu-ray discs and putting it in, like, a Scandinavian seed bank 10,000 feet underground.

Matt:

Well, this is what I'm saying. It's like, write everything down. You know, I have this reMarkable, but it's mostly just, you know, written notes, and I'm thinking, I want to just print it out, you know what I mean? Just have paper copies of these things. Because who knows, this thing might die, I might lose access to it. But if you have it as the paper written word, like, I dunno, it's never gonna go. Like, it's never going to become inaccessible.

Jon:

Right, but you'll just have a closet filled with papers. You need a good filing system.

Matt:

Yes. Yes. Um...

Jon:

I thought that section with the book was hilarious though, where he

Matt:

is that

Jon:

First of all, just, like, basically the way it worked, or, I don't know, you can correct me if I'm wrong, cuz I might be misremembering a little bit. But it was like, you could read the book in digital form, you could flip through the pages or whatever. It's kind of like a Kindle, almost. But then if you, like, didn't understand a section, Don Norman himself would pop up in the corner. He would kind of peel back the page and pop up and describe it to you, or offer, you know, little anecdotes or something.

Matt:

If he detects misunderstanding, it's like, no, that's a signifier. And then he just kind of, like, you know, he's like a little troll that pops out.

Jon:

Yeah, he'll pop out, rattle on about doorknobs for 25 minutes, and then go back into his little hidey-hole.

Matt:

Dude, I would pay for that. Like, honestly, I think it would be very entertaining to talk with Don Norman. I'm sure he could just ramble for an infinite amount of time about doors.

Jon:

Him and Jesse Schell, I want them to have, like, a little anecdote battle, where they just go back and forth and, I don't know, quote Cicero and just talk about weird, you know, weird shit. I think

Matt:

I do think Jesse Schell's quote game would be way, way stronger.

Jon:

Oh yeah, no, Jesse Schell's quote game was utterly amazing, off the charts. Like, I was a really big fan of Martin Fowler's quote game, uh, because I think I'm just partial to, like, engineering quotes. They always sound hilarious to me. Uh, but yeah, I think Jesse Schell has him beat. He just has a much wider

Matt:

Well, the breadth. Yeah.

Jon:

The breadth. Um, but going back to what I thought was so funny about the digital book thing: he basically blamed technology for the failure of that book. Like, his claim was, you know, oh, technology hadn't caught up to my idea yet. Like, my idea was ahead of its time, basically. Which, I don't know why, I mean, maybe I'm developing this narrative against Don Norman, but to me it just read like, like, an excuse, basically.

Matt:

These these podcasts are

Jon:

Like, the book failed, and he's just making an excuse.

Matt:

These podcasts are basically one long character assassination of Don Norman. Uh...

Jon:

I know, which I feel bad about, cuz I do like this book. I

Matt:

No, I don't want it to be mistaken: we both have a lot of respect for Don Norman, and, like, the vast majority of this book is really valuable.

Jon:

Absolutely. I I think

Matt:

more fun to

Jon:

Like if I, oh, sorry. I think what I would say is, I just find him to have this, like, curmudgeonly, you know, outlook, which I find amusing. I don't think it, um, undermines the larger points that he's making, cuz I do think there's some very valuable insight that he's offering. But he also just has this, like, you know, Clint Eastwood from Gran Torino type thing that I just enjoy. So it's not that I'm making fun of him, I'm just enjoying his material.

Matt:

Um, but about the future of books. I mean, obviously, so this book was written in 2013, if I'm not mistaken. And so obviously he couldn't have known at the time how good chat agents would become.

Jon:

Yeah.

Matt:

And I do think that the future of books is actually newly very interesting, especially for something that's attempting to teach you. Like, can you sell someone an AI agent that has been really trained on the contents of a book? And more so, with an AI agent, now you're free to have a vastly larger pool of text. Like, when you write a book, you have to decide what to put in or not put in, what have you. But you could almost have a storyteller, and you could go on tangents if the person who is interacting with the storyteller has a question or wants to. Like, that seems amazing. And it actually goes back to his reference to Socrates, where, just to explain, Socrates's complaint was that if all of your knowledge is written down, the text can't debate the reader. Which I think is an interesting idea, but obviously, you know, his viewpoint wasn't the one that ultimately won in society. But I almost think it's kind of coming back around, because obviously, if you could have a book that could debate you, that would be better; we just haven't had the technology until now.
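(To make that idea concrete: here is a minimal sketch of such a "book that can debate you," assuming a simple TF-IDF retrieval step over the book's passages plus a chat-model call. The `ask_llm` parameter and `ask_book` function are illustrative names, not any particular product's API.)

```python
# Toy sketch of a book you can question: retrieve the passages most
# relevant to the reader's question, then let a language model answer
# (and go on tangents) grounded in those excerpts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_index(passages):
    """Index the book's passages for similarity search."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(passages)
    return vectorizer, matrix

def ask_book(question, passages, vectorizer, matrix, ask_llm, k=3):
    """Answer a reader's question from the k most relevant passages.

    `ask_llm` is a hypothetical stand-in for whatever chat-completion
    call is available; everything else is plain scikit-learn.
    """
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, matrix)[0]
    top = sorted(range(len(passages)), key=lambda i: -scores[i])[:k]
    context = "\n\n".join(passages[i] for i in top)
    prompt = (
        "Using only these excerpts from the book, answer the reader, "
        "and push back if they seem mistaken:\n\n"
        f"{context}\n\nReader: {question}"
    )
    return ask_llm(prompt)
```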

Jon:

Yeah. Yeah, I think actually one of the more prescient things in this book is the end of this chapter, where Don Norman talks about the rise of the small. And, you know, he spends a lot of this chapter talking about how humans are tool users. Uh, you know, we're just not good on our own. We don't have claws, we don't have sharp teeth. So we have to develop tools, spears, axes, and that's how we have always operated. Humans have just always been tool users. And the rise of the small is basically this concept where, you know, it used to take teams of designers and teams of manufacturers and, like, material experts in order to come up with even trivial products. You know, you want a Pyrex beaker? Oh, that takes a hundred people and, like, huge technological innovations. But these days we have LLMs, we have all sorts of technologies where you can basically, on your own, act as what used to be a team of people. It's like, do you need a designer? Maybe not. Maybe you can just use some AI image generator and give it prompts. Like, I have no artistic skill whatsoever, but maybe I can just use AI imagery for my next video game project. Um, if you're not a programmer, maybe you can use an LLM to fill that need for you. Like, maybe you're a designer who wants to build a video game. Um, so yeah, I thought this section on the rise of the small was kind of especially true today, given the rise of these AIs.

Matt:

Yeah. And, like, it's only going to get more so. And I think, if there's a concern that I have about this upcoming AI revolution, it's how companies are going to respond when they're able to do their work with far fewer people. Like, yes, it's going to empower individuals to be able to do things they never could before. But it's also going to enable corporations to cut a bunch of their staff, probably. So, you know, I'm curious how that will all shake out.

Jon:

Yeah, me too. I mean, a lot of folks talk about, you know, the scariness of artificial intelligence, and while I am generally optimistic about artificial intelligence, there is definitely a dark side to it, and I think you're putting your finger on it. You know, a lot of folks talk about kind of a post-scarcity society, where we're generating enough resources that people don't have to work if they don't want to. But that's not how it's going to be, at least in the near term. You know, the way our society works today is, if you can enrich yourself, that's what you do. You're not gonna just give that wealth to other people. I mean, the wealth stratification in the United States is absurd. So I do really worry about that. I worry about a huge part of the workforce just losing their jobs, and there being no replacement for that, and them just not having an ability to provide income for themselves.

Matt:

I definitely agree with that. And I think if there's a split, like if we're separating the population into two pools, the group of people for whom this upcoming AI revolution is going to be beneficial versus detrimental, I think it's about self-motivation. If you have ideas and you are able to make those ideas happen, you know, you're able to take initiative, I think the AI revolution is going to be amazing for those people, because they're going to be able to get done stuff that would take, like you're saying, a hundred people. You're gonna be able to make a beaker, as if by magic, with a single person. But then there's also a group of people, and I think there's a risk of moralizing and saying that those other people are bad, but there's this vast group of people who are like, I don't want to lead. I don't have an idea for a business, I don't want to have an idea for a product, I don't want to act in that role. But those people shouldn't be punished for that. They just want to, um, they want to show up, they want to do a job well, they want to feel fulfilled by that, and then they want to go home. And it's like, that should be okay.

Jon:

I agree. I feel like we fall into that category in a lot of ways. You know, it's like we've been honing this software engineering craft for so many years, and that's kind of what I want to do for my eight hours a day.

Matt:

I think we're, I think we're on the boundary, you know. I think there are some people who, like, they have to always be their own boss, and if they're not building something themselves, they're just fundamentally unhappy. I think you and I can operate in the cog role. Um, and maybe that's a slightly, uh, negative framing. But I do think we have examples where we're able to take initiative. We have an idea of our own and we can put that into practice.

Jon:

Yeah, yeah, we have, we have done that, um, with the help of others.

Matt:

Yeah. Yeah. Yeah, that's definitely true. Um, so was there anything else that you wanted to talk about in this chapter?

Jon:

There was one part I thought was interesting, and yeah, we've really jumped around in this chapter, which is, you know, largely my fault. But, uh, there was a part where he talks about how the best human plus the best computer, I really struggled to get that one out, uh, is not as effective as average humans kind of diligently working well together in tandem with computers. You know, like, he talked about Garry Kasparov versus Deep Blue, which is this famous chess match where Garry Kasparov got beaten. It's the first time a computer beat a world chess champion. And, um, yeah, he just talked about how you don't necessarily need geniuses, and you don't necessarily need extremely powerful computing power. You can get a lot of mileage out of average human beings and fairly average computing power. And I thought that section was really interesting, cuz this is sort of something, and, you know, I'm probably stretching this a little bit, but this is something I have come to believe in my career: that I would rather have a team of, like, fairly above average human beings who are just kind of diligent, work well together, you know, willing to put in the hours. I would rather have that than have, like, one or two absolute rock stars who can, like, I don't know, you know, change the world, but they're prima donnas.

Matt:

I think it is beneficial to have a team where everyone is operating at roughly the same level. Like, if you have, uh, Jeff Dean on your team, and then you have someone who's more of just, like, a standard engineer, you can wind up with that standard engineer performing worse than if they were on a team with other like-minded, you know, similarly skilled people. You have this engineer who's running circles around everyone, and they look great to upper management, it's like, look at how much stuff they're doing. But then everyone else is like, this person is changing stuff constantly, I don't understand half of it. And so, you know, I do think there's something to be said for a group of people who are all kind of, you know, working cohesively, in a way that, um, just, yeah, maximizes the potential of everyone.

Jon:

Right. I, I like that word, cohesively. I think that's one of my top five words to, like, describe my ideal team. You know, just that, cuz, you know, a lot of times in teams there's just tension between members of the team. Um, and it's so important to just have people who can kind of agree to disagree. Um, you

Matt:

Yeah, disagree and commit. Uh, I mean, to make another...

Jon:

Yeah. That's the Bezos system.

Matt:

Um

Jon:

uh,

Matt:

Um, okay. But I definitely agree with that. Um, I did want to talk a little bit about the moral obligations of design, which we touched on in the last chapter, our last episode. He just talks about how you get these business-based decisions, like, make a new phone every year, make a new car every year, like, very intentionally change the whole style of your clothes. You know, he has this anecdote of how Henry Ford looked at Fords that failed, and the engineers were like, oh, okay, he wants to find the parts that failed and make those better. But it was like, no, no, no, I want to find the parts that didn't fail, and I want to produce those more cheaply, so that they all fail together in one massive, like, explosion. Um...

Jon:

Which, sadly, was one of the more brilliant discoveries.

Matt:

That's why he was, that's why the Ford was so successful. Don Norman proposes an antidote to this, which is, like, subscription services, where you're paying a monthly fee to the company.

Jon:

Mm-hmm.

Matt:

And then they can just produce the best, most durable goods possible, because you're still paying them. But...

Jon:

I,

Matt:

I hate subscription services.

Jon:

Too. And I just, I don't think that'll work psychologically. Like, I'm sure you can come up with some ideal framework where that makes a ton of sense. But I'm just thinking about human nature and the psychology of a subscription service, uh, I just don't think it could work. You know, my way of doing this, and I've experienced this in my life, I'll just use an example: like 15 years ago I got really into coffee and French press, and I bought a burr grinder from KitchenAid, and it cost $400. And at the time, for me, that was, like, a tremendous amount of money. Um, but that thing was amazing. I could make perfect French press coffee with that thing, and that thing worked. I literally just threw it away like three months ago, because it finally broke after 15 years of service. And, you know, you could argue that, oh, KitchenAid only made $400 off of me over that 15-year period. But I think you could also argue that, like, you know, I've told people about that product. Like, I'm not an evangelist for them or anything, although I guess here that's exactly what I'm doing. Uh, but it was just such a satisfying product to use, and it just did the job really well. And it cost way more than the average coffee grinder, uh, but I was willing to pay that price because of how well it worked and how it just did the job.

Matt:

Right. Like, a business that is going to sell a grinder for 80 bucks, uh, that is going to last you two years, would have made out better. Like, they probably spent less to make it, and then, you know, they probably made more overall profit, and they were able to sell, like, seven of them to you. Um...

Jon:

Exactly. Yeah.

Matt:

Yeah, I just, um, I don't know, I don't have a good answer to that. It's like, I'm basically just lamenting. Um, because, I mean, there are some people who are like, well, this is wrong. And it's like, yeah, I don't disagree that it's wrong, but it's also kind of a game theory outcome. And I generally don't believe in attempting to, like, force it. Like, if you try to make a law, oh, you're not allowed to make products that fail earlier than they should, you know, it's like, that's ridiculous and unenforceable. And I feel like a lot of legislation kind of smacks of that, where it's just attempting to be like, game theory be damned, I'll force you to do something. You really need to change the rules of the game, so that, uh, you know, so that the optimization aligns with what is best for the environment and what have you.

Jon:

Right. Well, this is why I am a huge fan of, you know, incentivizing good environmental practices, which can be done through legislation. You know, like, um, you literally charge companies for their carbon output. This is what, in economics, they call an externality, where, you know, your company, like Chevron operating in South America or whatever, is just literally destroying the entire ecosystem of the Amazon River, and nobody cares, because they're not being charged for it. Well, if the US government had charged them 15 billion for those damages, which, if you're really looking at it, probably amounted to way more than that in the grand scheme of things, Chevron would've been a lot more hesitant to do those things. So the key is just putting in place legislation that charges companies for the damage that they're causing to the environment. Um, and I'm even okay with things like carbon offsets, where companies can pay each other. You know, like, Tesla is producing cars, but they don't have a large carbon footprint, so they are able to sell that carbon footprint to other companies. Um, I'm even okay with that, because I feel like that allows change to be more incremental. Um, but yeah, I think

Matt:

Cap and trade.

Jon:

Exactly. Exactly. So I think we should be putting this type of legislation in place, because it's just the way our system works. Like, unless companies are incentivized to do the right thing, they're just not going to.

Matt:

I think it's a really, I think it is a really tricky problem. And, you know, a lot of people smarter than I am have attempted to come up with a system, and, you know, these companies just find a way to immediately circumvent it. Uh, so, you know, I think there's a viewpoint which is, like, the problem is capitalism itself. Which, like, I don't fundamentally disagree with. I think my concern is that the absolute outcome of the alternative would be worse, frankly.

Jon:

It's also just, it's not taking into account enough of the nuance of the situation.

Matt:

Or, yeah, just, like, the reality. It's like, okay, well, this is what we have right now.

Jon:

Right,

Matt:

And actually, maybe this comes back full circle to the, uh, radical versus incremental thing. You know, it's like, yeah, there might be a better system, but is it worth it to, like, destroy the system that we have and bet on this unproven other system that will almost certainly have its own faults?

Jon:

right.

Matt:

So, all right. Well, I didn't have anything else. Were there any other, um, good nuggets?

Jon:

I think on that note, you know, now that we are letting Skynet take over the world, I think that's a good note to end the podcast on.

Matt:

As long as we become Skynet, there's no problem.

Jon:

Yeah, as long as Skynet's got its tendrils in me and I'm jacked in, I'm good. You know, it doesn't need to be real. Like the guy eating the steak in The Matrix, like, I'm fine with that. I'm

Matt:

And he was probably enjoying that.

Jon:

He looked happy.

Matt:

I, for one, can't wait to just synthesize any reality that, uh, I want to, and I'll just be a battery for the robots.

Jon:

Yeah, I don't, that's easy. I mean, you don't even have to do anything. You're literally just in the chamber, sleeping. That's

Matt:

Although, I don't know, I mean, Keanu Reeves was working at some stupid job in the Matrix. Which is like, robots, I dunno, give me something good to do.

Jon:

Yeah, like, why? Yeah, exactly. Like that whole concept: why doesn't everyone just have their own reality where they are the king of the universe? That's how, that's how it should be.

Matt:

Yeah. All right, let's wrap it there, because we're going off the deep end. Um, well, this is the last episode in this series. I think we have an idea about what we want to do next; I don't know if we know exactly. Do you want to...

Jon:

we're gonna get real nerdy. We're gonna read some white papers.

Matt:

We're going to lose our six listeners, who, uh...

Jon:

thank

Matt:

What is a white paper? Isn't most paper white?

Jon:

Um, some paper is pink. I've seen pink paper.

Matt:

Oh no, that's a bad sign. I...

Jon:

pink slip.

Matt:

Wait, is that, a pink slip is, like, what transfers the ownership of a car, right?

Jon:

Or you're in trouble with the principal of a school?

Matt:

Oh, okay. So we're going to talk about paper. Is that where we're at?

Jon:

Yep, we're talking about paper. Um, we're talking about white papers. Uh, in particular, I think we said we were gonna read "Attention Is All You Need," which is a pretty famous white paper in the world of computer science that discusses this concept of attention. Which I don't know what it is yet, because I haven't read the white paper, but it's part of what makes these LLMs so powerful. It's basically a component of a neural network that enables, uh, a large language model to focus on the right words of, you know, a sentence, in order to make the model more powerful. And that description is probably inaccurate, but we are going to read the white paper and correct it.
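(For anyone curious ahead of that episode: a minimal sketch of the scaled dot-product attention the paper is named for, in toy NumPy form rather than the full multi-head transformer machinery, looks roughly like this.)

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q @ K.T / sqrt(d_k)) @ V, per "Attention Is All You Need".

    Q, K, V: (seq_len, d_k) arrays. Each output row is a weighted
    average of the value vectors, weighted by how strongly that row's
    query matches each key -- i.e., which words the model "focuses" on.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V

# Self-attention over 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```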


Matt:

Fantastic. All right, well, I will see you there, Jon.

Jon:

See ya.