Humanist technologist John Maeda understands the design past and sees the design future. What can he teach us about the new computational universe that is changing our world?
You can also listen to this podcast episode on Designer Sketches.
Lily pads on a pond, in a papier-mâché style, generated by Microsoft Designer
When John Maeda’s book, How To Speak Machine: Computational Thinking For The Rest Of Us, was published in November 2019, OpenAI’s GPT was on version 2 and most of us hadn’t heard of it yet. It wasn’t until GPT-3 appeared in the summer of 2020 that it started to seep into the public consciousness. But as a designer and technologist with a long history of working with AI, Maeda had already glimpsed the future, both conceptually and practically.
John Maeda is an interesting hybrid of engineer and designer. After studying computer science at MIT, he completed a PhD in design at Tsukuba University’s Institute of Art and Design. With an early interest in combining computers and art, some of his works, like the Morisawa 10 Poster, are part of the permanent collection at the Museum of Modern Art. He returned to MIT as a professor in the Media Lab, working to foster cross-competencies between designers and engineers. Then he served as president of the Rhode Island School of Design (where my brother studied architecture). After RISD, he made a shift into the commercial world, taking influential positions at Automattic (makers of WordPress), Kleiner Perkins (a venture capital firm in Silicon Valley), and Publicis Sapient (a global consulting company). Today, he has what might be one of the most important design jobs in the world: Vice President of Design and Artificial Intelligence at Microsoft.
Now that Pandora’s box has been opened and the explosion of humanistic AI has grabbed everyone’s attention, we’ll probably see an entire genre of books and other writings on the coming AI-pocalypse. But Maeda’s message in How To Speak Machine is one of hope. By outlining the foundational concepts of how machines work, he encourages us to be a little less afraid of them. In each chapter, he patiently and eloquently describes the qualities of these new digital machines and how they’re different from the old mechanical ones. They run infinite loops, they get incomprehensibly large, they effortlessly track everything, they reinforce what they’re fed, and they do all this without ever being fully completed. As you come to better understand these qualities, you realize the exciting potential and concerning risks of AI.
When I first read this book back in 2020, I grasped the implications and started recommending it to every designer I was managing or mentoring. But it wasn’t until I started seeing those animated GIFs on Twitter of AI generating working code and aesthetic graphics from simple text prompts that it sank in how impactful this was going to be and how fast it was going to happen. That was already a few years ago, but the rate of acceleration still seems to be increasing. Anyone working in technology but not actually writing code should read this book, especially designers. It will help you better understand not only the language of machines, but the culture of our fellow human software engineers. As we enter this new phase of technology, engineers, machines, and the rest of us will have to communicate and work together to write a happy ending to this chapter of human history.
Before we dive into speaking machine with John Maeda, I want to give a shout-out to Mr. Tom Froese of the Thoughts on Illustration podcast. We connected through Substack over our Paul Rand episodes, and he was gracious enough to recommend Designer Sketches, resulting in a number of new subscribers and followers. So thank you, Mr. Tom Froese! I hope I can return the favor somehow, and I hope I can meet the expectations of my new audience members.
What, or who, are these machines?
To start us off, Maeda tries to help us wrap our heads around what computational machines are, why they’re different, and why it’s important. In a 1999 interview, David Bowie referred to the internet as an “alien life form.” What would he say about ChatGPT today? Here’s how Maeda visualizes the interconnectedness of the machine:
Today we’re at the point when holding any digital device is like grasping the tiny tentacle of an infinitely large cyber-machine floating in the cloud that can do unnaturally powerful things.
What makes these machines different, and consequently the design of them, is that although we’re writing code that we can read, and it’s running on machines we can touch, when software runs, it becomes alive and lives inside a world that is largely beyond human comprehension:
On the one hand, program code is what lies at the heart of software and you can read it, but that’s like confusing the recipe for cake with the cake itself. The software is what comes alive inside the machine due to the program codes—it’s the cake, not the recipe. This can be a difficult conceptual leap.
Where it starts to get a little scary is when you move beyond “logic-based” code and into the realm of neural networks, the kind of code powering the AI we’re seeing more and more of every day. In this case there is no recipe, only “switches” that learn patterns through many, many repetitions. If you’re not getting the results you expect, it’s less about fixing a bug and more about teaching the machine differently.
There’s no actual computer code when it comes to a neural network—there’s just a black box that learns patterns.
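To make the “switches” idea concrete, here’s a toy sketch of my own (it’s not from the book): a single artificial neuron learning the logical AND pattern purely through repetition. Nobody writes the AND rule anywhere; the weights just get nudged until the pattern emerges.

```python
import random

random.seed(0)
# Three "switches" (two weights and a bias), set randomly to start
w1, w2, bias = random.random(), random.random(), random.random()
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(100):                    # many, many repetitions
    for (x1, x2), target in examples:
        output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        error = target - output
        w1 += 0.1 * error * x1          # nudge the switches toward the pattern
        w2 += 0.1 * error * x2
        bias += 0.1 * error

for (x1, x2), _ in examples:
    print(x1, x2, "->", 1 if w1 * x1 + w2 * x2 + bias > 0 else 0)
```

If this little network got AND wrong, you wouldn’t hunt for a buggy line of logic; you’d show it different examples or train it longer. That’s the shift Maeda is describing.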
This is why it matters: it’s going to be everywhere. And unless you’re going completely off the grid, you’re going to be impacted by it and will probably have to interact with it at some level. You can function in modern society without things like a mobile phone or a debit card, but because so much economic infrastructure has been built up around them, it’s harder and harder to get by without them. While we don’t have to learn to code per se, Maeda contends we do have to learn how to speak machine.
But now that computing impacts virtually everyone at the ultra-fine level of their daily micromovements and at the scale of the entire world, it is more urgent than ever to know how to speak both machine and to speak humanism.
These are the major concepts that Maeda says define machines. I won’t cover all of them in detail, but instead focus on a few that I find particularly interesting. I definitely recommend reading the book for the full picture.
- Machines run loops
- Machines get large
- Machines are living
- Machines are incomplete
- Machines can be instrumented
- Machines automate imbalance
Let’s look at the first couple, which, though conceptually challenging, are a bit more operational. Maeda starts by telling us the story of how he first learned to code. He wrote a program to help his mom track sales at her tofu shop, but even though the commands were repetitive, he typed them out line by line. Then a teacher showed him the simplicity of the loop:
But I realized that if I instead could think in LOOPS the way a computer natively thinks, it could get my work done with elegance—automatically.
Instead of saying something like “do this, and then do it again for the next thing” and so on, you simply say “do this however many times you need to and get back to me when you’re done.” And the machines just do it with perfect precision.
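Here’s a minimal sketch of that difference in Python (rather than the language of Maeda’s childhood program, and with invented tofu-shop sales figures):

```python
# The line-by-line way: every sale handled by hand
print("Sale 1: $2.50")
print("Sale 2: $3.00")
print("Sale 3: $1.75")

# The loop way: describe the repetition once and let the machine run it
sales = [2.50, 3.00, 1.75]  # invented sample data
for number, amount in enumerate(sales, start=1):
    print(f"Sale {number}: ${amount:.2f}")
print(f"Total: ${sum(sales):.2f}")
```

Three sales or three million, the loop is the same few lines.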
Next, you get into scale. Most of us are familiar with the term “big data,” but it’s not easy to grasp the magnitude of the sizes we’re talking about, especially when you get into exponential curves. Maeda recounts a riddle: if the number of lily pads on a pond doubles every day, and the pond is completely covered after 30 days, on what day is the pond only half covered? A lot of us would instinctively answer the 15th day. But it’s actually the 29th day!
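We can even ask the machine to check the riddle for us. A quick sketch, assuming a single pad on day zero so that a full pond holds 2^30 pads:

```python
# The pond is fully covered on day 30, with the pads doubling daily.
full_pond = 2 ** 30  # assume one pad on day zero

pads, day = 1, 0
while pads < full_pond:
    pads *= 2   # the number of pads doubles every day
    day += 1

print(day)      # 30: the pond is completely covered
print(day - 1)  # 29: one doubling earlier, it was only half covered
```

One doubling is all it takes to go from half covered to fully covered, which is exactly what makes exponential growth so hard for our intuition.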
Computation has a unique affinity for infinity, and for things that can be allowed to continue forever, which take our normal ideas of scale—big or small—and easily mess with our mind.
Now when you combine loops, which can also be nested, with massive scale, you get into some freaky stuff, like loop through “all the personal data” of “all the users” on “all the connected machines.”
There are literally no limits to how far each dimension can extend, and no limits to how many dimensions can be conjured up with further nesting of loops. This should feel unnatural to those of us who live in the analog world, but it’s just another day inside the computational universe.
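To feel how quickly those dimensions multiply, here’s a small sketch; the machine, user, and record counts are invented for illustration:

```python
# Each nested loop multiplies the work of the one around it.
machines, users_per_machine, records_per_user = 10_000, 1_000, 100_000
total = machines * users_per_machine * records_per_user
print(f"{total:,} records to visit")  # 1,000,000,000,000: a trillion

# The shape of the nesting, shown at toy scale so it actually finishes:
for machine in range(2):
    for user in range(2):
        for record in range(2):
            print(machine, user, record)  # 2 x 2 x 2 = 8 combinations
```

A trillion iterations would exhaust any of us, but a machine running loops never gets tired.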
Although we will still run into some practical physical limits in the short term, such as how to store, structure, and transmit such large amounts of data or how much energy is needed to run all these AI algorithms, we have to assume these problems will be solved. Technology will find a way.
The coming zombie apocalypse
What does Maeda mean by “machines are living”?
When you consider the power of loops that prevent computational machines from ever getting tired while accessing an infinite cloud of capabilities, there’s only one word that can describe these nonliving, pseudo life forms: zombies! And these invisible zombies should concern you for two main reasons: 1) you’ll never win an argument with one of them, and 2) it will get harder to tell whether you’re communicating with a zombie or not.
He continues:
The logical outcome of that computational power—a rising army of billions of zombie automatons—will tirelessly absorb all the information we generate and exponentially improve at copying us.
Where does it all lead?
And in the back of our minds we need to be wondering what the future implications might be for servicing an entire race of machines to become better collaborators with each other than we ourselves could ever be.
The question of how to know whether you’re communicating with a zombie is an interesting one. It feels like we’re at a point where machines are becoming more humanistic and humans are becoming more like zombies. It’s not hard to see how this line becomes even more blurry. I think this has a few implications:
- Humans who are great communicators will have a competitive advantage in the workplace. When you have access to machines that are passable at humanistic interaction, you’ll want humans who are much better than passable to make hiring them worthwhile. (Tip for parents with young children: get your kids off the screens. They’re not helping them.)
- Liberal Arts will become cool again. The STEM fields have certainly been the place to be over the past couple decades. But as our ability to create technology eclipses our capacity to adjust to it as a society, we’ll start to look to people who can help answer not can we do something, but should we. I’ve already seen job listings for a Chief Ethics Officer.
- In-person interactions will be preferred. Putting aside the arguments over returning to office vs working remotely, when you can’t tell if what you’re seeing is real, there will be a natural tendency to want to see people “in the flesh.” Sure, outcomes matter, but companies will lose the appetite for paying people who are simply running some AI bots to do their jobs for them.
Don’t assume that our current image of zombies as gross, decaying creatures means we won’t develop real human feelings of affection toward the AI zombies. One of Maeda’s professors at MIT was already creating AI chatbots back in the 1960s and became concerned that it wouldn’t be that hard to emulate a human if the machine knew “everything about the person it was talking with.” With big data and the internet, that day is here. The machines know everything about us. And whether we know everything about them or not, we may come to like or even love them.
When someone takes the time to listen to you deeply, you want to love them back. We love when we are listened to because it signals respect and acknowledges our existence—even when respect and acknowledgment is delivered by a machine.
Timely design and the myth of incompleteness
In the early days of Facebook, their motto was “move fast and break things.” Now that they’ve grown up and weathered a few controversies, they’ve dropped this brash attitude toward product development. They realized that the scale of their human impact demanded a greater level of responsibility. We could argue about how well they’re handling that responsibility today, but Maeda asserts that incompleteness is still a core part of the machine ethos. In fact, you can decide to never ship a finished product:
This unique property of computational products means not only that their production and distribution costs are financially advantageous, but that product development costs can be significantly lowered by making a choice to never ship a finished product. You can always “replace” the product digitally with a brand-new and incrementally improved one, and remove all the financial risk from investing heavily all the way to a finished product.
Maeda explains that the Temple of Tech has actually redefined quality, and in their view “timely design is more important than timeless design.”
So the new definition of quality is the opposite of the Temple of Design’s definition of quality: a finished product painstakingly crafted with integrity. The new definition of quality, according to the Temple of Tech, is an unfinished product flung out into the world and later modified by observing how it survives in the wild.
Having your product “flung out into the world” sounds suspiciously similar to moving fast and breaking things. But we’re starting to see that this view doesn’t fly anymore. Google has been feeling the pressure to catch up with the abilities of OpenAI’s ChatGPT, but they embarrassingly had to retract the release of their AI tool, admitting that “Gemini image generation got it wrong.” They will certainly make modifications after “observing how it survived in the wild,” but I bet they don’t want to experience that kind of feedback again. Maybe they’ll want to revisit some of those timeless design principles.
While I hesitate to use the words “finished” or “done” in the context of software design, because I know things can always change and there is always more we can do, software maker 37signals takes a different approach. In a recent post, their Head of Product Strategy, Brian Bailey, makes the case for why you should consider it done:
What’s on the cutting room floor is there for a reason. Like any artistic endeavor, software gets better through subtraction, too. If it wasn’t essential then, we don’t assume it’s essential later. After we ship, we let it go.
When talking about the new product they released recently, a chat tool designed to be paid for once instead of incessantly as a subscription, he says definitively, “Campfire is done.” Sure, there will be some bug fixes, but they’re not planning on continuous iteration and releases. Their view is that “we overestimate how much users want change.”
Maybe it’s unfair to compare the well-established chat tool pattern to the cutting edge field of AI. But the end users are still people. And we’ve seen that if there is too much change too fast, they’ll start to push back. Maeda quotes a friend, saying, “speed and thoughtfulness need to coexist in order to make good things—not just fast things.” But I wonder how well these two can coexist in a world where the speed is really fast and the impact and subsequent risks of getting it wrong are very high.
Response to Surviving the AI Illustration Apocalypse
In the recent and wonderfully thought-provoking episode, Surviving the AI Illustration Apocalypse, Mr. Froese tackles some difficult questions about what this new world of machines means for creative professionals. It’s very topical for the discussion of this book, so I wanted to share a few thoughts I had in response to his episode. I don’t think any of the following would qualify as “spoilers” and it’s certainly not intended as a substitute for listening to the episode yourself — so definitely do that. But I thought some of these ideas tied in well.
There are three general thoughts I had in response to Mr. Froese’s episode, which I’ll expand on here:
- “Made by humans” will still mean something.
- Creativity will become more accessible.
- Finding your “specific knowledge” to create value will still matter.
“Made by humans” will still mean something
While Mr. Froese warns about “going a bit dark,” he does look on the bright side, and his comments got me thinking more about how we might expand on the positive potential of AI. I agree that, even as AI starts to infiltrate almost every market and profession, there will always be a market for human-crafted goods and human-provided services. It may get smaller, but it may also end up bigger than we think. An encouraging sign over the past decade or so has been the rise of organic and locally grown food options. While there are plenty of easier and cheaper ways to consume calories, a significant number of people choose to spend more for higher-quality food, which makes them feel better both about their health and about supporting those producers. As more and more things are produced by machines, being able to put a “made by humans” label on something could engender the same kind of sentiment that a “made in the U.S.A.” label does in the United States.
Creativity will become more accessible
In his episode, Mr. Froese talks about the humanity of the creative process. Although some of us rely on our creativity for our profession, there is also value in creating just to create. I spend time on things that make no practical sense simply because I like doing them. And I like making things, even if I end up being the only one who ever uses them. The process of making is also a process of learning, so I think we will continue to do that regardless of how technology progresses. But it’s also interesting to think about how the creative world has been exclusive for a pretty long time. Even putting aside intentional exclusion for a minute (which is a whole separate and worthy topic), some people either just aren’t good enough at something or don’t think they are. I think AI has the potential to unlock more creativity in more people, enabling them to express themselves in ways they couldn’t before. I’m not a programmer, but I’m excited about the possibility of being able to build software with the help of AI. And I’m certainly not an illustrator, but I could see using technology to help me learn how to take something that’s in my head and bring it into the world in a tangible way. That’s something many people can’t do today, but will be able to do more of with AI. And while you could look at that as increasing the supply of creatives, resulting in more competition, I think we can also see it as increasing the supply of humanity.
I was just talking to my wife about this same idea as it applies to the high school music technology class she teaches. As creative content generation starts to work its way into music as well, how do you design a curriculum that embraces this trend, focusing more on what you’re creating (and why) and less on how you’re creating it?
Related to this idea of making creativity more accessible: AI will never make fun of you for asking a dumb question. It will never ridicule you for something you tried or the way you did something. As I’ve been learning more about how to speak machine and working on some side projects involving code, it’s been really nice to get non-judgmental help on questions I might be a bit self-conscious about asking other humans. I think this will be an interesting area of growth as well: enabling people to take on difficult topics or questions that they might not otherwise feel comfortable talking to other people about.
Finding your “specific knowledge” to create value will still matter
As Mr. Froese mentions, technology will progress, whether we want it to or not. Eventually, regulators will attempt to put some protections in place, with varying degrees of effectiveness. But I think the idea of “specific knowledge” that Naval Ravikant talks about will still be just as relevant, if not more so. In product terms, how do you differentiate yourself? What value are you providing to people? How do you make what you’re offering less of a commodity? Naval says the way to do this is to “productize yourself.” Make what you’re offering unique because you’re the one offering it. Mr. Froese is a great example of this: he’s not just an illustrator. He’s a podcaster and a teacher. As he talks about in his episode, you want to work with particular illustrators sometimes because of who they are, not simply the artifact they are delivering. I think a potential positive we could see moving forward is a dramatic increase in entrepreneurship: more people carving out their own path to profitability rather than relying on a company to provide them with a paycheck. Maybe we’ll have fewer mega-size companies as we return to more hyper-local economies where neighbors are more reliant on each other for products and services they could get elsewhere but instead choose to invest in their communities. I think AI can make this process of discovering and refining your specific knowledge easier. It will be able to fill in the gaps and allow you to focus on the unique value you can offer.
Design makes everything palpable
Getting dressed the other day, I grabbed a shirt I had received as swag at some point from the now-defunct design tool company InVision. It says, “design makes everything possible.” Having worked as a designer in tech for almost two decades now, I was certainly sympathetic to this statement. But I started to question it. Seeing how rapidly the technology is developing, I can see how the specific role of design becomes less relevant, and less necessary to make something “possible.” As I just talked about with creativity becoming more accessible, maybe all you really need is people with ideas and the technology to make them real. Or maybe there is still a place for design in helping give those ideas some form before they become real. I thought about how I might reword the shirt and came up with: “design makes everything palpable.” Maybe they can get by without us, but by working with designers, you can better anticipate how an idea will be experienced by human senses. And then I opened up John Maeda’s Design in Tech Report 2024 and was tickled to see this:
I believe design’s future net new AI value is creating palpable customer-centric criticality value.
I’m still not sure I understand exactly what that means. And I’m still a bit skeptical. I’ll probably need to read through the report a few more times. But I’m at least encouraged that Maeda is thinking about this and sees some path forward for design. At this point, we have to assume the technology will be capable of almost anything. The question now is what are we, humans with values, going to do with it. Whatever happens, we know it’s going to be… complex.
We can use our knowledge of computation to make complicated systems that sometimes have complex implications. Our brains can be trained to tackle the complicated pieces, but our values need to drive the questions around how we take on the complex aspects.
Now listen to what I think is the perfect musical pairing for this post: subhuman by The Dream Eaters