Recently in the New Yorker, Ted Chiang made a case for "Why A.I. Isn't Going to Make Art" that starts with a puzzling generalization: "art is something that results from making a lot of choices". He then argues all of the ways that using an LLM to generate art means the artist makes too few choices for the end result to count as "art". It isn't until nearly the end of the article that he peels back the onion a bit and uncovers the fundamental insight: it's not really about the choices, but about the artist who provides meaning through those choices.
I'd take it a step further and say that it's humans, and only humans, who can make art, because only humans can ascribe meaning.
Let's clarify by answering two questions: the one he's actually asking and the one that's implied.
Can a person use AI to make art? Yes.
In the hands of a person, AI is a tool that can absolutely be used to make art. Chiang rightly points out that at the advent of photography, many didn't see it as an artistic medium at all. You could imagine someone employing his "art is…making a lot of choices" rubric: it took me 500 hours to paint that landscape, but all the photographer did was click a button, so that's not art. But that's the wrong comparison. A camera can't make art, but a human using a camera can. It's the same with AI.
We might still debate the amount of skill, exertion, talent, and time involved in making a painting versus making a photograph, just as we might question the value of an image painted by hand versus one made with Adobe Photoshop versus one generated by interacting with an LLM. But those are questions of good art versus bad art, a trap I'm surprised to see Chiang fall into. It's the presence of a human that ultimately makes something art, even if that human's effort is nothing but a few banal phrases of instruction to an LLM. It may not be good or valuable, but it is the act of creation by a person that makes something art. It's the only reasonable definition.
Can AI make art by itself? No.
Computers can generate attractive works that may be difficult (or impossible) to produce with human hands, enjoyable to behold, and even able to pass as close facsimiles of art, music, or writing made by humans, but they're still not art. The computer has no point of view, nothing to say, and no one to say it to. It isn't conscious, aware of itself, or self-reflective (despite what fans of The Terminator might fear). LLMs can be very good at pretending to be conscious beings (too good, maybe!), but that doesn't make them so.
We make a similar mistake when it comes to animals. Who among us isn't guilty of anthropomorphizing our pets? And we've all seen amazing stories of the horse that can "count", the gorilla that "paints", and the "skateboarding" dog. These are impressive learned behaviors, but horses can't understand mathematical concepts, gorillas are just pushing paint around, and the dog… well, he has no idea what he's doing. In a similar way, AI mimics human behavior, seems to understand concepts, and acts like it cares, but it really has no idea what it's doing.
And that's where the animal analogy falls short because AI can't even reach that level of being. Your dog really does feel some kind of happiness when you come home from work and some version of fear/shame/sadness when you scold him for eating your chicken wings (again). AI can't do that. It can fake it, fake it convincingly even, but it can't feel. That's the realm of flesh and blood.
That's finally where the distinction lies. It's interesting to think about what might be happening inside the human-like brain of a dolphin, to look for meaning in the sounds of a great whale, to wonder what's behind the eyes of a chimpanzee who apparently loves and cares for its offspring, or to pretend the AI really is sorry that it got the facts wrong, but ultimately it's all anthropomorphizing. We must recognize that humans are different. Different and unique among all life as we know it.
Intelligence is not intellect, artificial or otherwise. AI is often a powerful tool and at other times as dumb as the dog. The proper categorization is somewhere in between. But neither tools nor animals can make art, because art requires meaning, and only rational animals can manage that.