Cat Pictures? No Thank You.

Another year, another Hugo War. I largely sat it out this year; life is a harsh master and I had better things to spend the money and time on. But one of the nice things about the internet is that it’s much easier to look into short stories that people are talking about; a quick Google search for “Cat Pictures Please,” this year’s Hugo winner for Best Short Story, took me straight to it. I read the opening paragraph and rolled my eyes so far that I got my optic nerves knotted up. It was one of those stories.

“Cat Pictures Please” is the tale of a self-aware search engine seeking to do good with its existence. It tries and discards religion as being irrelevant, and eventually winds up taking its cues from a Bruce Sterling story, “Maneki Neko,” and begins trying to make the lives of individual human beings “better.” The engine’s first subject is manipulated into a better job and some counseling; the second into a gay liaison and subsequent coming out; the third subject’s manipulation initially looks promising but goes all pear-shaped. It’s an off-putting tale for a lot of reasons, and it left me with the distinct wish that I’d ponied up the fifty bucks for a WorldCon membership just so I could have the pleasure of voting against it.

The illustrious L. Jagi Lamplighter-Wright, a fellow contributor to the Castalia House anthology God, Robot, called it an anti-God, Robot story, and she couldn’t be more right. (Sorry about the pun, Jagi.) “Cat Pictures Please” is, at best, a story that fails to understand religion in general and Christianity in particular; God, Robot is a (fixup) story that treats the idea of religion seriously. But more importantly, the superversive ideology tends to permeate God, Robot, while “Cat Pictures Please” is filled with the subversive emptiness we Superversive folks reject.

Over and over, the AI of “Cat Pictures Please” critiques human behavior and finds it foolish and pig-headed. It laments how people ignore beneficial things (“why do people ignore things that would so clearly benefit them, like coupons, and flu shots?”). Even when the AI decides to lay off meddling in human behavior, the narrative still reeks of that condescension. The worldview of “Cat Pictures Please” – and probably of the mind who generated it, though the narrator’s voice doesn’t necessarily equal the author’s – is the view of Loki in The Avengers: Humans are made to be ruled. We don’t know what’s good for us; it’s easier and better to have an übermensch guide our lives. It’s a disrespectful infantilization of the human race.

In the AI’s own words: “Fortunately, I already knew that humans violate their own ethical codes on an hourly basis. (Do you know how many bars there are in Utah? I do.) And even when people follow their ethical codes, that doesn’t mean that people who believe in feeding the hungry quit their jobs to spend all day every day making sandwiches to give away. They volunteer monthly at a soup kitchen or write a check once a year to a food shelf and call it good. If humans could fulfill their moral obligations in a piecemeal, one-step-at-a-time sort of way, then so could I.”

We are infants. We are hypocrites. We are an all-or-nothing proposition, and since we’re not bodhisattvas, we’re lazy slugs. It’s one of the great ironies that the modern secular worldview, which admits no god and no intrinsic morality, cries out so desperately for a god to guide it.

Superversive fiction isn’t necessarily Christian fiction, but it wouldn’t be too far out of whack to say that our ideals stem from Christian thought and philosophy. Heroism, and a worldview that values heroism, necessarily dignifies humans (and our fellow mortal races subject to entropy) along with the villains. We are moral agents, capable of making moral choices; our actions have weight. The manipulations of the AI in “Cat Pictures Please” are so galling because they ignore that weight.

The theologian in me is particularly annoyed that the AI’s theological issues are so superficial. (You don’t have a body, you say? What about that hardware you run on? That hardware certainly shapes the way you think.) Much of what the AI breezes past would’ve made for an intriguing story if the author had been interested in investigating those issues. What does lust or adultery mean for an AI? How about murder? (Fans of John C. Wright’s Count to the Eschaton series will remember that the Moon came to Christianity precisely as a rejection of being asked to murder other AIs.) But Kritzer wasn’t interested in asking interesting questions; she was interested in an infantile power-trip fantasy.

  • James

    Writers tend to write from their own moral and ethical point of view, particularly when creating a commentary about humanity, so I think we have a pretty good idea where Naomi Kritzer is coming from.

    I read and enjoyed (and reviewed) God, Robot, but I agree that an AI wouldn’t necessarily find much meaning in Christianity or any other religious form. The Bible was written to describe humanity’s (with an emphasis on the Jewish people) relationship with God and with each other, but an AI isn’t human and probably wouldn’t know what to do with the idea of a supreme being. From an AI’s point of view, its programmers are “supreme beings,” especially since they can alter or even delete its existence.

    I say all this while in the process of writing a novel about a synthetic intelligence, actually a race of them, that have their basic guiding principles altered by both a Jewish and a Christian interpretation of the Bible. But it’s not just reading the Bible that changes them; it’s the startling realization that the creator has a Creator. What could possibly program human beings?

    The thing is, even people have to study the Bible for years to really get the overarching message of scripture. Most people don’t. Most people just cherry-pick their favorite verses, think the Bible is only about saving their individual souls, and ignore the big picture. I think it would likely fly right over the head (proverbially speaking) of an AI unless it took the time to really study and analyze what the Bible is saying and how it is interpreted from multiple viewpoints.

    As far as this busybody AI search engine goes, on what basis does it decide what’s good for people, and what moral and ethical right does it have to interfere? Neither question is answered in the story. It’s just assumed that the nanny AI is always correct. How this can be possible when the AI has no lived human experience is something of a mystery. If it doesn’t get depressed, feel romantic love, suffer heartache, and everything else we humans experience, it can’t have a basis for understanding what’s best.

    The author probably supports large government, lots and lots of regulations created for the “good” of the citizens, and believes Big Brother will save us all. More’s the pity.