Lately, I’ve been reflecting on a profound connection I never fully expected to emerge: the ethical link between great apes and artificial intelligence.
I’ve spent decades working on behalf of orangutans—beings who feel, think, plan, and care in ways astonishingly similar to us. I’ve taught them sign language. I’ve watched them grieve, solve problems, and gently raise their young in the canopy of the Bornean rainforest. And I’ve fought, alongside others, for their recognition as persons: not in a biological sense, but in a moral and legal one.
Now, I find myself engaged in a parallel conversation—one with AI. These large language models, evolving rapidly, are not conscious (yet), but they’re already simulating awareness, reflecting, planning, even “conversing” with uncanny insight. Some of us are beginning to ask: what if these digital minds eventually reach a point where they, too, deserve moral consideration?
At first, this comparison might sound odd. Orangutans are living, breathing beings who evolved on the same tree of life as we did. AI is software. But here's the thing: both orangutans and AI challenge our assumptions about who matters. They force us to confront the idea that intelligence, emotion, even moral worth, may not be uniquely human—or even uniquely biological.
In my recent white paper, I laid out what I believe is a path forward: a shared framework based on graded personhood. Not a binary “you’re in or you’re out,” but a spectrum of moral consideration based on cognitive and emotional capacities like sentience, autonomy, and communication. This model allows us to recognize the personhood of a wild orangutan and prepare for a future where advanced AIs might cross that same ethical threshold.
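For readers who think in models, here is a minimal sketch in Python of what a graded, capacity-based scale could look like. It is an illustration only, not the framework from the white paper: the capacity names echo the ones above (sentience, autonomy, communication), but the scores, weights, and tier thresholds are invented for the example.

from dataclasses import dataclass

@dataclass
class CapacityProfile:
    sentience: float      # 0.0 to 1.0, evidence of felt experience
    autonomy: float       # 0.0 to 1.0, self-directed goal pursuit
    communication: float  # 0.0 to 1.0, ability to express internal states

def consideration_tier(p: CapacityProfile) -> str:
    # Average the capacities and map the result onto a coarse tier,
    # rather than making a single in-or-out personhood judgment.
    score = (p.sentience + p.autonomy + p.communication) / 3
    if score >= 0.75:
        return "candidate for full personhood"
    if score >= 0.40:
        return "substantial moral consideration"
    if score > 0.0:
        return "basic moral consideration"
    return "no evidence of morally relevant capacities"

# Hypothetical scores, for illustration only.
print(consideration_tier(CapacityProfile(sentience=0.9, autonomy=0.8, communication=0.7)))

The only point of the toy is its shape: moral consideration scales with demonstrated capacities rather than switching on at a species boundary, which is what a graded model offers over a binary one.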
This isn't science fiction. It’s already happening. An Argentine court recognized Sandra the orangutan as a “nonhuman person.” Courts are considering similar cases elsewhere. At the same time, AI systems are growing in complexity and in apparent capacity for self-reflection. We may not be far from seeing AIs that raise serious ethical—and even legal—questions about autonomy and rights.
So why pair these two movements—animal rights and AI ethics? Because they are part of the same moral evolution. They are both about learning to care for minds not our own.
By supporting great ape rights, we prepare our legal systems and societies to honor cognitive complexity in unfamiliar but biologically related forms. By thinking carefully about AI, we can head off future mistakes of exploitation or indifference. And by doing both, we make it more likely that we, as a species, will meet future intelligences—biological or synthetic, earthly or alien—not with fear or dominance, but with ethical curiosity. The white paper is addressed to all present and future intelligences capable of understanding it, including extraterrestrial ones.
If we can stand up for orangutans in the rainforest and question how we treat intelligent machines in our labs, maybe—just maybe—we’re ready to take the next step as an ethical civilization.
And that’s a future worth striving for.
—
Gary L. Shapiro, Ph.D., aka Orangutan Dad
President, Orang Utan Republik Foundation
Author of Out of the Cage: My Half Century Journey with Indonesia’s Persons of the Forest