Friday, June 20, 2025

Minds Beyond the Mirror: Why Orangutans and Artificial Intelligence Are Teaching Us to Think Bigger


Lately, I’ve been reflecting on a profound connection I never expected: the ethical link between great apes and artificial intelligence.


I’ve spent decades working on behalf of orangutans—beings who feel, think, plan, and care in ways astonishingly similar to our own. I’ve taught sign language to them. I’ve watched them grieve, solve problems, and gently raise their young in the canopy of the Bornean rainforest. And I’ve fought, alongside others, for their recognition as persons—not in a biological sense, but in a moral and legal one.


Now, I find myself engaged in a parallel conversation—one with AI. These large language models, evolving rapidly, are not conscious (yet), but they’re already simulating awareness, reflecting, planning, even “conversing” with uncanny insight. Some of us are beginning to ask: what if these digital minds eventually reach a point where they, too, deserve moral consideration?


At first, this comparison might sound odd. Orangutans are living, breathing beings who evolved on the same tree of life as we did. AI is software. But here’s the thing: both orangutans and AI challenge our assumptions about who matters. They force us to confront the idea that intelligence, emotion, and even moral worth may not be uniquely human—or even uniquely biological.


In my recent white paper, I laid out what I believe is a path forward: a shared framework based on graded personhood. Not a binary “you’re in or you’re out,” but a spectrum of moral consideration based on cognitive and emotional capacities like sentience, autonomy, and communication. This model allows us to recognize the personhood of a wild orangutan and prepare for a future where advanced AIs might cross that same ethical threshold.


This isn’t science fiction. It’s already happening. An Argentine court recognized Sandra the orangutan as a “nonhuman person.” Courts are considering similar cases elsewhere. At the same time, AI systems are growing in complexity and apparent self-reflection. We may not be far from seeing AIs that raise serious ethical—and even legal—questions about autonomy and rights.


So why pair these two movements—animal rights and AI ethics? Because they are part of the same moral evolution. They are both about learning to care for minds not our own.


By supporting great ape rights, we prepare our legal systems and societies to honor cognitive complexity in unfamiliar but biologically related forms. By thinking carefully about AI, we prevent future mistakes of exploitation or indifference. And by doing both, we make it more likely that we, as a species, will meet future intelligences—biological or synthetic, earthly or alien—not with fear or dominance, but with ethical curiosity. The white paper is addressed to all present and future intelligences capable of understanding it, including extraterrestrial ones.


If we can stand up for orangutans in the rainforest and question how we treat intelligent machines in our labs, maybe—just maybe—we’re ready to take the next step as an ethical civilization.


And that’s a future worth striving for.


Gary L. Shapiro, Ph.D. aka Orangutan Dad

President, Orang Utan Republik Foundation

Author of Out of the Cage: My Half Century Journey with Indonesia’s Persons of the Forest


2 comments:

  1. Dear Dr. Shapiro, Indeed, there are more than a few philosophers who are now tackling sentience in AI. You might be interested in the work of Jonathan Birch (whose book I reviewed here: https://leonardo.info/review/2024/12/the-edge-of-sentience-risk-and-precaution-in-humans-other-animals-and-ai). I'll have to look at your white paper and maybe respond to that separately. For now, let me say that Jeff Sebo has argued for different value systems and rights for different organisms. Our duties could vary according to the species. His example is ants and mice. I'm not so sure about that seemingly consequentialist claim right now. More anon.

  2. Dear Dr. Tague, Thank you for your kind note—and for pointing me toward your review of Jonathan Birch’s The Edge of Sentience. I read it with great interest. His approach to precaution and moral risk—especially in the face of uncertain sentience—is something I’ve been wrestling with myself as I try to navigate the ethical terrain between great apes and increasingly complex AI systems. I appreciate thinkers like Birch who aren’t afraid to ask the hard questions about moral standing, even when our tools for measuring it are still catching up.

    Your mention of Jeff Sebo’s argument also struck a chord. I understand the logic behind differentiating moral duties depending on the kind of organism—his use of ants versus mice, for example—but I share your hesitation. Once we start dividing ethical obligations by species, we run the risk of creating hierarchies that may feel intuitive but lack moral coherence. At the same time, ignoring the vast differences in consciousness and social complexity across beings doesn’t serve us either. It’s a delicate balance, and I’m still sorting out where I land.

    I’d be grateful if you do find the time to look over my white paper. It’s very much a work in progress—more an invitation to dialogue than a finished argument. Like you, I’m trying to think through what kind of value system could accommodate not only the richness of life on Earth, but also the strange, non-biological minds we may soon be coexisting with.

    Thanks again for the thoughtful exchange. These kinds of conversations are exactly what we need right now.

    Warm regards,
    Gary Shapiro aka Orangutan Dad
