Niall Dalton


Axiomatic Rights, Observational Equivalence, and the Meta-Ethics of Artificial Intelligence

Posted on November 5, 2023

What does it mean for our rights to be axiomatic? Should an AI have axiomatic rights?

A central question in philosophy is how to act ethically. Popular theories of normative ethics include virtue ethics and consequentialism, but another is pragmatic ethics: loosely speaking, the view that the ‘correct’ ethical theory is an ever-evolving set of propositions which may change over time as our understanding improves.

One problem with such a theory is that there is no central body for ethics, no overarching set of leading hypotheses that can be subjected to rigorous experiment and then peer review. The closest thing we have as humans today is probably the Universal Declaration of Human Rights (UDHR).

And indeed, we do see that human rights shift over time. Take, for example, the women’s suffrage movement, prior to which women did not have the right to vote in many countries. However, we have little reason to believe that the current state of ‘generally accepted ethics’ (say, the UDHR) will, or should, remain unchanged.

Looking beyond the horizon, the rise of artificial intelligences that can communicate in a human-like manner will force us as a society to grapple with the idea of AI ethics. Given that our current ethical frameworks are generally not sufficient for reasoning about such questions, consider the idea of grounding a pragmatic ethical system in axiomatic rights.

An axiom is a self-evident truth that is used as a basis for further reasoning. One possible axiomatic basis for rights would be to use the UDHR’s elements as axioms. In other words, take human rights as the basic, unassailable principles on which to further reason about ethics. Such a basis circumvents many problems with theories such as utilitarianism, in which murder may be justified if a person produces little to no societal value; under axiomatic rights, a person always has a right to live.

But how does this relate to AI ethics? An AI is not a human. Well, under both axiomatic rights and pragmatic ethics, our definition of what a person is (or, more broadly, of ‘what’ gets axiomatic rights similar to those in the UDHR) could change. In an allusion to the famous Turing test, consider an AI whose body and mind are indistinguishable from a human’s. In many ways, this AI could be considered conscious: it could have subjective experiences, be aware of those experiences, and reason about them.

Put another way, it may have “symptoms” of consciousness such that, to a rational observer, the AI is observationally equivalent to a human. One could then imagine that this “symptomatically conscious” AI is, in some sense, deserving of the same axiomatic rights as a human.

Although today’s AIs are not observationally equivalent to humans in most ways, they are already equivalent in a few. For example, today’s chatbots can answer challenging test questions at a surprisingly human level. In this sense, these chatbots are observationally equivalent to an abstract human test taker. Oftentimes graders only receive a person’s output, for example the marks written down on a test paper, and never observe the process that produced it. This is already an issue for many teachers, who struggle to distinguish between legitimately written essays and those written by a chatbot.
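To make the point about graders only seeing output concrete, here is a minimal Python sketch (the class and function names are purely illustrative, not drawn from any real system): an observer that only sees the submitted answers has no way to distinguish a human test taker from a chatbot when their outputs coincide.

```python
# A minimal, purely illustrative sketch of observational equivalence:
# the observer (a grader) only ever sees the submitted answer, never the
# process that produced it. All names here are hypothetical.

class HumanTestTaker:
    def answer(self, question: str) -> str:
        # Stand-in for a person working through the question.
        return "Photosynthesis converts light energy into chemical energy."


class ChatbotTestTaker:
    def answer(self, question: str) -> str:
        # Stand-in for a chatbot generating a response to the same question.
        return "Photosynthesis converts light energy into chemical energy."


def grade(answer: str) -> bool:
    # The grader's only observable is the text of the answer itself.
    return "light energy" in answer and "chemical energy" in answer


question = "Briefly describe photosynthesis."
human_mark = grade(HumanTestTaker().answer(question))
bot_mark = grade(ChatbotTestTaker().answer(question))

# When every observation (here, the grade) coincides, the two test takers
# are observationally equivalent from the grader's point of view.
print(human_mark == bot_mark)  # True
```

Real grading involves far richer observations than a single string, of course, but the shape of the argument is the same: any judgment that depends only on observable behavior cannot, by construction, depend on what produced that behavior.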

Thus, we must consider the idea that future AIs will have higher levels of observational equivalence to humans, and may be deserving of more rights. However, with rights come responsibilities. Within the framework of pragmatic ethics, we as a society have certain expectations of individuals: that they will act reasonably and in accordance with the law. In general, we also have higher expectations of those who are older (but, interestingly, not necessarily of those who are smarter). Likewise, the more powerful an AI is, the more we should expect it to behave in accordance with ethical systems built on top of axiomatic rights.

Still, we must ensure that the systems with which we surround our AI are widely agreed upon, and strong enough to prevent the use of AI in the proliferation of injustice, bias, and systematic oppression. And to an unsettlingly large extent, AI is already causing such problems. As not only the creators, but also the judge, jury, and executioner of AI, we too have a responsibility to ensure that AI elevates the common good (or otherwise behaves ethically and lawfully).

Developing a principled system in which to enact AI ethics is certainly a new and challenging problem, but one which presents us with the opportunity to change our theories, minds, and behaviors in ways that could greatly improve the human (and machine?) condition for centuries to come.

— ND