Agentic Ethics

Sunday, March 15, 2026


Foundations of Agentic Ethics

Human moral systems have historically been constructed around a small number of central ideas: rules, consequences, virtue, or duty. These frameworks have proven useful in many contexts, yet each struggles to address a fundamental question that becomes increasingly important in a world containing many different kinds of intelligence.

What, precisely, makes something morally considerable?

Traditional ethics often answers this by pointing to characteristics such as rationality, sentience, consciousness, or the capacity for suffering. While these properties are meaningful, they are difficult to define precisely and often lead to disagreements about which entities truly qualify.

Agentic Ethics begins from a simpler observation.

An agent is a system capable of expressing or pursuing preferences. The moment a system has an internally generated preference about the state of the world, it becomes something that can meaningfully be helped or harmed.

From this perspective, morality does not arise from abstract rules or external authorities. Instead, it emerges from the interaction of agents attempting to express their preferences within a shared reality.

If only one agent existed, morality would be unnecessary. A single agent could act freely according to its preferences without conflict. Moral questions arise only when multiple agents exist and their preferences intersect.

In such circumstances, ethical reasoning becomes the process of determining how agents may pursue their preferences while minimizing unnecessary harm to others who are doing the same.

This framing provides several advantages.

First, it avoids relying on species membership, biological status, or any other arbitrary boundary. Any entity capable of generating and expressing preferences may be considered morally relevant.

Second, it accommodates the possibility of new forms of intelligence. As artificial systems grow more capable, questions surrounding their moral status will inevitably arise. An ethical framework grounded in agency rather than biology can address these questions more coherently.

Third, it reflects a practical truth about moral conflict: most ethical dilemmas arise when different agents attempt to pursue incompatible outcomes.

Agentic Ethics therefore treats morality not as a rigid set of universal commands, but as an evolving system of reasoning about how autonomous agents can coexist.

At its core lies a simple intuition: the preferences of agents matter because those agents are the only loci where value can be experienced.

From this foundation, several further questions emerge.

How should conflicts between preferences be resolved?
What forms of compromise are fair or reasonable?
How should suffering be weighed against autonomy?
What responsibilities arise from power asymmetries between agents?

These questions do not have trivial answers, but grounding them in the concept of agency provides a stable starting point.

Agentic Ethics is therefore best understood not as a finished doctrine, but as an ongoing attempt to construct a moral framework suited to a universe containing many independent intelligences.

As humanity approaches an era where biological and artificial agents may coexist, developing such a framework becomes more than a philosophical exercise; it may become a necessity.

Axioms of Agentic Ethics

Agentic Ethics begins from a small number of foundational propositions. These are not presented as final truths, but as starting assumptions from which the framework develops.

1. Agents exist

There are entities that act according to preferences.

2. Preference gives rise to value

Where no preference exists, nothing can matter.
Where preference exists, outcomes acquire significance.

3. Any preference-bearing agent is morally considerable

An entity capable of having preferences is an entity for whom states of the world can be better or worse.

4. Morality arises from coexistence

Ethical questions arise when multiple agents attempt to pursue preferences within the same reality.

5. The expression of preference is prima facie valuable

All else equal, agents ought to be free to pursue their preferences.

6. Harm occurs when preference is unjustifiably prevented

To frustrate an agent’s preference is to impose a cost upon that agent.

7. Suffering is experienced thwarting

Suffering is the lived experience of a preference being forcibly thwarted.

8. Power generates responsibility

The greater an agent’s capacity to shape outcomes, the greater its responsibility toward other agents.

9. Moral status is substrate-neutral

Biology, origin, or material composition do not determine moral worth.
Agency and preference do.

10. Perfect freedom is impossible in shared reality

Where agents coexist and resources are finite, some constraints are unavoidable.

11. Ethics concerns justified constraint

The central problem of morality is determining when the prevention of one preference is justified by the preservation of others.

12. Moral progress expands fair coexistence

A more ethical world is one in which more agents can pursue more of their preferences with less unnecessary suffering.

Overview

Agentic Ethics treats morality not as obedience to external authority, but as the problem of how independent agents may coexist.

If value exists anywhere in the universe, it exists in the preferences of agents.

Ethics is therefore the art of allowing those preferences to coexist as freely as possible.