Abstract
Problem: How should RPGs track and represent the moral consequences of player actions — through abstract good/evil alignment, or something else?
Approach: Tim Cain walks through his direct experience designing morality and reputation systems across Fallout, Arcanum, and Pillars of Eternity, explaining what was tried, what broke, and what he ultimately concluded.
Findings: Universal good/evil morality systems are fundamentally flawed — teams can't agree on what's good or evil, and systemic approaches create exploitable imbalances. Faction-based reputation systems work far better because they're grounded in concrete, per-faction values that are easier to design, balance, and understand.
Key insight: Don't define morality — define factions with opinions. Let the world react to the player through the lens of each faction's values rather than an abstract moral axis.
1. Fallout: The Messy First Attempt
In Fallout, reputation was a single overall number adjusted via script. Positive actions pushed it up, negative ones pushed it down. The problem: the team couldn't agree on what constituted a positive or negative act. One group's heroic deed was another's moral failing. The game itself reflected this confusion — an action positive for one group was negative for another, but the system had only one axis.
Fallout also introduced reputation titles (Champion, Child Killer, etc.), which were a separate layer on top of the number. Cain describes the whole thing as "a mish-mash of ideas."
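The single-axis problem can be seen in a minimal sketch (hypothetical code, not Fallout's actual implementation): when one group approves of an act and another condemns it, both adjustments land on the same number and simply cancel out.

```python
class Reputation:
    """Hypothetical sketch of a single-axis reputation with a separate title layer."""

    def __init__(self):
        self.value = 0       # one overall number, adjusted by quest scripts
        self.titles = set()  # e.g. "Champion", "Child Killer", layered on top

    def adjust(self, delta):
        self.value += delta


rep = Reputation()
# The same act, seen by two groups with opposite values: both reactions
# collapse onto one axis and cancel, so the system records nothing.
rep.adjust(+10)   # group A sees a heroic deed
rep.adjust(-10)   # group B sees a moral failing
print(rep.value)  # 0
```

The titles were a separate layer precisely because the one number couldn't carry this kind of information.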
2. Arcanum: Trying to Systematize Good and Evil
For Arcanum, Cain wanted to move beyond subjective script-driven morality and build something systemic — hard-coded rules that would automatically adjust alignment based on player actions.
2.1. The Systemic Rule
The rule Cain wrote:
- Killing a good creature (alignment > 0) is always an evil act, regardless of the player's own alignment. The shift toward evil scales with how good the victim was — killing an angel is far worse than killing a bunny.
- Killing an evil creature is more nuanced and relative:
  - If the player is less evil than the creature, it's a good act (you're reducing evil in the world).
  - If the player is more evil than the creature, it's an evil act (you're eliminating something with more good in it than you have).
- The alignment shift was capped — you couldn't become more good or evil than your victim's alignment value.
2.2. Why It Was Cut
This system was in place from 1998 until February 2, 2001, when Jason Anderson flagged a critical balance problem. The issue was simple: in normal gameplay, enemies are overwhelmingly evil. Dungeons, caves, and random encounters are filled with evil creatures. Good NPCs in towns generally don't attack you.
The result: players passively drifted toward good just by playing the game normally. If you did something terrible, you could erase it by grinding a few evil monsters. The system made it trivially easy to be good and stay good.
The compromise: systemic code could still make you more evil (killing good things), but all positive alignment shifts were removed from the automated system. You could only become good through quest scripts — deliberate narrative choices.
Cain notes he was "sad" about this because he genuinely wanted a working systemic morality, but the math simply didn't support it given how RPG worlds are structured.
3. The Core Problem with Morality Systems
Cain identifies several fundamental issues:
- Teams can't agree on definitions. What's good? What's evil? Philosophers have argued about this for millennia without settling it, and game designers fare no better. Even committing to a single framework like utilitarianism runs into contested dilemmas such as the trolley problem.
- "Good" and "nice" aren't the same thing. Sometimes players want to do something pleasant that isn't morally significant, and systems can't distinguish that.
- Unforeseeable consequences frustrate players. When a morality system produces surprising results, players either reload saves or Google every dialogue choice before committing. Both outcomes are bad.
4. Pillars of Eternity: Faction Reputation
By Pillars of Eternity, Cain had moved to reputation by faction — not good/evil, but like/dislike per group. Each faction has its own values and reacts to player actions accordingly.
4.1. Why This Works Better
- Concrete instead of abstract. Stealing is clearly good for the Thieves Guild and bad for the town you stole from. There's no philosophical debate needed.
- Ownership solves disagreements. Each faction typically has one designer who is the final authority on what that faction likes and dislikes. No team-wide arguments about universal morality.
- Players can reason about it. You understand why a faction reacts the way it does because their values are transparent.
- Graduated consequences. Mild dislike means higher merchant prices. Strong dislike means hostile comments. Extreme dislike means they attack you. The inverse: liking you means cheaper prices, more services, exclusive quests.
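The per-faction model with graduated consequences can be sketched as follows. The threshold values and faction names are illustrative, not Pillars of Eternity's actual numbers.

```python
from collections import defaultdict

# Illustrative thresholds, assumed for the sketch.
HOSTILE, INSULT, FRIENDLY = -50, -20, 20


class FactionReputation:
    def __init__(self):
        self.rep = defaultdict(int)  # faction name -> like/dislike score

    def record(self, faction, delta):
        self.rep[faction] += delta

    def reaction(self, faction):
        r = self.rep[faction]
        if r <= HOSTILE:
            return "attacks you"              # extreme dislike
        if r <= INSULT:
            return "hostile comments"         # strong dislike
        if r < 0:
            return "higher prices"            # mild dislike
        if r >= FRIENDLY:
            return "discounts and exclusive quests"  # liked
        return "neutral"


world = FactionReputation()
# One theft, two independent reactions: no shared moral axis needed.
world.record("Thieves Guild", +25)
world.record("Town", -25)
```

Because each faction keeps its own score, the same act produces different, simultaneously correct reactions, which is exactly what the single-axis systems above could not express.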
5. Cain's Final Position
By the end of his career, Cain concluded he doesn't like morality systems at all. His preferred approach: every faction (and sometimes individual companions as their own "faction") has a set of likes and dislikes, and the game tracks reputation per faction based on player actions.
This gives designers what they actually want — a world that watches what you do and reacts to it — without the impossible task of defining universal morality. Once you try to define morality abstractly, you inevitably hit situations the team disagrees on, players can't predict, and reviewers complain about.
6. References
- Tim Cain. YouTube video. https://www.youtube.com/watch?v=YHrTqHIMGbE