Optimizing for fairness, not goodness


When I was starting out on AbstractOps, I drafted a set of values based on my own experiences. The current (fourth) version of AO's values has evolved to fit the team today, but my original drafts capture my values best. Here are a few of them.

Doing the "right thing" is really hard, because people often don't agree on what is "right." Just look at the wide divergence in political views across the spectrum, all of which represent attempts to do the "right" thing.

In my opinion, fairness is an easier, more "universal" litmus test. It is so deeply ingrained in humans, and even in other mammals, that in one fascinating study monkeys rejected rewards they perceived as unfair.

I've tried to use this every time I have a hard decision in front of me. What's fair to everyone involved? Is this how I'd want to be treated? If a third party (whether they have an interest in the situation or not) were to fully understand the details of the situation, would they think it was reasonable, or would they think it was unjust?

I've also realized that fairness is a product of pragmatism and empathy. Whenever I've made sure I'm exercising both, I've found it hard to think of the conclusion as "unfair."

That said, disagreements often arise when you disagree on fundamental assumptions. "Fairness" could mean a fair approach or a fair outcome, and those can point in different directions if you disagree about what happens between the "approach" and the "outcome."

In the US, for instance, I think Republicans tend to focus on fairness of approach: why should person X pay more taxes than person Y, just because person X is better at making money? Democrats tend to focus on fairness of outcome: how is it fair that person A has three private jets while person B works three jobs to feed their family?

These two diverge because of what happens in the middle.

Republicans believe: Approach → Personal Responsibility → Outcome

Democrats believe: Approach → Unfair system → Outcome

This is why the parties keep talking over each other... they haven't agreed on the black box in the middle. Personally, if I had to choose, I think a fair outcome is more important than a fair approach, but reasonable minds can disagree on this, in part because a fair approach is much easier to set up than a fair outcome (because of unintended consequences).

Ultimately, I believe any path to agreement has to start with the question: "For this problem, is the black box in the middle fundamentally working or broken?" I think it's fair to begin with a fair approach to solving any problem; then, if outcomes are consistently unfair, it means the black box is broken, and efforts have to be adjusted, even to the point of overcompensating, to ensure fair outcomes.