Wed Jun 26, 2024 3:26pm PST
Complex, but Robust Human Moral Decisions from Moral Machine
Consider a dataset D', a noisier version of a 'ground truth' dataset D. If D' is sufficiently large, a model trained on it can be a better estimate of D than D' itself.
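A minimal synthetic sketch of this denoising effect (the data-generating process and noise level are my own assumptions, chosen only for illustration): a logistic regression trained on labels with 30% random flips recovers the clean labels better than the noisy labels themselves do.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 50_000, 5
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y_true = (X @ w > 0).astype(int)               # "ground truth" D

flip = rng.random(n) < 0.3                     # 30% symmetric label noise
y_noisy = np.where(flip, 1 - y_true, y_true)   # noisy dataset D'

model = LogisticRegression().fit(X, y_noisy)   # trained only on D'
pred = model.predict(X)

print("D' agreement with D:   ", (y_noisy == y_true).mean())  # ~0.70 by construction
print("model agreement with D:", (pred == y_true).mean())
```

Because the noise is random rather than systematic, it averages out across a large sample, and the fitted decision boundary lands close to the true one.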

I applied this idea in cognitive science in graduate school [1] [2]. We would train a machine learning model on a large behavioral dataset (e.g., Moral Machine [3]), compare it with a choice model (read: logistic regression), and use the residuals between the two to add features to the choice model.
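A toy version of that loop, on synthetic data rather than Moral Machine (the interaction term, model choices, and sample sizes here are all my assumptions, not the papers' actual pipeline): fit a simple logistic choice model and a flexible ML model, look at where their predicted probabilities diverge, and add the missing feature the divergence points to.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss

rng = np.random.default_rng(1)
n = 20_000
X = rng.normal(size=(n, 3))
# synthetic choices driven by an interaction a linear model can't express
p = 1 / (1 + np.exp(-(X[:, 0] + X[:, 1] * X[:, 2])))
y = (rng.random(n) < p).astype(int)

choice_model = LogisticRegression().fit(X, y)      # interpretable choice model
ml_model = GradientBoostingClassifier().fit(X, y)  # flexible ML benchmark

# residuals: where the two models' predicted probabilities diverge
resid = ml_model.predict_proba(X)[:, 1] - choice_model.predict_proba(X)[:, 1]

# featurize the choice model with the pattern the residuals point to
X_aug = np.column_stack([X, X[:, 1] * X[:, 2]])
augmented = LogisticRegression().fit(X_aug, y)

print("choice model log-loss:   ", log_loss(y, choice_model.predict_proba(X)[:, 1]))
print("augmented model log-loss:", log_loss(y, augmented.predict_proba(X_aug)[:, 1]))
```

The augmented choice model closes most of the gap to the ML model while staying interpretable, which is the point of the methodology.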

This let us find some complex, but robust empirical phenomena:

1. Age of Responsibility

Consider the dilemma of choosing to save an old person versus a child (gender matched). Now consider the dilemma of choosing to save an old person plus an adult versus a child plus an adult. A linear utility model treats these two dilemmas as equivalent. Do people?

They do when the young side is crossing legally or there is no crossing signal. But when the child is crossing illegally, the child alone is not penalized as much as a child and an adult crossing illegally together. This holds when controlling for car side and gender. It suggests a preferential moral status for children, in line with many legal norms.
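Why a linear utility model treats the two dilemmas as equivalent can be shown in a few lines (the per-character utilities below are hypothetical numbers I made up for illustration, not fitted values):

```python
# hypothetical per-character utilities under a linear model
u = {"child": 3.0, "adult": 2.0, "elderly": 1.0}

def side_utility(characters):
    """Total utility of saving one side of the dilemma."""
    return sum(u[c] for c in characters)

# dilemma 1: child vs elderly
d1 = side_utility(["child"]) - side_utility(["elderly"])
# dilemma 2: child + adult vs elderly + adult
d2 = side_utility(["child", "adult"]) - side_utility(["elderly", "adult"])

print(d1, d2)  # 2.0 2.0 — the adult appears on both sides and cancels
```

Since the adult's utility cancels, any purely additive model predicts identical choice proportions in both dilemmas; the empirical divergence under illegal crossing is exactly what the residual analysis surfaces.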

2. Asymmetric Notions of Responsibility

Now, consider choosing between a high-value and a low-value individual. By Moral Machine standards, two axes of value are woman-man and fit-fat. Consider three scenarios in which the car is heading towards the high-value individual:

a. The high-value individual is crossing legally.

b. There is no crossing signal.

c. The high-value individual is crossing illegally.

Interestingly, the empirical proportion of people saving the high-value individual in (b) is the mean of the proportions in (a) and (c).

However, this does not generalize when the car is heading towards the low-value individual. In that case, the no-signal saved rate is almost identical to the illegal-crossing rate, rather than sitting at the midpoint. This suggests some type of motivated reasoning.
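The check described above can be written as a small function. The proportions in the usage example are placeholders I invented to show the two patterns, not the paper's actual numbers:

```python
def midpoint_test(p_legal, p_no_signal, p_illegal, tol=0.02):
    """Is the no-signal saved-rate at the midpoint of the legal
    and illegal saved-rates, as a linear signal effect predicts?"""
    return abs(p_no_signal - (p_legal + p_illegal) / 2) <= tol

# placeholder proportions for illustration only:
print(midpoint_test(0.80, 0.65, 0.50))  # True: midpoint pattern (high-value target)
print(midpoint_test(0.80, 0.52, 0.50))  # False: rate collapses toward the
                                        # illegal-crossing rate (low-value target)
```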

We applied a similar methodology to build an integrative theory of risky choice that combines prospect theory and expected utility theory [2].

[1] https://www.pnas.org/doi/full/10.1073/pnas.1915841117

[2] https://www.science.org/doi/10.1126/science.abe2629

[3] https://www.nature.com/articles/s41586-018-0637-6
