New study demonstrates a human cognitive system that evolved to solve moral dilemmas
Monday, 7 November 2022 | NEWS, PAPERS, PUBLICATIONS

Researchers Ricardo Guzmán (CICS) and Leda Cosmides (UC Santa Barbara) presented the first evidence of a human cognitive system designed to make trade-offs between competing moral values in decision making. The work, published in the prestigious journal Proceedings of the National Academy of Sciences (PNAS), describes the mechanism of rationality behind intuitive moral judgments.
Human beings face daily dilemmas in satisfying both their own needs and their obligations to others. How do individuals strike a balance between conflicting moral values when deciding, for example, how to divide limited time between work obligations, family, and their own interests? Ricardo Guzmán, a researcher at the Center for Research in Social Complexity (CICS), showed that humans have a non-conscious cognitive system responsible for these decisions.
Guzmán, lead author of the article published in PNAS, and Leda Cosmides, a researcher at the University of California, Santa Barbara, developed and tested a model based on principles from evolutionary psychology and an analysis of trade-off decisions analogous to rational choice theory. The evidence obtained by applying the model demonstrated a system whose function is making moral trade-offs, that is, weighing opposing ethical considerations and calculating which of the available options for resolving the dilemma is the most morally “right”.
The results challenge the dual-process model, which holds that the mind cannot resolve a dilemma by weighing conflicting moral values against each other. The authors note, however, that for many dilemmas, striking a balance between two conflicting values (a compromise judgment) would have served our ancestors better than neglecting one value to fully satisfy the other (an extreme judgment).
With this model, the authors propose that the mind carries out a series of non-conscious calculations that respond to dilemmas by constructing “rightness functions”: the subject forms temporary, situation-specific representations of the dilemma at hand, envisioning all of the options, feasible or not. These options are ranked in terms of moral rightness, and an optimization algorithm then selects, from among the feasible options, the one with the highest rightness.
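As an illustration only, the selection logic described above can be sketched in a few lines of Python. The rightness function, its weights, and the option attributes below are hypothetical stand-ins for exposition, not the authors’ actual model:

```python
from dataclasses import dataclass

@dataclass
class Option:
    label: str
    lives_saved: int      # value served by acting
    harm_inflicted: int   # value violated by acting
    feasible: bool        # whether the option is actually available

def rightness(option, weight_save=1.0, weight_harm=1.5):
    """Hypothetical rightness function: weighs the duty to save lives
    against the duty not to inflict harm. Weights are illustrative."""
    return weight_save * option.lives_saved - weight_harm * option.harm_inflicted

def choose(options):
    """Rank every imagined option by rightness, then select the most
    'right' option among those that are actually feasible."""
    ranked = sorted(options, key=rightness, reverse=True)
    return next(opt for opt in ranked if opt.feasible)

# Example menu that includes a compromise between two extremes.
menu = [
    Option("do nothing",           lives_saved=0,  harm_inflicted=0, feasible=True),
    Option("partial intervention", lives_saved=6,  harm_inflicted=2, feasible=True),
    Option("full intervention",    lives_saved=10, harm_inflicted=8, feasible=True),
]
print(choose(menu).label)  # under these weights, the compromise option wins
```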
Testing a moral trade-off system
The research team, which also includes María Teresa Barbato, who holds a doctorate in Social Complexity Sciences from UDD, and Daniel Sznycer of the University of Montreal, collected data from 1,700 subjects. The data showed that people are fully capable of making moral compromises while satisfying a strict standard of rationality.
The researchers used a modified sacrificial dilemma, in which some people must be harmed to maximize the number of lives saved. As the authors explain, in this version “the menu of options to solve the dilemma included compromise solutions”. To measure moral rationality, they used the Generalized Axiom of Revealed Preference (GARP), a demanding standard of rationality which requires that people choose the best option available to them, given their personal preferences.
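For readers unfamiliar with GARP, the sketch below shows how revealed-preference consistency is typically checked in general. It is not the authors’ analysis code; the price vectors, bundles, and toy data are illustrative assumptions:

```python
import numpy as np

def garp_violations(prices, choices):
    """Generic check of the Generalized Axiom of Revealed Preference (GARP).

    prices[i]  : price (trade-off rate) vector faced in observation i
    choices[i] : bundle chosen in observation i
    Bundle i is directly revealed preferred to bundle j if j was affordable
    when i was chosen: p_i . x_i >= p_i . x_j.  GARP is violated when i is
    revealed preferred to j (transitively) while j is strictly directly
    revealed preferred to i: p_j . x_j > p_j . x_i.
    """
    prices, choices = np.asarray(prices, float), np.asarray(choices, float)
    n = len(choices)
    spent = np.einsum("ij,ij->i", prices, choices)   # p_i . x_i
    cost = prices @ choices.T                        # cost[i, j] = p_i . x_j
    direct = spent[:, None] >= cost                  # direct revealed preference
    closure = direct.copy()
    for k in range(n):                               # Warshall transitive closure
        closure |= closure[:, k:k+1] & closure[k:k+1, :]
    strict = spent[:, None] > cost                   # strict direct preference
    return [(i, j) for i in range(n) for j in range(n)
            if closure[i, j] and strict[j, i]]

# Toy example: two scenarios trading off "lives saved" against "harm avoided".
prices  = [[1, 2], [2, 1]]
choices = [[4, 1], [1, 4]]
print(garp_violations(prices, choices))  # [] means the choices are GARP-consistent
```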
Each subject responded to 21 different scenarios in which the human cost of saving lives varied (for example, in a war: bomb key targets, causing many civilian casualties, or fight a long ground battle with heavy losses among soldiers?). The results showed that, for some of these scenarios, most subjects judged compromise solutions to be the most morally appropriate, striking a balance between the duty to avoid inflicting fatal harm and the duty to save lives.
The article, “A moral trade-off system produces intuitive judgments that are rational and coherent and strike a balance between conflicting moral values”, is available open access through PNAS.