Ramon Alaman, Jonay (ORCID: 0000-0002-8642-0422); Lafond, Daniel (ORCID: 0000-0002-1669-353X); Marois, Alexandre (ORCID: 0000-0002-4127-4134); and Tremblay, Sébastien (ORCID: 0000-0002-7030-5534)
(2025) Inverse Counterfactual for AI-Assisted Decision Support: Enhancing Knowledge Elicitation for Capturing Aircraft Pilot Decisions. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. ISSN 1071-1813
PDF (VOR), Published Version, 1MB. Available under License Creative Commons Attribution Non-commercial.
Official URL: https://doi.org/10.1177/10711813251358254
Abstract
Integrating AI into decision-support systems (DSS) for safety-critical domains like aviation requires aligning system behavior with pilot mental models to provide relevant information. Using the Cognitive Shadow, a DSS that models operator decisions and notifies users of discrepancies, we evaluated a novel knowledge-elicitation technique: the inverse counterfactual. After selecting their preferred option, users modified a single factor to make their second-best option preferable, creating paired cases across their decision boundary. In a simulated adverse-weather avoidance task, 44 participants completed 130 baseline trials and generated counterfactuals for 20 additional cases. Contrary to expectations, the current implementation of the technique did not enhance human-AI model similarity, as measured by the degree of agreement in a 20-case test phase. However, when counterfactuals involved minimal edits, remaining near the decision boundary, predictive accuracy improved and DSS recommendations were more often accepted. Larger edits degraded performance. These findings demonstrate the feasibility of counterfactual elicitation for improving model alignment with user mental models.
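
The elicitation procedure described in the abstract can be sketched programmatically. The Python fragment below is a minimal illustration under stated assumptions, not the Cognitive Shadow implementation: the weather factors, the decision-tree model, and all names (Case, record_inverse_counterfactual, agreement) are hypothetical, chosen only for the example. It shows how a single-factor edit that flips the user's preference yields a paired case across the decision boundary, how both cases feed the operator model, and how human-AI similarity can be scored as the proportion of matching decisions on a test set.

# Minimal sketch of inverse-counterfactual elicitation feeding a decision model.
# Hypothetical data structures and factors; not the Cognitive Shadow implementation.
from dataclasses import dataclass, replace
from sklearn.tree import DecisionTreeClassifier
import numpy as np

@dataclass(frozen=True)
class Case:
    crosswind: float       # illustrative weather factors (assumed, not from the paper)
    visibility: float
    storm_distance: float

def to_vector(case: Case) -> list[float]:
    return [case.crosswind, case.visibility, case.storm_distance]

# Baseline trials: the pilot's observed decisions (0 = continue, 1 = divert).
X_train, y_train = [], []

def record_decision(case: Case, choice: int) -> None:
    X_train.append(to_vector(case))
    y_train.append(choice)

def record_inverse_counterfactual(case: Case, chosen: int, second_best: int,
                                  factor: str, new_value: float) -> None:
    # The user edits a single factor so that the second-best option becomes
    # preferable, producing a paired case just across the decision boundary.
    record_decision(case, chosen)
    flipped = replace(case, **{factor: new_value})
    record_decision(flipped, second_best)

# Example: the pilot prefers to continue, but states that a closer storm would flip the choice.
record_decision(Case(12.0, 9.0, 25.0), 0)
record_inverse_counterfactual(Case(10.0, 8.0, 20.0), chosen=0, second_best=1,
                              factor="storm_distance", new_value=12.0)

# A simple operator model trained on baseline and counterfactual cases (model choice is assumed).
model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

def agreement(model, X_test: list[list[float]], user_choices: list[int]) -> float:
    # Human-AI model similarity as the proportion of matching decisions on test cases.
    return float(np.mean(model.predict(X_test) == np.array(user_choices)))

In this sketch, "minimal edits" correspond to small changes in a single factor's value, so the paired cases bracket the boundary tightly; larger edits place the flipped case far from the boundary and convey less about where the preference actually changes.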