Explainable Artificial Intelligence (XAI) Research
The goal of DARPA's explainable artificial intelligence (XAI) program (2017–2021) was "to enable end users to better understand, trust, and effectively manage artificially intelligent systems." See:
- Gunning, D., & Aha, D. (2019). DARPA's explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44–58. https://doi.org/10.1609/aimag.v40i2.2850
- Gunning, D., Vorm, E., Wang, J. Y., & Turek, M. (2021). DARPA's explainable artificial intelligence (XAI) program: A retrospective. Applied AI Letters, 2(4), e61. https://doi.org/10.1002/ail2.61
- Clancey, W. J. (2019). Critical Thinking about AI and Explanation (annotated presentation). National Academies Board on Human-Systems Integration (BOHSI) Panel: Explainable AI, System Transparency, and Human-Machine Teaming.
- Clancey, W. J. (2019). Explainable AI Past, Present, and Future: A Scientific Modeling Approach. Presentation for the Ontology Summit 2019 Explainable AI Session 1, February 20 Conference Call.
- Clancey, W. J., & Hoffman, R. R. (2021). Methods and standards for research on explainable artificial intelligence: Lessons from intelligent tutoring systems. Applied AI Letters, 2(4), e53. https://doi.org/10.1002/ail2.53
- Hoffman, R. R., Clancey, W. J., & Mueller, S. T. (2020). Explaining AI as an exploratory process: The Peircean abduction model. https://arxiv.org/abs/2009.14795v2
- Hoffman, R. R., Jalaeian, M., Klein, G., Jentsch, F., Clancey, W. J., & Mueller, S. T. (2022). Top ten recommendations for the development and assessment of AI systems. Proceedings of the 27th International Command and Control Research and Technology Symposium (ICCRTS 27). International Command and Control Institute. https://internationalc2institute.org/27th-iccrts-proceedings-home
- Hoffman, R. R., Klein, G., Mueller, S. T., & Clancey, W. J. (2021). Recommendations for the empirical assessment of human–AI work systems: A contribution to AI measurement science. Technical Report, DARPA Explainable AI Program. https://doi.org/10.31234/osf.io/z3yek
- Hoffman, R. R., Miller, T., & Clancey, W. J. (2022). Psychology and AI at a crossroads: How might complex systems explain themselves? The American Journal of Psychology, 135(4), 365–378. https://doi.org/10.5406/19398298.135.4.01
- Hoffman, R. R., Miller, T., Klein, G., & Clancey, W. J. (2018). Explaining explanation, Part 4: A deep dive on deep nets. IEEE Intelligent Systems, 33(3), 87–95. https://www.researchgate.net/publication/326726086_Explaining_Explanation_Part_4_A_Deep_Dive_on_Deep_Nets
- Hoffman, R. R., Miller, T., Klein, G., Mueller, S. T., & Clancey, W. J. (2023). Increasing the value of XAI for users: A psychological perspective. Künstliche Intelligenz, 37, 237–247. https://doi.org/10.1007/s13218-023-00806-9
- Klein, G., Hoffman, R. R., Clancey, W. J., Mueller, S. T., Jentsch, F., & Jalaeian, M. (2023). “Minimum necessary rigor” in empirically evaluating human–AI work systems. AI Magazine, 44, 274–281. https://doi.org/10.1002/aaai.12108
- Mueller, S. T., Hoffman, R. R., Clancey, W. J., Emrey, A. K., & Klein, G. (2019). A literature meta-review synopsis of key ideas and publications and bibliography for explainable AI. DARPA XAI Technical Report. https://doi.org/10.48550/arXiv.1902.01876
- Mueller, S. T., Veinott, E. S., Hoffman, R. R., Klein, G., Alam, L., Mamun, T. I., & Clancey, W. J. (2021). Principles of explanation in human-AI systems. The 35th AAAI Conference on Artificial Intelligence (virtual), Workshop 11: Explainable Agency in Artificial Intelligence. https://doi.org/10.48550/arXiv.2102.04972