Explainable Artificial Intelligence (XAI)
(From Wikipedia, the free encyclopedia)
"An Explainable AI (XAI) or Transparent AI is an artificial
intelligence (AI) whose actions can be easily understood by humans. It
contrasts with "black box" AIs that employ complex opaque algorithms,
where even their designers cannot explain why the AI arrived at a
specific decision. XAI can be used to implement a social right to
explanation. Transparency rarely comes for free; there are often
tradeoffs between how smart
an AI is and how transparent it is, and
these tradeoffs are expected to grow larger as AI systems increase in
internal complexity. The technical challenge of explaining AI decisions
is sometimes known as the interpretability problem."
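To make the idea of explaining a black-box prediction concrete, the following is a minimal, generic sketch of a perturbation-based, model-agnostic explanation in Python. The model (a gradient-boosted classifier), the Iris dataset, and the scoring rule are illustrative assumptions only; they are not taken from the publications listed below.

# A minimal, generic sketch of a perturbation-based, model-agnostic explanation.
# Assumptions: scikit-learn and NumPy are available; the "black box" here is a
# gradient-boosted classifier on the Iris data, chosen purely for illustration.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

data = load_iris()
X, y = data.data, data.target

# Train an opaque model; we only query it through predict_proba().
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

def feature_importance_for(instance, model, background):
    """Score each feature by how much the predicted probability of the
    instance's top class drops when that feature is replaced by its
    dataset mean -- a crude, model-agnostic sensitivity test."""
    base_probs = model.predict_proba(instance.reshape(1, -1))[0]
    top_class = int(np.argmax(base_probs))
    scores = []
    for j in range(len(instance)):
        perturbed = instance.copy()
        perturbed[j] = background[:, j].mean()  # neutralise feature j
        new_prob = model.predict_proba(perturbed.reshape(1, -1))[0][top_class]
        scores.append(base_probs[top_class] - new_prob)
    return top_class, scores

instance = X[0]
predicted_class, scores = feature_importance_for(instance, black_box, X)
print("Predicted class:", data.target_names[predicted_class])
for name, score in sorted(zip(data.feature_names, scores),
                          key=lambda pair: -pair[1]):
    print(f"  {name}: {score:+.3f}")

Running this prints, for one instance, the predicted class and how much the predicted probability drops when each feature is neutralised; larger drops suggest features the black box relied on more heavily. Because only predict_proba() is queried, the same procedure applies to any model, which is what "model-agnostic" means in this context.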
I recently started research in XAI after attending an XAI workshop at IJCAI 2017 in Melbourne. My research group has two preliminary publications:
Hall, D., Hur, N., Soulsby, J., & Watson, I. (2018). An Approach to Producing Model-Agnostic Explanations for Recommendation Rankings. In Proc. of the 1st Workshop on XCBR: Case-Based Reasoning for the Explanation of Intelligent Systems at the 26th Int. Conf. on Case-Based Reasoning, Stockholm, Sweden, pp. 17-21 (http://iccbr18.com/wp-content/uploads/ICCBR-2018-V3.pdf).

Lee, S., Li, S., Lim, H., & Watson, I. (2018). An Approach to Producing Model-Agnostic Explanations for Recommendation Rankings. In Proc. of the 1st Workshop on XCBR: Case-Based Reasoning for the Explanation of Intelligent Systems at the 26th Int. Conf. on Case-Based Reasoning, Stockholm, Sweden, pp. 12-16 (http://iccbr18.com/wp-content/uploads/ICCBR-2018-V3.pdf).
Last updated July 2018