DSS Special Issue On Explainable AI For Enhanced Decision Making

Artificial Intelligence (AI), defined as the development of computer systems able to perform tasks that normally require human intelligence by understanding, processing, and analyzing large amounts of data, has been a prevalent research domain for several decades. An increasing number of businesses rely on AI to achieve outcomes that operationally and/or strategically support (human) decision making in different domains. At present, AI-based machine learning (ML) has become widely popular as a subfield of AI, both in industry and in academia. ML has been widely used to enhance decision making in applications including predicting organ transplantation risk (Topuz et al., 2018), forecasting the remaining useful life of machinery (Kraus et al., 2020), predicting student dropout (Coussement et al., 2020), detecting bias in crowd lending (Fu et al., 2021), and detecting insurance underwriting fraud (Vandervorst et al., 2022), amongst others. Early AI attempts to imitate human decision-making rules were only partially successful, as humans often could not accurately describe the rules they use to solve problems (Fügener et al., 2022). With the development of advanced AI, exciting progress has been made in algorithmic development to support decision making in various fields, including finance, economics, marketing, human resource management, tourism, computer science, biological science, medical science, and others (Liu et al., 2022).

Recent advances have heavily focused on boosting the predictive accuracy of AI methods, with deep learning (DL) methods being a prevalent example. This strong focus on improved prediction performance often comes at the expense of explainability, which leads to decision makers' distrust and even rejection of AI systems (Shin, 2021). Explainable AI describes the process that allows one to understand how an AI system decides, predicts, and performs its operations. Explainable AI thereby reveals the strengths and weaknesses of the decision-making strategy and explains the rationale of the decision support system (Rai, 2020). Numerous scholars confirm that explainable AI is key to developing and deploying AI in industries such as retail, banking and financial services, manufacturing, and supply chain/logistics (Kim et al., 2020; Shin, 2021; Zhdanov et al., 2022). In addition, explainable AI has received attention from governments owing to its ability to improve the efficiency and effectiveness of governmental functions and decision support (Phillips-Wren et al., 2021).

In many cases, understanding why a model makes certain decisions and predictions is as important as its accuracy, because model explainability helps managers better understand a model's parameters and apply them more confidently, allowing them to communicate the analytical rationale for their decisions more convincingly to stakeholders (Wang et al., 2022). Among others, exploring the applications of AI explainability and interpretability in decision making is one of the main contributions of this special issue.

Therefore, this special issue on “Explainable AI for Enhanced Decision Making” invites submissions on the following topics, as an illustrative but not exhaustive list:

* Explainability and interpretability in AI decision support systems

* Using explainable AI for corporate investment decisions

* Explainable AI in banking, insurance, and micro enterprises

* Explainable AI in healthcare, transportation, and education

* Property risk assessment using explainable AI

* Making enhanced business decisions using explainable AI

* Using explainable AI to make predictions for IT industry decisions

* Explainable AI, big data, and decision support systems

* Explainable AI, applications and services

* Explainable methods for deep learning architectures

* Decision model visualization

* Evaluating decision-making metrics and processes

* Measuring explainability in decision support systems

Please note that we are particularly interested in research papers that focus on the explainability aspects of AI-based ML research. Articles that simply focus on improving the accuracy of AI algorithms or ML classifiers, without highlighting the benefit to improved explainable decision making, are strongly discouraged.

Dr. Mohammad Abedin

Senior Lecturer in Fintech & Financial Innovation

Teesside University International Business School

Teesside University, United Kingdom

Email: [email protected]

Prof. Dr. Kristof Coussement

Professor of Business Analytics

IESEG School of Management, France

Email: [email protected]

Dr. Mathias Kraus

Assistant Professor of Data Analytics

Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany

Email: [email protected]

Prof. Dr. Sebastián Maldonado

Professor of Information Systems

University of Chile, Chile

Email: [email protected]

Dr. Kazim Topuz

Assistant Professor of Business Analytics & Operations Management

The University of Tulsa, United States

Email: [email protected]

Submission Guidelines

Please submit your paper to the Special Issue category (VSI: Explainable AI) through the online submission system (/decsup/default1.aspx) of Decision Support Systems. All submissions should follow the general author guidelines of Decision Support Systems, available at /journals/decision-support-systems/guide-for-authors.

Submission Timeline

• Paper submission system opens: November 1st, 2022.

• Paper submission deadline: June 15th, 2023.

References

Coussement, K., Phan, M., De Caigny, A., Benoit, D. F., & Raes, A. (2020). Predicting student dropout in subscription-based online learning environments: The beneficial impact of the logit leaf model. Decision Support Systems. /10.1016/j.dss.2020.

Fu, R., Huang, Y., & Singh, P. V. (2021). Crowds, Lending, Machine, and Bias. Information Systems Research, 32(1), 72–92. /10.1287/isre.2020.0990

Fügener, A., Grahl, J., Gupta, A., & Ketter, W. (2022). Cognitive Challenges in Human–Artificial Intelligence Collaboration: Investigating the Path Toward Productive Delegation. Information Systems Research. /10.1287/isre.2021.1079

Kim, B., Park, J., & Suh, J. (2020). Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information. Decision Support Systems, 134. /10.1016/j.dss.2020.

Kraus, M., Feuerriegel, S., & Oztekin, A. (2020). Deep learning in business analytics and operations research: Models, applications and managerial implications. European Journal of Operational Research, 281(3), 628–641. /10.1016/j.ejor.2019.09.018

Liu, H., Ye, Y., & Lee, H. Y. (2022). High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks. Operations Research. /10.1287/opre.2021.2217

Phillips-Wren, G., Daly, M., & Burstein, F. (2021). Reconciling business intelligence, analytics and decision support systems: More data, deeper insight. Decision Support Systems, 146. /10.1016/j.dss.2021.

Rai, A. (2020). Explainable AI: from black box to glass box. Journal of the Academy of Marketing Science, 48(1), 137–141. /10.1007/s

Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human Computer Studies, 146. /10.1016/j.ijhcs.2020.

Topuz, K., Zengul, F. D., Dag, A., Almehmi, A., & Yildirim, M. B. (2018). Predicting graft survival among kidney transplant recipients: A Bayesian decision support model. Decision Support Systems, 106, 97–109. /10.1016/j.dss.2017.12.004

Vandervorst, F., Verbeke, W., & Verdonck, T. (2022). Data misrepresentation detection for insurance underwriting fraud prevention. Decision Support Systems, 159. /10.1016/j.dss.2022.

Wang, L., Gopal, R., Shankar, R., & Pancras, J. (2022). Forecasting venue popularity on location-based services using interpretable machine learning. Production and Operations Management. /10.1111/poms.

Zhdanov, D., Bhattacharjee, S., & Bragin, M. A. (2022). Incorporating FAT and privacy aware AI modeling approaches into business decision making frameworks. Decision Support Systems, 155. /10.1016/j.dss.2021.113715