Journal of Strategic Management Studies


Identifying the design requirements of explainable artificial intelligence systems

Article type: Research

Authors
1 PhD student, Faculty of Economics and Administrative Science, Ferdowsi University of Mashhad, Mashhad, Iran
2 Professor, Faculty of Economics and Administrative Science, Ferdowsi University of Mashhad, Mashhad, Iran
3 Professor, Faculty of Education and Psychology, Ferdowsi University of Mashhad, Mashhad, Iran
4 Assistant Professor, Information Technology Research Department, Research Institute for Information Science and Technology, Tehran, Iran
Abstract
With the expanding use of artificial intelligence algorithms in organizations and the public sector, concerns about the social responsibility of deploying intelligent agents, such as transparency, accountability, and fairness, have been raised in governmental and academic circles. Accordingly, the aim of this research is to develop an interpretive structural model of the requirements for designing explainable artificial intelligence systems for decision-making based on human-AI collaboration. To this end, an exploratory mixed-methods design, combining action design research, the fuzzy Delphi method, and interpretive structural modeling, is used to develop and evaluate the design principles of an explainable artificial intelligence system. The research context is the General Department of Revision of Laws and Regulations in the Legal Department of the Judiciary. The participants are practitioners from the General Department of Revision of Laws and Regulations and the Information Technology Center who, together with the researchers, form a research team of 15 people in total. Based on the findings, the model comprises six characteristics: understanding capability, governance capability, persuasion capability, predictive (descriptive) accuracy, transparency, and usefulness. These characteristics were classified into two dimensions: the affordance dimension includes understanding, governance, and persuasion capability, while the actualization dimension includes predictive accuracy, transparency, and usefulness. In addition, the model can explain the mechanism of intelligence reinforcement in human-AI interaction.
Keywords

Subjects


Article Title (English)

Identifying the design requirements of explainable artificial intelligence systems

Authors (English)

Zahra Hemmat 1
Mohammad Mehraeen 2
Rahmat Allah Fattahi 3
Farhad Shirani 4
1 PhD student, Faculty of Economics and Administrative Science, Ferdowsi University of Mashhad, Mashhad, Iran
2 Professor, Faculty of Economics and Administrative Science, Ferdowsi University of Mashhad, Mashhad, Iran
3 Professor, Faculty of Education and Psychology, Ferdowsi University of Mashhad, Mashhad, Iran
4 Assistant Professor, Information Technology Research Department, Research Institute for Information Science and Technology, Tehran, Iran
Abstract (English)

Introduction
The use of artificial intelligence algorithms in public sectors is increasingly expanding. At the same time, there are still concerns about the social responsibility of using intelligent agents, such as transparency, accountability and fairness in governmental and academic communities. Based on this, the purpose of this research is to develop a theorical model that includes the requirements for the design of explainable artificial intelligence systems in Human-Artificial Intelligence interaction for decision making. The most important issue is understanding how the system reaches a decision by users, which can be investigated. Because as much as human decision-makers are expected to provide explanations (explain) about their decisions, artificial intelligence systems can also be asked to explain proposed solutions. In turn, this topic provides prescriptive knowledge for system designers to create new insights for users by considering the linkage of user information with various sources through the use of artifacts. Therefore, the scope of this research is the enhancement of intelligence in which artificial intelligence models provide recommendations for human users. According to these cases, the research’s goel is to identify the descriptive and prescriptive knowledge required for designing a class of recommender systems based on human-artificial intelligence interaction. To achieve this goal, it is borrowed from the theory of ability to specify the relationships between the extracted categories. Finally, it is discussed how these requirements are used in design cycles.
Methodology
A Mixed methods research designs: Action design rsearch and Fuzzy delphi method and interpretive structural modeling approach used to develop and evaluate the design principles of that. We follow the design research developed by Mlarki et al. The General Department of Revision of Laws and Regulations in the Legal Department of the Judiciary has been selected as the context of research. The participants of this research are professionals from the General Department of Revision of Laws and Regulations and the Information Technology Center, who together with researchers (supervisors and students) constitute the research team, which is a total of 15 people. Using the triangulation technique, data have been collected from different sources. And data analysis was done in two steps: in the first step, by continuous refining the concepts, the extracted components were aggregated in theoretical dimensions. Also, a data structure was created by combining concepts, components and dimensions. In the second step, the research lens was used to develop the theory.
Results and Discussion
The research develops a framework that conceptualizes the characteristics of explainable artificial intelligence systems: understanding capability, governance capability, persuasion capability, predictive (descriptive) accuracy, transparency, and usefulness. These characteristics were classified into two dimensions. The affordance dimension includes the ability to understand, the ability to rule, and the ability to persuade. The actualization dimension includes predictive accuracy, transparency, and usefulness. In addition, the model can explain the mechanism of intelligence reinforcement in human-artificial intelligence interaction. We therefore propose the following propositions for designing explainable artificial intelligence systems for human-AI interaction: 1) the understanding capability of the intelligent agent in human-AI interaction leads to intelligence reinforcement; 2) the governance capability of the intelligent agent in human-AI interaction leads to intelligence reinforcement; 3) the persuasion capability of the intelligent agent in human-AI interaction leads to intelligence reinforcement; 4) the transparency of the intelligent agent in human-AI interaction leads to intelligence reinforcement; and 5) the predictive accuracy of the intelligent agent in human-AI interaction leads to intelligence reinforcement. Finally, with respect to indiscernibility, the findings emphasize explainable algorithmic activities and show that algorithmic transparency increases users' ability to understand and to be persuaded. Indiscernibility is difficult to assess because an algorithm-based decision-making process is understandable to some users and not to others.
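The two-dimension classification reported here is the kind of level partition that interpretive structural modeling yields. A minimal ISM sketch follows, using a purely illustrative adjacency matrix; the factor links are hypothetical, not the study's survey data:

```python
# Interpretive structural modeling (ISM): build the reachability matrix by
# transitive closure, then partition factors into levels iteratively.

def transitive_closure(adj):
    """Boolean reachability matrix (Warshall's algorithm), self-reachable."""
    n = len(adj)
    reach = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

def ism_levels(adj):
    """A factor joins the current level when its reachability set is a
    subset of its antecedent set (i.e., R(i) & A(i) == R(i))."""
    reach = transitive_closure(adj)
    remaining = set(range(len(adj)))
    levels = []
    while remaining:
        level = [i for i in remaining
                 if {j for j in remaining if reach[i][j]}
                 <= {j for j in remaining if reach[j][i]}]
        levels.append(sorted(level))
        remaining -= set(level)
    return levels

# Illustrative factors: 0 = transparency, 1 = predictive accuracy,
# 2 = understanding capability; hypothetical links 0 -> 2 and 1 -> 2.
links = [[0, 0, 1],
         [0, 0, 1],
         [0, 0, 0]]
```

On this toy matrix, understanding capability surfaces at the top level and transparency and predictive accuracy at the level below, mirroring how driving factors sit deeper in an ISM hierarchy.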
The design requirements of this research provide a practical guide for clarifying algorithmic activity in policy-making, according to users' understanding and the intended use of artificial intelligence.
Conclusion
We argued that users and artifacts create affordances in each other that lead to learning. Accordingly, the following can be considered the theoretical contributions of this research. First, by developing a theoretical model, we established the mechanism for designing systems based on human-artificial intelligence interaction; in addition, from a human-centered perspective, we identified the capabilities users need for intelligence reinforcement: the capability of understanding, the capability of governance, and the capability of persuasion. Second, the model provides knowledge about the solution space: the intelligent agent actualizes the user's affordances through predictive accuracy and transparency. Next, we provide a set of requirements for implementing human-AI systems; organized in a theoretical model, these requirements constitute a guide to design principles for artificial intelligence systems in organizations, which has been the main concern of prior research. Finally, we showed that the design of human-AI interaction systems for decision-making is not only technical: technical, social, and organizational elements are intertwined in different cycles, corresponding to three related and interdependent aspects of AI management, i.e., automation, learning, and indiscernibility. In terms of automation, the findings showed that policy-making functions cannot be fully coded; AI automation is therefore constraining, and AI should be a tool for augmentation. On the other hand, augmentation itself can be automated, and so can become automatic over time. Accordingly, artificial intelligence tools can automate policy-making processes through transparency and predictive accuracy, the features needed to actualize the policy maker's affordances, i.e., the ability to understand, the ability to rule, and the ability to persuade.
In terms of learning, the findings emphasize the capacity of machine learning algorithms for semantic search to increase the accuracy of artificial intelligence predictions.
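As a toy illustration of ranking legal texts against a query, the sketch below scores documents by cosine similarity of bag-of-words vectors. The corpus and query are invented, and a real semantic search would replace `vectorize` with a learned embedding model; the ranking mechanics, however, are the same.

```python
import math
from collections import Counter

def vectorize(text):
    # Toy bag-of-words vector; a production semantic search would use
    # a learned embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def rank(query, documents):
    """Return documents sorted by similarity to the query, best first."""
    q = vectorize(query)
    return sorted(documents, key=lambda d: cosine(q, vectorize(d)), reverse=True)

# Invented mini-corpus of regulation titles.
corpus = [
    "tax law amendment",
    "environmental protection regulation",
    "repeal of the tax regulation",
]
```

For the query "tax regulation", the third title ranks first because it shares both query terms, which is the behavior a retrieval component in a law-revision workflow would build on.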
 

Keywords (English)

Ability to understand
Ability to rule
Ability to persuade
Prediction accuracy
Transparency
Usefulness
 1. Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160, doi: 10.1109/ACCESS.2018.2870052.

 2. Alon-Barkat, S., & Busuioc, M. (2023). Human–AI interactions in public sector decision making: “automation bias” and “selective adherence” to algorithmic advice. Journal of Public Administration Research and Theory, 33(1), 153-169, https://doi.org/10.1093/jopart/muac007.

 3.Araujo, T., Helberger, N., Kruikemeier, S., & de Vreese, C. H. (2020). In AI we trust? Perceptions about automated decision‑making by artificial intelligence. AI & Society, 35(3), 611-633, https://doi.org/10.1007/s00146-019-00931-w.

 4. Burton-Jones, A., & Volkoff, O. (2017). How Can We Develop Contextualized Theories of Effective Use? A Demonstration in the Context of Community-Care Electronic Health Records. Information Systems Research, 28(3), 451-679, https://doi.org/10.1287/isre.2017.0702.

 5.Castano, S., Falduti, M., Ferrara, A., & Montanelli, S. (2022). A knowledge-centered framework for exploration and retrieval of legal documents. Information Systems, 106, 101842, https://doi.org/10.1016/j.is.2021.101842.

 6.Chen, H., Wu , L., Chen, J., Lu, W., & Ding, J. (2022). A comparative study of automated legal text classification using random forests and deep learning. Information Processing and Management, 59(2), 102798, https://doi.org/10.1016/j.ipm.2021.102798.

 7. Cobbe, J. (2019). Administrative law and the machines of government: Judicial review of automated public-sector decision-making. Legal Studies, 39(4), 1-20, https://doi.org/10.2139/ssrn.3226913.

 8.De Fine Licht, K., & de Fine Licht, J. (2020). Artificial intelligence, transparency, and public decision‑making: Why explanations are key when trying to produce perceived legitimacy. AI & Society, 35(4), 917-926, https://doi.org/10.1007/s00146-020-00960-w.

 9.Di Vaio, A., Hassan, R., & Alavoine, C. (2022). Data intelligence and analytics: A bibliometric analysis of human–Artificial intelligence in public sector decision-making effectiveness. Technological Forecasting & Social Change, 174, 121201, https://doi.org/10.1016/j.techfore.2021.121201

 10.Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.

 11.Du, W., Pan, S. L., Leidner, D. E., & Ying, W. (2019). Affordances, experimentation and actualization of FinTech:A blockchain implementation study. Journal of Strategic Information Systems, 28(1), 50-65, https://doi.org/10.1016/j.jsis.2018.10.002.

 12. Ehsan, U., & Riedl, M. O. (2020). Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach. International Conference on Human-Computer Interaction (pp. 449-466). Springer.

 13. Fügener, A., Grahl, J., Gupta, A., & Ketter, W. (2022). Cognitive challenges in human–artificial intelligence collaboration: Investigating the path toward productive delegation. Information Systems Research, 33(2), 678-696, https://doi.org/10.1287/isre.2021.1079.

 14. Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining Explanations: An Overview of Interpretability of Machine Learning. IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA) (pp. 80-89). IEEE.

 15.Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Pedreschi, D., & Giannotti, F. (2018). A Survey Of Methods For Explaining Black Box Models. ACM computing surveys (CSUR), 51(5), 1-42, https://doi.org/10.1145/3236009.

 16.Haesevoets, T., De Cremer, D., Dierckx, K., & Van Hiel, A. (2021). Human-machine collaboration in managerial decision making. Computers in Human Behavior, 119, 106730. https://doi.org/10.1016/j.chb.2021.106730

 17. Jacovi, A., Shalom, O. S., & Goldberg, Y. (2018). Understanding convolutional neural networks for text classification. arXiv preprint arXiv:1809.08037.

 18.Jarrahi, M. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577-586, https://doi.org/10.1016/j.bushor.2018.03.007

 19. Kuo, Y.-F., & Chen, P.-C. (2008). Constructing performance appraisal indicators for mobility of the service industries using Fuzzy Delphi Method. Expert Systems with Applications, 35(4), 1930-1939, https://doi.org/10.1016/j.eswa.2007.08.068.

 20.Kulesza, T., Burnett, M., Wong, W.-K., & Stumpf, S. (2015). Principles of Explanatory Debugging to Personalize Interactive Machine Learning. Proceedings of the 20th international conference on intelligent user interfaces (pp. 126-137). IUI.

 21.Liao, Q. V., & Varshney, K. R. (2021). Human-centered explainable ai (xai): From algorithms to user experiences. arXiv preprint arXiv:2110.10790.

 22. Maier, J. R., & Fadel, G. M. (2009). Affordance based design: A relational theory for design. Research in Engineering Design, 20(1), 13-27, https://doi.org/10.1007/s00163-008-0060-3.

 23.Miller, T. (2019). Explanation in Artificial Intelligence:Insights from the Social Sciences. Artificial intelligence, 267, 1-38.

 24. Mullarkey, M. T., Hevner, A. R., & Ågerfalk, P. (2019). An elaborated action design research process model. European Journal of Information Systems, 28(1), 6-20, https://doi.org/10.1080/0960085X.2018.1451811.

 25.Myers, M. D., & Venable, J. R. (2014). A set of ethical principles for design science research in information systems. Information & Management, 51(6), 801-809, https://doi.org/10.1016/j.im.2014.01.002.

 26.Ogunbiyi , N., Basukoski, A., & Chaussalet, T. (2021). An Exploration of Ethical Decision Making with Intelligence Augmentation. social sciences, 10(2), 57, https://doi.org/10.3390/socsci10020057.

 27.Pan, S. L., Li, M., Pee, L. G., & Sandeep, M. S. (2020). Sustainability Design Principles for a Wildlife Management Analytics System: An Action Design Research. European Journal of Information Systems, 30(4), 1-22, https://doi.org/10.1080/0960085X.2020.1811786.

 28.Peeters, R. (2020). The agency of algorithms: Understanding human-algorithm interaction in administrative decision-making. Information Polity, 25(4), 507-522, https://doi.org/10.3233/IP-200253.

 29.Rahbari, E., & Shabanpoor, A. (2023). The Challenges in Employing of AI Judge in Civil Proceedings. Legal Research Quarterly, 25, 419-444, 10.52547/JLR.2022.228967.2335 [In Persian]

 30.Riedl, M. O. (2019). Human-centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies, 1(1), 33-36, https://doi.org/10.1002/hbe2.117.

  31.Schoonderwoerd, T. A., Jorritsma, W., Neerincx, M. A., & van den Bosch, K. (2021). Human-centered XAI: Developing design patterns for explanations of clinical decision support systems. International Journal of Human - Computer Studies, 154, 102684.

 32. Sein, M. K., Henfridsson, O., Purao, S., & Rossi, M. (2011). Action design research. MIS Quarterly, 35(1), 37-56, doi: 10.2307/23043488.

 33.Sil, R., & Abhishek, R. (2021). Machine learning approach for automated legal text classification. International Journal of Computer Information Systems and Industrial Management, 13, 242-251.

 34.Shrestha, Y. R., Ben-Menahem, S. M., & von Krogh, G. (2019). Organizational decision-making structures in the age of artificial intelligence. California Management Review, 61(4), 66-83, https://doi.org/10.1177/0008125619862257.

 35. Sowa, K., Przegalinska, A., & Ciechanowski, L. (2021). Cobots in knowledge work: Human–AI collaboration in managerial professions. Journal of Business Research, 125, 135-142, https://doi.org/10.1016/j.jbusres.2020.11.038.

  36.Tim, Y., Pan, S. L., Bahri, S., & Fauzi, A. (2017). Digitally enabled affordances for community‐driven environmental movement in rural Malaysia. Information Systems Journal, 28(1), 48-75, https://doi.org/10.1111/isj.12140.

 37.Vincent, V. U. (2021). Integrating intuition and artificial intelligence in organizational decision making. Business Horizons, 64(4), 425-438, https://doi.org/10.1016/j.bushor.2021.02.008.

 38.Watson, R. W. (1978). Interpretive structural modeling—A useful tool for technology assessment? Technological Forecasting and Social Change, 11(2), 165-185, https://doi.org/10.1016/0040-1625(78)90028-8.

 39. Wiegreffe, S., & Marasović, A. (2021). Teach me to explain: A review of datasets for explainable natural language processing. arXiv preprint arXiv:2102.12060.

 40. Wolf, C. T. (2019). Explainability scenarios: Towards scenario-based XAI design. 24th International Conference on Intelligent User Interfaces (pp. 252-257).

 41.Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? Philosophy & Technology, 32(4), 661-683, https://doi.org/10.1007/s13347-018-0330-6

  • Received: 10 Esfand 1401
  • Revised: 2 Mehr 1402
  • Accepted: 5 Bahman 1402