An Explainability-Centric Requirements Analysis Framework for Machine Learning Applications
Graphical Abstract
Abstract
Data-driven intelligent software based on machine learning technology is an important means of realizing industrial digital transformation. Developing such software requires the combined use of software requirements engineering, data and domain knowledge engineering, and machine learning. Because this process involves many stakeholders and roles, it is extremely challenging to explain clearly why and how domain knowledge, business logic, and data semantics relate to one another. A systematic requirements engineering approach is therefore needed to explicitly address the explainability requirements of data-driven intelligent applications. This remains a fast-evolving research field, one that requires various domain models and end-to-end machine learning technology to be properly embedded into given business processes. A key research question is how to treat explainability as a core requirement in safety-critical scenarios such as industrial and medical applications. We provide a research overview of requirements engineering for machine learning applications with respect to explainability. First, the research status quo, research foci, and representative research progress are reviewed. Then, an explainability-centric requirements analysis framework for machine learning applications is proposed, and several important open issues are identified. Finally, based on the proposed framework, a case study of an industrial intelligence application is discussed to illustrate the proposed requirements analysis methodology.