
Explainable software systems

Andreas Vogelsang

Abstract

Software and software-controlled technical systems play an increasing role in our daily lives. In cyber-physical systems, which connect the physical and the digital world, software not only influences how we perceive and interact with our environment but also makes decisions that influence our behavior. The ability of software systems to explain their behavior and decisions will therefore become a property that is crucial for their acceptance in our society. We call software systems with this ability explainable software systems. In this article, we highlight some of our past work on methods and tools for designing explainable software systems. Specifically, we describe an architectural framework for designing self-explainable software systems that is based on the MAPE loop for self-adaptive systems. We then show that explainability is also important for the tools engineers use during software development, presenting examples from requirements engineering where techniques from natural language processing and neural networks help engineers comprehend the complex information structures embedded in system requirements.
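To make the architectural idea tangible, below is a minimal Python sketch of a MAPE loop (Monitor, Analyze, Plan, Execute) extended with an explanation stage. All class names, sensor values, and thresholds are illustrative assumptions, not the interfaces defined in the article.

    # Minimal sketch: a MAPE loop with an added explanation stage.
    # All names and values are hypothetical illustrations, not the
    # framework defined in the article.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Decision:
        action: str
        rationale: str  # why the system chose this action

    @dataclass
    class ExplainableMapeLoop:
        history: List[Decision] = field(default_factory=list)

        def monitor(self) -> dict:
            # Collect runtime data (stubbed with a fixed value here).
            return {"obstacle_distance": 3.0}

        def analyze(self, data: dict) -> bool:
            # Decide whether adaptation is needed.
            return data["obstacle_distance"] < 5.0

        def plan(self, data: dict) -> Decision:
            # Record the rationale together with the chosen action so the
            # system can explain its behavior on demand later.
            return Decision(
                action="brake",
                rationale=(f"obstacle at {data['obstacle_distance']} m is "
                           "below the 5 m safety threshold"),
            )

        def execute(self, decision: Decision) -> None:
            self.history.append(decision)
            print(f"executing: {decision.action}")

        def explain(self) -> str:
            # The added stage: a human-readable account of past decisions.
            return "; ".join(d.rationale for d in self.history)

    loop = ExplainableMapeLoop()
    data = loop.monitor()
    if loop.analyze(data):
        loop.execute(loop.plan(data))
    print("explanation:", loop.explain())

The design choice worth noting is that plan() stores a rationale with every decision, so an explanation can be produced after the fact without re-deriving the system's reasoning.

The abstract also mentions neural networks that help engineers comprehend requirements; one underlying task is separating requirements from other content in specification documents. As a lightweight stand-in for a neural classifier, the sketch below shows the shape of that task with a TF-IDF bag-of-words pipeline in scikit-learn; the sentences and labels are invented for illustration.

    # Sketch: classifying specification sentences as requirements vs.
    # non-requirements. A lightweight TF-IDF + logistic regression
    # stand-in for the neural classifiers mentioned in the abstract;
    # the training data below is invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    sentences = [
        "The system shall log every user action.",        # requirement
        "The display brightness shall be adjustable.",    # requirement
        "This chapter describes the logging subsystem.",  # non-requirement
        "See Section 3 for an overview of the display.",  # non-requirement
    ]
    labels = ["req", "req", "non-req", "non-req"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(sentences, labels)
    print(clf.predict(["The system shall warn the driver of obstacles."]))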



Received: 2019-05-14
Accepted: 2019-05-14
Published Online: 2019-07-02
Published in Print: 2019-08-27

© 2019 Walter de Gruyter GmbH, Berlin/Boston