Open Access (CC BY 4.0 license), published by De Gruyter, October 9, 2021

Generating adversarial images to monitor the training state of a CNN model

  • Ning Ding and Knut Möller

Abstract

Deep neural networks have shown effectiveness in many applications; however, regulated domains such as automotive or medicine require quality guarantees. It is therefore important to understand how robust a solution is to perturbations in the input space. To identify the vulnerabilities of a trained classification model and to evaluate how different input perturbations affect the output class, two methods for generating adversarial examples were implemented. The resulting adversarial images were used to derive a robustness index that monitors the training state and safety of a convolutional neural network model. In future work, some of the generated adversarial images will be included in the training phase to improve the model's robustness.
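The abstract does not name the two generation methods, but a common baseline for crafting such adversarial images is the Fast Gradient Sign Method (FGSM), which perturbs the input along the sign of the loss gradient. The sketch below is illustrative only and uses a toy logistic classifier in place of the paper's CNN; the function names, the epsilon value, and the analytic gradient are assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, w, b, y):
    # Binary cross-entropy L = -[y log p + (1-y) log(1-p)] with p = sigmoid(w.x + b);
    # its gradient with respect to the input x is (p - y) * w.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, w, b, y, epsilon=0.1):
    # FGSM step: move each pixel by epsilon in the direction that increases the loss,
    # then clip back to the valid pixel range [0, 1].
    grad = loss_grad_wrt_input(x, w, b, y)
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random(16)            # toy "image" of 16 pixels in [0, 1]
w = rng.standard_normal(16)   # fixed "trained" weights (hypothetical)
b = 0.0
y = 1.0                       # true label
x_adv = fgsm(x, w, b, y)
print(np.max(np.abs(x_adv - x)))  # perturbation magnitude, bounded by epsilon
```

A robustness index of the kind the paper describes could then be built on top of such perturbations, e.g. by measuring the smallest epsilon at which the predicted class flips.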

Published Online: 2021-10-09
Published in Print: 2021-10-01

© 2021 The Author(s), published by Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
