
at - Automatisierungstechnik

Methoden und Anwendungen der Steuerungs-, Regelungs- und Informationstechnik

[AT - Automation Technology: Methods and Applications of Control, Regulation, and Information Technology]

Editor-in-Chief: Jumar, Ulrich


IMPACT FACTOR 2017: 0.503

CiteScore 2017: 0.47

SCImago Journal Rank (SJR) 2017: 0.212
Source Normalized Impact per Paper (SNIP) 2017: 0.546

Online ISSN: 2196-677X
Volume 66, Issue 9


An overview of deep learning techniques

Ein Überblick über Deep-Learning-Techniken

Michael Vogt
Published Online: 2018-09-13 | DOI: https://doi.org/10.1515/auto-2018-0076

Abstract

Deep learning is the paradigm that has profoundly changed the artificial intelligence landscape within only a few years. Although accompanied by a variety of algorithmic achievements, this technology is disruptive mainly from the application perspective: it considerably pushes the border of tasks that can be automated, changes the way products are developed, and is available to virtually everyone. The subject of deep learning is artificial neural networks with a large number of layers. Compared to earlier approaches with ideally a single layer, this allows massive computational resources to be used to train black-box models directly on raw data with a minimum of engineering work. The most successful applications are found in visual image understanding, but also in audio and text modeling.
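The abstract's central idea, stacking many layers, each computing an activation of a weighted sum of the previous layer's outputs, can be sketched in a few lines. This is an illustrative toy example, not code from the article; the layer sizes and random weights are hypothetical, and real deep networks would use an optimized framework rather than pure Python:

```python
import random

def relu(v):
    # Rectified linear unit, the standard activation in deep networks.
    return [max(0.0, x) for x in v]

def layer(x, W, b, act=relu):
    # One layer: y = act(W x + b), computed as a plain matrix-vector product.
    z = [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(W, b)]
    return act(z)

random.seed(0)

def rand_matrix(rows, cols):
    # Hypothetical random initialization; real networks use schemes
    # such as Glorot or He initialization.
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

# Three stacked layers ("deep" = many such layers) mapping 4 raw inputs
# to 2 outputs; training would then adjust W and b by backpropagation.
x = [0.5, -1.2, 3.0, 0.1]
h1 = layer(x, rand_matrix(8, 4), [0.0] * 8)
h2 = layer(h1, rand_matrix(8, 8), [0.0] * 8)
y = layer(h2, rand_matrix(2, 8), [0.0] * 2, act=lambda v: v)  # linear output
print(len(y))  # prints 2
```

Adding further `layer` calls deepens the network without any other structural change, which is what distinguishes deep architectures from the single-hidden-layer networks of earlier approaches.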


Keywords: artificial intelligence; neural networks; deep learning; end-to-end learning



About the article

Michael Vogt

Michael Vogt received his Dr.-Ing. degree in control engineering in 2007 from Darmstadt University of Technology (TUD). From 2001 to 2006 he was a research associate at TUD's Institute of Automatic Control, working on system identification and support vector machines under the supervision of Prof. Dr.-Ing. Dr. h.c. Rolf Isermann. Since 2006, he has been employed as an algorithm developer and project manager at Smiths Detection in Wiesbaden, mainly engaged in the automatic evaluation of X-ray images. His areas of expertise include machine learning, numerical algorithms, and related fields.


Received: 2018-06-12

Accepted: 2018-07-13

Published Online: 2018-09-13

Published in Print: 2018-09-25


Citation Information: at - Automatisierungstechnik, Volume 66, Issue 9, Pages 690–703, ISSN (Online) 2196-677X, ISSN (Print) 0178-2312, DOI: https://doi.org/10.1515/auto-2018-0076.


© 2018 Walter de Gruyter GmbH, Berlin/Boston.
