Matching, i.e., determining the exact 2D pose (e.g., position and orientation) of objects, remains one of the key tasks in machine vision applications such as robot navigation, measuring, or grasping an object. There are many classic approaches to matching, based either on edges or on the pure gray values of the template. In recent years, deep learning has mainly been applied to more difficult tasks in which the objects of interest belong to many different categories with high intra-class variation and classic algorithms fail. In this work, we compare one of the latest deep-learning-based object detectors with classic shape-based matching. We evaluate both methods on a matching dataset as well as on an object detection dataset that contains rigid objects and is thus also suitable for shape-based matching. We show that for datasets of this type, where rigid objects appear under rigid transformations, shape-based matching still outperforms recent object detectors in terms of runtime, robustness, and precision if only a single template image per object is used. On the other hand, we show that for the application of object detection, the deep-learning-based approach outperforms the classic approach if annotated data is available for training. Ultimately, the choice of the best-suited approach depends on the conditions and requirements of the application.