Visual computing is a broad field ranging from image analysis and computer vision to computer graphics and visualization. All these areas work with visual data, and incorporating methods from human-computer interaction and perceptual psychology further enables efficient handling, exploration, and interpretation of visual data. Even though visual representations are widely used and important, they are often investigated and evaluated in a purely qualitative manner, in the sense of "what looks good is good". There is no overarching tradition of quantitative methods spanning visual computing in its breadth. Therefore, a stronger focus on quantitative methods and approaches is needed to advance the field and obtain reliable results.
Quantitative visual computing is the focus of the Transregional Collaborative Research Center SFB-TRR 161 at the University of Stuttgart, the University of Konstanz, Ulm University, and LMU Munich. The center is dedicated to dealing in a more quantitative way with the important information that emerges from the huge and ever-increasing amount of data produced in modern societies. The aim is to develop and extend a scientific culture of quantification in visual computing that helps reproduce, reanalyze, predict, and compare visual artifacts. Quantification can also improve the quality of visual computing algorithms and techniques, for example, by allowing them to adapt to quality metrics or other measures of visual results. Questions addressed in the research center include how to use massive amounts of data, how to explore such data, how to identify relevant information in that data, and how to communicate it effectively based on quantitative measures and methods.
This special issue of it – Information Technology provides an overview of research in quantitative visual computing in general and of the research themes of the SFB-TRR 161 in particular. In the following, we briefly introduce the articles compiled in this special issue:
Hägele and coauthors present Uncertainty Visualization: Fundamentals and Recent Developments, an overview of the general topic of uncertainty visualization and of particular aspects of quantifying uncertainty. The authors discuss how the different stages of the visualization pipeline can be affected by uncertainty and how this can be addressed, present recent advances in the field of uncertainty visualization, and illustrate them with examples.
The article by Chiossi et al. on Adapting Visualizations and Interfaces to the User investigates how adaptive approaches improve interaction in terms of user performance and user experience, based on different types of user input (physiological, behavioral, qualitative, and so on). The authors outline current research trends and discuss methodological approaches used in areas such as visual analytics, mixed reality, physiological computing, and proficiency-aware systems.
Related to this topic is the article by Zagermann et al. on Complementary Interfaces for Visual Computing, which discusses the importance of multi-device ecologies, as a single device may not be sufficient to support the user's workflow. The authors argue that each device or interface component needs to contribute a complementary characteristic to improve the quality of interaction and to better support the user's activities.
Klein and coauthors move further into the area of immersive analytics with their article Immersive Analytics: An Overview. The authors provide an overview of the concepts, research questions, topics, and challenges of this field, which investigates the benefits and challenges of employing immersive environments for data analysis and decision making.
The work by Ngo and coauthors, Machine Learning Meets Visualization – Experiences and Lessons Learned, is concerned with the interplay of machine learning and visualization, in particular with how both areas can mutually benefit from each other. The authors describe both directions: visualization supports machine learning, for example, by explaining models and by aiding machine-learning-based dimensionality reduction techniques; machine learning, in turn, helps improve visualizations, for example, through machine-learning-based automation of visualization design.
Finally, the article Robust Visualization of Trajectory Data by Zhang et al. investigates methods and approaches for movement trajectories. Such trajectories are important in traffic management, collective behavior, sports analysis, and many other areas. The authors provide an overview of trajectory visualization issues related to robustness and discuss the importance of visualization methods for movement data analysis, in particular considering that the acquisition and analysis pipeline might introduce artifacts or alter trajectory features.
To conclude, this special issue gives an overview of the state of the art and future developments in quantitative visual computing, and we hope that it is of broad interest to readers.
About the authors

Falk Schreiber is a full professor and head of the Life Science Informatics group at the University of Konstanz. He has worked at universities and research institutes in Germany and Australia and is an adjunct professor at Monash University Melbourne. His main interests are network analysis and visualization, immersive analytics, and multi-scale modeling of processes, all with a focus on life science applications ranging from metabolic processes to collective behavior.

Daniel Weiskopf is a professor at the Visualization Research Center (VISUS) of the University of Stuttgart. He received his Dr. rer. nat. degree in Physics from the University of Tübingen and his habilitation degree in Computer Science from the University of Stuttgart. His research interests include visualization, visual analytics, eye tracking, human-computer interaction, computer graphics, and special and general relativity.