Publicly available. Published by Oldenbourg Wissenschaftsverlag, November 14, 2018

Slowing Down Interactions on Tangible Tabletop Interfaces

A Comparative User Study in the Context of Collaborative Problem Solving

  • Cathia Lahure

    Cathia Lahure is a Computer Science teacher at a secondary school in Luxembourg. She studied at the University of Kent at Canterbury (MSc) and at the Université Nancy 2 (DESS). After 12 years of experience in the Public Sector, she changed career into education. As part of her teacher training she participated in research activities at the Luxembourg Institute of Science and Technology (LIST) dealing with tangible interactive tabletops in education.

    and Valérie Maquil

    Valérie Maquil holds a PhD in Computer Science from Vienna University of Technology. She is currently working as a Research and Technology Associate at the Luxembourg Institute of Science and Technology (LIST). She is co-PI in the ERASMUS+ project ReEngage (2015–2017). Her research interests are on Human Computer Interaction, Tangible User Interfaces, Interactive Tabletops, and Physical Computing.

From the journal i-com


This paper describes the results from a comparative study with 14 pupils using two different versions of a tangible tabletop application on satellite communication. While one of the versions was designed in a way to allow the resolution of the tasks in a pure trial-and-error approach, the second version prevented this by adding a button which had to be pressed in order to calculate and display results. The results of the study show that the design of the button and the associated scoring system was indeed successful in slowing down interactions and increasing thinking time. However, the knowledge acquisition was lower for the version with the button as compared to the one supporting trial-and-error. We discuss the results of this study and, in particular, argue for the need to carefully balance usability, task complexity and the learning dimension in the design of interactive tabletops for learning.

1 Introduction

Interactive tabletops are considered to provide unique benefits in various informal and formal educational settings. According to [8], they support co-location, multiple user interaction, hands-on-activities, and multiple modes of communication. These benefits are due to their large shared screen and the possibility for direct interaction by multiple users [17].

The learning-by-doing principle is defined as creating knowledge from the results of one’s actions [20]. In this context, reading, watching and listening do not qualify as actions; a sensory experience is required. While different explanations for the effectiveness of learning-by-doing over other methods can be found ([20] p. 14), altogether it is considered a truism by many.

Learning-by-doing is encouraged by the tangible table, which invites for experimentation: through trial-and-error, the users get to know the widgets and their manipulation, as well as the scenario and its reactions. The learners need to try and experiment in order to discover and understand the underlying principles. However, trial-and-error can only be qualified as learning-by-doing, if the discovered information is remembered [20]. This process needs moments of reasoning and understanding.

Many design-oriented studies with interactive tabletops and floors (e. g. [3], [4], [14], [18], [22], [23], [25]) have already observed the need to foster such moments of reasoning and propose to integrate design mechanisms that slow down interactions and encourage temporal pauses. However, despite the general acknowledgement of the need to support reflection, there are currently no empirical investigations of the design measures that slow down interactions on interactive tabletops and of how these may impact learning.

This paper seeks to contribute to the understanding of how to design tangible tabletops for learning and presents the results of a comparative user study of a tabletop application on satellite communication. While one of the versions was designed in a way to allow the resolution of the tasks in a pure trial-and-error approach, the second version prevented this by adding a button which had to be pressed in order to calculate and display results. In this paper, we describe our results concerning the solving time, the number of tries, the usability, the subjective workload and the learning gain. We discuss these results with regard to their implications for the design of tangible tabletops for collaborative problem solving in education.

2 Related Work

2.1 Interactive Tabletops in Education

Previous work has already reported on how learning can be successfully supported with interactive tabletops. Existing applications deal, for instance, with warehouse management [28], evolution [12], medicine [27], cultural history [7], or sustainability [2].

While multi-touch tabletop interfaces are operated using finger touches, tangible tabletop interfaces make use of physical objects that can be placed, moved or rotated in order to interact with the system. Tangible tabletop interfaces were found to be better physically embedded in a social setting [9], facilitating the partitioning and coordination of activities with little or no verbal negotiation [24], enhancing the visibility of group members’ interactions, and promoting equity of participation [21].

To guide the design of tangible user interfaces, Antle and Wise [3] have proposed five elements to be considered in the design: (1) physical objects, (2) digital objects, (3) actions, (4) informational relations, and (5) learning activities. Building upon theories of cognition and learning, they have proposed 12 guidelines informing the design of tangible interaction. With a focus on interactive tabletops in education, Dillenbourg & Evans [8] propose 33 points to consider for design. These points deal with four different circles of interactions: user-system interactions, social interactions, classroom orchestration, and institutional context.

2.2 Tangible User Interfaces and Reflection

In the attempt to slow down interactions with tangible user interfaces, previous work has applied various approaches. For instance, researchers propose to use larger interaction spaces [25] or spatially separated workspaces [4] to force users to physically move and, through this, engage in a reflective mode of learning. Other proposed approaches consist in delocalizing tangible interactions and their visual effects [18], using ‘unexpected’ effects as part of sensor-based systems [22], or using a button that needs to be triggered to launch the calculations [1], [13].

In the TinkerTable project [28], the designers used an even more radical approach to afford reflection. To avoid pure trial-and-error approaches, they completely blocked students’ access to the simulation start. To see the results, the teacher has to come to their table, ask them about their predictions, and then trigger the simulation by placing a key [8].

These and other observations have been compiled by Hornecker [14] and Antle and Wise [3], who argue for stronger attention to reflection during the design of tangible user interfaces for learning. Hornecker [14], based on a literature review, observes that naturalness and intuitiveness are not always possible or desired and that the true potential of tangible interaction seems to lie beyond real-world behaviour. Antle and Wise [3], as part of a design framework, propose to use “spatial, physical, temporal or relational properties” to slow down interaction and trigger reflection.

3 The Satellite Communication Application

The tabletop application was designed by building upon the approach and experiences of previous work [1], [15], [19]. Inspired by an exercise proposed by the European Space Agency (ESA) Education Office, we chose the topic of satellite communication and, in particular, storage and transmission units. The application consists of three different contexts: a satellite broadcasting TV programs, a satellite orbiting Earth and monitoring the Sun and space weather, and a satellite transmitting data from Mars.

We chose space as a setting because it interests many students, and because numerous science problems, such as phenomena of waves, sound and energy, can be taught using space phenomena. As space is not a topic per se in the official teaching program, secondary school students have limited prior knowledge and experience of it.

Using a user-centred design approach, we iteratively created the scenario, the tasks, the images and the widgets. Intermediate versions were tested by a group of three postgraduate students and by three groups of secondary school students as part of a user study. After the study, two researchers inspected the usability in a walkthrough to identify required modifications. While the first version was close to a text-based problem, the final version evolved into a scenario with considerably more visual representations.

To implement the scenario, we used COPSE [16], a Java-based framework based on TULIP [26]. This framework simplifies the creation, adaptation and re-use of simple simulations (called Microworlds). It aims to reduce the development time for tabletop applications and to make them accessible to non-programmers, for instance teachers.

A Microworld created with COPSE consists of two types of widgets: (1) the rotating widget, which changes its value when the user rotates it on the table, and (2) the placement widget, which has the value 0 when it is off the table and a predefined value when it is on the table. By manipulating the widgets, the variables of the underlying equations are changed, which thus produces different results. Based on those results, the learners are provided with feedback showing the outcomes of the equations. This feedback can be provided in the form of text or images.
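The two widget types can be sketched as a small value model. This is a hypothetical simplification for illustration; the class and method names are ours, not COPSE’s actual Java API:

```python
# Hypothetical sketch of the two COPSE widget types described above;
# names and signatures are illustrative, not the framework's real API.

class RotatingWidget:
    """Maps rotation steps onto an ordered range of selectable values."""
    def __init__(self, values):
        self.values = values  # ordered list of selectable values
        self.index = 0
    def rotate(self, steps):
        # clamp to the available range
        self.index = max(0, min(len(self.values) - 1, self.index + steps))
    @property
    def value(self):
        return self.values[self.index]

class PlacementWidget:
    """Value is 0 off the table, a predefined value when placed on it."""
    def __init__(self, on_table_value):
        self.on_table_value = on_table_value
        self.on_table = False
    @property
    def value(self):
        return self.on_table_value if self.on_table else 0

# A Microworld equation reads the widget values to produce its result
distance = RotatingWidget([36_000, 400_000, 225_000_000])  # km, made-up steps
distance.rotate(1)
print(distance.value)  # 400000
```

The point of the model is that every manipulation maps deterministically onto a variable of the underlying equations, so the feedback is always a pure function of the current widget state.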

3.1 Learning Goals

The target learners are students aged 15 to 18. The learning goal is to establish a general understanding of storage and transfer units as well as space communication. In particular, the learners should be able to (1) list different uses of satellites, (2) distinguish between the different types of orbits and classify them, (3) distinguish between signal and data transfer time, and (4) understand the relation between a data quantity to be transferred, its transfer rate and its transfer duration.

Table 1

Learning goals and how they can be reached.

Learning goal — How to attain it
List different uses of satellites — By reading the textual instructions of the tasks on the tabletop
Distinguish between the different types of orbits and classify them — Through interactive exploration: the orbits are represented graphically through their respective distance to Earth and a textual label indicating name and distance
Understand the relation between a data quantity to be transferred, its transfer rate and its transfer duration — Through exploration: what is the impact of changing the value of one variable?
Distinguish between signal and data transfer time — Through interpretation: the transfer times are the results of the tasks, shown either in real time or deferred depending on the scenario version
Figure 1: Setting variables and exploring effects (left); 5 of the 6 widgets used in the application (right).

The knowledge can be acquired either through the textual instructions of the tasks, through interactive explorations, or through interpretations of the results. Table 1 lists the different learning goals and how the students can reach them.

3.2 Features and Interactions

The application allows users to explore signal and data transfer times for satellites placed at different distances from Earth. It includes six widgets: four for setting variables, one for changing the task, and a button. By rotating the variable widgets, users can set the distance, signal speed, data transfer rate, and data quantity to be sent by the satellite. The simulation then calculates the time required for transmitting the signal and the data.
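The two durations the simulation calculates follow directly from the four variables: signal time is distance divided by signal speed, and data time is data quantity divided by transfer rate. A minimal sketch (the numeric values are illustrative, not the application’s actual presets):

```python
# Illustrative computation of the two durations the simulation displays.
# Values below are examples, not the application's actual presets.

SPEED_OF_LIGHT_KM_S = 299_792  # approximate signal speed in vacuum, km/s

def signal_time_s(distance_km, signal_speed_km_s=SPEED_OF_LIGHT_KM_S):
    """Time for the signal to travel the given distance."""
    return distance_km / signal_speed_km_s

def data_time_s(data_mb, rate_mb_s):
    """Time to transfer the given data quantity at the given rate."""
    return data_mb / rate_mb_s

# Geostationary TV satellite: ~36,000 km altitude, one hop up and one down
one_way = signal_time_s(36_000)
print(round(2 * one_way, 3))  # ~0.24 s round trip
```

This separation is exactly what distinguishes learning goals (3) and (4): the signal time depends only on distance and signal speed, while the data time depends only on quantity and rate.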

Figure 2: Increasing the widget value by rotation causes changes of scenes.

Figure 3: Instructions, additional information, and feedback provided in the application.

Feedback about the current settings and the results is provided by text and images. For instance, when increasing the distance between sender and receiver by widget rotation, the scene will show a sender and a receiver that are physically further apart (Figure 2: top). When increasing the transfer rate, more arcs will be shown around the antenna (Figure 2: bottom).

3.3 Tasks

Instructions, additional information and textual feedback are provided in a fixed area in the upper part of the screen (Figure 3). The instructions are displayed as long as a task has not been solved. The feedback, with a green tick, is shown when the task is solved. To the right of these texts is a box for additional information, which is not necessarily needed for problem resolution but contains interesting facts, explanatory images or pictures corresponding to the problem at hand.

When starting with the application, users are invited to explore the widgets and the scenario. In addition, for the button version, they get information on the use of the button and how it impacts the scoring.

The subsequent five tasks introduce different notions one by one. The degree of difficulty increases as the learner advances through the tasks. For the first task, users only need to work with the signal transfer duration. For tasks 2 and 3, they are required to work with the data transfer duration, and finally, for tasks 4 and 5, they need to consider both. Each task consists of a question (see Table 2), as well as some additional information the learners need in order to solve it, for instance, that Proba2 is flying in Low Earth Orbit.

Table 2

The different tasks of the satellite communication application (questions only).

Number Question
1 How long does it take to send a signal from a TV broadcaster in New York via satellite to a TV in Luxembourg?
2 How long does it take to transfer 100 megabytes (MB) from Proba2 to Earth?
3 How much data can be sent from Proba2 to Earth in one day?
4 How long does it take to receive on Earth a 200 MB image from Mars Express?
5 How much data can be downlinked [by Mars Express] to Earth in 5 hours?

3.4 The Button and Scoring Mechanism

To prevent learners from finding the correct answer by pure trial-and-error or even by chance, we integrated a button and a related scoring mechanism. We wanted the learners to reason and discuss whether their widgets are set to the right values. If they agree within their group, they press the button to check the result. Only by pressing the button do they trigger the calculation of the output values and receive feedback indicating whether they provided the correct or the wrong answer to the question. We hid the output values to allow learners to focus more on defining the input values and, hence, to allow for more reflection phases and an increased learning gain.

In addition, to discourage constant pressing of the button, we introduced a scoring mechanism. With this mechanism users can collect points: the fewer tries they use to solve the tasks, the more points they are awarded.
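Such a rule can be sketched as a score that decreases with each additional try. The paper does not give the exact formula used on the tabletop, so the point values and penalty below are assumptions:

```python
# Hypothetical scoring rule: fewer tries -> more points.
# The concrete point values and penalty are assumptions, not the
# application's actual formula.

def task_score(tries, max_points=10, penalty_per_try=2, min_points=1):
    """Award max_points on the first try, subtract a penalty for each
    additional try, never dropping below min_points."""
    return max(min_points, max_points - penalty_per_try * (tries - 1))

print(task_score(1))  # 10
print(task_score(4))  # 4
print(task_score(8))  # 1
```

The floor at `min_points` matters for the design: a group that has already used many tries should still have an incentive to finish the task rather than give up.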

Figure 4: The button (left) and the scoring mechanism (right).

4 Comparative Study

To evaluate the button and the associated scoring mechanism, we conducted a comparative study with two versions of the tabletop application. One of these versions (version T) does not include the button and hence allows students to solve the tasks in a pure trial-and-error approach. In this version, the results were constantly shown to the users: when changing variables, they could see the impact on the results in real time. The students could therefore play around with the widgets until they reached the correct result.

In the second version (version B), we used the button as described above. In this version, users could not see the impact on the results in real time, but had to first press the button in order to trigger the calculation.

4.1 Study Design

4.1.1 Population

The tests were performed by a secondary school class (12th grade) with students aged between 17 and 19. The study took place during school hours, but it had no impact on grades. The students were asked to form groups of 2 to 3 and to assign themselves to the available slots without knowing which version was used in which slot. The 14 students present for the tests formed 5 groups in total: 3 groups tested the version with the button (version B), and 2 groups the version allowing trial-and-error (version T).

4.1.2 Measures

To get an indication of the trial-and-error behaviour, we measured the time to solve the tasks, as well as, for version B, the number of tries until the correct answer was found.

To quantify the learning effect, the students filled in a knowledge evaluation questionnaire before and after solving the problem with the tangible table. The questionnaire contained questions of two learning dimensions:

  1. Factual knowledge (knowledge of terminology and specific elements)

    1. questions about constants mentioned in the tasks (2 questions)

    2. questions about satellite missions mentioned in the tasks (1 question)

    3. questions about the answers of the solved tasks (2 questions)

  2. Conceptual knowledge (knowledge about categories, principles and theories):

    1. questions about the orbits used in the tasks (2 questions)

    2. questions about the relation between the widgets (3 questions)

Correct pre-test answers and correct post-test answers were counted to get the students’ scores. The normalized gain (g) [10] of those scores makes the learning gain quantifiable. It is defined as the ratio of the difference in score to the maximal possible increase in score:

g = (score_post − score_pre) / (score_max − score_pre)
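The normalized gain defined above can be computed directly; the scores in the example are illustrative, not the study’s data:

```python
# Normalized gain g (Hake, 1998): actual score increase divided by
# the maximal possible increase. The example scores are made up.

def normalized_gain(pre, post, max_score):
    """Return g as a fraction of the maximal possible improvement."""
    return (post - pre) / (max_score - pre)

# e.g. a student moving from 5 to 9 points out of a maximum of 14
g = normalized_gain(pre=5, post=9, max_score=14)
print(f"{g:.2%}")  # 44.44%
```

Dividing by the *remaining* headroom rather than by the maximum score is what makes gains comparable between students who start from different pre-test levels.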
To evaluate the usability, the students filled in the System Usability Scale (SUS) [6], assessing effectiveness (the ability of users to complete tasks using the system), efficiency (the level of resources consumed in performing tasks), and satisfaction (users’ subjective reaction to using the system).
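SUS scores such as those reported below are computed from ten 5-point items: odd-numbered (positively worded) items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is scaled by 2.5 to a 0–100 range. A sketch with a made-up response:

```python
# Standard SUS scoring (Brooke, 1996): ten items rated 1-5.
# Odd items are positively worded, even items negatively worded.

def sus_score(ratings):
    """Return the 0-100 SUS score for ten item ratings (each 1..5)."""
    assert len(ratings) == 10
    total = 0
    for i, r in enumerate(ratings, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Made-up example response
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 3]))  # 77.5
```

Note that the resulting number is a percentile-like score, not a percentage of tasks completed, which is one reason it should be interpreted against benchmarks such as the 70-point threshold cited in the results.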

Finally, we evaluated the workload of performing the tasks using the NASA Task Load Index (NASA-TLX) [11]. The NASA-TLX analyses six dimensions of workload: mental demand, physical demand, temporal demand, frustration, effort and performance. Each dimension is associated with one question, and the tester marks a rating on a scale of 0 to 20. To calculate the overall workload, we applied the so-called “Raw TLX” [11], where the ratings are simply averaged or summed.
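The Raw TLX variant drops the original pairwise weighting step and simply averages the six dimension ratings. A sketch on the 0–20 scale used here (the ratings are made up):

```python
# Raw TLX (Hart, 2006): unweighted mean of the six workload ratings.
# The ratings below are made up; the study uses a 0-20 scale per dimension.

DIMENSIONS = ("mental", "physical", "temporal",
              "performance", "effort", "frustration")

def raw_tlx(ratings):
    """Return the unweighted average of the six dimension ratings."""
    assert len(ratings) == len(DIMENSIONS)
    return sum(ratings) / len(ratings)

example = {"mental": 15, "physical": 6, "temporal": 10,
           "performance": 12, "effort": 13, "frustration": 9}
print(round(raw_tlx([example[d] for d in DIMENSIONS]), 2))  # 10.83
```

Because the average stays on the same 0–20 scale as the individual ratings, overall workloads remain directly comparable to the per-dimension means reported in Section 4.2.3.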

4.1.3 Preparation

The tests took place in the lab at (removed). The tangible table with the application was placed at the centre of the room to allow participants to move freely around the table (see Figure 5). The widgets were initially placed at the border of the table, in front of the interactive area. In addition, a worksheet and a pen were placed next to the objects. To record participants’ interactions, we set up cameras at 5 positions around the table (front, top, right, left, back).

Figure 5: The lab with a tangible table at the centre.

4.1.4 Protocol

For each group, the following steps were executed. First, each student individually filled in a consent form and the knowledge evaluation questionnaire on satellite communication. Following the experiences of the previous intermediary user study, the students were seated apart so as not to share their answers with other group members.

Then, the group entered the lab and got a short introduction on what was expected of them. The explanations included how to use the task widget on the TUI, how many tasks to solve, how to use the button and how the score is calculated. They were given a worksheet for writing down the results of each of the 5 tasks. These included, for instance, the missions of the satellites, their altitude, or the signal and data transfer durations related to a specific task. After this introduction, the students started to solve the tasks of the scenario and fill in the worksheet. Two researchers observed them, mostly from outside the lab so as not to distract the students.

After having solved the final task, each student filled in the same knowledge evaluation questionnaire as before the test, as well as the usability and workload questionnaires.

4.2 Results

4.2.1 Solving Time and Score per Group

The results show that the exploration phase at the beginning was hardly used by any of the groups (on average 02:42 min for version T and 01:31 min for version B, see Figure 6). To solve each of the subsequent tasks, the groups testing version B took longer (between M: 04:57 min (task 4) and M: 12:09 min (task 3)) than the groups testing version T (between M: 03:45 min (task 4) and M: 07:28 min (task 1)). This difference is most pronounced for tasks 2 and 3, where the version B groups engaged in calculations, in contrast to the version T groups, who repeatedly rotated the different widgets until the solution was found.

Figure 6: Time spent on each task per group.

To solve the tasks, the version B groups made between 16 and 31 tries in total. On average, 5 tries were made per task and group. Group 5 took the longest and used the fewest tries. We observed that they applied a structured, well-reasoned and slow approach.

These two measures indicate that the button was indeed effective in slowing down interactions, and the score in preventing groups from constantly pressing the button.

4.2.2 Usability

Figure 7 shows the results of the usability evaluation. In addition to the two versions of the comparative study, we added the results from the intermediary user study with 3 groups of secondary school students using the more textual version of the application. The results show that version B got an average SUS score of 61.9 (SD: 17.2), which is considerably lower than the average SUS score of version T (M: 77.5; SD: 11.0). According to [5], only usability scores over 70 can be considered acceptable. We explain this low score by the increased difficulty of solving the tasks and by the fact that the SUS was not designed for a learning context, but for an operational context.

Figure 7: Average SUS-scores of the 3 test cases.

Both scenarios of the comparative study were rated higher than the scenario used in the intermediary user study (M: 58.8, SD: 15.1). This shows that the redesign of the scenario, replacing textual instructions with visual representations of the data, was indeed effective in improving the usability.

4.2.3 Cognitive Workload

Figure 8 shows the result of the workload assessment for the intermediary user study, as well as version B and T of the comparative study.

Figure 8: NASA-TLX ratings for the comparative study.

For both scenarios, the mental demand was rated above the physical and temporal demands. The students using version B rated all three of these demands higher than the other testers. The users of version T indicated that they performed well (M: 14.17; SD: 4.17) for an average effort (M: 10.17; SD: 2.79), whereas the testers of version B estimated their performance lower (M: 12.38; SD: 3.89) for an above-average effort (M: 13.13; SD: 3.04). The frustration level of the trial-and-error testers was slightly higher (M: 9.67; SD: 4.84) than that of the button testers (M: 8.88; SD: 4.97).

In comparison, the results of the intermediary user study indicated lower demand levels, an above-average estimated performance, an average effort and the lowest frustration level of the three test cases.

The results show, as expected, that the testers of version B made a higher effort. The students using version T were more satisfied with their performance. The frustration level is similar for both test cases, but the difference between the lowest and the highest answers is large.

Figure 9: Subjective workload of the 3 test cases.

Comparing the total averages of the three test cases indicates the highest workload for the version with the button. The students from the intermediary user study, which used the less visual scenario, rated the workload a little higher than the users of the trial-and-error scenario in the final tests. This could be an indicator of the effectiveness of the redesign: tasks requiring mental visualisation have been simplified through visual representations.

4.2.4 Learning Gain

The score of a questionnaire corresponds to the number of correct answers, one point per question, except for three questions which each count for three points, one point per part of the answer. This makes the maximum score 14 points.

Figure 10 shows the average scores of the pre- and post-test evaluations for both test groups. The normalized gain of version T is 42.11 %, while that of version B is 36.47 %. This indicates that, contrary to our expectations, version B did not cause a larger learning effect than version T.

Figure 10: Results on the learning gain.

By analysing the answers per learning dimension, we can see that the normalized learning gain for questions related to factual knowledge was higher for version T (M: 47.5 %) than for version B (M: 37.0 %). For questions regarding conceptual knowledge, however, the normalized learning gain was higher for version B (M: 35.5 %) than for version T (M: 29.4 %).

This can be considered an indication that, depending on the learning dimension to be trained, a different design might be more appropriate. Mechanisms for slowing down interactions might not be beneficial in every learning context: factual knowledge might be better trained in an environment where trial-and-error behaviour is possible and the mental effort to solve the tasks is lower. Conceptual knowledge, however, might be better acquired if interaction is slower and learners are required to think before setting the widgets to solve the tasks.

5 Conclusion

In this paper we have investigated how a button and scoring mechanism can serve as a design feature for slowing down interactions and enhancing learning-by-doing during joint problem solving on interactive tabletops. We have evaluated the button as part of a tangible tabletop application on satellite communication with regard to the solving time, the number of tries, the perceived usability, the subjective workload, and the learning gain. We have compared the results to those achieved with a version without the button, where trial-and-error is allowed.

The results indicate that the button was indeed successful in slowing down interactions and increasing reflection. The users of version B worked longer on the tasks than those of version T and used the button sparingly (on average 5 tries per task) to solve them. They also judged the mental demand and effort as higher, which can be considered an indicator that they had to reflect more.

However, contrary to our expectations, the average learning gain for the groups using version B was lower than the learning gain for the groups using version T. A closer look at the learning gain with regard to the different types of questions revealed a difference depending on the type of knowledge assessed: factual knowledge showed a higher increase for version T, but conceptual knowledge was better learned with version B.

These results lead to the preliminary conclusion that a button and scoring mechanism might indeed be used as a design measure to slow down interactions; however, it needs to be combined with learning activities that target higher levels of knowledge, such as conceptual knowledge. To train factual knowledge, a trial-and-error environment seems to be more appropriate.

Due to the small number of participants, these results can only be considered tendencies and need to be verified with a larger sample in order to provide reliable results. Furthermore, other aspects might have impacted the results, such as language problems, the high complexity of the scenario, and crashes during the tests.

Nevertheless, they provide preliminary insights regarding the design of tangible tabletop systems to support learning-by-doing and show, in particular, the need to balance usability, task complexity, and the learning dimension.



[1] Dimitra Anastasiou, Valérie Maquil, Eric Ras, and Mehmetcan Fal. 2016. Design implications for a user study on a tangible tabletop. In Proceedings of IDC 2016 – The 15th International Conference on Interaction Design and Children, 499–505. 10.1145/2930674.2935982.Search in Google Scholar

[2] Alissa N. Antle, Joshua Tanenbaum, Allen Bevans, Katie Seaborn, and Sijie Wang. 2011. Balancing act: Enabling public engagement with sustainability issues through a multi-touch tabletop collaborative game. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 6947 LNCS, PART 2: 194–211. 10.1007/978-3-642-23771-3_16.Search in Google Scholar

[3] Alissa N. Antle and Alyssa F. Wise. 2013. Getting down to details: Using theories of cognition and learning to inform tangible user interface design. Interacting with Computers 25, 1: 1–20. 10.1093/iwc/iws007.Search in Google Scholar

[4] Alissa Antle, Alyssa F. Wise, and Kristine Nielsen. 2011. Towards Utopia: designing tangibles for learning. In International Conference on Interaction Design and Children, 11–20. 10.1145/1999030.1999032.Search in Google Scholar

[5] Aaron Bangor, Philip T. Kortum, and James T. Miller. 2008. An empirical evaluation of the system usability scale. International Journal of Human-Computer Interaction 24, 6: 574–594. 10.1080/10447310802205776.Search in Google Scholar

[6] John Brooke. 1996. SUS – A quick and dirty usability scale. Usability evaluation in industry 189, 194: 4–7. 10.1002/hbm.20701.Search in Google Scholar PubMed PubMed Central

[7] Jean Ho Chu, Paul Clifton, Daniel Harley, Jordanne Pavao, and Ali Mazalek. 2014. Mapping Place: Supporting Cultural Learning through a Lukasa-inspired Tangible Tabletop Museum Exhibit. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’15): 261–268. 10.1145/2677199.2680559.Search in Google Scholar

[8] Pierre Dillenbourg and Michael Evans. 2011. Interactive tabletops in education. International Journal of Computer-Supported Collaborative Learning 6, 4: 491–514. 10.1007/s11412-011-9127-7.Search in Google Scholar

[9] Ylva Fernaeus, Jakob Tholander, and Martin Jonsson. 2008. Towards a new set of ideals: Consequences of the practice turn in tangible interaction. In Proceedings of the 2nd International Conference on Tangible and Embedded Interaction (TEI '08): 223–230. 10.1145/1347390.1347441.

[10] Richard R. Hake. 1998. Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics 66, 1: 64–74. 10.1119/1.18809.

[11] S. G. Hart. 2006. NASA-Task Load Index (NASA-TLX); 20 years later. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50, 9: 904–908. 10.1177/154193120605000909.

[12] Michael S. Horn, Zeina Atrash Leong, and Florian Block. 2012. Of BATs and APEs: An interactive tabletop game for natural history museums. In Proceedings of the …: 2059–2068. 10.1145/2207676.2208355.

[13] Michael S. Horn, Erin Treacy Solovey, and Robert J. K. Jacob. 2008. Tangible programming and informal science learning: Making TUIs work for museums. In Proceedings of the 7th International Conference on Interaction Design and Children (IDC '08): 194–201. 10.1145/1463689.1463756.

[14] Eva Hornecker. 2012. Beyond affordance: Tangibles' hybrid nature. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction: 175–182. 10.1145/2148131.2148168.

[15] B. Limbu, V. Maquil, E. Ras, and A. Weinberger. 2015. Tomb of Osiris: Gamifying the assessment of collaborative complex problem solving skills on tangible tabletops. 10.1007/978-3-319-27704-2_7.

[16] Valérie Maquil, Eric Tobias, Dimitra Anastasiou, Hélène Mayer, and Thibaud Latour. 2017. COPSE: Rapidly instantiating problem solving activities based on tangible tabletop interfaces. Proc. ACM Hum.-Comput. Interact. 1, June: 16. 10.1145/3095808.

[17] Emma Mercier and S. Higgins. 2014. Creating joint representations of collaborative problem solving with multi-touch technology. Journal of Computer Assisted Learning 30, 6: 497–510. 10.1111/jcal.12052.

[18] Sara Price, Taciana Pontual Falcão, Jennifer G. Sheridan, and George Roussos. 2009. The effect of representation location on interaction in a tangible learning environment. In Proceedings of the 3rd International Conference on Tangible and Embedded Interaction (TEI '09): 85. 10.1145/1517664.1517689.

[19] Eric Ras, Samuel Greiff, and Eric Tobias. 2014. Moving towards the assessment of collaborative problem solving skills with a tangible user interface. Turkish Online Journal of Educational Technology 13, 4: 95–104.

[20] Hayne W. Reese. 2011. The learning-by-doing principle. Behavioral Development Bulletin 11: 1–19. 10.1037/h0100597.

[21] Yvonne Rogers, Youn-kyung Lim, William R. Hazlewood, and Paul Marshall. 2009. Equal opportunities: Do shareable interfaces promote more group participation than single user displays? Human-Computer Interaction 24, 1/2: 79–116. 10.1080/07370020902739379.

[22] Yvonne Rogers and Henk Muller. 2006. A framework for designing sensor-based interactions to promote exploration and reflection in play. International Journal of Human Computer Studies 64, 1: 1–14. 10.1016/j.ijhcs.2005.05.004.

[23] Yvonne Rogers, Mike Scaife, Silvia Gabrielli, Hilary Smith, and Eric Harris. 2002. A conceptual framework for mixed reality environments: Designing novel learning activities for young children. Presence: Teleoperators and Virtual Environments 11, 6: 677–686. 10.1162/105474602321050776.

[24] Stacey D. Scott, Sheelagh Carpendale, and Kori M. Inkpen. 2004. Territoriality in collaborative tabletop workspaces. In Proceedings of the ACM Conference on Computer Supported Cooperative Work: 294–303. 10.1145/1031607.1031655.

[25] Danae Stanton, Victor Bayon, Helen Neale, Ahmed Ghali, Steve Benford, Sue Cobb, Rob Ingram, Claire O'Malley, John Wilson, and Tony Pridmore. 2001. Classroom collaboration in the design of tangible interfaces for storytelling. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: 482–489. 10.1145/365024.365322.

[26] E. Tobias, V. Maquil, and T. Latour. 2015. TULIP: A widget-based software framework for tangible tabletop interfaces. In EICS 2015 – Proceedings of the 2015 ACM SIGCHI Symposium on Engineering Interactive Computing Systems. 10.1145/2774225.2775080.

[27] Ulrich von Zadow, Sandra Buron, Tina Harms, Florian Behringer, Kai Sostmann, and Raimund Dachselt. 2013. SimMed: Combining simulation and interactive tabletops for medical education. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13): 1469. 10.1145/2470654.2466196.

[28] Guillaume Zufferey, Patrick Jermann, Aurélien Lucchi, and Pierre Dillenbourg. 2009. TinkerSheets: Using paper forms to control and visualize tangible simulations. In Proceedings of the 3rd International Conference on Tangible and Embedded Interaction, 377–384. 10.1145/1517664.1517740.

Published Online: 2018-11-14
Published in Print: 2018-12-19

© 2018 Walter de Gruyter GmbH, Berlin/Boston
