3D Reconstructions as Research Hubs: Geospatial Interfaces for Real-Time Data Exploration of Seventeenth-Century Amsterdam Domestic Interiors

This paper presents our ongoing work in the Virtual Interiors project, which aims to develop 3D reconstructions as geospatial interfaces to structure and explore historical data of seventeenth-century Amsterdam. We take the reconstruction of the entrance hall of the house of the patrician Pieter de Graeff (1638–1707) as our case study and use it to illustrate the iterative process of knowledge creation, sharing, and discovery that unfolds while creating, exploring and experiencing the 3D models in a prototype research environment. During this work, an interdisciplinary dataset was collected, various metadata and paradata were created to document both the sources and the reasoning process, and rich contextual links were added. These data were used as the basis for creating a user interface for an online research environment, taking design principles and previous user studies into account. Knowledge is shared by visualizing the 3D reconstructions along with the related complexities and uncertainties, while the integration of various underlying data and Linked Data makes it possible to discover contextual knowledge by exploring associated resources. Moreover, we outline how users of the research environment can add annotations and rearrange objects in the scene, facilitating further knowledge discovery and creation.


Introduction
In this paper, we explore the use of 3D reconstructions¹ as heuristic tools in the research process. The creation of 3D reconstructions for research and analytical purposes is still far less developed than their more common application as visualization means for heritage valorization, outreach, education, and edutainment. In our approach, we consider a digital reconstruction as one of the tools that humanities scholars have at their disposal to enhance their understanding of past (built) environments. In this sense, they can be equated to simulations and to the visual expressions of model-based reasoning, which in other fields is already seen as central to the process of knowledge building (Magnani & Nersessian, 2002; Magnani, Nersessian, & Thagard, 1999). In particular, the possibility of replicating real-world spatial properties in the 3D reconstruction opens up a range of analytical opportunities. For example, it makes it possible to verify whether (or how many) objects would fit into a given space, to spatially correlate previously unconnected evidence, to assess proximity and visual prominence, to experiment with and evaluate alternative hypotheses, and to investigate lighting conditions.² Another important, yet often overlooked, characteristic of the reconstruction process is that it helps to reconsider old data from a new perspective and encourages the search for additional evidence to fill the gaps that are made apparent during their creation (cf. Favro, 2012). As the visual expression of the results of one's reasoning process, 3D reconstructions have the potential to provide an easy-to-grasp entry point for understanding complex datasets. Making them available together with the interrelated web of existing knowledge and new knowledge produced during their creation process would therefore greatly facilitate future research building upon them.
Considering these aspects, it might seem surprising that humanities scholars have not yet employed 3D reconstructions at a larger scale for the aforementioned purposes. A possible explanation can be found by taking a long-term perspective on the history of their development, which stands in a long tradition of visual representations dating back to the early modern period.³ What emerges is that the adoption of digital tools has resulted in a change of technology for visualization purposes, but generally not in a deeper engagement with their enhanced potential in aiding the interpretative process.⁴ This situation can be partially ascribed to the dichotomy of expertise and research questions between humanities scholars and computer scientists, the latter group traditionally being in charge of the creation of virtual worlds. Different "epistemic cultures" also play a role in evaluating the success of the reconstruction process. As Ratto clearly demonstrated with an example of immersive VR, the two groups assign a measure of trustworthiness to opposite visual qualities (Ratto, 2009). While in computer graphics photorealism and state-of-the-art visualization techniques are taken as quality indicators, humanities scholars traditionally prefer conceptual and nonrealistic visualization modes which convey the underlying degree of uncertainty in their interpretations (Masuch, Freudenberg, Ludowici, Kreiker, & Strothotte, 1999; Strothotte, Masuch, & Isenberg, 1999). Virtual reconstructions of cultural heritage sit at the intersection of these two realms and as such have embodied the tension and contrasting epistemic commitments of the two (Ratto, 2009). Photorealism also seems to comply with the expectations of the public and to be perceived as associated with historical accuracy.
However, from the point of view of learning, user evaluations have shown that an environment that allows intuitive and consistent use and interaction is more effective than striving for a photorealistic rendition (Pujol-Tost, 2019).
Due to their nontransparent nature coupled with their unmediated communicative strength, virtual reconstructions have been the object of several skeptical and critical stances since their earliest applications in this field. Concerns have been voiced that these visually compelling renditions were often not driven by meaningful archaeological research questions and not transparent in terms of data, methods, and interpretations (e.g., Beacham, Denard, & Niccolucci, 2006; Forte, 2000; Frischer, Niccolucci, Ryan, & Barceló, 2002; Gillings, 2005; Ryan, 1996, 2001).⁵ The problem of how to express uncertainty is particularly evident in 3D reconstructions of cultural heritage due to the complex and fragmentary nature of archaeological and historical datasets. This challenge, however, is certainly not specific to this domain, but common to any type of scientific data visualization (see e.g., Bammer & Smithson, 2009; Hansen, Chen, Johnson, Kaufman, & Hagen, 2014). Also, the fact that usually only one reconstruction is presented has been criticized as conveying the idea that a single authoritative version of "the" past exists (e.g., Forte, 2000; Miller & Richards, 1995). In academic settings, virtual reconstructions have therefore overall been perceived as lacking scientific quality standards and as standing in striking contrast with the postmodern emphasis on reflexivity and the existence of a plurality of interpretations. Over the years, suggestions have been made for a more transparent treatment of the reconstruction process (e.g., Baker, 2012; Bentkowska-Kafel, Denard, & Baker, 2012; Demetrescu, 2018; Fernie & Richards, 2003; Forte, 2007; Frischer et al., 2002; Hermon, Nikodem, & Perlingieri, 2006; Landes, Heissler, Koehl, Benazzi, & Nivola, 2019; Ogleby, 2007; Sifniotis, Mania, Watten, & White, 2006), and examples of successful collaborative efforts are becoming more numerous.⁶ Moreover, the need for scientific rigor in cultural heritage visualizations was formalized in 2009 with the publication of the London Charter, which proposes guidelines for best practices in this field (Denard, 2012).⁷ The Charter was followed by the Seville Principles, which target archaeological visualizations more specifically.⁸ These developments are promising steps towards a deeper understanding of what 3D reconstructions can be used for, an increased visual literacy (as described in Watterson, 2015), and a greater integration of these tools in the routine research practice of humanities scholars (cf. Sanders, 2012; Watterson, 2015). However, it is clear that much still needs to be done to reach these goals and exploit the full potential of these visualization means as analytical tools in the humanities. In fact, surveys carried out in past years have highlighted that the London and Seville guidelines have permeated current 3D reconstruction practice only unevenly (Cerato & Pescarin, 2013). What is needed are more practical implementations showing how current technologies can be employed to develop an efficient and sustainable pipeline that takes the London and Seville documents as guidelines.

2 Frischer and Dakouri-Hild (2008) focused on the research potential of 2D and 3D visualizations. A selection of representative published case studies exploring the array of research applications for 3D reconstructions is discussed in Piccoli (2018, pp. 74-84). 3 For the history of early modern reconstructions and previous studies, see Piccoli (2018, pp. 6-48). 4 With some notable exceptions, such as the experience of Learning Sites, Inc. in using 3D and VR technologies for both research and educational purposes (https://www.learningsites.com/index.php last accessed Apr. 2021). 5 The literature on this subject is abundant; we refer the reader to Beacham et al. (2006) and Sanders (2012) for a broader overview of this topic.
In this paper, we offer our contribution to the current debate by discussing how we applied these principles in our ongoing work in the Virtual Interiors project, taking the reconstruction of the entrance hall of a seventeenth-century grand canal house in Amsterdam as a case study.⁹ Specifically, we focused on a structured evaluation and documentation of the sources and reasoning process underlying the reconstruction¹⁰ as well as the creation of alternative hypotheses, which are integrated and made available via our prototype research environment. Moreover, we are tackling the issues of long-term sustainability and access by using open-source modelling software packages and visualization libraries, by making use of open file formats for storing 3D models and associated data, and by taking a modular approach to implementing the research environment.
In our view, the use of 3D reconstructions as research tools enables a three-step (potentially iterative) process of knowledge creation, sharing, and discovery: (1) new insights are generated during the making of the 3D reconstruction (interpretative visualization); (2) this new knowledge is presented to the viewer in an online platform (expressive visualization); (3) the user gains, discovers, and creates additional knowledge through interaction with the 3D environment and exploration of the underlying (Linked) data. In this paper, we illustrate how this process unfolds in our work on this seventeenth-century entrance hall. In the following section, we will briefly go over the selection of this case study, the available sources, and the contribution of the resulting 3D reconstruction for research purposes, namely to shed light on the original spatial arrangement and the representational character of the selected room.¹¹ Next, we will discuss the design and implementation of a prototype web-based research environment, and how it supports knowledge sharing by allowing users to explore the 3D scene in real time, to inspect underlying data sources, and to view the degree of certainty of visualized hypotheses. Furthermore, we show how the research environment can facilitate knowledge discovery and creation by integrating Linked Data sources, annotation capabilities, and features for experimenting with lighting conditions and the arrangement of objects. This implementation paves the way towards seeing 3D reconstructions as research hubs which connect contextual resources and serve as geospatial interfaces, acting as the starting point for further exploration according to each user's interests.

6 See e.g. the projects carried out by the Institute for Technologies Applied to Cultural Heritage of the Italian Research Council (CNR ISPC, former ITABC), within the V-MUST network (https://www.v-must.net/ last accessed Apr. 2021), and at the site of Çatalhöyük (Perry, 2014). 7 https://www.londoncharter.org/ (last accessed Apr. 2021). 8 https://sevilleprinciples.com/ (last accessed Apr. 2021). See also López-Menchero Bendicho (2013). 9 https://virtualinteriorsproject.nl/ (last accessed Apr. 2021). 10 These aspects are treated in more detail in Piccoli (2021). 11 See Piccoli (2021) for a more in-depth analysis, discussion, and evaluation of the historical sources and choices underlying the creation of the 3D reconstruction hypothesis of this room.
2 Case Study: The Seventeenth-Century Entrance Hall of the House of Pieter de Graeff (1638–1707)

The case study that we take into consideration here is the entrance hall ("voorhuis" in Dutch) of a grand canal house situated at Herengracht 573, which was built in the 1660s by the Amsterdam patrician Pieter de Graeff (1638–1707). De Graeff left a wealth of information on the construction works, decorations, and changes that he carried out in his house scattered in a series of almanacs which he kept for over forty years of his life.¹² These volumes are preserved at the Amsterdam City Archives, together with the probate inventory which listed all the pieces of furniture and objects that he had in his house upon his death in 1707.¹³ This inventory recorded these movable properties room by room and is therefore particularly useful for retracing the spatial arrangement of his house. The availability of such rich archival documentation made it possible to compare the information derived from the textual and visual sources with the current condition of the extant building. Its façade and interior have undergone several modifications over the centuries, dictated by changes in fashion, different owners' needs, and designated uses.¹⁴ The series of drawings that Caspar Philips made of the buildings along the Amsterdam canals for his Grachtenboek (1768–1771) provides the earliest surviving evidence of the façade of this and neighboring houses (Figure 1). Despite already displaying eighteenth-century features, the façade depicted in the drawing shows the entrance in its original position in the central bay on the bel-etage, and the flight of steps with a landing that would have connected it to the street level. The interventions carried out by architect Isaac Gosschalk in 1868 redesigned the façade and obliterated the previous entrance door by turning it into a window.
These interventions resulted in a change of function of the original voorhuis, which from that moment on became an anteroom for the two adjacent side rooms. The outdoor steps were also removed during these works, and the main access to the building became the door on street level that was previously used to access the basement and cellars of the house. The modifications that the voorhuis underwent made it an interesting case study to evaluate the contribution of a 3D reconstruction to visualizing its past appearance. As discussed in Piccoli (2021), the resulting 3D model acted as a platform to integrate and elaborate the available data in order to shed light on the original representational character of this room. As the space that visitors would first encounter when entering the house, the voorhuis was in fact notoriously used to display the owner's status and taste (Fock, 2007a; Loughman & Montias, 2000, p. 72; Sluijter, 2001). From De Graeff's inventory, we know which pieces of furniture and objects were in this room:¹⁵ two paper candleholders, one map of Zuid-Polsbroek (the family's fiefdom), one map of the City of Amsterdam, an oak table with a marble top, a lint curtain "from Smyrna," a carved wooden bench, a marble table with a walnut foot, and two plaster portraits (Figure 2, top left). The list, however, does not provide any hints of the relative positions of these objects, which makes it difficult to place them within the room. The only item that we can position with certainty is the curtain, which must have hung on the window over the main entrance door, a typical feature at that time. The almanacs, on the other hand, inform us of other characteristics of the room, such as that it had a floor made of a combination of Carrara marble and red Öland tiles (Figure 2, bottom left) and that a total of ten grisailles were added to adorn the walls in the last decade of the seventeenth century. The creation of a 3D reconstruction therefore seemed the appropriate tool to integrate these pieces of information and investigate how they would fit in the room. The results of this process are briefly discussed in the following section and displayed in Figure 2 (right).

14 After being a home for De Graeff and his descendants, the building was used for different purposes, including to house offices, bank headquarters and, from 2007, the Museum of Bags and Purses, which was permanently closed in 2020. 15 Apart from the potential presence of additional silver or golden objects, which were listed under a separate section of the inventory (fols. 464-474). This separation is due to the practical necessity of gathering these objects in one room to weigh each of them in order to establish their value.

The 3D Reconstruction of De Graeff's voorhuis as a Heuristic Research Tool

The modifications that the voorhuis underwent changed its function over time, but left its spatial properties intact. Therefore, the current dimensions of its floor correspond to its original size. From De Graeff's notes in his almanacs, we know that a total of 20.5 "Swedish red stones" of 2 voeten each (56.6 cm)¹⁶ were used as bands in the floor of his voorhuis.¹⁷ There is no clear indication of the dimensions of the tiles that would have filled the remaining space, but we know that they must have been made of Italian white marble, given the large amounts he ordered.¹⁸ The creation of the 3D reconstruction offered the opportunity to experiment with several tile arrangements to find which would have fit the number of red tiles mentioned by De Graeff. Preserved examples suggest that when a room had a coffered ceiling (as in this case), a commonly chosen floor tile pattern resembled the ceiling's partition to create a sense of symmetry in the room.¹⁹ This pattern fits well with De Graeff's description of the red bands in his floor and, for this reason, has been chosen to narrow down the possible alternatives. Draft 3D reconstructions of the walls and of the floor tiles were made in the open-source software Blender (the main 3D modelling software used in this project). Figure 3 displays a top view of the two most likely options that emerged from this reconstruction process; both cases allow for a total of 32 two-feet white marble tiles. Option (a) mirrors the ceiling partition more closely than (b) and better corresponds to the current position of the side doors, but it creates an asymmetry on either of the narrowest sides of the floor. Option (b) creates a perfectly symmetrical look, but the central band is slightly off in relation to the position of the side doors.
This observation therefore raises questions as to the side doors' original location and number.²⁰ Besides the insight that it provided for the specific case of De Graeff's voorhuis, this process highlights the potential of such 3D reconstructions as tools to think about and quantify construction materials and labor in the past. In fact, as shown in recent publications, the volume calculations that a three-dimensional environment affords support and enhance these types of analysis in archaeological and historical research (e.g., Buccellati, 2016; McCurdy & Abrams, 2019).
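The kind of tile arithmetic involved in testing such layouts can be sketched in a few lines of code. The conversion factor follows the archival units cited in the notes (1 voet = 28.3 cm); the room width used here is a purely hypothetical placeholder, not the measured width of De Graeff's voorhuis:

```python
# Sketch of the tile arithmetic used when testing floor layouts.
# The conversion factor follows the archival units (1 voet = 28.3 cm);
# the room width below is a HYPOTHETICAL placeholder, not the measured
# width of De Graeff's voorhuis.

VOET_CM = 28.3  # one Amsterdam voet (foot) in centimetres

def voet_to_cm(voeten: float) -> float:
    """Convert a length in voeten to centimetres."""
    return voeten * VOET_CM

def tiles_along(length_cm: float, tile_cm: float) -> int:
    """Number of whole tiles of size tile_cm fitting along length_cm.
    A small epsilon guards against floating-point rounding."""
    return int(length_cm / tile_cm + 1e-9)

tile = voet_to_cm(2)                  # a two-voet tile: 56.6 cm
room_width = voet_to_cm(12)           # hypothetical room width
print(tiles_along(room_width, tile))  # 6 whole tiles fit across
```

Evaluating candidate layouts then reduces to checking, for each arrangement, whether the counts of red bands and white tiles match the quantities recorded in the almanacs.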
The second aspect that the creation of the 3D model enabled us to explore was the owner's agency in placing the various objects and furniture pieces within this room (Piccoli, 2021). It is particularly interesting to investigate which objects would have been immediately visible to a visitor entering the house from its front door; in fact, this information is not self-evident when looking at the list of inventoried items. The precondition for this analysis is that the objects' positions in the room can be derived from the available sources. As discussed in Piccoli (2021), a few spatial correlations can be deduced based both on the location of the voorhuis within the house and on the order of the objects in the inventory. The first spatial indication is provided by the curtain, which is recorded halfway through the list and must have hung in the window above the main entrance door. Considering the rooms that are mentioned before and after the voorhuis, and based on the reconstruction of the internal route (Piccoli, 2021), there is no other possibility than that the notary entered the voorhuis from the door in front of the main entrance (see Figure 4).²¹ Suitable comparisons were found to propose a reconstruction hypothesis for the various objects and furniture pieces. To this end, De Graeff's notes in his almanacs and his rich family archive were searched both for direct references to these objects and to gain a deeper understanding of De Graeff's sociocultural milieu, which would have informed the choices regarding the style of his house. This research has led to the identification of a close comparison for the map of Zuid-Polsbroek in a manuscript map made by Johannes Leupenius and still preserved in the De Graeff family archive, and to the hypothesis that the two plaster portraits represented his parents (Piccoli, 2021).
As no pieces of furniture from this household have survived, their 3D models were based on items currently preserved at the Amsterdam Rijksmuseum and on miniature pieces in seventeenth-century Dutch doll houses, which accurately depict contemporary upper-class houses (Piccoli, 2021). Therefore, even if they are only proxies for the original items, they provide realistic representations of their measurements (and hence of the volumes that they would have occupied in the room). The tables and the portraits were modelled by starting from a set of images that were processed using the open-source software Meshroom. Further post-processing was done, and new textures were made, using Blender, which was also used to model the wooden bench.

16 One "voet" (foot) corresponds to 28.3 cm and one "duim" (thumb) to 2.57 cm. 17 SAA, De Graeff family archive (76), nr. 188: 13-11-1666. 18 SAA, De Graeff family archive (76), nr. 188: 25-01-1666; 06-07-1666; nr. 189: 28-11-1668. Moreover, it is certain that Italian white marble tiles of 19 duim (ca. 48 cm) were used in the smallest of the two reception rooms located at either side of the voorhuis (SAA, De Graeff family archive (76), nr. 212: 23-04-1692). See also Piccoli (2021). 19 See e.g. the floor in the 'Grote Zaal' at Oudezijds Voorburgwal 38 (currently Museum Ons' Lieve Heer op Solder). 20 Cf. Piccoli (2021). 21 The rooms mentioned before are on the upper floor (the children's bedroom and De Graeff's home office); the voorhuis is followed by the groote tapijte kamer (large room with carpets) and then by the kleine zijdelkamer (small side chamber), which was only accessible from the voorhuis.
The appearance, size, and position of the grisailles were derived from preserved examples and again from comparisons with seventeenth-century Dutch doll houses (Piccoli, 2021). The four oval-topped grisailles would have been placed mirroring each other on the longer sides of the room, and the six upper ones would have been positioned above the oval ones and the doors in between (see Figure 5). After the grisailles were added to the walls in the reconstructed room, it was easier to assess how much space would have been left to hang the two maps. According to this reconstruction hypothesis, the only space available would have been the wall in front of the entrance door. In this way, a potential visual hierarchy is highlighted in the otherwise nonhierarchical list of objects recorded in De Graeff's probate inventory in this room. The maps of the family fiefdom in Zuid-Polsbroek and of the city of Amsterdam would have immediately made De Graeff's lineage and affiliations clear.²² Moreover, the tripartite division resulting from the wall decorations created the spaces to position the furniture pieces following the order of the inventory. Interestingly, in this way, the wooden bench is placed near the entrance door (a convenient location to accommodate visitors waiting to be received) and the two tables face each other, thus contributing to the overall symmetrical composition of the room.
The process of integrating and visualizing the available data in the 3D reconstruction has therefore led to new insights into, and questions about, the original appearance of this room. Besides suggesting the visual prominence of some of the items in the inventory, it has also created new relations between the 3D models, the archival sources, and the preserved objects. As the 3D reconstruction hypothesis visualizes the interpretation of the researcher who carried out this work, we aimed to provide transparent data handling that enables users to assess and engage on a deeper level with the research data and the resulting interpretations. In the following section, we will discuss in more detail the design of the user interface features that support these tasks, the creation of an online research environment, and the underlying data structure.

Introduction
An interface, as Gane and Beer (2008) assert, is an "in-between device," for instance positioned between a user and a system. It may translate user interaction into actions performed by a system and provide feedback of system states back to the user. Since the user interface is often the only contact point between user and system, its user-friendliness is of key importance. This is often denoted as "usability," a concept with various dimensions such as ease of learning and ease of use (see e.g., Jacko, 2012; Lewis, 2012; Mayhew, 1999).
The usability of a 3D user interface is in itself already problematic: interaction with and navigation of 3D content can be rather complex, leading to typical problems such as "disorientation, perceptual misjudgment and difficulty of finding and understanding available interaction" (Chittaro & Ieronutti, 2004; see LaViola, Kruijff, McMahan, Bowman, & Poupyrev, 2017, and Jerald, 2016, for an extensive overview). Therefore, we looked at previous work involving user interfaces and tools for accessing 3D content.
A considerable number of 3D-related projects in academic and cultural heritage settings have explored the combination of 3D reconstructions with interactive features. These previous works have used custom-built tools,²³ game engines,²⁴ or existing products²⁵ as their basis. However, except for a few cases (e.g., Pujol-Tost, 2019), evaluating the user interface of the resulting interactive tools was usually not the focal point. This has led to a lack of documentation of user needs, best practices, interaction design patterns, and guidelines in a 3D context (Champion, 2019; Wood, William, & Copeland, 2019). Moreover, there is no agreement on which "standard" sets of features are needed for interaction with 3D virtual environments (see also Champion, 2019), let alone for interfaces supporting acts of interpretation.

23 Examples of custom-built tools include VSim, a prototype for interacting with 3D models aimed at educational settings (Snyder, 2014; Sullivan & Snyder, 2017). Von Schwerin, Richards-Rissetto, Remondino, Agugiaro, and Girardi (2013) and Richards-Rissetto and von Schwerin (2017) have described MayaArch3D, a 3D WebGIS with analytical tools which allows researchers to search and query segmented 3D models as well as attribute data. Visual tools and methods for 3D querying, visualization and inspection of scientific processes behind archaeological virtual reconstructions have been discussed by Demetrescu and Fanini (2017), resulting in the interactive 3D inspection tool EMviq, while Turco et al. (2019) have outlined the ATON 3D web presentation framework on which EMviq is based. Kuroczyński, Hauck, and Dworak (2016) and Kuroczyński (2017) describe a virtual research environment for 3D reconstructions, created via a custom combination of a content management system and WebGL technology. 24 For instance, Ferdani, Fanini, Piccioli, Carboni, and Vigliarolo (2020) have outlined issues surrounding the creation of historically accurate VR applications and used game engines for interactive visualization. Another perspective is taken by Frischer (2017), who uses computer simulations, published as a Unity application, to analyze the alignment of historical monuments. Opitz and Johnson (2016) have discussed the challenges in designing user interfaces within the context of the Gabii project, which were created using Unity. Papadopoulos and Schreibman (2019) have described various 3D scholarly editions, including reconstructed historical battles created using Unity (see also: https://3dpublishingcooperative.com/ (last accessed: Apr. 2021)). 25 See e.g. Lercari (2017), who has taken 3D reconstructions of Neolithic buildings as a basis for investigating issues of reflexivity in archaeological discourse, for which Corinth classroom software provided the interactive visualizations.
To create an interface aimed at enabling the aforementioned three-step process during the real-time exploration of Pieter de Graeff's house, we first analyzed previous use cases focusing on conducting research in a 3D context and organized internal workshops, in which initial sketches of features for a prototype research environment were devised. Second, we organized a small-scale user study with digital humanities scholars, in which we explored the potential use of 3D environments and evaluated different types of search interfaces for such a 3D environment (see Huurdeman & Piccoli, 2020).
For the subsequent design of our user interface for 3D content, we used general guidelines, design principles, taxonomies, and design patterns from a Human-Computer Interaction context, as well as suggestions from game design and humanities literature. Various guidelines and design principles (Sharp, Rogers, & Preece, 2019; Shneiderman, Plaisant, Cohen, Jacobs, & Elmqvist, 2017) were applied, such as the aim for universal usability across different target audiences and experience levels (Shneiderman, 2002). Based on Shneiderman's (1996) and Card's (in Jacko, 2012) information visualization taxonomies, we designed features supporting overviews, dynamic zooming, information filtering, "details-on-demand," and visualization of object relationships. Design patterns (see Tidwell, Brewer, & Valencia, 2020) were used as input for designing complex data displays.²⁶ Game design literature (Jørgensen, 2013; Nitsche, 2008) informed exploration modalities and the ways in which we integrate information: directly within the 3D scene, superimposed on the 3D scene (e.g., colors or highlights), and via two-dimensional interface panels and overlays. Finally, we took note of Drucker's previous work for thinking outside common interface approaches in computer science and for potential ways to support "acts of interpretation rather than simply returning selected results from a preexisting data set" (Drucker, 2011, 2013, 2018). While increasing the number and versatility of user interface features can enhance the potential for interpretation, it also creates a more complex interface, potentially causing information overload and increased cognitive load (Huurdeman & Kamps, 2020; Sweller, Merrienboer, & Paas, 1998; Wilson & Schraefel, 2008). This may negatively impact the usability of these kinds of interfaces. For this reason, a careful balancing of features is needed.
To prevent an overly complex user interface for our 3D research environment, we employed a "multilayer approach" (Shneiderman, 2002). This is a strategy to support first-time, intermittent, and expert users without negatively impacting usability. In practice, this means that our interface can be used at multiple layers and that each layer consecutively adds more functionality, metadata,²⁷ and paradata²⁸ (Figure 6). The first layer offers real-time exploration of a 3D environment to any end-user, focused on the experience of "being there," along with a limited set of user interface features. The latter, as will be more extensively discussed below, includes abbreviated metadata and paradata. This interface layer is supported on desktop, mobile, and VR platforms.²⁹ The second layer is aimed at supporting scholars and offers additional features targeted at research and interpretation, including access to all metadata and paradata. The detailed nature of these features means that this layer is only available on desktop platforms, which offer more screen real estate and editing capabilities. Finally, the third layer adds authoring functionalities, for instance direct editing of metadata and paradata, also on a desktop platform. Layer 2, the desktop research environment, has been the initial focus of our prototype design and development and is therefore the focal point of the sections that follow.³⁰

26 Such as module tabs, collapsible panels, "accordion" displays, and responsive layouts for a variety of possible screen sizes.

27 As the report of the Europeana 3D Task Force (https://pro.europeana.eu/project/3d-content-in-europeana, last accessed Apr. 2021) indicates, metadata should allow for resource discovery and for understanding provenance, technical characteristics, and the cultural heritage object(s) represented by a 3D model. In an archaeology context, Koller, Frischer, and Humphreys (2009) distinguish between catalog metadata, bibliographical metadata, and commentary metadata (the latter similar to paradata). In Library & Information Science, another categorization of metadata is often used: descriptive metadata (for resource discovery), administrative metadata (for resource management and use), and structural metadata (technical information) (Greenberg, 2005; Taylor & Joudrey, 2009, p. 91).

28 Defined by the London Charter as "information about human processes of understanding and interpretation of data objects" (https://www.londoncharter.org/glossary.html last accessed Apr. 2021). See also Bentkowska-Kafel et al. (2012).

29 The VR interface of our application will be discussed in a future publication.
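As a concrete illustration, the multilayer approach can be thought of as a cumulative mapping from interface layer to enabled features. The sketch below is a minimal, hypothetical model: the feature names and the exact contents of each layer are our own illustration, not the project's actual configuration.

```javascript
// Hypothetical feature sets for the three interface layers.
// Each layer cumulatively includes the features of the layers below it.
const LAYER_FEATURES = {
  1: ["realtime-exploration", "abbreviated-metadata"],
  2: ["full-metadata", "paradata", "linked-data-browsing", "annotations"],
  3: ["metadata-editing", "paradata-editing"],
};

// Return all features available at a given layer (cumulative).
function featuresForLayer(layer) {
  let features = [];
  for (let l = 1; l <= layer; l++) {
    features = features.concat(LAYER_FEATURES[l] || []);
  }
  return features;
}
```

For example, `featuresForLayer(2)` would include both the exploration features of layer 1 and the scholarly features of layer 2, matching the idea that each layer consecutively adds functionality.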

Conceiving a Desktop User Interface for Knowledge Creation, Sharing, and Discovery
Our desktop research environment prototype natively supports various functionalities commonly included in viewers for 3D content,³¹ such as the use of different camera settings (e.g., orbit and first-person views), interaction with elements in the scene, and different ways to move around and explore the 3D space. These types of functionalities, unique to 3D environments, can provide prospective users with new perspectives and potential insights that 2D content would not offer; however, they are not enough to fully expose the interpretative and expressive potential of a 3D reconstruction. Therefore, we first demonstrate how 3D reconstructions within our prototype research environment can be further used for knowledge sharing by exposing reconstructions as well as underlying decisions, uncertainties, and data sources. Subsequently, we discuss the possibility for end-users to dive deeper into the created environment to potentially discover and create knowledge by themselves. As discussed in Section 3, the first step of knowledge creation and sharing takes place during the construction of a historically reliable 3D model. To enable this process, a variety of sources are used, including historical records, textual sources, archaeological field investigations, comparative studies, and "the modeller's informed imagination" (Hermon, 2008). These kinds of evidence are usually incomplete, and therefore various decisions have to be taken during the modeling process. However, these decisions, underlying sources, and related uncertainties are often not explicitly incorporated into the resulting 3D visualizations (see also e.g., Statham, 2019). The 3D research environment developed in the context of this work allows direct access to this information.
First of all, it comprises explicit metadata captured during the modeling process (i.e., information about the included resources, such as the type and description of an object) and references to underlying sources used for the reconstruction. Second, it includes paradata; in our case, a concise version of the source evaluation and reasoning that led to the reconstruction of De Graeff's voorhuis (discussed in Piccoli, 2021).³²

Figure 6: Multilayer user interface approach to our prototype 3D research environment.

Figure 7 shows the interface of the research environment, with on the left-hand side an interactive 3D view of the voorhuis. In the figure, the map of Zuid-Polsbroek has been selected in the 3D view, thereby showing related details on the right-hand side. Prominently listed under "Reconstruction," we encounter the paradata created during the reconstruction process. We can for instance find out that a map of Zuid-Polsbroek made by Leupenius was mentioned by De Graeff himself in his almanacs and that the original is preserved in the De Graeff family archive. Moreover, a link to the archive takes us directly to this resource. Also, color-coded confidence indices are displayed. These values give an indication of the confidence in the reconstruction of the object, as well as in its location in the reconstructed space, ranging from 1 (certain) to 4 (uncertain).³³ From the numerical index and information in a mouse-over, we can observe that the certainty of the map's appearance is based on a "primary source + analogy with high degree of certainty" (2), and that the spatial location of the map is "inferred with high degree of certainty" (2).

30 An additional reason to focus on a single platform for scholarly and authoring functionality in this paper is that the needs for the user interface of a viewer vary considerably for different interaction modalities: as Wood et al. (2019) assert, "what works well in a full VR experience for navigation and interaction must be configured separately for mobile devices and again for web-based virtual experiences on a desktop computer."

31 See the comprehensive surveys of current web-based frameworks and their features in Champion and Rahaman (2020), and Scopigno, Callieri, Dellepiane, Ponchio, and Potenziani (2017).

32 A screencast video of the research environment demonstrating the functionalities discussed here is available via https://dx.doi.org/10.21942/uva.14424218 and as supplemental material to this article.
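The four-value confidence scale lends itself to a simple lookup structure. The sketch below pairs each index with its definition (as used for the voorhuis reconstruction) and an illustrative overlay color; the color values are placeholders of our own choosing, not the application's actual color scheme.

```javascript
// Confidence indices for the reconstruction, from 1 (certain) to 4 (uncertain).
// Color values are illustrative placeholders only.
const CONFIDENCE_SCALE = {
  1: { label: "modeled after the original object", color: "#2e7d32" },
  2: { label: "primary source + analogy with high degree of certainty", color: "#9e9d24" },
  3: { label: "primary source + analogy with doubt", color: "#ef6c00" },
  4: { label: "uncertain", color: "#c62828" },
};

// Look up the description shown e.g. in a mouse-over for a given index.
function describeConfidence(index) {
  const entry = CONFIDENCE_SCALE[index];
  return entry ? `${index}: ${entry.label}` : "unknown";
}
```

Such a table can drive both the numerical indices in the sidebar and a color-coded overlay in the 3D view, and can easily be extended for other case studies.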
Besides the certainty indicators within the 2D sidebar of our application, a color-coded overlay can be dynamically superimposed within the 3D panel of the viewer, allowing for direct inspection of certainty while navigating the 3D scene (see Figure 8).³⁴ As described in Section 3, some alternative reconstruction hypotheses have been prepared, for instance for the floor of the voorhuis displayed in Figure 3. Via the "Actions" menu in the top-right of the interface, a user can toggle between different hypotheses in the research environment.
The information shown below "Original object" consists of descriptive metadata about the object the 3D reconstruction has been based on. Besides Leupenius' name as the creator of the map, additional information such as its dimensions is included, as well as associated images. Moreover, links to structured Linked Data³⁵ vocabularies have been integrated and are made accessible from the sidebar of our web application. Linked Data is a standardized way to represent data, used to achieve the vision of the "Semantic Web" (Berners-Lee, 2009). One of the advantages of this standardized format is that it is easier to interconnect and automatically process data (Antoniou & Van Harmelen, 2008), based on uniform identifiers. In earlier work in a cultural heritage context, Linked Data has for instance been used for data modeling (Kuroczyński et al., 2016), for creating semantic annotations (Yu & Hunter, 2013), and for creating a research infrastructure combining datasets on the Dutch Republic (Zamborlini, Betti, & van den Heuvel, 2017).³⁶ In our metadata, we currently include concepts from the Getty Art and Architecture Thesaurus (AAT),³⁷ identifiers for works in the Rijksmuseum collection, and identifiers for persons in the Ecartico database.³⁸ In this way, we can obtain the AAT category for "city maps"³⁹ and use it to filter related objects either in the voorhuis, or potentially within the whole house of Pieter de Graeff. Moreover, users can consult information about Leupenius originating from Ecartico⁴⁰ (for instance his occupations) directly within the interface as well. Besides integrating supplementary information, Linked Data also facilitates additional external browsing possibilities. These are integrated in the "Linked Data" sidebar tab. For instance, a list of external resources on Leupenius is derived from Ecartico, including his Wikidata identifier and the records associated with him elaborated by the Netherlands Institute for Art History (RKD).⁴¹ Direct links to the original sources of all of these pieces of information can be followed to delve deeper into the sources underlying the reconstruction (Figure 9, top right). In this way, our environment serves as a research hub that facilitates further discovery, since users can navigate to related relevant resources beyond the already included information. Finally, for visual browsing, thumbnails of related works are included within the application.

For instance, we can leverage the Linked Data structure to show images of works by the same creator of the original object from Wikidata, Adamnet,⁴² and the Golden Agents Linked Data repository,⁴³ or works of the same type and period (Figure 9, bottom right). Thus, users are able to find inspiration within our application for further exploration. Moreover, the queries these visualizations are based on can be explored and adapted by users themselves.⁴⁴ In addition to exploring external information, users can customize, highlight, and selectively emphasize (Erickson, 1993; Tidwell et al., 2020) objects and spaces within the 3D scene itself, manually or via an included textual search function.

33 More specifically, the four numerical indices of the confidence indicators are defined as follows: (1) modeled after the original object, (2) based on primary source + analogy with high degree of certainty, (3) primary source + analogy with doubt, and (4) uncertain. These confidence indicators can easily be expanded and adapted to the needs of other case studies.

34 Various potential color schemes are currently being evaluated and will be implemented at a later point. Other suggestions on how to visualize uncertainty are presented e.g. in Kensek, Dodd, and Cipolla (2004); Noordegraaf, Opgenhaffen, and Bakker (2016).

35 Here, our specific focus is on the use of the Linked Data approach, as defined by W3C (https://www.w3.org/standards/semanticweb/ last accessed Apr. 2021), in the context of 3D reconstructions (see e.g. the ARIADNE report, Geser, 2016, and Simon, Isaksen, Barker, & Cañamares, 2016). The practice of linking to other databases, on the other hand, has been more common in the past, going back to the 1990s: for instance at Carnegie Mellon, Project Buhen, and Learning Sites, as well as digital documentation software related to excavations, including REVEAL (Sanders, 2011). A broader discussion of this aspect is beyond the scope of this paper.

36 Golden Agents (https://www.goldenagents.org last accessed Apr. 2021) develops a "sustainable research infrastructure to study relations and interactions between producers and consumers of creative goods." To this end, it uses Linked Data and semantic web technologies, which provide ways to combine heterogeneous datasets. Included datasets are e.g. notary acts from the Amsterdam City Archives and works from the Rijksmuseum's collections. The Virtual Interiors research environment directly incorporates data from Golden Agents, while the datasets created within Virtual Interiors will in turn be made available in Golden Agents at the end of the project.

37 https://www.getty.edu/research/tools/vocabularies/aat/ (last accessed Apr. 2021).

38 Ecartico is a biographical database maintained at the Amsterdam Centre for the Study of the Golden Age which contains data on painters, engravers, printers, book sellers, gold- and silversmiths, and other craftsmen involved in the 'cultural industries' of the Low Countries in the sixteenth and seventeenth centuries.
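To give an impression of how such queries can work, the sketch below assembles a generic SPARQL query for works by a given creator. It is a simplified illustration rather than the application's actual query: the property paths follow Wikidata's vocabulary (`wdt:P170` for creator, `wdt:P18` for image), and the creator URI passed in is a placeholder.

```javascript
// Build a SPARQL query retrieving works (and, where available, images)
// attributed to a given creator. Illustrative only; modeled on Wikidata's
// vocabulary, not necessarily the queries used in the research environment.
function worksByCreatorQuery(creatorUri, limit = 10) {
  return `
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>
    SELECT ?work ?image WHERE {
      ?work wdt:P170 <${creatorUri}> .
      OPTIONAL { ?work wdt:P18 ?image . }
    } LIMIT ${limit}`;
}
```

The resulting string could then be sent to a SPARQL endpoint (e.g. via an HTTP GET with a `query` parameter) to populate thumbnail lists of related works; exposing the string itself is what allows users to inspect and adapt the queries.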
A second aim of our 3D research environment is to provide a space for users to actively create knowledge themselves, and to allow what Drucker (2018) has called "two-way potential," stimulating users to engage in a dialog with the included content. Two concrete ways to facilitate this dialog are user annotations and the exploration of different conditions and hypotheses.

41 https://rkd.nl/nl/explore (last accessed Apr. 2021).

42 Adamlink (https://adamlink.nl/ last accessed Apr. 2021), a project of the AdamNet foundation (https://www.adamnet.nl/ last accessed Apr. 2021), aims to connect Amsterdam collections and make them accessible as Linked Open Data. The linked datasets, including for instance selected items from the Amsterdam Museum, are currently available via https://druid.datalegend.net/AdamNet/ (last accessed Apr. 2021).

43 The datasets available in Golden Agents can be found at https://data.goldenagents.org (last accessed Apr. 2021), originating for instance from the Amsterdam City Archives, RKD, and Rijksmuseum.

44 These queries are based on the SPARQL query language (see: https://www.w3.org/TR/sparql11-overview/ last accessed Apr. 2021) and automatically retrieve sources from Linked Data repositories ("SPARQL endpoints").
In a broad sense, annotations have been defined as tools to "convey information about a resource or associations between resources."⁴⁵ Authors of a 3D reconstruction, and also end-users such as scholars, reviewers, heritage professionals, educators, or the general public, may create annotations within a 3D virtual environment to add their impressions, interpretations, comments, and suggestions.⁴⁶ We also envision the use of these types of annotation features for eliciting expert feedback, for instance to formally approve the correctness of reconstructions or associated metadata. By allowing private and public annotations, a 3D environment can become a platform for discussion. However, the implementation of annotation capabilities in 3D environments is still underexplored (see e.g., Champion & Rahaman, 2020; Snyder, 2014). Annotations may not only consist of text, but can also comprise various media such as images or videos, structured datasets, and links to internal or external resources (see e.g., Boot, Dekker, Koolen, & Melgar, 2017; Melgar, Koolen, Huurdeman, & Blom, 2017; Yu & Hunter, 2013).⁴⁷ In our prototype research environment, we currently support textual annotations at the level of logical spaces and objects (Figure 10). In addition, two or more annotations can be linked together to create a narrative (Snyder, 2014) or an "intellectual guided tour" (Drucker, 2011) through the materials, with the possibility to associate camera viewpoints, viewing modes, and analysis layers with each annotation.

46 We distinguish between explicit and implicit (or tacit) annotations. An explicit annotation is a specific act of adding an annotation to an object by a user, while implicit annotations are for instance interaction sequences, clicked items, and navigation within the 3D space. These implicit annotations can be monitored by the underlying system and presented to a user, at an individual or aggregated level.
47 An unsolved aspect of annotations in a 3D context is their shape and spatial coverage (see e.g. Abrami, Mehler, & Spiekermann, 2019; Alliez et al., 2017; De Luca, Busayarat, Stefani, Véron, & Florenzano, 2011; Ponchio, Callieri, Dellepiane, & Scopigno, 2019; Yu & Hunter, 2013). Point-based annotations refer to a fixed point in the 3D scene, while geometrical approaches (e.g. a line, polygon, surface, volumetric area, or texture) refer to an area within the scene.
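A minimal data model for such annotations could look as follows. This is our own sketch, not the application's actual schema: it shows a point-based textual annotation that can store a camera viewpoint, together with a helper that chains annotations into a narrative sequence ("guided tour").

```javascript
// A point-based textual annotation anchored to a position in the 3D scene,
// optionally storing the camera viewpoint to restore when it is visited.
function makeAnnotation(id, text, position, viewpoint = null) {
  return { id, text, position, viewpoint, next: null };
}

// Link annotations into an ordered narrative: each annotation records
// the id of the one that follows it in the tour.
function linkNarrative(annotations) {
  for (let i = 0; i < annotations.length - 1; i++) {
    annotations[i].next = annotations[i + 1].id;
  }
  return annotations;
}
```

A viewer could walk such a chain to move the camera from viewpoint to viewpoint; richer variants might add media attachments, visibility (private/public), or the viewing modes and analysis layers mentioned above.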
The advent of 3D environments also brings forward novel possibilities to test out ideas, conditions, and exploratory hypotheses, not only by the authors of a 3D visualization, but also by end-users themselves. In our prototype viewer, we support two types of explorations: of lighting conditions, and of custom hypotheses related to the placement and spatial arrangement of objects. Thus far, these kinds of features have rarely been included in existing web-based viewers.⁴⁸ Users of the prototype can apply predefined lighting hypotheses. In Pieter de Graeff's house, one can for instance select the lighting of the house at sunset on a certain day of the year, or choose a night-time setting with illumination solely by natural light sources, such as the candle holders within the scene (see Figure 11). Experienced users can also perform custom experimentations with lighting settings, shadow generation, and lighting arrangements, to explore what a space looked like under different lighting conditions.⁴⁹ We further allow for rearrangement and customization of the included 3D objects, as this may enhance learning and engagement. Figure 11 also shows the functionality which allows users to rearrange objects in the room. The oak table with a marble top in Pieter de Graeff's house has been selected and can be freely moved, scaled, or rotated. Visual filters and viewing modes can be applied to the object via the "Actions" menu. Finally, categorized object color views are available via the "Analysis layers" option. Thus, users can potentially explore different hypotheses with regard to the spatial arrangement of objects, as well as experiment with their appearance.
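Conceptually, such predefined lighting hypotheses amount to named presets that enable or dim the light sources in the scene. The sketch below is a schematic stand-in for the viewer's actual implementation (which relies on the rendering engine's lighting system); the preset names and intensity values are hypothetical.

```javascript
// Hypothetical lighting presets: which kinds of light sources are active,
// and at what intensity (0..1).
const LIGHTING_PRESETS = {
  daylight: { sun: 1.0, candles: 0.0 },
  sunset:   { sun: 0.3, candles: 0.5 },
  night:    { sun: 0.0, candles: 1.0 }, // only natural sources such as candle holders
};

// Apply a preset to a scene-like object holding a list of lights.
function applyLighting(scene, presetName) {
  const preset = LIGHTING_PRESETS[presetName];
  if (!preset) throw new Error(`Unknown preset: ${presetName}`);
  for (const light of scene.lights) {
    light.intensity = preset[light.kind] ?? 0;
  }
  return scene;
}
```

Expert users bypass the presets and tweak individual light parameters directly, which is the role the rendering engine's built-in inspector plays in our prototype.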

Brief Overview of the Prototype 3D Research Environment Structure
While the previous sections focused on the front-end, the user interface of our prototype research environment, this section provides a brief overview of the implementation of the back-end of the 3D viewer. For our prototype, we created an elementary framework which allows for flexible customization. In our view, there is a tension between the possibility of standardizing features and the usefulness of a tool targeted at specific purposes. Zundert (2012) named this the generalization paradox: "the very wish to cater to everyone pushes the designers toward generalization, and thus necessarily away from delivering data models specific enough to be useful to anyone." For this reason, we opted for a standard, lightweight set of features, which can be extended for use in different contexts.⁵⁰ Figure 12 shows the structure of the 3D research environment, created as a web application using common web technologies such as HTML5 and Javascript.

48 As evidenced by the overview in Champion and Rahaman (2020).

49 By using the native "inspector" of the BabylonJS framework we use: https://doc.babylonjs.com/toolsAndResources/tools/inspector (last accessed Apr. 2021).
The underlying back-end library for showing the 3D reconstructions is BabylonJS, an open-source 3D rendering engine for the web⁵¹ (Figure 12C). On top of this framework, we built custom Javascript extensions needed for our research functionality, as well as for integrating the necessary data and settings (Figure 12B). The user interface described in the previous sections has been created using Bootstrap, an open-source front-end toolkit which allows for responsive user interfaces⁵² (Figure 12A).
These components depend on three main resources: the settings, the 3D models, and the contextual data. A settings file contains for instance camera positions, default lighting conditions, the paths to the 3D models, and other necessary configurations (Figure 12(1)). 3D models can be loaded in the glTF format, which is supported by the majority of current 3D modeling and processing tools (Figure 12(2)). The contextual dataset containing the default metadata and paradata is currently a CSV file (Figure 12(3)).⁵³ This CSV file is generated from an editable online spreadsheet, which is automatically read by the application. The use of an online spreadsheet facilitates team-based work: metadata and paradata can be collectively entered and accessed via the shared worksheets, while these online tools also include extensive versioning and annotation features. Finally, Linked Data identifiers are used within the database to integrate and connect to external data sources, such as the aforementioned Ecartico, the Art & Architecture Thesaurus, and the Rijksmuseum (Figure 12(4)).

50 For instance, support for different metadata standards (such as Dublin Core) could be added, various Linked Data could be integrated, and features could be adapted or extended towards application in different domains and settings.

51 https://www.babylonjs.com/ (last accessed Apr. 2021).

52 https://getbootstrap.com/ (last accessed Apr. 2021). Responsive interfaces are designed in such a way that they optimize the display of contents for various screen sizes and platforms, e.g. for display on mobile phones, tablets, and desktop computers.

53 Planned for the future is a pipeline to automatically convert this tabular CSV data into Linked Data (RDF) and to host it via the Golden Agents Linked Data infrastructure (https://data.goldenagents.org).
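To make the three resources concrete, the sketch below shows a hypothetical settings object and a minimal parser turning one row of the contextual CSV into a metadata record. The field names and file paths are illustrative; the project's actual settings file and spreadsheet columns may be structured differently.

```javascript
// Hypothetical settings structure: camera, default lighting,
// and paths to the 3D models (cf. Figure 12(1) and 12(2)).
const settings = {
  camera: { position: [0, 1.7, -4], mode: "first-person" },
  lighting: "daylight",
  models: ["models/voorhuis.gltf"],
};

// Parse one row of a simple comma-separated contextual data file
// (cf. Figure 12(3)) into a record keyed by the header columns.
// Note: a real CSV parser must also handle quoting and embedded commas;
// this sketch assumes plain fields.
function parseCsvRow(header, row) {
  const keys = header.split(",");
  const values = row.split(",");
  const record = {};
  keys.forEach((key, i) => { record[key] = values[i] ?? ""; });
  return record;
}
```

Keeping these three inputs external to the code is what makes the viewer adaptable: another case study can supply its own settings, models, and contextual dataset without touching the application itself.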
The use of open-source components and the flexibility of the included features mean that it may be possible to reuse our application,⁵⁴ or parts of it, in other contexts⁵⁵ by adapting the settings, the supplied 3D models, and the input dataset.⁵⁶

Conclusion
In this paper, we argue for the use of 3D reconstructions in the humanities as potentially enabling an iterative process of knowledge creation, sharing, and discovery. As we discussed, this process can take place both during their making and when they are made available to other users in an online research environment designed with these purposes in mind. We used the 3D reconstruction of the seventeenth-century entrance hall of an Amsterdam grand canal house as our case study to demonstrate the interpretative potential of these digital representations beyond their role as visualization means. In fact, the creation of the 3D model suggested a visual prominence of specific items which could not be detected from the archival sources alone. Moreover, it offered the possibility to explore alternative hypotheses (such as for the floor tile pattern). We then discussed the design of the research environment, which further enables the process described above when the 3D reconstruction is made available to various types of users. The features we included in the interface, the data, and the modular back-end structure allow users to explicitly visualize uncertainty, explore the associated underlying data sources and external resources, test alternative hypotheses related to the spatial arrangement of the objects in the room, evaluate different lighting conditions, and leave annotations. In this way, we open up the 3D reconstruction process and transform these 3D visualizations into interactive research hubs that allow for a more dialogical and user-centered exploration.
Although we focused on a specific case study, the features we implemented are generalized by taking the needs of other possible historical and archaeological applications into account. Moreover, the workflow we propose allows for collaboration and evaluation at various levels of data ingestion and processing. To this end, our focus on open-source software packages for the creation of the 3D reconstruction, and on open-source components for the implementation of the 3D research environment, makes it easier to replicate and reuse our pipeline for other case studies. Future work includes evaluating the 3D research environment with a larger group of users to assess the usability and usefulness of the current features in the context of data exploration and research tasks. A possible addition we envisage is to leverage previous users' navigation and interaction histories to enhance and guide the exploration of the 3D environment and related sources. Finally, the research environment prototype will be developed further into a more stable and sustainable, and where possible more generic, form.