Real-time adaptation is one of the most pressing open problems in personalized human-computer interaction. In conventional desktop interactions, user behaviors are recorded to build models that support context-aware interaction. In virtual reality interactions, however, users operate tools in the physical world but view virtual objects in the virtual world. This dichotomy limits the applicability of conventional behavioral models and makes it difficult to personalize interactions in virtual environments. To address this problem, we propose cross-object user interfaces (COUIs) for personalized virtual reality touring. COUIs consist of two components: a deep-learning model based on convolutional neural networks (CNNs) that predicts the user's visual attention from past eye-movement patterns, determining which virtual objects are likely to be viewed next, and delivery mechanisms that determine what should be displayed on the user interface, and when and where. In this chapter, we elaborate on the training and testing of the prediction model and evaluate the delivery mechanisms of COUIs through a cognitive walkthrough. Finally, we discuss the implications of using COUIs to personalize interactions in virtual reality and in related environments such as augmented reality and mixed reality.
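To make the prediction component concrete, the following is a minimal, hypothetical sketch (not the authors' actual model or hyperparameters): a tiny 1-D CNN that maps a window of past gaze samples to a probability distribution over candidate virtual objects. The window length, feature dimensions, object count, and randomly initialized weights are all illustrative assumptions standing in for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

SEQ_LEN = 32      # gaze samples in the input window (assumed)
N_FEATS = 2       # (x, y) gaze coordinates per sample (assumed)
N_OBJECTS = 5     # candidate virtual objects in the scene (assumed)
KERNEL = 5        # temporal convolution width (assumed)
N_FILTERS = 8     # number of convolutional filters (assumed)

# Randomly initialized weights stand in for a trained model.
conv_w = rng.normal(0, 0.1, (N_FILTERS, N_FEATS, KERNEL))
dense_w = rng.normal(0, 0.1, (N_FILTERS, N_OBJECTS))

def predict_next_object(gaze):
    """gaze: (SEQ_LEN, N_FEATS) array -> probabilities over N_OBJECTS."""
    # Temporal convolution with ReLU over the gaze sequence.
    steps = SEQ_LEN - KERNEL + 1
    feats = np.empty((steps, N_FILTERS))
    for t in range(steps):
        window = gaze[t:t + KERNEL].T                 # (N_FEATS, KERNEL)
        feats[t] = np.maximum((conv_w * window).sum(axis=(1, 2)), 0)
    pooled = feats.mean(axis=0)                       # global average pooling
    logits = pooled @ dense_w
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                            # softmax

probs = predict_next_object(rng.random((SEQ_LEN, N_FEATS)))
print(probs)
```

In a COUI-style pipeline, the object with the highest predicted probability would drive the delivery mechanisms, i.e., which content to surface on the interface and when.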