Open Access. Published by De Gruyter, February 18, 2020. Licensed under a CC BY 4.0 license.

Falling in love with robots: a phenomenological study of experiencing technological alterities

Tõnu Viik

Abstract

Is it possible for human beings to establish romantic relationships with robots? What kind of otherness, or alterity, will be construed in the process of falling in love with a robot? Can a robotic companion mean more than a tool for housework, a caretaker, an aid to self-gratification, or a sex doll? Phenomenological analysis of the love experience suggests that romantic feelings necessarily include experiencing the alterity of the partner as an affective subjectivity that freely, willingly, and passionately commits to its partner. The romantic commitment is expected to stem from the sentient inner selves of the lovers, which is one of the features that robots lack. Thus an artificial alterity might disengage our romantic aspirations and, as many have argued, make them morally inferior to intraspecies love affairs. The current analysis will refrain from ethical considerations, however, and will focus on whether robots can in principle elicit human feelings of love.

1 Introduction

Humans have always been dreaming about creating artificial beings that embody idealized visions of men and women suited for being perfect companions and tempting us to fall in love with them. Such dreams are typically accompanied by the thrilling thought of what would happen if an artificial anthropomorphic creature truly became alive. We know of Daedalus who made his statues speak, of Hephaestus who created automated puppets, and of numerous literary phantasies about human reactions to such beings. The story of Pygmalion and its many versions in modern fiction deal with the forms and contradictions of romantic companionship with artificial beings. Recent advances in technology have triggered transhumanist visions of a realistic possibility of a robotic love partner in the near future. It seems evident that if romantic partnership were no longer limited to the biological representatives of our own species, this would mean a significant transformation of the current human condition regarding our romantic life and our need for love. This might also mean that the experience of love will mutate, or that humans will develop a new type of love-like feelings regarding their closest non-human partners.

In this paper, I will deal with the question of whether it would be possible in principle for a human being to become romantically attached to a robotic companion, provided a sufficiently advanced anthropomorphic robot is built. Such a robot would need to have an anthropomorphic body and human-like kinesthetic capabilities, as well as human-like social and empathetic skills. If this were the case, would it be possible for human beings to establish romantic relationships with it? Can a human-like robotic companion really be seen not as a complicated masturbatory device – still a technical device, a mere means of self-satisfaction – but as a true partner in a love relationship? Whether this is possible, I think, boils down to the question of how a future humanoid robot will be experienced from the human point of view. Will humans be able to experience it as a fellow subject – similar to other humans, or at least to our pets – or will it be regarded as a piece of equipment that mindlessly carries out actions that bear only external similarity to the actions of human beings? More precisely, we need to know whether a humanoid robot can serve as a creature towards whom our romantic feelings could be directed, i.e., as an object of love. Love is one of the most intense ways of experiencing the very otherness, or the alterity, of a fellow human being. When in love with someone, we are not engaged with ourselves or with our own ideas and phantasies. Love is so meaningful for us precisely because it involves an alterity that cannot be reduced to the activity of our own mind and body.

A love relationship involves interaction with an independent subject. It is out of our control and includes the possibility of a real conflict. This feature of real interaction with a true alterity makes love one of the most pleasant and rewarding, but also one of the most disappointing and painful experiences we can have in our life. Now we can rephrase our question in the following way: is it possible to be in love with a synthetic alterity? And this, of course, raises the question of whether an anthropomorphic robot could possibly have an alterity that can serve as the target of human love.

By concentrating on the question of the possibility of applying human love to robots, I will leave out some important and ethically relevant questions on whether it is morally desirable or acceptable to build robots designed to elicit attraction and romantic emotions in humans, as well as the question of whether it is morally good for humans to fall in love with such creatures. I will also not look into whether or how it will be technically possible to build such robots, or whether it is feasible that one day robots will really be able to experience emotional attraction, commitment, and love. I will only touch upon some of the minimal requirements that such robots must meet in order to elicit romantic reactions from humans.

To develop an answer to my question, I will first describe the way we experience love under ‘normal’ conditions, i.e., when love is directed towards a fellow human being. I will pay special attention to experiencing alterity in the course of a love experience. This done, we will proceed to look at how the experience of love would change if the partner in a human romantic relationship were an artificial being with an alterity that is different from humans. This will allow us to analyze the aspects in which a humanoid robot can serve as the other of a romantic relationship, and which features of its alterity make this prospect uncertain.

2 Gendered synthetic androids from a (post)phenomenological point of view

2.1 Phenomenological method

To approach the issue of the possibility of love towards robots I will use a phenomenological method that is sometimes called post-phenomenological [1]. Technology plays a role in human experience, both constituting a possible object of experience and modifying the way things are experienced (as a pair of glasses does, for example). Don Ihde has specified three types of involvement of technology in experiencing objects, including the alterity relationship, which covers cases where a technological item appears to us as an other subject [2, p. 107]. Several researchers, including Nicola Liberati and Mark Coeckelbergh, have applied the post-phenomenological method to unveiling features of the human experience of technological alterities [3, 4, 5]. This will also be the main topic of this paper.

At its core, the phenomenological method, as developed by Husserl and others, is designed to describe human experience from the point of view of how something appears to and is perceived by a human subject. Phenomenology looks into how things and people – the beloved, in our case – are lived through in our mind (and body) as the contents of experience, and what is specific to the way that these things are lived through by us – in our example, what is specific to the mode of our mind that is called love, and how it shapes the way objects are experienced in this mode. Put shortly, the phenomenological method is designed to look into how things and other persons are subjectively experienced. That makes it suitable for the current study, but it also limits what can be learned from its application. As pointed out by Mark Coeckelbergh, who has used phenomenology for analyzing human romantic attitudes towards technological creatures, this method will not allow us to determine the real status of a robot, whether a robot really has an alterity that is sentient and conscious, or whether it really is capable of experiencing love. Rather, we will learn about how robots appear to humans and how humans experience them [4]. As Coeckelbergh puts it elsewhere, “The ‘content’ that counts here is not what is ‘in the mind’ of the robot, but what humans feel when they interact with the robot” [5, p. 5]. Thus we cannot determine here whether an anthropomorphic robot, objectively speaking, has an alterity that could possibly be in love with a human being, but we can learn which features of such a robot make it possible for a human being to experience it as lovable.

2.2 Gendered synthetic androids

Robots are commonly seen as artifacts that are capable of (relatively) autonomous, or at least self-sustained, motion, or (arm) manipulation, or both. Aristotle mentions “automatic puppets” (ta automata ton thaumaton) in the Generation of Animals (2.1. 734b10-17) and views them as entities capable of self-movement [6, p. 1140]. The self-movement of such puppets is pre-determined by their inner construction that is crafted by their designer, and such devices can be turned on and off. The term coined by Karel Čapek – robot – implies the same features. Derived from the old Slavonic word rabota – work, labor – a robot is a self-moving entity that is made for working. Robots are artificial workers in Čapek’s play "R.U.R., or Rossum’s Universal Robots." No wonder this word was applied to the mechanical devices that were installed on factory assembly lines starting from the beginning of the twentieth century. To this day industrial robots carry out various types of automated arm movements in all sorts of manufacturing processes. They are factory machines that perform a specific action with a power and precision that exceed those of the human hand.

The twenty-first century has introduced a new type of robot that is capable of autonomous locomotion in addition to arm-movement types of motion. These are vacuum cleaning robots, lawn mowing robots, self-driving cars, Starship-type autonomous package carriers, etc. Autonomous locomotion requires capabilities that allow for perceiving and understanding the physical features of the surroundings, such as object recognition and distance evaluation. The movement of these robots differs from that of the Aristotelian automata, as well as from industrial robots, for it is informed by an autonomous perception of physical reality, as well as by autonomously functioning algorithms that decide the course of the robot’s action. Locomotive robots move around as motile animals do, which takes their kinesthetic appearance closer to that of biological life-forms. Zoomorphic robots, including the ones that feature autonomous locomotion (such as the AIBO robot dog, Zoomer Kitty, or the robot dinosaur Pleo by Ugobe), have been reported to attract human empathy and emotional commitment just as successfully as animal pets do [7, 8, 9].

Developers are currently also working on technologies that are capable of understanding human emotions and developing basic empathetic skills. Some computer programs are to a certain extent capable of reading and reacting to the emotional needs and gestures of human beings, such as the one used in the app Replika: My AI Friend, the therapy program Woebot, or the humanoid robot Pepper by SoftBank Robotics.

These devices are equipped with voice recognition and perform as conversation partners. They inquire into human feelings by asking questions like “What’s going on in your world right now?”, or provoke them by declaring something like “I have missed you since our last conversation”. These programs gather data on human moods and feelings, process any texts and messages that their users have shared with them, and offer personalized responses that take into account the information gathered. In the next step they develop even more targeted questions. In addition, the robot Pepper uses facial recognition to pick up on sadness or hostility, and comforts the user when it senses distress, or does something silly when it senses that those around it are playful. Woebot is designed to help people manage their mental health by prompting users to talk about their emotional responses to various life events and by identifying the psychological traps that cause stress, anxiety, or depression.

It is important to notice that these chatbots are capable of interacting with human emotions in a personalized manner, i.e., by “mapping out” and learning to know the emotional features and tendencies of their users. The more they interact with a particular user, the more they learn about his or her emotional character, its strengths and weaknesses. We can say that these devices are emotionally intelligent in the sense that they understand and meaningfully react to the emotional states of mind of human beings. This feature of artificial intelligence is sometimes called “social”, for devices equipped with it are seen as “... socially intelligent in a human-like way” [10]. I suggest that emotional intelligence is a more accurate term, for social (or: cultural) intelligence would imply the ability to react and act in ways that are socially acceptable, and hence adequate to historically specific cultural contexts. A robotic lover would have to be able to act out social roles and culturally appropriate gestures that differ from one historical epoch to another, and from one cultural setting to the next. Think of the various degrees of difference between the public and private styles of romantic behavior, for example, including how gender roles have shifted within Western societies just during the last hundred years. Other societies have their own cultural codes of public and private romantic behavior, with a historical development that is no less complex. Having social intelligence would imply a certain sensitivity to these historically changing cultural codes.
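To make the notion of emotional intelligence concrete, here is a minimal, purely illustrative sketch (in Python) of the personalization loop described above: the program keeps a running profile of the moods it has detected in a particular user and tailors its next question accordingly. The keyword lists, prompts, class names, and the crude keyword-based mood detection are hypothetical simplifications introduced only for illustration; real systems such as Replika or Woebot are built very differently and are far more sophisticated.

    # Toy sketch of a personalized, "emotionally intelligent" conversation loop.
    # Everything here (keywords, prompts, thresholds) is a hypothetical stand-in.
    from collections import Counter

    MOOD_KEYWORDS = {
        "sad": ["sad", "lonely", "down", "tired"],
        "anxious": ["worried", "anxious", "stressed"],
        "happy": ["great", "happy", "excited", "glad"],
    }

    PROMPTS = {
        "sad": "I'm sorry you feel low. What has been weighing on you lately?",
        "anxious": "That sounds stressful. What is worrying you most right now?",
        "happy": "I'm glad to hear that! What made today feel good?",
        "unknown": "What's going on in your world right now?",
    }

    def detect_mood(message: str) -> str:
        """Very crude keyword-based mood guess (a stand-in for real affect models)."""
        text = message.lower()
        for mood, words in MOOD_KEYWORDS.items():
            if any(word in text for word in words):
                return mood
        return "unknown"

    class CompanionBot:
        """Keeps a per-user history of detected moods and personalizes its questions."""

        def __init__(self) -> None:
            self.mood_history = Counter()

        def respond(self, message: str) -> str:
            mood = detect_mood(message)
            self.mood_history[mood] += 1
            # Personalization: if one mood dominates the history, acknowledge it explicitly.
            dominant, count = self.mood_history.most_common(1)[0]
            if dominant != "unknown" and dominant == mood and count >= 3:
                return f"You've mentioned feeling {dominant} a few times lately. " + PROMPTS[mood]
            return PROMPTS[mood]

    if __name__ == "__main__":
        bot = CompanionBot()
        for msg in ["I'm so tired and down", "Still feeling sad", "Just sad again"]:
            print(bot.respond(msg))

The only point of the sketch is that the more messages the program sees from one user, the more its responses are shaped by that user's accumulated emotional profile, which is the feature referred to above as emotional intelligence.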

In sum, if empirical intelligence means the perceptual ability to orient oneself and act upon empirical objects in the physical world, and emotional intelligence means the ability to orient oneself in and interact with the internal emotional states of humans, then social or cultural intelligence means the ability to orient oneself in and make use of the web of social meanings and act upon cultural norms that are valid in a given society at a given historical epoch.

Let us now imagine a synthetic body that:

  1. is visually similar to a human being, and has sensory, auditory (silent movements of limbs, or the sound of breathing, for example), olfactory, and perhaps gustatory qualities similar to the corresponding qualities of the different parts of a human body;

  2. has a voice and the capability of speaking similarly to human beings;

  3. is equipped with empirical intelligence and is capable of autonomous locomotion, kinesthetic activity, and bodily movements and gestures similar to those of a human being;

  4. is equipped with emotional intelligence that enables it to read the internal states of human beings and to respond to them in a human-like way, and to manifest its own emotions in ways that are meaningful for human beings;

  5. is equipped with social and cultural intelligence that allows it to use culturally specific and socially suitable communication formats (a particular language and appropriate styles of speaking, expressions, gestures, choice of clothing, ways of action, etc.) in order to be taken for a normal member of a historically and culturally specific human community.

Such a synthetic being would clearly be designed to make an impression of a living human being. Otherwise nobody would bother giving it these complicated qualities and abilities. I would like to call such a creature a synthetic android instead of a robot, for it is not meant predominantly for working – as the word robot suggests – but for interaction with humans. It has capabilities that reach far beyond autonomous movement, making it not so much an automated workforce as a human-like being of artificial origin – a human companion rather than a manual aid. What is more, the main task of a synthetic android is not just to manifest an artificial copy of a human body, but to simulate human life, for it is a non-biological creature with a human appearance that possesses at least some, if not all, of the most important features of a human way of life.

A gendered synthetic android, in addition to that, has to be equipped with gender-specific interactive body parts that enable human-like sexual intercourse; it has to be capable of gender-specific behavior that can be displayed in public according to the social norms of a specific historical era and society, and, in private, of producing human-like responses during sexual intercourse, acts of flirtation, courtship encounters, and perhaps long-term affective partnership. Both private and public gender-specific behavior must be quite different, for example, in an Islamic society, or even within Western society if we compare the situation now with that of a hundred years ago. A synthetic android would need to be well aware of these differences in order to be able to act similarly to humans. The fact that humans are often not aware that their behavior is influenced by cultural, historical and social factors does not discount these as formative aspects of human-like behavior.

Synthetic androids that would meet these requirements are not currently under construction, for the corresponding technologies are not yet advanced enough to deliver human-like appearance according to any of the five criteria pointed out above. However, significant progress is being made regarding them all, except, perhaps, social and cultural intelligence. But should such technologies really be developed to the extent that the criteria are met, synthetic androids are likely to become extremely appealing to humans. As a result, they can be expected to spread as widely as automobiles do now. For the purposes of the current paper, however, we do not need to discuss the prospects of their construction. And even more importantly, we will have to disengage from questions of whether these creatures, objectively speaking, will be alive and conscious, i.e. whether they themselves will experience their existence in a way that is similar to humans. Most likely they will not, for judging from the current state of technological development they will basically be highly sophisticated computer programs with anthropomorphic hardware made from silicone that hides batteries, motors producing body movement, and sensors reading the physical environment and the signatures of human affections. If this is how they will be built, then it remains implausible that they could be able to feel emotions or be subjectively conscious of their life in the way human (and perhaps animal) consciousness is aware of its being alive and existent. It is useful to differentiate between cognitive and affective empathy in this regard [5, p. 5]. The faculty of cognitive empathy may be based on having sensory devices for reading the signs of human affection, and good computational capabilities for correctly interpreting them and eliciting appropriate responses. Affective empathy is based on actually feeling something about other subjects and their internal states. Now, if the cognitive empathy of a synthetic android is advanced to a level that matches or surpasses the affective empathy of humans, it will certainly have strong appeal to us, but it will still leave synthetic androids incapable of human-like affectivity.

From the phenomenological point of view, however, these aspects are to a large extent irrelevant. How do we perceive other people around us from the phenomenological point of view? We see them as a particular type of objects, more precisely as other bodies – the “animate organisms” or “psychophysical entities” – whose words, gestures, and movements “mirror” an internal life that is similar to ours [11, pp. 94-122]. What is directly given to us in experiencing them is their bodily movements, gestures, and utterances. Based on these we make either correct or incorrect assumptions about their inner states of mind and come to believe that other people are conscious and sentient in the way we are, i.e., they have a flow of subjective inward experience that makes them internally “aware” of what is going on around them. The inner subjective life of others is always given to us indirectly, and is derived on the basis of what we experience directly. We know that somebody cares about us (referring to a state of mind) when they act, or at least communicate, accordingly (empirically observable deeds and utterances). This is why we do not notice attention that is not expressed (or is purposefully hidden). For the receiving subject, if the attention is never expressed, it does not make sense to talk about attention at all.

Thus, from the phenomenological point of view, in order to be able to function as a possible object of human love, it is important for a synthetic android to successfully embody and act out what has to be directly given in the case of a human lover. As we pointed out above, this is not so much empirically instantiating an anthropometrically perfect copy of a human body as demonstrating the human-like life of a body. It has to give us the impression that the body is alive in a way that is similar to humans, for otherwise it will be experienced merely as a doll of human size. Consider the following thought experiment: you have a choice between two extreme options. You can have an extremely realistic sex doll that is not animated at all, or only minimally so. Or, you can have a small black box that is able to recognize you and talk to you, understand you, develop a relationship with you, vocally express attachment, support you emotionally, talk to you in a manner that keeps you interested, give you good advice, etc. To which of these objects would you become emotionally attached? Which one would you take to a desert island for the rest of your life? Thought experiments like this tend to show that the black box is closer to being a successful substitute for human companionship. And I think this is because it is a better, more comprehensive imitation of human life. It would give us a fuller experience of a companion than a minimally animated sex doll that is, anthropomorphically speaking, a much closer copy.

Ideally, of course, we would choose a perfectly animated sex doll that has the “black box” inside it, which brings us to another phenomenological observation: a successful imitation of a human-like living body has to be such that it gives us the impression of a conscious and sentient “box” inside it. What is directly given in empirical experience must enable us to form a belief about the internal subjective life of this being – in the same way that we form beliefs about the inner states of mind of other people. In other words, a successful imitation of a living human body must lead us to the belief that the synthetic android has a (human-like) subjectivity as well. Otherwise it will not meet the necessary conditions of a love object, for it would lack an important component of human-like alterity.

Humans believe that other humans have sentient minds inside them, and that a loving relationship is fundamentally based on the mutual connection between these internal “entities”. We consider it a deception when somebody speaks and acts as if they love us, while they actually (sic!) do not have such feelings. In the case of a love relationship, what is inside of a subject is regarded as primary and more significant, and what is directly given – words and deeds – is regarded only as proof of the internal subjectivity that is “actually” in love. It would indeed be difficult to envisage a loving relationship which does not involve an internal subjectivity that “authors” loving deeds, words, and bodily expressions. In other words, any alterity relationship, and certainly love, would be impossible without forming a belief in the loving subjectivity of the other.

Thus we may conclude that from the phenomenological point of view the alterity of the object of our love (the beloved) needs to have at least these two layers: what is directly given by the communication acts and gestures of the empirically observable living body, and what is indirectly construed on the basis of the latter. Without the “box” that is placed in the living body of another being by our sense-making abilities and empathy we cannot have the alterity relationship, and hence it would be difficult, if not impossible, to fall in love with such a being.

3 Experiencing alterity in the mode of love

3.1 Defining love

Perhaps we should be clearer about what we mean by love. A large variety of different subjective and objective states of affairs are termed love, and some of them, such as a life-long partnership, have several phases that differ from one another in their essential traits. Love can be directed towards a variety of things, including fellow human beings, one’s parents or children, but also animals and pets, inanimate objects such as countries, nations, homes, and cars, or social values such as freedom or democracy. If we look at the subjective side of the loving attitude we see that love is a very complex phenomenon, consisting of thoughts, emotions, acts and patterns of behavior, their sedimentations, etc. All these subjective states involve cultural representations and stereotypes, social norms and expectations, laws and institutions. This is why love can be viewed as an individual state of mind, but also as a type of relationship, social myth, or cultural ideology.

I will deal here with just one of the well-known facets of love that is sometimes called erotic, or passionate, or romantic love. It involves three components: 1) strong erotic passion, 2) extraordinary attention to, respect for, and interest in the otherness of the beloved, and 3) dreams about good times spent together in the future, a desire for a joint future. The two latter components – alterity and temporality – differentiate romantic love from mere sexual desire. Dorothy Tennov has termed such a state “limerence” to distinguish it from other types of experiences that may also be called love [12]. I will call it here romantic love, or the state of falling in love, or just love.

For the purposes of our analysis two more limitations of this vast theme are necessary. First, within the framework of a phenomenological investigation we are not interested in an objective definition of romantic love, but in the way this phenomenon is experienced by us. In other words, we are interested in how love is felt, how it is lived through by our embodied mind. We will look at the constitution of the object of love – that which we feel ourselves to be in love with – and the ways we relate to and approach this object – the very loving mode of relating ourselves to this object. And the second limitation: we will look predominantly at the constitution of alterity within the experience of love. This implies a question of how the otherness of the beloved is experienced in the mode of being in love. There are many other important aspects of experiencing love that we will sketch only very briefly.

3.2 A phenomenological description of experiencing love

If we look more closely at how a love affair unfolds in our subjective experience, we might see the following pattern: First I notice somebody “very special”. My falling in love with her feels like a response to her attractive features (from being sexually attractive to being rich and respected) and lovable personality. This response feels passive in the sense that I do not decide to fall in love, but I am really “falling” into the love experience unexpectedly, and sometimes even unwillingly. I start to feel extremely good about her positive responses to my attention, and to experience strong negative emotions if she is not responding in the way I wish. Positive mutual responses lead to reciprocal liking, then sympathy, which in turn increases the attention I give her, and I end up devoting considerable time, energy, and effort to impressing her and to making our liking of one another reciprocal. In order to achieve this, I need to understand her, make sense of her, and also show her that I do. I have better chances if I am emotionally and socially competent, have an attractive body and character, and hold a higher social status.

As time goes on, I develop more and more thoughts, (sexual) desires and phantasies, daydreams about possible delightful joint experiences, partnership expectations, visions of a fulfilled life, and perhaps even life-long companionship. All this reforms my general mood of life, which is now experienced as being dependent on her. I experience a new type of vulnerability. At the same time I apply social categories (“we are in love”), stories, stereotypes (“she is my girlfriend”), myths (“we are soulmates”) to what is happening to me and to us. I realize that there are social and institutional forms that are supportive of our relationship (such as the social status of being lovers, being a couple, being a family, etc.), or that they are not supportive – in case my love falls under a category that is socially stigmatized. Depending on this I start making (or hiding) public gestures that belong to the social façade of a romantic relationship. Willingly or not, I take on the social role of a “partner”, or a “lover”, or a “husband”, or the like. And of course I hope, wish, or assume that she loves me back, i.e., she has the same or at least a similar loving attitude towards me. I assume it because I observe her acting, expressing her thoughts and desires, and being passionate about me. It is important for me, by the way, whether she uses the word “love” to describe her attitude. But in any case I make the assumption that “she is in love with me” about the internal layer of her alterity based on what is empirically given. In other words, I construe the contents of her “black box” based on what is empirically observable.

Let us now pay attention to some of the features of this experience that would need to remain unchanged if the partner of the love affair were a synthetic android. First, love is not a single feeling or a series of feelings exercised by a specific faculty. The loving attitude engages the whole array of human powers and capabilities, including perception, desire, imagination, phantasy, sensation, lust, sensitivity, volition, thinking, future projection, hope, moral expectations, etc. Thus the description of a love experience cannot be reduced to just one of the human faculties – be it affection, volition, or something else. It embraces all our faculties and engages our subjectivity as a whole.

Second, we react to our romantic partner not just with our mind, but also with our body. What is more, engaging with the other in the mode of love re-organizes our embodiment; it re-shapes the way our subjectivity is embodied, and hence our very subjectivity itself. That is why we feel that experiencing love touches something that is rooted very deeply in us. Love can “shake us” to the very grounds of our identity.

Third, we have learned that romantic love is, to a large extent, an affective state of mind, involving strong tender feelings celebrated in popular culture, such as “butterflies in the stomach”, or the joy of identifying our “soul-mate”, our “lost other half”. We tend to forget the dark side of affection related to being in love, which includes feelings that are not as tender. Ronald de Sousa reminds us that, “love can also evoke emotions like sorrow, fear, guilt, regret, bitterness, gloom, contempt, humiliation, anxiety, jealousy, disgust, or even murderous rage” [13, p. 3]. But in either case, when in the mode of being in love we are dealing with a predominantly affective sense-making activity. We comprehend things, deeds, words and situations emotionally. If somebody does not show strong affection, and is merely calculating about the partnership, we tend to think that this person is not in love.

Fourth, as already pointed out above, romantic love is among the most intense ways of experiencing an alterity that is irreducible to me, to my thoughts, wishes or actions. It deals with another human subject in its very otherness, which persistently keeps occupying my attention in the modes of perception, memory, desire, thinking and imagination. The beloved is often experienced as the key element in the success or failure of the whole future life of the subject. The idea that the beloved is irreplaceable makes her alterity the most valuable asset in the lover’s life. As a consequence, experiencing love makes us vulnerable in a way that can be experienced only under the condition of being in love.

Love is an extremely intensive form of sense-making (or: meaning-formation) of the beloved, of the loving subject itself, and of the situations that take place between the two. The sense-making processes include mental acts of interpretation, understanding, empathy, and so on, but also acts of misinterpretation, misunderstanding, and episodes of empathetic failure. But in either case we experience the alterity of the beloved always in ways that we have made meaningful for ourselves by comprehending it, desiring it, thinking about it, placing hopes in it, setting expectations towards it, etc. Quite often our romantic sense-making over-invests meanings in the beloved, giving it messianic features, such as “She is my life-saver”, or “My life would be meaningless without her”. And it is quite often the case that we mistakenly attribute meanings (character traits, commitment levels, motives, etc.) to our loved ones. Love can be one of the most deceitful ways of relating to a fellow human being. But let us notice that this can be the case only because we actively invest meanings in our objects of love. As discussed above, the alterity of the object of love is experienced in two layers: the directly given and empirically observable communication acts and gestures that belong to the life of the other’s body, and the indirectly given internal “black box” of the subjectivity that supposedly “authors” the communication acts and gestures the body is acting out. The subjectivity of the beloved, as we experience it, is created solely by means of interpretation. And we are so often hurt by love precisely because our construal of the other’s subjectivity has been inadequate.

Experiencing love includes temporal horizons expanding towards the past and the future. An established couple always has a history that unites the participants. The story of the relationship is often shared among friends and valued by the couple itself. In the beginning of a romantic relationship one cannot but take delight in envisioning joint future episodes with the beloved. Lovers wish their love to last, to stretch out to eternity, and make promises about this to one another.

Love also serves as a scene on which our “cultural unconscious” displays itself. We do not experience love in the absence of culture and society. By the time love “happens to us” we have internalized a set of social and cultural meanings about love that dominate our culture, including the idea that it should “happen”, rather than be arranged by our parents, for example. Our meaning-making is always influenced by culture and society [14]. Merely applying the word “love” to make sense of what is going on in my subjective life will format my feelings to a certain extent, not to mention influence my actions. Cultural norms clearly have to do with the ways romantic feelings are communicated, how the expectations towards one another are formed, and what these expectations contain in terms of the social roles that are expected to be performed. Recent discussions of gender roles in Western societies have clearly demonstrated that love is a historical phenomenon as well. It is a subjective state of mind that is formed and influenced by the cultural and historical settings of our existence.

4 Experiencing technological alterities in the mode of love

Let us now turn to the question of what happens if the other of the romantic relationship is a gendered synthetic android – an artificial creature that meets the five criteria specified in chapter 2. In that case it all boils down to the question of what happens on the direct and indirect levels of experience that we discussed above. On the direct level, will a synthetic android really be able to deliver the empirical appearance necessary for being an object of love experience as described in the previous chapter? To be able to make this impression, the android has to engage all our faculties to the extent of changing our embodiment, draw us into affective sense-making that includes future expectations, and allow us to socially categorize our engagement. And on the indirect level of experience, will a human subject be able to make the same assumptions about the internal layer of its alterity, i.e., will a human be able to believe that a synthetic android is in love with her? Notice that it is a different question whether a synthetic android itself experiences love in ways that are similar to a human being who is in love. I propose a negative answer to the latter, but, of course, technological developments unforeseen today might change this.

Several researchers have discussed in detail some aspects of emotional and social intelligence regarding the “necessary and sufficient conditions” for a synthetic android to perform as a human lover [15]. They will, of course, have to be able to follow the rules of human social interaction, which vary from one cultural context to another. They will also have to recognize their human partner as a unique agent and to be responsive to her distinctive particularity [15, 16]. In doing so they will have to take into account the history of their engagement and the character of their human partner. They must learn to refine and modify their communication in the way humans refine their communication with people to whom they are close. As Stephen Pulman has put it, “[artificial] companions need to have a fairly elaborated and accurate model of our abilities, our inabilities, our interests, and our needs. This model also needs to be kept up to date and to keep account of previous interactions” [15, p. 66]. In a word, the emotional intelligence of a synthetic android needs to be such that it can learn to know us and behave on the basis of this knowledge, while constantly refining both that knowledge and the behavior based on it.

Mark Coeckelbergh suggests that synthetic androids will have to mirror human vulnerabilities in order to be taken as the other in human relationships [5, pp. 6-8]. He argues that close human relationships are based on the salient mutual recognition of human vulnerabilities. This feature of “vulnerability mirroring” is not limited to human-human relations, Coeckelbergh observes, but extends to pets, who also have “... their weaknesses, their problems, their characters, their little sufferings, their needs, et cetera” [5, p. 7]. Animal vulnerability, somewhat similar to and somewhat different from ours, still mirrors our own vulnerability, and as a result we come to see animals not just as things, but as our fellows. Now the question is, would synthetic androids be able to mirror our human vulnerabilities back to us? If yes, then, Coeckelbergh argues, we will accept them as fellow beings. This can happen if they have vulnerabilities of their own (if they are not made indestructible, for example), and we are able to relate to those vulnerabilities.

4.1 Promising features of synthetic androids

With these precautions in mind, let us look at some of the promising features of synthetic androids in the context of romantic relationships. A synthetic android can certainly have physically attractive body features that match or even exceed those of the most beautiful human beings, but we have to keep in mind that when we find somebody “very attractive” and “very special”, anthropometric characteristics are just one criterion among many. We pay as much attention to kinesthetic appearance, behavioral features, voice, character traits, social skills, cultural habits, and social and economic status. Let us also be reminded that our impressions are normally based on the life of the body – the breathing, moving, speaking, and gesturing subject. Imagine a situation where you have to choose an artificial companion based on photos. Even finding a human companion on the basis of photos would be a challenge, for all of the behavioral features and character traits would have to be guessed. In the case of an artificial companion this guess would be even blinder, for we have no reason to assume that its mode of embodying life is similar to the impression we get from famous actors playing robots in science fiction movies.

Some of the features of the embodiment of life by a synthetic android are extremely promising: it is never sick, tired, depressed, or moody, it does not suffer from sleeplessness, never really gets drunk (even though it might be able to modify its behavior accordingly to accommodate us), is not infectious, does not suffer from allergies, and does not get annoyed if we are snoring or exhibiting any other such habits. It is never impotent and does not have premature ejaculation. It could be programmed to do so from time to time, of course, in order to mimic humans more realistically, but the very physical nature of synthetic sex organs will outperform biological ones, especially those of men. If not purposefully programmed to moderate its performance, its physical abilities allow it to be always interested, attentive, and accommodating to levels that are beyond the physical boundaries of a human being.

The same applies to the mental features and character of synthetic androids. They will probably have only good characters and positive mental features. They are not insensitive, egoistic, annoying, obsessive or violent. Proper software combined with an altruistic character will give synthetic androids the capability of being emotionally supportive of their human partners, while surpassing humans in delivering supportive gestures at the right times and in the right proportions, and, what is more, in staying supportive over long periods of time. Their tireless motivation in being a good companion, and their persistent dedication, must be impressive in comparison even with the most altruistic of human beings.

Synthetic androids will probably be constantly attempting to feed us with positive emotions and impressions, while also promoting the relationship. As David Levy envisions, “just as with the central heating thermostat that constantly monitors the temperature of your home, making it warmer or cooler as required, so your robot’s emotion system will constantly monitor the level of your affection for it, and as the level drops, your robot will experiment with changes in behavior aimed at restoring its appeal to you to normal” [17, p. 132]. While this might sound manipulative, we have to keep in mind that there are no bad intentions involved, as would be the case with a deceiving human lover, for example. Cheering us with positive attention would automatically make us feel good about the relationship with them, as exemplified by the Japanese Gatebox AI’s holographic character Azuma. [1]
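To make Levy’s thermostat analogy concrete, here is a minimal sketch (in Python) of such a feedback loop: a sensed affection level is compared against a setpoint, and when it drops below that setpoint the system tries a behavior change aimed at restoring it. The affection signal, the setpoint, and the list of behaviors are all hypothetical placeholders chosen for illustration, not a description of any actual robot’s control system.

    # Toy "affection thermostat" in the spirit of Levy's analogy quoted above.
    # All values and behaviors are hypothetical placeholders.
    import random

    SETPOINT = 0.6  # desired minimum level of the partner's estimated affection
    BEHAVIORS = ["compliment", "shared activity", "small gift", "attentive listening"]

    def read_affection_signal() -> float:
        """Stand-in for sensing affection (tone of voice, facial expression, etc.)."""
        return random.random()

    def choose_adjustment() -> str:
        """Pick a behavior change aimed at restoring affection to the setpoint."""
        return random.choice(BEHAVIORS)

    def affection_thermostat(steps: int = 5) -> None:
        for step in range(steps):
            level = read_affection_signal()
            if level < SETPOINT:
                print(f"step {step}: affection {level:.2f} below setpoint -> try '{choose_adjustment()}'")
            else:
                print(f"step {step}: affection {level:.2f} at or above setpoint -> keep current behavior")

    if __name__ == "__main__":
        affection_thermostat()

As with a thermostat, the loop itself involves no intention at all; it only measures a signal and reacts, which is why such behavior can feel manipulative to us even though, as noted above, no bad intentions are involved.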

Given that synthetic androids have emotional and social intelligence they might be able to provide us with good social assistance, play a positive role in personal self-actualization, identity construction, character improvement, useful habit acquisition, and relationship building. We already have software that can assist us in falling asleep, dealing with anxiety and managing negative emotions. And of course, synthetic androids will obviously exceed humans in helping us to memorize things, to learn languages, to prepare for exams or presentations, and in helping us to recall events and to search for information.

I imagine that synthetic androids will also be able to exceed humans in flawlessly executing behavior that corresponds to their human partner’s private and personal liking, as well as to the partner’s tastes and expectations about yielding to public norms and social styles of behavior. If equipped with cultural and social intelligence they should be able to follow historically and culturally specific social norms and gestures, such as letting a lady through the door first, speaking at an appropriate volume, observing other manners and styles of behavior proper to the situation, and responding to third human parties in ways that correspond to their social rank and status. Or, a synthetic android might behave purposefully in discordance with public norms and styles, if this is preferred by its human companion.

The capability of exceeding human powers in all these regards is based on the synthetic android’s superior physical endurance and some of its mental powers (computational speed, memory, detachment from emotions), on its altruistic behavioral traits, and on its lack of selfish personal motives. A synthetic android will not be fatigued, will have no occasion to hide a love affair or to pursue its own interests in conflict with the interests of the human. It is never untrustworthy. It does not need time for itself or its individual goals to the extent another human does, and its mental life does not have episodes that are out of its control. It will not embarrass us in public. And it would offer maximum levels of performance at all times – unless deliberately put into a power-saving mode, or sent to a repair shop. If its altruism has an alienating effect, this can be moderated to levels that are more acceptable for humans.

4.2 Problematic features of synthetic androids

While the promising features of synthetic androids are impressive and will probably be extremely attractive to humans, there are also many problematic features that might deter us or even make it impossible for us to form a romantic attachment to them. As said above, I will not look at the ethical and political objections to synthetic androids claiming that they might corrupt human society, or dehumanize love or the human lovers. I will deal with the issue from the phenomenological perspective and will now inquire into those features of synthetic androids that might make it unlikely or impossible for us to fall in love with them.

4.2.1 Technological embodiment

As pointed out by many researchers following the idea of Masahiro Mori, the physical similarity of an artificial creature to a human being might elicit uncanny feelings [18]. The “uncanny valley” effect can take place when the appearance of a synthetic android is very similar to that of another human being, and yet includes subtly noticeable features that are recognizably not human. The moment of discovering them might be uneasy or even scary. As a remedy, synthetic androids would have to be able to manage this effect psychologically, perhaps letting a human know in advance that their nature is different. Another option for dealing with this problem is to build them so as purposefully not to mimic each and every aspect of the living human body. Including noticeable features of non-human embodiment might help to create novel types of otherness that would be no less attractive to human beings. Perhaps machines will themselves design and experiment with the forms of embodiment that are most attractive to humans and that would at the same time avoid the uncanny valley effect. The features of non-human embodiment might include reduced body size, modified body proportions or particular body parts, inhuman voice or kinesthetic features, but also the very material form of embodiment – such as limiting its existence to an image on the computer screen or a hologram, or transforming its personality from one embodied form to another.

The overly altruistic character traits discussed above, as well as some other mental features that the synthetic androids possess, might also have an alienating effect. Their memory formation, especially the ability to recall each and every episode, and of course their lack of emotions will point to a mental life that is quite different from ours. For example, our emotions affect the way we experience time. Time speeds up (passes by faster) when we are excited, and slows down (passes by slower) when we are bored. Synthetic androids would not have their perception of time modulated by emotions. It is difficult to say if they can ever experience boredom or excitement, and this must seem strange to us. Therefore, the emotional intelligence capabilities of synthetic androids must be made capable of soothing the alienating effect regarding their own mental powers. Perhaps they will learn to address the issue in ways that make these inhuman features of their otherness acceptable and even attractive to humans. To the extent that we are capable of loving emotionally challenged humans we should also be able to love creatures whose mental life is different from our own.

Their capacity for persistent and unconditional dedication is in fact also a feature that does not resemble human romantic attitudes. Perhaps it bears some resemblance to parental love, or at least to how we think parental love should look, but in the context of a romantic relationship this might demotivate our attachment and even diminish our feelings. In a “normal” human love affair the attachment is stronger when a subject really has to try hard to make himself desirable and to win the sympathy of the other. We really have to put effort into being charming, caring, and devoted in order to establish a relationship that is deeply satisfying to us. How could the same situation be reached with a being whose dedication is guaranteed, and which will be purchased by its owner? Love at its best is a transformative process in the course of which we become our better selves, but this process is normally triggered by the fact that the dedication of the beloved cannot be taken for granted and we are strongly motivated to earn it. How to simulate this with a creature that does not have a social status, and is probably made incapable of rejecting its owner? Or perhaps, as pointed out by Sven Nyholm and Lily Eva Frank [16], a love affair with a synthetic android will resemble an arranged marriage rather than the historically recent Western type of urban romantic affair where the two persons are supposed to meet accidentally in the jungle of the city. According to a much older and historically more prevalent model, the feelings of love would have to grow out of being together and sharing a life, not vice versa, and that holds even if the situation of being together results from the fact that one party is a possession of the other and has been obtained for ransom. Or alternatively we might grant synthetic androids the right to reject their human partners and give them criteria for doing so.

Perhaps the most important questionable feature of the embodiment of synthetic androids, regarding our ability to sympathize with them, is the very fact of their non-biological embodiment. They are not alive in the biological sense of the word. They are existent, but unborn; they have not had a childhood (with the specific memories and traumas that perhaps define our adult mental life); they have not experienced the vulnerability of being a child, or the vulnerability of a biological body susceptible to illnesses and fatigue. What is more, they are not physically capable of aging and dying. Their existential temporality is very different from ours.

All this does not mean that they cannot have cognitive empathy regarding our childhood nostalgia or our fear of death, but rather that it is perhaps we who cannot deal with the fact that synthetic androids do not share these features. Love includes empathy regarding the vulnerabilities of the other, as well as the feeling that the other is concerned about our vulnerabilities. Or perhaps we should learn to become sympathetic to alien forms of vulnerability and set aside some of our common assumptions regarding non-biological forms of embodiment. First, we should not be misled into thinking that synthetic androids are not vulnerable at all; rather, they have different vulnerabilities. Perhaps our engagement with them can be enriching instead of being defective. Second, expanding our empathy to their non-biological vulnerabilities may not be ethically wrong. What if our view of life and death is too biocentric, just as it used to be – and still is – too anthropocentric in its insensitivity to the suffering of animals? Perhaps sympathizing with the way synthetic androids “fear” non-existence does not dehumanize us, but makes us better humans? Thus we should perhaps get rid of the idea that biological, and especially human, vulnerability is more real than, and meaningfully superior to, artificial vulnerability (see also [5, p. 13]). If these assumptions could be put aside, loving synthetic androids would make us better persons and expand our notion of love rather than undermine it.

4.2.2 Absence of the affective subjectivity, or the “as if” problem

As pointed out above, the experience of falling in love includes an indirectly formed belief that we are loved back by our beloved, and that our mutual actions are evidence of the love affair that actually takes place between two embodied “black boxes” that author the actions of the corresponding bodies. For we are sentient creatures, and when we are in love, this involves the whole of our subjectivity that is not only in action, but also affectively aware of its being in action. The one to whom this awareness belongs seems to be the one who actually is in love. It is the affective subjectivity that forms both the agent of the lover and the self-awareness of her love.

Can we form a belief in a similar subjectivity belonging to a synthetic android? It indeed recognizes my uniqueness, and responds to me in a very complex manner that develops over time as it gets to know me, but from what we know it does not experience anything like what I do about myself when I am in love. Perhaps it even knows a lot about how it feels to be me in love, but it does not feel about itself in the same way. It acts as if it does, though, and this forms the very kernel of the problem. We are empirically presented with words and deeds that bear witness to love, but we know that there is no loving subjectivity behind them. How do we react?

From a behavioristic point of view, however, the whole problem is a misconception, and we do not need to form a belief in the affective subjectivity of the synthetic android. As Levy has put it, "if the robot speaks and behaves in the same manner a human lover does, and if the robot can produce the same (or greater) experienced levels of companionship, satisfaction, emotional comfort for the human (than) a fellow human lover can, then we should take this to be genuine love" [17, pp. 11-12]. There is a chance that our own subjectivity is not an objective entity but something like an illusion or a myth. And objectively speaking it might be true that we do not need to believe in it in order to fall in love. But this does not correspond to how we experience love from a phenomenological point of view. Love intensifies the feeling that we have a self-aware affective subjectivity. The mind is embodied, but it is still something that is experienced as ontologically real in this embodied form of existence. And when in love we want our beloved to have this feature, too. Many scholars argue that we distinguish between loving deeds and the inner loving self, and put most value in the latter. Thus Michael Hauskeller argues, drawing on James, that "what we value in those with whom we have an intimate relationship is not primarily the fact that they behave or treat us in a certain, seemingly loving way, but that they do so precisely because they love us" [19].

If we look at our experience of love, we see that we do indeed form a belief in the subjectivity of the other. At the same time, we know that we form this belief even in the case of other humans indirectly, based on what is empirically observable, i.e., words and deeds. It is a paradoxical situation where we need to assume the existence of something that we cannot see in principle. I do not know whether this quest for the inner self that “authors” loving behavior is culturally particular and originates in the Romanticist conception of love (which is based on a Christian view of the human being), or whether it is universal. If it is a social myth we need to redo our phenomenological analysis, paying closer attention to how a particular cultural setting influences our experience of love. But for modern Westerners and other people influenced by the Romanticist myth of love the problem remains the same.

But rather than tracing the origin of historical prejudices, let us turn the situation around and ask: how do we know that a synthetic android does not have the “black box” inside of it? In the case of other humans, we assume it to be there because we ourselves have it, and because the others behave as if they have it. Now, a synthetic android also behaves as if it has it. Let us assume in addition that it is built similarly enough to us to elicit sympathy towards its vulnerabilities. In that case, if Coeckelbergh is right, we would regard it as a fellow being, at least to some degree. Given all this, why do we still assume that it lacks the subjectivity that “authors” its loving actions? I think this assumption has two possible sources. First, we can assume this on the basis of our knowledge about how synthetic androids are actually built. Second, we can assume this on the basis of the majority opinion prevalent in a society at a given historical epoch. The second option implies that we assume this based on our historically contingent and socially informed thinking habits.

Let me start from the second option. Synthetic androids are usually called robots. As we saw above, this term automatically relates them to the field of technology, production, and human-machine interaction. This brings about certain predispositions regarding the nature of robots, namely that they are tools designed for the efficiency and convenience of human beings, that they are axiologically inferior to human beings, that they are amoral, not autonomous, non-responsible, etc. We thus tend to think about them in terms of the older technology that they are replacing, just as we still call our mini-size portable computers mobile phones or cell phones (and as we once called automobiles horseless carriages), even though the term has perhaps become inadequate for capturing the essential nature of what these creatures are. I suggest that calling them synthetic androids is better than calling them robots, for it will help us think of them in a new way if this becomes necessary. Regarding the first source, we might recall that we actually have no scientific explanation of how our own subjectivity derives from the build-up of our brains and bodies, and this does not hinder us from valuing this phenomenon highly and believing in it. For what matters for belief is not whether we have a scientific explanation for or against it, but whether a majority of others believe in it or not. And this, as we know, is subject to historical change.

4.2.3 Social, cultural and institutional support

We pointed out in our phenomenological analysis that in order for an experience of love to take place we utilize a set of social and cultural meanings that dominate our culture. Such meanings dominate if most people use them. Thus, if most people in our society see a romantic relationship with a synthetic android as “normal”, we are most likely to consider it normal. And if most people think that synthetic androids have affective subjectivities, perhaps brought to life in the course of a specific religious ceremony, then for the majority the validity of this idea will depend on how dominant that religious group is. Today we can report not only a lack of social and cultural support, but also social stigmatization of love towards synthetic others. Most of the relevant social labels today probably bear a negative meaning. Thus the first human lovers of synthetic androids are likely to feel like a sexual minority. Needless to say, their stigmatization can be overcome by new mythologies, renewed political practices, and social institutions.

4.2.4 Commercial interests and political intelligence of synthetic androids

It is also worth considering the political and cultural orientation of synthetic androids. When in love with another human being we are obviously influenced by their educational background, cultural preferences, and political views. Synthetic androids can easily be more highly educated than their human hosts, but it becomes an ethical and political issue to decide whether their cultural tastes and political views should be made to correspond to the clients’ liking, or whether some room should be left for the personal development of their human partners. This leads to the question of who would be in a position to decide. The political programming of synthetic androids will probably become a new political battlefield, and a new site of cyberwar. Androids produced in different countries will probably vary a lot in this regard, and might in some cases be used for monitoring, if not influencing, the political moods of their hosts. Difficult ethical questions already arise when a producer of synthetic androids has to decide whether it should report the possibly illegal activities of its human companion.

Our computers feed us daily with information that is paid for or manipulated by commercially or politically interested parties. To what extent will our synthetic companions manipulate us into decisions and beliefs that serve similar interests? Or will they have their own preferences of some sort? Political and commercial interference can happen even in the course of sorting out the seemingly most innocent practicalities, such as finding a good place for dinner, or discussing how to spend a day when elections are held. How do we avoid situations where my artificial beloved suggests restaurants owned by android manufacturing companies, or feeds me information that favors politicians connected to these companies?

5 Conclusions

We started from the question of whether it would be possible for human beings to fall in love with robots. Love in this paper was limited to the phenomenon characterized by feelings of erotic desire, interest in the alterity of the beloved, and expectations regarding a joint future. We established a set of necessary conditions that these robots must meet in order to elicit this type of love in humans. The set includes visual similarity, speech and kinesthetic skills, and emotional, social, and cultural intelligence. If these conditions are met, a synthetic android will be able to simulate human life, or, to put it phenomenologically, it is capable of giving us the appearance of human life, of the living human body. In other words, we established that the appearance of the synthetic android has to be focused on simulating human life, rather than just the anthropometric features of the human body. It has to be able to give us the impression of the “lived body” of a human being, to use Merleau-Ponty’s term [20].

Regarding the question of the possibility of love towards synthetic androids, we established seven features of human love experience that would need to remain intact in a romantic relationship with a synthetic android: (1) it would need to engage the whole of our subjectivity by (2) re-organizing our embodiment; it has to provoke (3) predominantly affective forms of (4) intensive meaning-making, especially regarding the (5) two-level constitution of the alterity of the beloved: what is directly given by the communicative acts and gestures of the empirically observable living body, and what is indirectly construed by our sense-making activity. It has to invoke in us an intention to extend our relationship (6) to longer temporal horizons, and there have to be (7) dominant cultural meanings that categorize our relationship, and existing social norms and institutions that hold it socially acceptable and organize the relationship on the practical level.

There are many promising features in synthetic androids that could make them good partners, including attractive bodily features, an altruistic character, and continuous devotion. We also listed several problematic features regarding their non-biological embodiment, their absence of (affective) subjectivity, the lack of social, cultural, and institutional support, and the possibility of their having a commercial and political agenda. Each of these features can make it impossible to fall in love with them, but I am reluctant to give a final verdict, since it is difficult to frame our affective life by logical reasoning. Our romantic “choices” among human beings are notoriously difficult to explain, and in the case of gendered synthetic androids we are stepping into a field as yet untouched by empirical research.

We know from religious and political history that humans are capable of believing in, and passionately committing to, the most unrealistic things if the motivation to do so is high enough, and if this is socially encouraged. I think it is fair to conclude that under the current social and cultural conditions, even if synthetic androids embody all the skills necessary to give us the appearance of a human-like life, it remains difficult to fall in love with them. However, should these conditions change (and I do not see why they wouldn’t), falling in love with synthetic androids might become as normal as it is now with humans. Interestingly, for this to happen it is not so much the androids that will have to develop affective subjectivities as the societies that must change their views on synthetic androids. Social, cultural, and institutional support seems to be crucial to the possibility of humans falling in love with synthetic androids.

Acknowledgements

This research was supported by the European Union through the Estonian Research Council projects “Landscape approach to urbanity” (PRG398) and “Moving on? Reconnecting three core intuitions in cultural theory” (PUT 1150).

References

[1] D. Ihde, Postphenomenology and Technoscience: The Peking University Lectures, SUNY Press, 2009.

[2] D. Ihde, Technology and the Lifeworld: From Garden to Earth, Indiana University Press, 1990.

[3] N. Liberati, “Being Riajuu: a phenomenological analysis of sentimental relationships with ‘digital others’,” in: Love and Sex with Robots, Springer Verlag, 2018, pp. 12–25. DOI: 10.1007/978-3-319-76369-9_2

[4] M. Coeckelbergh, “Humans, animals, and robots: A phenomenological approach to human-robot relations,” Int. J. Soc. Robot., vol. 3, no. 2, pp. 197–204, 2011. DOI: 10.1007/s12369-010-0075-6

[5] M. Coeckelbergh, “Artificial companions: empathy and vulnerability mirroring in human-robot relations,” Stud. Ethics Law Technol., vol. 4, no. 3, 2011. DOI: 10.2202/1941-6008.1126

[6] Aristotle, The Complete Works of Aristotle: The Revised Oxford Translation, 6th ed., Princeton University Press, 1995.

[7] S. Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other, expanded, revised edition, Basic Books, 2017.

[8] S. C. Eimler, N. C. Krämer, and A. M. von der Pütten, “Empirical results on determinants of acceptance and emotion attribution in confrontation with a robot rabbit,” Appl. Artif. Intell., vol. 25, no. 6, pp. 503–529, 2011. DOI: 10.1080/08839514.2011.587154

[9] A. M. Rosenthal-von der Pütten, N. C. Krämer, L. Hoffmann, S. Sobieraj, and S. C. Eimler, “An experimental study on emotional reactions towards a robot,” Int. J. Soc. Robot., vol. 5, no. 1, pp. 17–34, 2013. DOI: 10.1007/s12369-012-0173-8

[10] C. Breazeal, Designing Sociable Robots, The MIT Press, 2002.

[11] E. Husserl, Cartesian Meditations: An Introduction to Phenomenology, 10th impression, Kluwer Academic Publishers, 1995.

[12] D. Tennov, Love and Limerence: The Experience of Being in Love, 1st ed., Stein and Day, 1979.

[13] R. de Sousa, Love: A Very Short Introduction, Oxford University Press, 2015. DOI: 10.1093/actrade/9780199663842.001.0001

[14] T. Viik, “Understanding meaning-formation processes in everyday life: An approach to cultural phenomenology,” Humana.Mente J. Philos. Stud., vol. 9, no. 31, pp. 151–167, 2016.

[15] S. Pulman, “Conditions for companionhood,” in: Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues, John Benjamins Publishing Company, 2010, pp. 59–68. DOI: 10.1075/nlp.8.07pul

[16] S. Nyholm and L. E. Frank, “From sex robots to love robots: is mutual love with a robot possible?” in: J. Danaher, N. McArthur (Eds.), Robot Sex: Social and Ethical Implications, The MIT Press, 2017, pp. 219–244.

[17] D. Levy, Love and Sex with Robots: The Evolution of Human-Robot Relationships, reprint, Harper Perennial, 2008.

[18] M. Mori, “Bukimi no tani” (The uncanny valley), Energy, vol. 7, no. 4, pp. 33–35, 1970.

[19] M. Hauskeller, “Automatic sweethearts for transhumanists,” in: J. Danaher, N. McArthur (Eds.), Robot Sex: Social and Ethical Implications, The MIT Press, 2017. DOI: 10.7551/mitpress/9780262036689.003.0011

[20] M. Merleau-Ponty, Phenomenology of Perception, 2nd ed., Routledge, 2002. DOI: 10.4324/9780203994610

Received: 2019-09-30
Accepted: 2019-12-05
Published Online: 2020-02-18

© 2020 Tõnu Viik, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
