Published by Oldenbourg Wissenschaftsverlag, August 16, 2016

Technology for Behavior Change – Potential, Challenges, and Ethical Questions

An Interdisciplinary Experts Discussion

Sarah Diefenbach, Andreas Kapsner, Matthias Laschke, Jasmin Niess and Daniel Ullrich

From the journal i-com

The multitude of technologies in our daily life – smartphones, ticket machines, and communication services like WhatsApp or social media platforms like Facebook – naturally shapes our actions and thinking. Beyond this, technology also becomes a medium for actively influencing and changing human behavior. Often, the intended change in behavior aims at socially desirable goals, such as conscious, sustainable consumption, public safety, or the adoption of healthier lifestyles. Examples are speed monitoring displays, smart meters to control energy consumption, or mobile apps that remind their users to exercise more. Healthcare providers also make use of such solutions and have started to equip their customers with “smart technology” that tracks their actions and daily routines. A healthy lifestyle is rewarded with a fee reduction – or, in other words, not using such technology is punished by paying more.

Such products evidently have high potential to improve our lives, but they also raise serious ethical concerns: Should we consider the induced changes in behavior as manipulation? And if so, in which cases might such manipulation be justified? Does the design of these products show enough respect for the autonomy, dignity, and privacy of their users?

The discussion of such normative issues has not yet reached any definitive conclusions. In general, we are dealing with a relatively young product category that obviously calls for new models, metrics, and quality criteria. While users and designers are confronted with such technologies in their daily life and working environment, many established criteria of “positive user experience” (e. g., efficiency, comfort) are no longer applicable. Instead of making life easy and smooth, technologies for behavior change often deliberately create friction. The idea of an aesthetic of friction [3] is to break up routines and inspire reflection. For example, Keymoment [4] makes the choice between taking the car or the bike more deliberate: if the user takes the car key, Keymoment throws the bike key at the user’s feet. You can pick it up, hang it back, and still take the car – or reflect on what might be good for your health and the environment. However, the aesthetic of friction is only one possible design principle. It might not be suited to all contexts of behavior change, and especially its long-term effects still require further exploration.

Our goal is to develop, ideally in an interdisciplinary effort, more general standards, design guidelines and quality criteria that help us to describe, design and evaluate such products, also considering ethical perspectives.

As a start to this endeavor, the present article points out some central questions about the potential, current challenges, and ethical issues in the field of technology for behavior change. The following sections are based on a discussion between experts with backgrounds in psychology, design, media informatics, and philosophy. Sarah Diefenbach, whose background is in psychology, led the discussion. Her current research centers on technology design as a chance to support self-improvement and well-being, but also on the critical side effects and ‘unhealthy routines’ initiated through technology and social media. Andreas Kapsner is a philosopher whose main focus has been the question of to what extent governments should make use of such technologies and techniques. The idea that this potential to lead citizens towards “health, wealth and happiness” should be vigorously exploited has been popularized by Richard Thaler and Cass Sunstein in their best-selling book “Nudge” [10]. Matthias Laschke, whose background is in design and human-computer interaction, focuses on interactive objects that help people change their behavior in order to achieve personal goals and to support self-realization and self-improvement. He introduced the “aesthetic of friction” as a general design guideline to address the aforementioned design intentions, such as well-being and self-improvement, through interactive objects. Jasmin Niess is a psychologist whose research focuses on interactive technologies for self-improvement (e. g., online training to reduce stress, fitness gadgets). Her main interest is the utilization of psychological knowledge in the field of user experience research and technology for well-being. Daniel Ullrich represents the field of media informatics. In his research he focuses on the influence of machines – particularly robots – on human behavior, affect, and attitude.

Figure 1

Keymoment [4].

1 Potentials, Risks, and Different Types of Motivation to Change

SD

Welcome to our discussion – glad to have you all here today! To start with our common ground: I think we all agree that there is some potential for technology to support people in changing (or optimizing) their behavior. But where do you see the biggest opportunities? Can you give some positive examples – domains or specific concepts you like?

ML

Technology can lend support in cases where people set goals for themselves, but are not able to achieve them on their own.

SD

Ah, the typical January 1st users. This is a term used by Agnis Stibe [9] to describe the main target group of persuasive technologies: people who would like to change their routines but rarely succeed in doing so – the typical New Year’s resolutions that end around February. Is this what you mean?

ML

Yes exactly. When you want to change but need help on the way. There are so many areas where this might happen … fitness, health behavior, sustainability or eco-friendly behavior.

Figure 2

January 1st users as the main target group of persuasive technology [9].

1.1 Goal-setting

SD

But the important aspect is that people already formulated this goal for themselves?

ML

Yes. This would be the ideal situation. But in reality it won’t always be like this. Products could also help you to explore or detect areas where you can still improve. You could identify possible goals and, in the next step, technology can support you in achieving these goals, motivate you, push you or something like that…

DU

And even if the goal is already clear, technology can make the activity more pleasing. There are some nice examples from the area of gamification. One thing I remember very well is a cycling trainer in a fitness studio, enhanced with video and a connection to other devices. You were cycling through a virtual countryside, in a race with about ten other virtual people – and, if you wanted, with your training partner cycling on a second device. So the race was between you, ten virtual opponents, and your real friend. The whole cycling experience was designed to be very responsive: when you were cycling up a virtual hill, you needed to pedal harder. I never realized how exhausted I was until I had reached the finish line.

1.2 Gamification and Intrinsic Versus Extrinsic Motivation

SD

So you generally see gamification as a promising way to better our lives? Make everything more fun?

DU

No, not in general. A problem of many concepts in this area is that they just attach some external reward that has no relation to the original activity. This may even destroy any intrinsic motivation. There is this “carrot on a stick” which makes you do something just to get that carrot. This is also how many successful computer games work: people do stupid, repetitive tasks and spend hours of their lifetime just to be rewarded with some points. It’s kind of sad that even in the serious area of changing behavior towards positive goals, many designers don’t find better motivators than just adding points. All these carrots are only short-term impulses.

JN

… so when the carrot has been eaten, the next carrot has to follow immediately. Otherwise the user would fall back into the old pattern of behavior, right?

DU

Yes, exactly. Another tricky thing about most successful computer games is that you actually never reach the carrot. This is what keeps you in the loop. There is always someone else with a higher score, so you can always go further, achieve more, be better.

1.3 Competition

ML

I think that aspect is very interesting – what is the actual motivation behind the activity? Daniel, when you told us about the cycling trainer your eyes were sparkling when you mentioned the competition and your friend on the second cycling trainer. But then you emphasized the authentic cycling experience, the landscape and the reactive feedback, as an important motivational aspect. I really wonder: If the competitive elements were removed, would the whole experience still be motivating? Do you think the competitive elements are necessary at all?

DU

… in this example, competition is an important aspect to make the activity more fun. Sports are often associated with a competitive setting, so this is somehow a natural relation. But I think it would work without the competition aspect as well. Maybe not as well, but in my opinion, it would work.

AK

My guess is that it would have been much less effective if the second cycling machine had been removed.

DU

I have tried it without the second machine (and without my friend) and it was fun as well. But you are right, the experience was different, on another level. You set the pace, you can adjust your virtual opponents, that’s it. But you cannot adjust your living opponent – your buddy on the second machine will cycle as fast as he wants to.

AK

I think it makes a big difference if the opponents are real or only virtual. For example, I play Japanese chess. It is very difficult to find people to play with here in Europe, so I play it mostly online. But always against other people, somewhere else in the world, never against the computer. The feeling, the experience is quite different, even though what I see on the screen is the same. If I win, I know there is some poor, angry guy sitting in Japan, and if I lose I know this person will be happy. I have to admit that this aspect of competition between real people is somehow important to me.

1.4 Concept of Man

SD

Any concept for behavior change implicitly builds on a particular concept of man. The humanistic approach assumes that human beings want to improve and actively use their environment for personal growth; they only need to understand and gain insights about their own behavior and possibilities to change. If technology is to support this process, it can do so in an open dialogue on equal footing. In the simplest form, this could be informative messages combined with a friendly suggestion, for example: “Physical activity reduces your risk of cardiovascular disease. What do you think about a daily five-minute workout?”

An alternative concept of man could be that humans are lazy by nature: they react to punishment and rewards, and they only change if they have to. With this approach in mind, you would not try to persuade people through information. Rather, you would try to change them unnoticed, justified by the “greater good”. With this approach, the designer is in the position of power. The designer knows what the world needs, what people need. People won’t manage to improve by themselves. I have to trick you – for your own sake.

JN

I think both approaches, the humanistic perspective and the “carrot-on-a-stick” approach, can be helpful in the context of technology for behavior change. To give an example: depending on my mood and psychological state, it can be useful if someone pushes me (a little) or somehow punishes me if I am not doing my workout program. There will be days when I am highly motivated and happy to have the opportunity to work out. But to come back to the aspect of personal goal setting: for me it is important that there has been a conscious decision at some point. In a first step, you have to decide that you want to use the cycling trainer or to install a mobile app that reminds you to exercise more. If the government starts making this decision for you, it becomes dangerous. Such paternalistic approaches are definitely a potential risk in the field of technology for behavior change.

AK

I agree. As long as these technologies are implemented by the user’s own choice, there are relatively few worries. You are motivated to improve yourself in some way, and in order to do so you pick some technical device to assist you in achieving your goal. As long as this is not externally imposed on us, everything is fine. But this is already a point where it gets a bit tricky. Say, health insurers decide to increase their prices, but give bonuses to those who use self-tracking devices that the insurer can monitor. (This, by the way, is a nice illustration of a point that I have been trying to draw attention to in my work: many of these behavioral technologies raise serious worries about privacy and data protection [2].) In any case, in such a scenario it is still your decision to use the technology, but no longer quite a free one.

ML

But again, it is not that easy. Of course, such approaches are a bit risky, but I have some sympathy for them. We have to understand that my own individual well-being – let us say my health – concerns society as well, because I am part of this society. From my subjective, affective perspective, such forced self-tracking approaches are not good. But from the insurance and society point of view, I totally understand that they want my data. In his bestseller “To Save Everything, Click Here”, Evgeny Morozov [5] describes this issue: on the one hand, we want technology to help us overcome daily obstacles; on the other hand, it should not be too technocratic. People are able to make good decisions for themselves and change their behavior on their own. However, failure should still be part of this process; it is even a source of insight. Maybe failure even guarantees autonomy and freedom.

2 Different Types of Influence and Societal Versus Individual Goals

SD

Let us discuss this further: the question of how much pressure you put on people to change, and the question of apparent versus hidden influence. Do you have any ideals or design principles regarding these issues?

ML

For me, an important issue is to identify ways of influencing behavior that are transparent in their ambition and also leave room for deviant behavior. We should not design technologies that lead to perfectly homogeneous behavior, but leave room for appropriation. People should be able to behave differently than we as designers intended. We are already experimenting with such approaches, for example with Keymoment, which we mentioned at the beginning of the discussion. It suggests taking the bike more often by throwing the bike key at your feet (if you take the car key). But you have all the freedom to behave differently. You can put the bike key back on the board. You can also “switch off” the mechanism by putting the bike key on top of the board, so it will not fall down the next time you take the car key. You can even trick the technology and switch the two keys. This aspect is intentionally designed into it. The technology is naïve, only a means; you decide how you use it. If your goal is to become fatter instead of fitter, Keymoment will support that as well. The technology should be open to any goal that you define for yourself.
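As an illustration of the appropriation possibilities just described, here is a minimal sketch of Keymoment’s interaction logic in Python. The actual Keymoment [4] is a tangible, largely mechanical artifact; the states and method names below are assumptions made purely for illustration, not its implementation.

```python
from enum import Enum, auto

class BikeKeyPosition(Enum):
    HANGING = auto()   # default: the bike key hangs below on the board
    ON_TOP = auto()    # user has "switched off" the mechanism
    ON_FLOOR = auto()  # Keymoment has dropped the key at the user's feet

class Keymoment:
    """Illustrative model of the 'friendly friction' interaction [4]."""

    def __init__(self) -> None:
        self.bike_key = BikeKeyPosition.HANGING

    def take_car_key(self) -> None:
        # The only intervention: if the bike key hangs in its default
        # position, it is dropped as a gentle, ignorable reminder.
        if self.bike_key == BikeKeyPosition.HANGING:
            self.bike_key = BikeKeyPosition.ON_FLOOR
        # Taking the car is never blocked.

    def hang_bike_key_back(self) -> None:
        # The user may simply restore the default and still take the car.
        self.bike_key = BikeKeyPosition.HANGING

    def put_bike_key_on_top(self) -> None:
        # Appropriation: placing the key on top disables the friction.
        self.bike_key = BikeKeyPosition.ON_TOP
```

The point of the sketch is that every state transition is triggered by the user: the artifact creates one moment of friction but never blocks a choice, and the user can disable or even invert it.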

DU

Keymoment is a nice example. Surely freedom of choice – following the technology’s suggestions or not – makes sense here. But in general we need to discuss how much of that freedom is really desirable and whether technology should be open to any personal goal that users might have. So far, we have only talked about – in our view – positive, personal goals, or at least goals that do not hurt others. But what if my goal is anti-social behavior? What if I use an app about social events to find opportunities for pickpocketing? Would you still say: well, if this is your goal, the technology should support it?

ML

Hmm, indeed this is tricky. But still I would say that as a minimum, the user should always have some way of making a choice that differs from the one the designer intended.

AK

In philosophy and political theory, these questions are reflected in the distinction between soft and hard paternalism. Paternalism is the idea that someone, and usually we are thinking here about the state, but it might also be a company or NGO, thinks it knows what is best for you and tries to make you do it. The traditional form of this is hard paternalism, where people do not have a real choice, such as when hard drugs are made illegal. But the relatively new trend is to move towards soft paternalism, measures that leave just the sort of freedom to opt out that you were just discussing. And it’s in the context of soft paternalism that many of these psychologically informed designs get proposed.

JN

I also think that leaving a good deal of freedom is important. We must not forget that people can still be capable of making good decisions for themselves and do not necessarily need technology to follow the “right path”. We would do them a disservice if we overrode their conscious choices in these instances. Moreover, if technology or any external instance makes all these decisions and forces people into what is good for them, they never have the chance to develop their own sense of self-control.

AK

Right. In the political debate, Sunstein and Thaler often use a GPS system as an analogy to their interventions: It tells you where to go, but you are free to choose to go elsewhere. But even with that freedom, we see that people who rely on GPS technology lose their orientation capacities quite quickly.

SD

Do you know the model of Tromp and colleagues [11]? It suggests two dimensions of influence: power (strong vs. weak) and visibility (hidden vs. apparent). Where would you position your approach? Which position would be morally and ethically acceptable? And which position would you assess as most effective? Are there discrepancies?

DU

When choosing for myself, I would prefer a rather strong influence; when choosing for someone else, I would favor a rather weak one.

AK

It depends, some goals are important enough to warrant even a coercive approach. Everyone agrees that it is important to have traffic rules and that people have to give up some “personal freedom” for the sake of safety. But how about making helmets obligatory for cyclists? Personally, I would not mind such a law. Of course, a first impulse is to say “That’s my business, and no one else’s”. But it’s not quite so simple. It is not only your business – what you do also creates a social norm. Many people do not wear helmets because they fear looking like a dork. By not wearing a helmet yourself, you add to the social discomfort of others who do. A law, even if it is never actively enforced, might create a new social norm here.

DU

So you advocate a normative approach? Like: social norms should be formed, someone should make the decision? We want this social norm – let’s establish it?

AK

In some cases, yes. And in other cases I am not quite sure. The central and most interesting question is: Who gets to make the decision?

DU

Well, I think that is a dangerous path. The whole problem is knowing where to draw the line. Let’s revisit the health insurance example: the premium is raised, and you can only keep your old rate if you are willing to let the insurance company track you. Of course, it is still optional, voluntary, your decision. But actually it is not: if you cannot afford the higher rate, you have to play by their rules. It effectively means forcing people to change, and there should be enormous societal benefits to justify such steps. So the next important question is: How do you calculate such a societal benefit? To name a provoking example: with a speed limit on German motorways you would be able to save many lives each year, but maybe it would weaken the German economy. So how do you weigh 2000 human lives against some economic advantage? How can you bring these dimensions together, and who makes this decision?

SD

Good point. Who should make the decisions? Who decides which goals are worth pursuing and supporting through technology for behavior change, also in the public sector? To what degree can societal goals be imposed on individuals?

ML

Again, it is important to ask who the main beneficiaries of an intervention are. Is it me, is it the society, is it someone else – like the insurance company?

DU

Difficult question and risky topic. Changing individual behavior for the sake of society, the earth, or the well-being of us all sounds honorable. But I am not sure whether so-called societal goals really improve the well-being of us all. In effect, the authorities decide which standards and opinions are the right ones. And it is difficult to draw a line, because there is not really a line here.

JN

Yes, you are right. It is really difficult to draw a line here, and the fast technological progress and all the new opportunities for changing and tracking behavior make it even more complicated. In my opinion the two most important questions are, as you already said: Who makes the decisions? And who are the main beneficiaries of the behavior change? I think we are really facing a new sociopolitical challenge. It is absolutely important that various disciplines join forces and work on these questions. And this has to be a continuous process: technological development continuously demands new answers.

Figure 3

Different types of influence in design for behavior change [11].
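To relate the examples from this discussion to the model in Figure 3, the following minimal sketch encodes its two dimensions in Python. The quadrant labels (coercive, decisive, persuasive, seductive) follow Tromp and colleagues [11]; the placement of the individual examples is only one debatable, illustrative reading and not part of the original discussion.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Influence:
    name: str
    strong: bool    # power dimension: strong vs. weak
    apparent: bool  # visibility dimension: apparent vs. hidden

    @property
    def influence_type(self) -> str:
        # Quadrant labels as used by Tromp, Hekkert and Verbeek [11]
        if self.strong and self.apparent:
            return "coercive"
        if self.strong and not self.apparent:
            return "decisive"
        if not self.strong and self.apparent:
            return "persuasive"
        return "seductive"

# Illustrative, debatable placements of examples from the discussion
examples = [
    Influence("mandatory bicycle helmet law", strong=True, apparent=True),
    Influence("Keymoment dropping the bike key", strong=False, apparent=True),
    Influence("default option for organ donation", strong=False, apparent=False),
]

for example in examples:
    print(f"{example.name}: {example.influence_type}")
```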

3 Foci and Blind Spots in Different Disciplines

SD

When you consider this challenge – the ethical design of technology for behavior change – from your own perspective or discipline: What distinguishes your discipline from other disciplines? Where do you see strengths of your discipline, where maybe weaknesses?

ML

Designers are often not that aware of the normative character of the things they create, or that there can be side effects of their designs. For example, I can design a self-driving car, but as a consequence the driver will experience less competence. But it is also difficult to find this kind of information anywhere. For me as a designer, it would be great to have some kind of ethical guideline. I think there is much information available that never enters design.

AK

Philosophy provides a lot of this; ethical guidelines and criticism are something philosophy is good at. But in some sense this is also the weakness of my discipline, because we can get hung up on criticism. I was recently talking to someone who works in the British government, and he said: “Well, you philosophers are a smart bunch, I give you that. But you’re a pain to work with, because you just have to criticize everything to death.” I thought about that, and to some extent he is right. In my discipline, critique is a way of showing respect, of showing that an idea is interesting enough to get us thinking. But in an interdisciplinary setting, we might come to see that there is also a need for more pragmatism and more positive action beyond criticism.

DU

In the field of media informatics we have issues similar to those Matthias mentioned regarding design: we tend to think in possibilities rather than in consequences. Psychological aspects like social pressure, interpersonal relationships, and social interactions are rarely considered.

ML

Yes, exactly. We should formalize this kind of reflection: a systematic reflection on consequences and so on.

DU

You are right. Something parallel to the dialogue principles for usability: principles for behavior change, but with a humanistic focus.

JN

In contrast to design and media informatics, the weakness of psychology lies in the other direction: it often misses the practical application. We focus on human experience and behavior and try to describe change processes. But our approach is often too complex, too model-like; we think in consequences rather than in possibilities. That is why we often get stuck when it comes to practical implementation. Therefore, I think design, media informatics, philosophy, and psychology can complement each other very well to improve this situation.

SD

Interdisciplinary discussion, closer collaboration, this is exactly why we are here together. Which tasks do you see as most relevant as a basis for this? For example, is there a need for a shared model of behavior change? Or a common vocabulary?

ML

The challenge is to bring together the different strands of thinking from the separate disciplines. Much is happening in parallel, but bringing everything together can quickly get confusing.

AK

Right! But another challenge, even before the synthesis that Matthias calls for, is to develop a common vocabulary: to get an understanding of which concepts we share and which we use in importantly different ways. And in those cases where our concepts differ, it is interesting to ask why they do so.

4 Conclusion

SD

In conclusion: What are, in your opinion, the most pressing questions that we will have to address together, as scientists, designers, members of the industry and, lastly, as users and citizens?

ML

First, it would be great to compile a collection of all the prototypical examples of behavior-changing technology from different disciplines, such as the fly in the urinal, the piano stairs, or the default for organ donation in the case of nudges. We could then find systematizing principles that guide us to useful categories of these interventions. Those might help bring together discussions that are running in parallel in the different disciplines.

JN

On a more sociological level, it would be interesting to get a comprehensive view of all the hopes and promises that are currently attached to behavior-changing technologies – the realistic and the unrealistic ones – and to contrast those with the worries and ethical conundrums we run into when we decide where to apply such technologies. This could help us develop a big-picture perspective on the possible future courses we could navigate with these projects.

AK

An interesting aspect for me would be the role of human heuristics and biases as a possible legitimation for nudging. The whole debate about nudging rests on the theory of heuristics and biases (e. g., [12]). In a nutshell, this theory depicts human agents as deeply and incorrigibly irrational. This irrationality is used to justify nudging: people are saved from the consequences of their irrational choices, often via manipulation of these very same psychological mechanisms. But you could also interpret such biases in human reasoning in a radically different way. For example, Gerd Gigerenzer and his co-authors have long argued that these heuristics and biases are not a sign of irrationality, but rather a mark of reasoning optimally attuned to its environment. They even describe heuristics as the foundations of adaptive behavior [1]. If you see it like this, there is nothing that people need to be saved from, and the whole justification of nudging becomes questionable.

Figure 4

Interdisciplinary discussion on technology for behavior change.

DU

Another question in this vein: Can we design technology that immunizes us against those mistakes, so that we don’t make them in the first place, as opposed to technology that corrects our faulty behavior? The heuristics and biases camp seems to assume that this is impossible – that we cannot shield ourselves from our biases even if we know about them. But maybe we simply haven’t thought about such solutions enough; that seems close to what people in the Gigerenzer camp you mentioned think, as well as other cognitive scientists like Richard Nisbett [7]. If de-biasing is feasible, then a whole new range of nudges and technologies is waiting to be explored.

About the authors

Sarah Diefenbach

Sarah Diefenbach is professor of market and consumer psychology at Ludwig-Maximilians-University Munich. Her research focuses on the design and evaluation of interactive technology, with special attention to emotional experience and psychological needs.

Andreas Kapsner

Andreas Kapsner’s work in philosophy spans the field from theoretical to practical philosophy. He holds a PhD in cognitive science and works at the psychology department of the LMU Munich. He is also co-founder of nifu.tv, a collective of interactive media artists and scientists dedicated to the joint exploration of complex ideas (www.nifu.tv).

Matthias Laschke

Matthias Laschke is a postdoctoral researcher in Prof. Dr. Marc Hassenzahl’s workgroup at Folkwang University of the Arts, Germany. He focuses on the design and aesthetics of transformational objects (“pleasurable troublemakers”) and persuasive technologies addressing diverse topics such as sustainability, procrastination, willpower, adherence, and driver concentration in traffic.

Jasmin Niess

Jasmin Niess is a researcher in Prof. Dr. Sarah Diefenbach’s workgroup at Ludwig-Maximilians-University Munich, Germany. Her research focuses on interactive technologies for self-improvement, in particular on the implementation of psychological knowledge within these products in order to improve the user experience.

Daniel Ullrich

Daniel Ullrich is a researcher at the Institute of Informatics at Ludwig-Maximilians-University Munich. His research focuses on the interaction with and influence of robots in the field of human-robot interaction, in particular robot personality and the application of social-psychological mechanisms.

Related Literature

[1] Gigerenzer, G., Hertwig, R., & Pachur, T. (2011). Heuristics: The Foundations of Adaptive Behavior. Oxford University Press. DOI: 10.1093/acprof:oso/9780199744282.001.0001

[2] Kapsner, A., & Sandfuchs, B. (2015). Nudging as a threat to privacy. Review of Philosophy and Psychology, 6(3), 455–468. DOI: 10.1007/s13164-015-0261-4

[3] Laschke, M., Diefenbach, S., & Hassenzahl, M. (2015). “Annoying, but in a nice way”: An inquiry into the experience of frictional feedback. International Journal of Design, 9(2), 129–140.

[4] Laschke, M., Diefenbach, S., Schneider, T., & Hassenzahl, M. (2014). Keymoment: Initiating behavior change through friendly friction. In Proceedings of the NordiCHI 2014 Nordic Conference on Human-Computer Interaction (pp. 853–858). New York: ACM Press. DOI: 10.1145/2639189.2670179

[5] Morozov, E. (2013). To Save Everything, Click Here: The Folly of Technological Solutionism. Philadelphia, PA: PublicAffairs.

[6] Niess, J., & Diefenbach, S. (2016). Communication styles of interactive tools for self-improvement. Psychology of Well-Being, 6(3), 1–15. DOI: 10.1186/s13612-016-0040-8

[7] Nisbett, R. E. (2015). Mindware: Tools for Smart Thinking. Macmillan.

[8] Sunstein, C. (2013). The Storrs Lectures: Behavioral economics and paternalism. Yale Law Journal, 122, 1826–1899. DOI: 10.2139/ssrn.2182619

[9] Stibe, A. (2016). Persuasive cities: Health behavior change at scale. In A. Meschtscherjakov, B. De Ruyter, V. Fuchsberger, M. Murer & M. Tscheligi (Eds.), Persuasive Technology 2016 Adjunct Proceedings (pp. 42–45). Salzburg: Center for Human-Computer Interaction, University of Salzburg.

[10] Thaler, R., & Sunstein, C. (2008). Nudge: Improving Decisions about Health, Wealth, and Happiness. New Haven: Yale University Press.

[11] Tromp, N., Hekkert, P., & Verbeek, P. P. (2011). Design for socially responsible behavior: A classification of influence based on intended user experience. Design Issues, 27(3), 3–19. DOI: 10.1162/DESI_a_00087

[12] Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. DOI: 10.1126/science.185.4157.1124

[13] Ullrich, D., Diefenbach, S., & Butz, A. (2016). Murphy Miserable Robot: A companion to support children’s well-being in emotionally difficult situations. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (pp. 3234–3240). ACM. DOI: 10.1145/2851581.2892409

Published Online: 2016-08-16
Published in Print: 2016-08-01

© 2016 Walter de Gruyter GmbH, Berlin/Boston
