Published by De Gruyter Mouton, November 1, 2016

The ethics of researching unlikeable subjects

Language in an online community

Sofia Rüdiger and Daria Dayter


This article explores ethical conundrums in linguistic research on online platforms populated by ‘pick-up artists’ (PUAs), a community that learns and practices speed-seduction for short-term mating. Originally a male heterosexual community, PUAs encourage men to use manipulative strategies to select, pursue, isolate and sexually conquer women (Hall, Jeffrey A. & Melanie Canterberry. 2011. Sexism and assertive courtship strategies. Sex Roles 65(11). 840–853). Using so-called ‘field reports’ – detailed accounts of interactions with women – from Anglophone PUA forums as our data, we investigate the narrative stance devices that PUAs use to impose the game frame on their activities. Unavoidably, sampling language in an environment where risky topics are under constant discussion presents ethical dilemmas. The article focuses on how conducting research in a hostile community may influence traditional methodological decisions. Through the example of the PUA community, we discuss the vulnerability of subjects and potential harm in linguistic research, and whether anything gives the researcher the freedom to forego informed consent, especially when dealing with publicly available data in an open forum. We also address the myth of the unbiased researcher that is prevalent in contemporary social science, arguing that an analysis should benefit from the fact that the analyst inevitably takes part in the “fight to construct reality” (Latour, Bruno & Steve Woolgar. 1979. Laboratory life: The construction of scientific facts. Princeton, NJ: Princeton University Press).

1 Introduction

When we set out to study the language of Pick-Up Artist (PUA) forums, we did not envision that this project would lead to a full-fledged engagement with research ethics. We knew everything we needed to know about online research methods, including data collection, analysis and of course research ethics. At least, that is what we thought. But the more we engaged with the data and other researchers in the field, the more concerns regarding ethical and scientific conduct came up. In particular, difficult questions arose regarding the informed consent process and our own objectivity. Instructors and seasoned peers often impress upon students and young researchers the importance of being objective. But is absolute objectivity a realistic goal? Are scientists ever able to be utterly neutral regarding their research? And, of particular relevance to our research, what happens if you find out that you dislike your research subjects?

Recent treatments of research ethics in digital social science, most notably the AoIR guidelines by Markham and Buchanan (2012), emphasize the contextual understanding of harm and our responsibility to subjects, but also to society. It is clear that ethical decisions involve factors much more complex than physical invasiveness or even direct harm to reputation. Markham and Buchanan (2012) advocate a balance of fundamental ethical principles with a process approach to ethical decision-making. An attempt to achieve this balance led us to continue with the project that involved unlikeable subjects, which one might otherwise be tempted to abandon in view of the less than black-and-white ethical picture.

To explore the options of interacting with our subjects, a community of pick-up artists, we find Douglas’s (1976) notion of limited informed consent quite useful. It is, perhaps, liberating to admit that not only may we as researchers be biased against the subject groups, but the subjects may be biased against us – for reasons of research fatigue, previous experiences, or our gender. ‘Limited consent’ in this context covers the whole range of compromises that can be made over fully informed consent (an idealisation in itself), including well-established practices in computer-mediated communication (CMC) research such as obtaining consent post factum or only from the medium administrators. Introducing limited informed consent chimes with the tendency to consider various methodological decisions as a continuum instead of a dichotomy. Page et al.’s (2014) cline of informed consent in CMC involves adjusting the stringency of ethical demands according to how likely it is that the informant can be identified from the published data. The AoIR Ethics Committee (Ess et al. 2002: 5) argues in favour of a public/private CMC data cline: “the greater the acknowledged publicity of the venue, the less obligation there may be to protect individual privacy, confidentiality, right to informed consent etc.” – a sentiment which is picked up by Bolander (2013), who suggests that a two-way public/private distinction is too simplistic. These insightful contributions recommend using responsibility to our subjects as an ethical benchmark, rather than blind submission to ubiquitous informed consent. They invite us to reconsider the notion of harm habitually adopted from medicine, which relies on key phrases such as “physical invasiveness”, “risk”, or “withdrawal from the study”. We are certain that more than one PhD student has felt the absurdity of explaining the “non-invasiveness” of their research in the ethics section of their thesis. It is self-evident that a linguistic study is physically non-invasive, with the exception – admittedly important, but rare – of certain phoneticians who videotape the movements of a person’s larynx by lowering a camera down their throat.

Our research into the language on PUA forums certainly did not involve anyone’s larynx. Pick-up artists are a loose community of heterosexual men [1] who encourage the use of manipulative strategies to select, pursue, isolate and sexually conquer women (Hall and Canterberry 2011). The underlying belief of the community members is that success with women (in this case, short-term mating) can be achieved through learning and practicing certain techniques. A pervading element of PUA ideology is the scientification of the flirting and seduction process (Denes 2011). The PUA movement also has an economic background: the self-proclaimed masters of the art, the PUA gurus, travel around the world to teach (pricey) seminars. The PUA community is very active online, and many forums are dedicated to the exchange of advice, instruction and experience, and to the spreading of knowledge. The main source of language data for our study was the narrative accounts of successes and failures that pick-up artists habitually post in thematic forums under the heading of “field reports”. We zoomed in on the specialized terminology (the numerous acronyms such as IOI, ‘indicator of interest’, or adaptations of stock trading jargon: “put her into scarcity”, “portray her as abundance”) which achieves the formalization of seduction techniques. Our second area of interest was the framing devices in the depiction of successes and failures in PUA field reports. Apart from the ‘scientificated’ terminology, framing devices included left-dislocated clauses, agentless passives and marked pronoun choice. Overtly, field reports are supposed to serve educational purposes – a sort of postflight debriefing with comments from the experienced members – but also, not least, to brag.

Let us come out with it at the very start: we do not have informed consent from the participants of the forums. This was not, however, an authoritarian decision to exclude participants from the dialogue. We contacted moderators and forum members using email addresses provided on the platforms, both US and UK based, but none of our attempts to reach out with a short description of the project received any reaction. We consider it a positive sign that our emails were not blocked and there was no explosion of indignant chatter on the forums about judgemental researchers – it appears that our attempts at contact were missed rather than pointedly ignored. The difficulty with locating the informants represents one of the hurdles in digital ethics that we will discuss below and that ultimately led us to utilise limited informed consent.

Our views on the subject grew out of many conversations we held over the data, bouncing ideas back and forth and trying to come to terms with the specimens of graphic narrative. Each of us has her own ethical hobby horse. In the end, we decided to preserve our separate voices in the write-up and give every co-author her space; thus, the first half of the article is mostly the work, and the viewpoint, of Sofia Rüdiger, and the second of Daria Dayter. Throughout the article we draw parallels with biomedical ethics; this is not to imply that we see no difference between the areas of application or the complexity of issues involved, but to provide a point of departure and to trace the antecedence of many concepts in research ethics that we take for granted today. Engaging with landmarks identified in contemporary work on research ethics, we explain how we reconcile the adoption of limited informed consent with our responsibility to subjects and understanding of potential harm (issues which we by no means want to discard!). We believe that openly admitting bias, and accepting that all social research ultimately involves perspectivization, allows one to avoid the main ethical pitfall of research on unlikeable subjects: framing one’s disparaging account as the one and only scientific truth. We want to present our perspective on PUA discourse as just that, a perspective, one of many possible interpretations of the social world that we interact with.

2 Informed consent: Sign this and we are ready to go! – Sofia Rüdiger

Academics’ interest in and concern with ethical questions around the research process is embodied in book-length philosophical, legal, and ethical treatises (see e. g. Faden and Beauchamp 1986; McDowell 1991; Wear 1998). Although disciplines are informed by their individual research histories, cultures and perspectives, a commonality between them can usually be found in a chapter on research ethics, mirroring the academic community’s active engagement with ethical issues (see e. g. Bowern 2008 for linguistic fieldwork; Cohen et al. 2007 for education research; Dunn 2013 for social psychology; Smith 2011 for business studies; and Veal and Darcy 2014 for sport studies and sport management). The depth with which these topics are treated varies, but one of the inevitable key phrases in ethical considerations regarding human research subjects is ‘informed consent’. This notion reaches a near magical dimension, quasi-absolving researchers from their responsibilities towards the research subjects: consent is often (wrongfully) equated with compliance with research ethics. Nevertheless, many questions remain open: When is informed consent truly informed, and how can it be obtained? Furthermore, consent is traditionally obtained before commencement of a study, but research goals often shift as the work progresses, so “what is it that participants are consenting to when they agree to join a study?” (Miller and Bell 2002: 65). In this section, I examine the notion of consent in more detail. First, I trace the history of consent from its roots in medical research to its application in linguistics. The subsequent section maps the process of ethical decision-making that we undertook in our research on PUA and links it to the notion of consent.

2.1 A short history of informed consent – from medical research to CMC studies

Informed consent and related ethical considerations in the social sciences have their roots in medical research, which warrants a short overview of the history of consent in medicine and of its development up to its application in applied linguistics studies.

In medicine, informed consent needs to be sought in two situations: scientific studies (subjects give consent to researchers before taking part in a study, e. g. testing a newly developed drug or treatment) and medical care contexts (patients give consent to practitioners before undergoing medical treatment or procedures; see Tronto 2009). Both kinds of consent are connected to the concept of autonomy (but of course also to other concepts such as authority and power, Tronto 2009), which can legally be framed as “the right of self-determination” (Faden and Beauchamp 1986: 43). Informed consent in the second case, where a practitioner seeks the consent of a patient before performing a treatment or procedure, has its roots in the Hippocratic writings. Regarding research with human subjects, the attention to moral issues and questions is a more recent development, even though the research itself has an ancient tradition (Faden and Beauchamp 1986: 151). The Nuremberg Code (1948) and the Declaration of Helsinki (1964) laid the foundations for an in-depth engagement with informed consent in biomedical research and were followed by a plethora of academic discussions and publications (Faden and Beauchamp 1986: 153–157).

Many horror stories of medical research can be found, such as the case of Willowbrook State School, where, between the 1950s and 1970s, children with severe mental disabilities were deliberately infected with a strain of hepatitis in order to facilitate medical experiments (Faden and Beauchamp 1986: 163). In addition to criticism of cases in which researchers failed to obtain subjects’ consent, the adequacy of informed consent in general has been under heavy scrutiny in biomedical research, where the randomized assignment of research subjects to groups can, for example, lead to sick subjects taking placebos instead of receiving more adequate care. This situation is complicated even further by the use of double-blind methods, where neither the physician nor the patient is aware of who is receiving which treatment. Furthermore, medical researchers generally do not have the well-being of the individual patient in mind, but rather the outcomes which the research can have for humanity in general (see the “therapeutic misconception” as discussed by Appelbaum et al. 1982). Considering this, it seems surprising that the notion of informed consent has often been taken over with little or no modification from biomedical research to applied linguistics research.

Whereas the attainment of consent at least appears straightforward in ‘offline’ language research (but see the discussion in the following section on how ‘informed’ the consent can truly be), the study of language on the internet adds a further dimension of difficulty. Online communication frequently blurs the boundaries between public and private discourse (see e. g. Bolander and Locher 2014: 17; Lawson 2004: 86–87; Giaxoglou 2016; Spilioti 2016) as well as authors and research participants (Markham and Buchanan 2012; Pihlaja 2017). For Bolander and Locher (2014: 18) it is essential that ethical questions and problems in the study of online language material are openly discussed in “scholarly output”, consequently enabling the research community to engage with the thought processes of the researcher(s). This would also allow the detection of cases where ethical considerations were forgone for convenience’s sake. Page et al. suggest that the decision of whether to seek informed consent in internet research can be based on a cline “where the more closely a project is likely to identify a particular individual the more likely it is for informed consent to be required” (2014: 72). Further factors on this cline are the vulnerability of research subjects and the sensitivity of the topics investigated (Page et al. 2014: 72).

2.2 Implications for PUA online research

In medical research, subjects can suffer severe consequences such as the side effects of drugs and treatments, sometimes even including the risk of death. Even if no side effects occur, the patient might have been better off with an alternative treatment, without even knowing about it. In the case of our PUA research, community members face no risk of bodily harm. Veal and Darcy (2014: 117) argue that possible harm to research subjects does not only consist of threats to the body and psyche of the subjects but also includes “affront[s] to the subject’s moral principles”. A compromise is possible, however, when the risk of moral or psychological harm is acceptably low (Veal and Darcy 2014: 123). Robinson (2010: 188) also sounds a note of caution when it comes to “potentially incriminating or embarrassing material”. One of the ‘risks’ of being included in a study on PUA could be construed as being found out as a PUA practitioner by family members, friends, colleagues or supervisors, with the respective consequences. But then, PUA is a very public community and its members actively pursue media coverage in order to ‘spread the word’. Although this does not absolve us of ethical responsibility as researchers, who should protect even those research subjects who do not appear to be protecting themselves, we take the publicity-oriented stance of PUAs as one more indication of the low vulnerability of the subjects: many PUAs willingly represent the PUA movement in the media.

I also want to draw attention to the thought that if subjects in medical research have problems grasping the consequences of their participation (even though these consequences can usually be stated in a very straightforward manner), the task is even harder for the participants of an applied linguistics study. Appelbaum et al. (1982: 327) found that even “well-educated, intelligent, relatively asymptomatic patient-subjects with a good overall understanding of research methodology” have significant problems understanding all aspects of the research procedure, even after extensive, sometimes hour-long discussions of research design and after giving informed consent. This raises the question of how ‘informed’ the consent can truly be. It seems, therefore, that in some cases consent has the function of easing the minds of academics rather than really protecting the subjects. As the attainability of fully informed consent is at least doubtful, we suggest that other (perhaps additional) measures should be considered in order to protect the individuals, depending on the research project and context, such as a strengthened focus on the anonymization of subjects and data. The internet, of course, presents a challenge for anonymization, as most data is itself searchable via search engines. Although many PUA forum users choose pseudonyms as nicknames for themselves, it is worth considering that the users who willingly use their real-life first and last names together with other identifying information in the PUA forums might not have any trepidations about being connected to the movement in the first place, despite it being a high-risk environment. This distinction is in line with Markham and Buchanan’s (2012: 13) observation that despite the widespread assumption that CMC is a private affair whose producers require anonymity, there exist users who see themselves as authors and wish to be given credit for their writing (see Pihlaja 2016).

The deception of subjects by researchers is particularly problematic from an ethical point of view and also influences the consent process. Faden and Beauchamp (1986: 183) report cases where research subjects thought they were going to a job interview instead of an experiment or, even more drastically, where subjects were given LSD under its technical name to boost participation rates. Participant observation can often lead to the deception of the observed population, as disclosure of all information is considered counter-productive to research goals (cf. the observer’s paradox). In the case of our PUA research, though, we have not actively participated in the forums or in any other exchanges with their members. Internet research makes it particularly easy to deceive, as researchers can easily fabricate an identity which is conducive to their research purposes. Even though we are female and not inclined to become members of the PUA community, we could, for example, have posed as male newcomers to the forum, eliciting advice and narratives from the other members. I argue that we kept our level of intrusion to a minimum by simply observing. The same level of intrusion is exercised by ‘regular’ internet users who read posts in the PUA forums without being members of the PUA community. Analyzing the data and publishing the results is definitely beyond the level of intrusion by other internet users and of course complicates this matter further. However, my argument here is that identification of individuals is made more unlikely by our focus on the microanalysis of clausal structure and narrative moves, rather than, for example, the more holistic study of selected individuals’ behaviour; in other words, we are interested in the PUA texts rather than the people, a point also made by Page et al. (2014). While it is inarguable that there are elements other than names in the textual material that may lead to the identification of participants, e. g. situational cues or recognisable identities (Cherny 1999), the risk remains low in our case. We would like to point out that the PUA community is enormous and spread all over the world. One is not dealing with a small community of practice that includes identifiable participants but with millions of individuals who follow the same behavioural patterns and use scripted speech. Indeed, the size and uniformity of the PUA online community ensure that a user’s “actions will be submerged in the hundreds (or thousands) of other actions taking place there” (McKenna and Bargh 2000: 60; also cf. Zimbardo 1969). The women who actually found themselves at the centre of PUAs’ attention and are mentioned in the forum posts are protected by the lack of names or personal details. In their writing, the PUAs focus instead on numerical ratings of appearance and general descriptions of behaviour which are highly unlikely to lead to the identification of individuals.

According to Roberts and Roberts (1999: 1028) the informed consent process needs special consideration in the case of vulnerable subjects, such as children, people with physical disabilities and psychological disorders, patients in hospitals and institutions as well as the economically disadvantaged. Further problems arise in complex social/relational contexts, such as research with prisoners and those in terminal care. Furthermore, Roberts and Roberts (1999: 1028) identify those as vulnerable who are “deaf or speaking a different language, feeling desperate and constrained, feeling indebted to or dependent on the research recruiter”. Special safeguards have to be taken when working with vulnerable individuals or groups in order to avoid their exploitation. This has also been recognized by the AoIR Ethics Committee, who state that “[t]he greater the vulnerability of the community/author/participant, the greater the obligation of the researcher to protect the community/author/participant” (Markham and Buchanan 2012: 4). Members of the PUA community are typically male adults with an Anglophone background and do not belong to any of the previously mentioned vulnerable groupings. They are indeed involved in risky topics (such as the ‘seduction’ of women), but they voluntarily subscribe to the community values, including its emphasis on publicity.

In addition to the previously stated arguments, I finally want to draw the reader’s attention to the fact that in the PUA community we have a group exhibiting hostile behaviour patterns (see Denes 2011 on the blurring between rape and seduction scripts). Of course, research ethics also apply when working with such communities. However, I want to argue that in those cases comprehensive informed consent can be forgone in certain circumstances. The PUA data was collected from openly accessible forums from a community of generally non-vulnerable members. The researchers did not engage in any deceptive behaviour or intervene in the forums before, during or after data collection. In lieu of fully informed consent, the researchers assigned pseudonyms in place of PUA forum members’ chosen nicknames (resulting in double anonymization for users who chose an anonymous nickname in the first place), besides the usual anonymization measures such as removing references to names and geographical location. Therefore, we decided to proceed with limited informed consent at this stage of our research project. We are, however, open to re-examining our decision as the project continues, should, for example, forum members answer our attempts to contact them. As a “practice of regular reflection” is essential to “ensure that ethical and methodological considerations are continually reassessed” (Miller and Bell 2002: 67), we will continue to re-evaluate harms and potential risks in the subsequent stages of research through, for instance, adjusting the degree of disclosure of informant details or the length of verbatim quotes.

3 The myth of the unbiased researcher: If you can’t say something nice, say nothing at all? – Daria Dayter

In the previous section, my co-author posed a tentative question: if we do not research a vulnerable, hidden population, may we act more freely in regard to informed consent? I would like to tackle this question from a more practical perspective. It is not so long ago – as late as the 1960s – that social scientists began to question how acceptable it is to deceive and manipulate their human subjects to achieve a clean experimental setting (Jourard 1968; Kelman 1967; Panel on Privacy and Behavioural Research 1967; Schulz 1969). In the context of CMC research, the urgency of their arguments about deception is tempered, since we collect the language post factum and the disclosure of specific research questions does not compromise the data. Ultimately, the informed consent debate in CMC is very much focussed on its implications for data dissemination, such as whether publication is permitted.

The question, therefore, is: why would our subjects refuse to play along? Marginalised groups who have been subject to previous research may suffer from research fatigue (Moore 1996; Atkinson and Flint 2001) which manifests as hostility and suspicion. Barnes (1996), for example, argues that certain population groups are wary of researchers because they can easily be misrepresented, or have been in the past. In stark opposition to the traditional view of a researcher in search of the ultimate unbiased truth, Barnes names other considerations that should guide our behaviour in researching marginalised populations: “There is no independent haven or middle ground when researching oppression: academics and researchers can only be with the oppressors or with the oppressed” (1996: 110).

It is easy to see how the community of pick-up artists could align themselves with the misrepresented and the oppressed in this story. Indeed, that is exactly what happens when the topic of academic investigation of PUA is brought up in the PUA online community. The following excerpts come from a thread on a PUA forum initiated by a sociologist who is looking for pick-up artists to interview [2]:

The doctoral dissertation this woman is writing about PUA is absolute nonsense. She is prejudiced and is therefore absolutely incapable of seeing the positive sides [of the movement] and doing proper research.

Given the attitude which [name] usually demonstrates in her work on PUA, I would advise everyone to stay away.

(Pick Up Tipps Forum, our translation from German)

This critical stance is followed by a few comments that express readiness to participate in the study, but only if the researcher is unbiased and prepared “to view objectively both the good and the bad”. More than that, one of the users is openly mistrustful about the research ethics:

I won’t give an interview because one can easily twist the words and read the most absurd things into what I said. I will gladly fill out a questionnaire in which I can formulate my answers myself in such a way that they are absolutely unambiguous and leave no room for interpretation.

(Pick Up Tipps Forum, our translation from German)

The problem of a disparaging research population is not new in social sciences. I argue that more often than not the hostility of the subjects has to do not with the fact of ‘exploitation by research’, but with how the data is framed in the final write-up. In the context of much research, both academics and their subjects share a mindset that Bloor (1976) dubs “the sociology of scientific error”: an absolutist reading of the facts that presupposes the existence of a single truth and of many alternative, but wrong, versions. Scientific and pop-science literature alike is dominated by the ‘empiricist repertoire’ (Gilbert and Mulkay 1984): a set of stylistic, grammatical and lexical resources that depict the process of knowledge acquisition as objective and independent from the researcher’s personality. Experimental data tends to be given chronological and logical priority; neither the author’s pre-existing commitment to a particular analytical position nor her social ties are acknowledged; the research process is characterised in a conventionalised manner as instances of impersonal, universally effective routines (Gilbert and Mulkay 1984: 56). In scientific texts “the physical world seems regularly to speak, and sometimes to act, for itself” (Gilbert and Mulkay 1984: 56; cf. Wolf 1992). In this simplistic worldview, we assume that a human subject functions as a “stimulus-response machine: you put a stimulus in one of the slots, and out comes a packet of reactions” (Burt 1962: 232 in Schulz 1969).

Images of the investigator as a seeker of absolute truth, and of the subject as a reliable stimulus-response machine, give rise to the myth of the unbiased researcher. It is, however, just that – a myth. Even event reporting is, in itself, a situated activity: reports are always underdetermined by the event and leave varying degrees of interpretative freedom to those who are given the power to disseminate their interpretation. Kneebone (2002: 516) also mentions the keyword ‘power’: “Traditionally, scientific writing is seen as a clear pane of glass through which an observer can see the work of a detached and unbiased researcher. In fact, however, the whole process of presentation is shot through with selectivity – any researcher wields a ‘colonial’ power, choosing which issues to present and which to ignore, how to present them, the framework for the analysis”. A heroic citizen protecting fellow men may become another gun-toting hick from Texas who happened to be on the right side of the latest shooting; a confident ladies’ man who knows what women like to hear may become a manipulative creep cum date rapist. As the ethnographer Gary Fine (1993: 290) succinctly puts it, “[w]e take idiosyncratic behaviours, events with numerous causes, which may – God forbid! – be random (or at least inexplicable to us mortals), and we package them”. As researchers, we need to recognise that we have this power, and accept the responsibility associated with it.

One aspect of the reality that we habitually crop from the picture as linguists is our emotional reaction to the subjects. This is due, perhaps, to the perception that personal sensibilities are irrelevant to all but the most deeply ethnographic reporting. And yet it is inevitable that in qualitative research we begin to like or dislike subjects, identify with or ‘otherize’ them (Bott 2010: 160). In the early 1990s, Fine (1993) called on the academic community to recognise another set of myths (or as he uncompromisingly dubs them, lies) that run parallel to the myth of the unbiased researcher. He pointed out that being an ethical and competent field researcher comprises being kindly, friendly, honest, and fair to your subjects (Fine 1993: 269). Far from the indisputable values of human character, these descriptors all refer to a certain kind of bias a social scientist often takes for granted. Some hidden, marginalised, deviant populations may be hard to contact for the purposes of research, and the researcher finds herself entering an ‘unholy alliance’ with the subjects in return for access rights. As friendly and kindly fieldworkers, we commit to save face: “If you let me into your world – as an overt, upfront researcher – I will promise to report only your socially acceptable side. I will never reveal – even if I am allowed to discover them – your deepest, ugliest secrets. Above all, I promise not to tell the truth about you!” (Goode 1996: 14).

In research on language online, the unhampered access to the treasure troves of data on public webpages should dispense with the need for such an alliance – at least from a practical point of view. There is also very limited potential for giving the subjects’ characters either a positive or a negative spin in a microlinguistic study (non-phrasal coordination and agentless passives rarely trigger social value judgements). We have, however, often run into presumptions about a friendly researcher’s responsibility to her subjects at the stage of obtaining informed consent. For example, while collecting the data for a study of negative references on the hospitality website CouchSurfing (see Dayter and Rüdiger 2014), several subjects refused to allow us to use their references because they felt that doing so would smear the character of the person about whom the reference was written. This happened despite the fact that the consent form explained our interest in impersonal microlinguistic features rather than in any kind of content analysis. This illustrates the assumption among the wider public, perhaps perpetuated by the entertainment industry with its heroic tales of social scientists fighting for the rights of the misunderstood, that neither the guarantee of anonymity nor the focus of research frees the academic from her contract to be friendly and kindly.

Despite being at odds with the myth of the unbiased researcher, the friendly bias is traditionally more acceptable than its opposite. But what if we happen to dislike our subjects? This situation is more common than one might think (see Bott 2010 on timeshare salesmen; DeCapua and Boxer 1999 on male brokers; Kennedy 1990 on the Ku Klux Klan; Hardaker 2013 on online trolls; Peshkin 1986 on fundamentalist Christians, etc.). Unpleasant subjects present the conundrum of reporting ‘the scientific truth’ vs. reconciling it with what your informants want to hear (lest they veto further use of their speech data). As we established earlier, the world is always seen from a perspective. It is likely that a researcher may let her animosity colour the narrative she constructs, especially if, in the course of the investigation, she becomes persuaded that a pre-existing negative evaluation is accurate. Fine (1993: 273–274), for example, admits that he took private pleasure in writing negatively in his book about a baseball coach who repeatedly humiliated him during fieldwork; he concludes that “a spurned ethnographer can be a dangerous foe… Taunt us, if you dare.”

If it seems that one’s likes and dislikes are of consequence only in the confessional mode of ethnographic research, let us bring the point home with an example from our latest work on language online. In the study that served as the inspiration for this paper, we were faced with subjects who at the outset had seemed harmlessly obnoxious. As we engaged more deeply with the material, however, we began to feel much more strongly than that.

Although neither of the two researchers is particularly sensitive to gendered language, and despite being primed for sexist attitudes in the texts, we were shocked by the phrasing of the field reports. Our initial reaction was to brush it off: as a coping mechanism, we set up a shared file, “atrocious quotes”, into which we copied the worst of the material during coding. After all, for our purposes, the reports of sexual exploits were nothing more than a database of narrative moves. As the quotes gained in atrociousness, however, we began to ask ourselves whether people capable of speaking of women in such terms were not indeed the date rapists commonly portrayed by the media:

My friends manny, al and my brother lem accompanied me to this place of bountiful pussy. With the promise we would not leave this place till our d1cks had been wetted and our loins emptied. [… ] They ask me for my name. I tell them I’m fucking god and I grab two of them and BOOM begin 3 way make out.

By the end of the coding process, both of us were firmly disapproving of our subjects and of anything to do with pick-up artists. We hesitate to call this problem an analytical pre-judice, because our emotional and ethical judgement was passed only after a long period of immersion in PUA culture during analysis. What our attitude amounts to, however, is a distinctly negative framing of the PUA movement in our writing. When operating with polarising examples that are bound to produce strong opinions in any reader, it is not enough simply to withhold judgement: indeed, an absence of explicit stance marking reads as an endorsement of the subjects’ worldview. Moreover, the evaluation crops up as early as the introduction section of any report on the subject. Do we mention “speed seduction” and “short-term mating” in our definition of PUA – technically correct descriptions of the community’s practices that nevertheless carry a wealth of negative connotations? Do we purposefully avoid the accepted definitions in favour of our own, carefully devised to omit loaded terms and, by this very fact, transparent in its efforts to salvage the PUAs’ public image? Do we restrict the evidence in the article to tables and linguistic terms, or do we cite examples of language that increase credibility and clarity but can be seen as pushing the reader’s buttons?

We suspect it does not help our case that we are both female; after all, an interviewed PUA, when called out by a journalist on his use of pick-up techniques, famously remonstrated: “You women should stay out of the manosphere. It’s not for you” (Grey 2014). In addition, the gender bias that is immediately associated with such work (unfairly so, since a broad female PUA movement exists as well) leads even sympathetic parties to doubt our objectivity. We as women are expected to dislike pick-up artists, and therefore the drive is strong to overcompensate, to cast the subjects in a positive light just to prove our objectivity. The proverbial unbiased researcher would need to separate her personal and professional identities completely. Moreno (1995: 246), for instance, remarks that, as women academics, our lives on campus and in our offices are supposedly gender-free, and that our ‘womanhood’ should not influence our working identity. In qualitative research, however, the personal and the professional inevitably collapse into each other, if not as part of our analytical process then certainly in the eyes of the subjects.

In the end it comes down to this: if the unbiased researcher is a myth, what is the best way to deal with bias in research dissemination? We think that, first of all, we need to dispense with the feeling of guilt attached to any personal evaluation of our subjects. The opposite facet of this issue, sexual attraction to the subject, has been explored in the ethics literature before (Kendall 2008). If we admit that human subjects lead us to form opinions and to cultivate likes and dislikes, we can grant ourselves the freedom to enrich the analysis through our – human – feelings. Kendall, for example, reports that her discomfort with acknowledging her own erotic feelings caused her to relegate the discussion of female sexuality on the Bluesky internet platform to the background and to concentrate on the male users. She admits that this weakened her analysis, and she now wishes she had given more prominence to the discussion of female sexuality (Kendall 2008: 114).

Our biases, we argue, can become our strengths. By making explicit the interpretative nature of analysis, we can avoid playing the God trick (Haraway 1991) and open our readers’ eyes to more than one interpretation of the material. In this way, we may even reconcile hostile subjects with the idea of being investigated. If a negative take on their activities is presented as only one of several possible interpretations, and one given by a researcher who openly admits her disapproval, the subjects are less likely to feel that they are being judged and dismissed. The pick-up artists, for instance, place great value on the popularisation of their community and may be glad of a chance to spread the word through publication. We as researchers, on the other hand, will not have to fight a constant battle of rewriting and reformulation in an effort to suppress our horror at yet another piece of advice to “go caveman”.

4 Conclusion

In view of the complicated ethical landscape of internet research, it has been a most welcome development to see social researchers such as Markham and Buchanan (2012), Bolander and Locher (2014), and Ess (2014) move beyond a narrowly physical notion of invasiveness and instead advocate a contextual understanding of “harm” as moral and psychological, and an inductive, rather than universal, application of ethical principles.

The limited informed consent which we choose to advocate here is embedded in the paradigm of ‘fluid ethics’. This guileful term might remind the reader of a US presidential nominee “taking a factual detour around the truth”; but it simply stands for a stance in ethical decision making that is situation-based and rejects cookie-cutter ethical fixes. More than any other area of social research, internet studies call for such an approach. Take, for instance, the fact that it is often very difficult to locate the users who produced the content that researchers intend to study. What in offline research is the comparatively straightforward matter of approaching people for their consent (however difficult it may then be to ensure that the consent is truly informed) becomes, in online environments, an obstacle course of expired links, changed nicknames, inactive email addresses, and spam filters. The extent of ‘informed’ is further complicated in qualitative research: an analyst working within the tenets of Grounded Theory does not know what she is looking for until she finds it. To borrow the words of the psychologist Bronfenbrenner, “the only safe way to avoid violating principles of professional ethics is to refrain from doing social research altogether” (1952: 453).

The myth of the unbiased researcher likewise has no place in a situation-based ethical paradigm. Instead, we are prepared to admit that every step of our study was affected by researcher subjectivity. If such an admission is made consciously and initiates reflection on what exactly that impact was, it can benefit the study rather than detract from its credibility. As Bott notes,

[s]elf-reflexivity in research processes has become an increasingly important area of concern… [methodological approaches] have stressed the need for researchers to remain in ‘flexible’ dialogue with their research subjects and contexts, in order to preserve a sense of the researcher’s own subjectivity within the process – and therefore avoid the tendency to become ‘absent’ from or ‘above’ our research contexts

(2010: 159).

In the process of selection that accompanies the analysis and write-up, we choose a version of reality to be presented to the reader. An explicit admission of this perspectivization – in contrast to the quest for absolute scientific truth – may help reconcile unlikeable subjects with our (mis)interpretation. The claim to researcher subjectivity lends the subjects a measure of control over their data, the freedom to interpret their behaviours in any way they are accustomed to without feeling dismissed by academics. This is especially important in online contexts, where data may technically be collected years after it was produced. We understand, of course, that this is very much an ideal situation; it is unrealistic to expect every researcher to provide all possible viewpoints in a final write-up. That is why, in our work on PUAs, we opt for only one take on the community discourse. This, however, we explicitly flag as an interpretation by female social researchers who were deeply uncomfortable with some of the examined speech, and this discomfort may have led to negative judgements that bleed through in the writing.

To conclude: we still dislike our foul-mouthed pick-up artists, but we believe we have found a way to write about them without attaching stigmas or completely misrepresenting our personal identities. Our aim in writing this paper was not to provide definitive answers for CMC researchers, nor indeed to justify our own ethical choices. What we wanted to do was engage in a conversation, sometimes in the role of devil’s advocate, about the many ethical issues that are often taken for granted in contemporary internet research. The discussion of limited informed consent and of the bias myths only skims the surface of the complicated ethical whirlpool that combines copyright with moral responsibility to society and subjects. Acknowledging its existence perhaps makes life more difficult for CMC scholars, who could otherwise sample their data wherever they find it without further hassle. We believe, however, that such an acknowledgement takes us a step closer to answering key questions about human communication online and about society’s understanding of the internet.


References

Appelbaum, Paul S., Loren H. Roth & Charles Lidz. 1982. The therapeutic misconception: Informed consent in psychiatric research. International Journal of Law and Psychiatry 5(3–4). 319–329. doi:10.1016/0160-2527(82)90026-7.

Atkinson, Rowland & John Flint. 2001. Accessing hidden and hard-to-reach populations: Snowball research strategies. Social Research Update 33. (accessed 17 November 2015).

Barnes, Colin. 1996. Disability and the myth of the independent researcher. Disability and Society 11(1). 107–112. doi:10.1080/09687599650023362.

Bloor, David. 1976. Knowledge and social imagery. London: Routledge.

Bolander, Brook. 2013. Language and power in blogs. Amsterdam: Benjamins. doi:10.1075/pbns.237.

Bolander, Brook & Miriam Locher. 2014. Doing sociolinguistic research on computer-mediated data: A review of four methodological issues. Discourse, Context & Media 3. 14–26. doi:10.1016/j.dcm.2013.10.004.

Bott, Esther. 2010. Favourites and others: Reflexivity and the shaping of subjectivities and data in qualitative research. Qualitative Research 10(2). 159–173. doi:10.1177/1468794109356736.

Bowern, Claire. 2008. Linguistic fieldwork – A practical guide. Basingstoke: Palgrave Macmillan. doi:10.1057/9780230590168.

Bronfenbrenner, Urie. 1952. Principles of professional ethics: Cornell studies in social growth. American Psychologist 7. 452–455.

Cherny, Lynn. 1999. Conversation and community: Chat in a virtual world. Stanford: CSLI Publications.

Cohen, Louis, Lawrence Manion & Keith Morrison. 2007 [2000]. Research methods in education, 6th edn. London & New York: Routledge. doi:10.4324/9780203029053.

Dayter, Daria & Sofia Rüdiger. 2014. Speak your mind, but watch your mouth: Objectification strategies in negative references on CouchSurfing. In Kristina Bedijs, Gudrun Held & Christiane Maaß (eds.), Face work and social media, 193–212. Zürich & Berlin: LIT.

DeCapua, Andrea & Diana Boxer. 1999. Bragging, boasting and bravado: Male banter in a brokerage house. Women and Language 22(21). 5–11.

Denes, Amanda. 2011. Biology as consent: Problematizing the scientific approach to seducing women’s bodies. Women’s Studies International Forum 34. 411–419. doi:10.1016/j.wsif.2011.05.002.

Douglas, Jack. 1976. Investigative social research. Beverly Hills: Sage.

Dunn, Dana S. 2013 [2009]. Research methods for social psychology, 2nd edn. Hoboken, NJ: Wiley.

Ess, Charles and the AoIR ethics working committee. 2002. Ethical decision making and Internet research: Recommendations from the AoIR ethics working committee. (accessed 29 November 2015).

Ess, Charles. 2014. Digital media ethics. Malden: Polity Press.

Faden, Ruth R. & Tom L. Beauchamp. 1986. A history and theory of informed consent. New York & Oxford: Oxford University Press.

Fine, Gary Alan. 1993. The lies of ethnography: Moral dilemmas of field research. Journal of Contemporary Ethnography 22(3). 267–294. doi:10.1177/089124193022003001.

Giaxoglou, Korina. 2016. Reflections on internet research ethics from language-focused research on web-based mourning: Revisiting the private/public distinction as a language ideology of differentiation. Applied Linguistics Review. doi:10.1515/applirev-2016-1037.

Gilbert, Nigel & Michael Mulkay. 1984. Opening Pandora’s box: A sociological analysis of scientists’ discourse. Cambridge: Cambridge University Press.

Goode, Erich. 1996. The ethics of deception in social research: A case study. Qualitative Sociology 19(1). 11–33. doi:10.1007/BF02393246.

Grey, Stella. 29 November 2014. Online dating: Men often sound like pick-up artists. The Guardian. (accessed 18 November 2015).

Hall, Jeffrey A. & Melanie Canterberry. 2011. Sexism and assertive courtship strategies. Sex Roles 65(11). 840–853. doi:10.1007/s11199-011-0045-y.

Haraway, Donna. 1991. Simians, cyborgs and women: The reinvention of nature. London: Free Association Books.

Hardaker, Claire. 2013. “Uh.....not to be nitpicky,,,,but…the past tense of drag is dragged, not drug”: An overview of trolling strategies. Journal of Language Aggression and Conflict 1(1). 57–86. doi:10.1075/jlac.1.1.04har.

Jourard, Sidney. 1968. Disclosing man to himself. Princeton: Van Nostrand.

Kelman, Herbert. 1967. Human use of human subjects: The problem of deception in social psychological experiments. Psychological Bulletin 67. 1–11.

Kendall, Lori. 2008. How do issues of gender and sexuality influence the structures and processes of qualitative internet research? In Annette Markham & Nancy Baym (eds.), Internet inquiry: Conversations about method, 99–118. Thousand Oaks: Sage. doi:10.4135/9781483329086.n10.

Kennedy, Stetson. 1990 [1954]. The Klan unmasked, reprint. Boca Raton: Florida Atlantic University Press.

Kneebone, Roger. 2002. Total internal reflection: An essay on paradigms. Medical Education 36(6). 514–518. doi:10.1046/j.1365-2923.2002.01224.x.

Lawson, Danielle. 2004. Blurring the boundaries: Ethical considerations for online research using synchronous CMC forums. In Elizabeth Buchanan (ed.), Readings in virtual research ethics: Issues and controversies, 80–100. Hershey: Idea Group. doi:10.4018/978-1-59140-152-0.ch005.

Markham, Annette N. & Elizabeth Buchanan. 2012. Ethical decision-making and Internet research 2.0: Recommendations from the AoIR ethics working committee. Association of Internet Researchers. (accessed 29 May 2016).

McDowell, Banks. 1991. Ethical conduct and the professional’s dilemma – Choosing between service and success. New York: Quorum Books.

McKenna, Katelyn & John Bargh. 2000. Plan 9 from cyberspace. Personality and Social Psychology Review 4(1). 57–75. doi:10.1207/S15327957PSPR0401_6.

Miller, Tina & Linda Bell. 2002. Consenting to what? Issues of access, gate-keeping and ‘informed’ consent. In Melanie Mauthner, Maxine Birch, Julie Jessop & Tina Miller (eds.), Ethics in qualitative research, 53–69. London: Sage. doi:10.4135/9781849209090.n3.

Moore, Robert. 1996. Crown Street revisited. Sociological Research Online 1(3): article 2. (accessed 17 November 2015). doi:10.5153/sro.24.

Moreno, Eva. 1995. Rape in the field: Reflections from a survivor. In Don Kulick & Margaret Wilson (eds.), Taboo: Sex, identity and erotic subjectivity in anthropological fieldwork, 219–248. London: Routledge.

Page, Ruth, David Barton, Johann W. Unger & Michele Zappavigna. 2014. Researching language and social media – A student guide. London & New York: Routledge. doi:10.4324/9781315771786.

Panel on privacy and behavioural research. 1967. Science 155. 535–538. doi:10.1126/science.155.3762.535.

Peshkin, Alan. 1986. God’s choice: The total world of a fundamentalist Christian school. Chicago: University of Chicago Press.

Pick Up Tipps Forum. (accessed 29 November 2015).

Pihlaja, Stephen. 2016. More than Fifty Shades of Grey: Copyright on social network sites. Applied Linguistics Review. doi:10.1515/applirev-2016-1036.

Roberts, Laura Weiss & Brian Roberts. 1999. Psychiatric research ethics: An overview of evolving guidelines and current ethical dilemmas in the study of mental illness. Biological Psychiatry 46(8). 1025–1038. doi:10.1016/S0006-3223(99)00205-X.

Robinson, Laura C. 2010. Informed consent among analog people in a digital world. Language & Communication 30(3). 186–191. doi:10.1016/j.langcom.2009.11.002.

Schulz, Duane. 1969. The human subject in psychological research. Psychological Bulletin 72(3). 214–228. doi:10.1037/h0027880.

Smith, Malcolm. 2011 [2003]. Research methods in accounting, 2nd edn. Los Angeles, London, New Delhi, Singapore & Washington, DC: Sage. doi:10.4135/9781849209809.

Spilioti, Tereza. 2016. Media convergence and publicness: Towards a modular and iterative approach to online research ethics. Applied Linguistics Review. doi:10.1515/applirev-2016-1035.

Tronto, Joan C. 2009. Consent as a grant of authority – A care ethics reading of informed consent. In Hilde Lindemann, Marian Verkerk & Margaret Urban Walker (eds.), Naturalized bioethics – Toward responsible knowing and practice, 182–198. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139167499.011.

Veal, A. J. & Simon Darcy. 2014. Research methods in sport studies and sport management – A practical guide. London & New York: Routledge. doi:10.4324/9781315776668.

Wear, Stephen. 1998. Informed consent – Patient autonomy and clinician beneficence within health care. Washington: Georgetown University Press.

Wolf, Margery. 1992. A thrice told tale: Feminism, postmodernism and ethnographic responsibility. Stanford: Stanford University Press.

Zimbardo, Philipp. 1969. The human choice: Individuation, reason, and order vs. deindividuation, impulse and chaos. In William Arnold & David Levine (eds.), Nebraska symposium on motivation, 237–307. Lincoln: University of Nebraska Press.

Published Online: 2016-11-1
Published in Print: 2017-5-24

© 2017 Walter de Gruyter GmbH, Berlin/Boston