

Official Journal of the Society to Improve Diagnosis in Medicine (SIDM)


Use of clinical reasoning tasks by medical students

Elexis McBee (corresponding author), Christina Blum, Temple Ratcliffe, Lambert Schuwirth, Elizabeth Polston, Anthony R. Artino Jr and Steven J. Durning
Published Online: 2019-03-09 | DOI: https://doi.org/10.1515/dx-2018-0077



Abstract

Background: A previously developed framework of clinical reasoning tasks proposes that clinical reasoning during clinical encounters is a complex process composed of 26 possible tasks. The aim of this paper was to analyze the verbalized clinical reasoning processes of medical students working through commonly encountered internal medicine cases.


Methods: In this mixed-methods study, participants viewed three video-recorded clinical encounters. After each encounter, participants completed a think-aloud protocol. Two investigators analyzed the transcribed think-aloud data using a constant comparative approach, examining the type, frequency, and pattern of codes used.


Results: Seventeen third- and fourth-year medical students participated. They used 15 reasoning tasks across all cases. The average number of tasks used was 5.6 (range 3–8) in case 1, 5.9 (range 4–8) in case 2, and 5.3 (range 3–10) in case 3. The order in which medical students verbalized reasoning tasks varied and appeared purposeful but non-sequential.


Conclusions: Consistent with prior research in residents, participants progressed through the encounter in a purposeful but non-sequential fashion. Reasoning tasks related to framing the encounter and diagnosis were used not in succession but interchangeably. This suggests that teaching successful clinical reasoning may involve encouraging or demonstrating multiple pathways through a problem. Further research exploring the association between use of clinical reasoning tasks and clinical reasoning accuracy could enhance the medical community’s understanding of variance in clinical reasoning.

Keywords: clinical reasoning tasks; ecological psychology; medical education


Introduction

Traditionally, clinical reasoning research has focused predominantly on diagnostic reasoning and less so on therapeutic reasoning or on the tasks that constitute clinical reasoning as a whole. Recent research has investigated what physicians reason about during clinical encounters and admission case reviews [1], [2], [3]. In 2013, Goldszmidt and colleagues conducted a two-stage validation study with international experts to develop a unified list of 24 reasoning tasks likely to occur in a clinical encounter, grouped into four domains: (1) Framing the Encounter, (2) Diagnosis, (3) Management, and (4) Self-Reflection [1]. This framework supported the notion that clinical reasoning is not a unitary construct but instead comprises multiple tasks that underlie a complex process.

In 2016, we studied the use of these reasoning tasks in internal medicine resident physicians viewing videos of clinical encounters in internal medicine [3]. This work was followed by a study in 2017 evaluating resident physicians’ use of reasoning tasks during admission case review [2]. Together, these studies led to a new modified list of 26 reasoning tasks hypothesized to occur in a clinical encounter (Table 1). Previous research has not addressed what medical students reason about in a clinical encounter. Insights into how these processes occur could help educators optimize how clinical reasoning is acquired, taught, and assessed.

Table 1:

List of clinical reasoning tasks proposed in 2017 (Juma and Goldszmidt [2]).

Ecological psychology is a helpful lens to analyze how medical students reason. Ecological psychology contends that our actions are the result of affordances and effectivities. Affordances are defined as recognized opportunities for action, and effectivities are an individual’s ability for action [4], [5]. Merely having or knowing about certain processes is not enough to achieve a desired outcome. Rather, it is the combination of the ‘tools’ available, the ability to effectively use the tools, and the ability to select the right tools in a specific context that leads to success. For clinical reasoning, ecological psychology predicts that multiple physicians would not arrive at a diagnosis or therapy using the same affordances or effectivities. These affordances and effectivities would also not likely be the same across clinical cases because context-specific (situation-specific) knowledge and skills are key aspects of performance when viewed from an ecological psychology perspective [4], [5]. A better understanding of the ‘tools’ that comprise students’ effectivities and their situational awareness for selecting the right ‘tools’ in a given context could inform the education of clinical reasoning.

The aim of this study was to analyze the verbalized clinical reasoning processes of medical students to describe: (1) what clinical reasoning tasks medical students perform during a clinical encounter; (2) with what frequency medical students use different types of clinical reasoning tasks; (3) whether the use of clinical reasoning tasks occurs in a sequential or non-sequential manner; and (4) whether there are clinical reasoning tasks that are not accounted for by the currently proposed framework [2].



Methods

Three hundred third- and fourth-year medical students at the Uniformed Services University of the Health Sciences (USUHS), Bethesda, MD, USA, were invited to take part in this mixed-methods study from 2015 to 2016. A research assistant invited participants via email. There were no exclusion criteria. Informed consent was obtained prior to participation, and students were not compensated for participation in the study. The Institutional Review Board (IRB) at USUHS approved the study.

Ethical approval

The authors obtained IRB approval from the USUHS, protocol number R083VC-03.


This study design has been used previously and is part of an ongoing program of research [3], [6], [7]. Figure 1 provides an overview of the study design. Participants viewed a series of three videos (each 2–4 min in length) that displayed clinical encounters (including history and physical exam), each with a specific contextual factor. Case 1 portrayed a diagnosis of human immunodeficiency virus (HIV) in a patient with English as a second language. Case 2 portrayed a diagnosis of colorectal cancer in a patient presenting with emotional volatility. Case 3 portrayed a diagnosis of diabetes mellitus in a patient with both limited English proficiency and emotional volatility. The final diagnosis was not provided in the videos.

Figure 1:

Flow chart demonstrating the components of the study design that led to data collection.

The scripts for the video recordings were written by a group of medical education experts in the field of internal medicine. The cases were written to be of equal intrinsic difficulty and to present a variety of selected contextual factors, defined as features that typically occur in a patient encounter but are not specific to a diagnosis (e.g. a patient who does not have English language fluency or a patient who suggests an incorrect diagnosis). The cases were pilot tested with a group of six internal medicine faculty to help ensure they were common, relevant, and of low diagnostic complexity; edits were made based on pilot testing. This group of six faculty unanimously agreed that the cases represented commonly encountered disease processes seen in actual practice and were of low diagnostic complexity. The videos were filmed with professional actors under the guidance of a study investigator (SD); one patient actor and one physician actor were used across all videos. These same videos have been used in prior experiments and demonstrated differences in performance between faculty and residents [3], [6], suggesting they would be reasonable for studying clinical reasoning task use in students. In addition, the selected cases matched cases from our medical student clinical reasoning curriculum; the medical content of the cases was explicitly covered in a clinical reasoning course prior to student participation in the study.

After watching each video-recorded clinical encounter, participants completed a post-encounter form. This form has been used in research with resident and board-certified physicians, and validity evidence for the scores and their intended use has been published [6]. The form asked participants to identify: (a) what additional history or physical exam information they would seek, (b) the differential diagnosis, (c) the leading diagnosis and the data supporting it, and (d) the diagnostic and treatment plan they would institute for the patient. Immediately following this, the participant re-watched the video while engaging in a recorded think-aloud protocol facilitated by a research assistant. Participants were instructed, at a minimum, to arrive at a diagnosis for the case, and were prompted to continue thinking aloud if they did not speak for 5 or more seconds. Participants could start and stop the video as needed during the think-aloud protocol, and after the video was complete they were given as much time as needed to finish articulating their thoughts. The think-aloud protocol [8] is a standard technique in which participants are asked to verbalize their thought processes; when conducted properly, think-aloud protocols have been found to be a trustworthy method for capturing participants’ thought processes [9], [10], [11]. Watching a video also provided a standard stimulus upon which all participants reflected.

Data analysis

The qualitative data from the think-aloud protocol were transcribed into written transcripts and then coded using the framework for clinical reasoning tasks published by Goldszmidt and colleagues (Table 1) [1]. Two investigators (EM, CB) used a constant comparative approach to conduct iterative coding of utterances and classify utterances by task number. Following initial coding of five participants, the two investigators (EM, CB) met to discuss the reasoning tasks and resolve disparities. After all coding was complete, both investigators met for a final time to review coding and to resolve differences by consensus. Agreement between the two investigators who coded the think-aloud transcripts (EM, CB) was greater than 90%, and full agreement on all codes was ultimately reached through consensus. Descriptive statistics, including means, standard deviations, modes, and ranges, were calculated for types of tasks used and the number of times tasks were used using SPSS 22.0 (IBM, Armonk, NY, USA).
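The inter-rater check described above reduces to a percent-agreement calculation over aligned utterance codes. The sketch below is illustrative only: the function name and the two coders' code sequences are invented for this example, not study data.

```python
def percent_agreement(codes_a, codes_b):
    """Fraction of utterances assigned the same task code by both coders."""
    if len(codes_a) != len(codes_b):
        raise ValueError("code lists must align utterance-for-utterance")
    return sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)

# Invented task-code sequences for two coders over the same ten utterances
coder_1 = [1, 4, 4, 7, 8, 1, 6, 4, 7, 1]
coder_2 = [1, 4, 4, 7, 8, 1, 6, 2, 7, 1]
print(f"{percent_agreement(coder_1, coder_2):.0%}")  # prints "90%": 9 of 10 codes match
```

In a workflow like the one described, codes that disagree (here the eighth utterance) would then be discussed and resolved by consensus, as the investigators did.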


Results

A total of 17 third-year (n=7) and fourth-year (n=10) medical students (13 male, 4 female) participated in the study. Use of reasoning tasks was analyzed from two perspectives. First, we analyzed the variety of reasoning tasks used collectively by all students across the three cases. Across all cases, the 17 participants used 15 clinical reasoning tasks. Table 2 displays the mean, mode, and range for each clinical reasoning task used by participants for each case, in addition to the number of participants using each task. For Case 1, participants used 11 different clinical reasoning tasks. The average number of verbalized reasoning tasks per participant during the think-aloud protocol was 5.6 (range 3–8), and the average number of total task utterances (including repeat tasks) was 17.8 per participant (range 4–33). For Case 2, participants used nine different clinical reasoning tasks. Each participant verbalized an average of 5.9 reasoning tasks (range 4–8), and the average number of task utterances (including repeat tasks) was 20.7 per participant (range 12–38). For Case 3, participants used 14 different clinical reasoning tasks. Each participant verbalized an average of 5.3 tasks (range 3–10), and the average number of task utterances (including repeat tasks) was 22.5 per participant (range 7–42). The average number of clinical reasoning tasks verbalized by participants across all three cases in aggregate was 7.8 (range 6–12).

Table 2:

Clinical reasoning tasks used by medical students in think-aloud protocols for each clinical case, 2015–2016.
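The per-case summaries reported above (mean and range of distinct tasks per participant) are standard descriptive statistics. A minimal sketch, using invented per-participant counts rather than the study's actual data:

```python
from statistics import mean

def summarize(task_counts):
    """Mean, minimum, and maximum of distinct tasks used per participant."""
    return {"mean": round(mean(task_counts), 1),
            "min": min(task_counts),
            "max": max(task_counts)}

case_1_counts = [3, 5, 6, 8, 6]  # hypothetical counts, not the study data
print(summarize(case_1_counts))  # prints {'mean': 5.6, 'min': 3, 'max': 8}
```

The same summary would be computed once per case and once over each participant's aggregate task set across all three cases.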

We then analyzed how the reasoning tasks were utilized individually (Figure 2). Every participant verbalized Task 1 (Identify active issues) in each of the cases; it was the only task used by every participant in every transcript. Task 7 (Determine the most likely diagnosis) and Task 8 (Identify modifiable and non-modifiable risk factors) were used by each of the 17 participants in at least one clinical case, although not by every student in every case. Task 4 (Consider and prioritize differential diagnoses) was the most frequently uttered clinical reasoning task; it was used 325 times in aggregate across all three cases by 16 of the 17 students. The second most utilized task was Task 1 (Identify active issues), used 311 times in aggregate by all participants across all cases. Task 7 (Determine the most likely diagnosis) was verbalized next most often, at 98 aggregate times, and was used by all participants in at least one of the clinical cases. Task 8 (Identify modifiable and non-modifiable risk factors) was used 94 times and Task 6 (Select diagnostic investigations) 93 times across all three cases in aggregate.

Figure 2:

Frequency of use of clinical reasoning tasks by medical students in think-aloud transcripts, 2015–2016.

n, number of participants using that task across all cases.

The order in which participants verbalized clinical reasoning tasks varied, but participants tended to verbalize reasoning tasks related to ‘Framing the Encounter’ and ‘Differential Diagnoses’ early (Tasks 1–4), followed by diagnosis-related tasks (Tasks 5–10) in the middle to end of the transcripts. Across all cases, participants verbalized Task 1 (Identify active issues) first in 76% of transcripts (39 of 51), second in 65% (33 of 51), and third in 57% (29 of 51). The next most frequently verbalized task was Task 4 (Consider and prioritize differential diagnoses), which occurred first in six (12%) of the transcripts, second in 16 (31%), and third in 15 (29%). Management tasks were used by only four participants, on six occasions, and occurred in the second half of the transcripts (Table 2). No self-reflection tasks were verbalized in any of the transcripts. Table 3 provides examples of verbalized utterances to demonstrate how the investigators interpreted them.

Table 3:

Example utterances of clinical reasoning tasks used by medical students in think-aloud transcripts, 2015–2016.a
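The positional analysis above (e.g. Task 1 verbalized first in 76% of transcripts) amounts to tallying, for each transcript's ordered task sequence, which task occupies each early position. A sketch under invented data; the function name and the toy sequences are hypothetical:

```python
from collections import Counter

def position_counts(transcripts, task, n_positions=3):
    """Count how often `task` appears in each of the first n_positions
    across transcripts (each transcript is an ordered list of task codes)."""
    counts = Counter()
    for seq in transcripts:
        for pos in range(min(n_positions, len(seq))):
            if seq[pos] == task:
                counts[pos + 1] += 1
    return counts

transcripts = [[1, 4, 7], [1, 1, 4], [4, 1, 6]]  # invented, not study data
print(position_counts(transcripts, task=1))
```

Because tasks recur within a transcript, the same task can be counted in several positions, which is why the reported percentages for Task 1 can exceed 100% when summed.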

Participants verbalized one reasoning task that did not appear to be adequately captured by the clinical reasoning framework. Across all cases, seven participants verbalized a process for deciding what physical exam would be appropriate to perform (followed by a comment on how the physical exam did or did not support the development of their differential diagnoses). This did not occur in all transcripts but did occur at least once for each clinical case. This occurred most frequently in Case 2 (Diagnosis of colorectal cancer), being verbalized by five participants. An example utterance for this task is also included in Table 3.


Discussion

This study utilized a think-aloud protocol to explore the use of clinical reasoning tasks by medical students viewing simulated internal medicine cases, in an effort to further expand our understanding of clinical reasoning. The modified framework of 26 reasoning tasks (Table 1) was used to guide this exploration [1], [2]. Participants employed a wide array and number of tasks in all three cases, demonstrating variability in how these tasks were used. By a wide margin, the most frequently used clinical reasoning tasks were Task 1 (Identify active issues) and Task 4 (Consider and prioritize differential diagnoses). This echoes the finding of Juma and Goldszmidt [2] that Task 1 (Identify active issues) is an overarching task required to guide additional reasoning throughout a clinical encounter. Beyond this, the most commonly used reasoning tasks related to determining the most likely diagnosis, identifying modifiable risk factors, selecting diagnostic investigations, and identifying precipitants or triggers of the active problem.

The order in which medical students verbalized reasoning tasks varied, demonstrating variability amongst participants and cases. Despite this variation, the progression of tasks did appear purposeful, both individually and in aggregate. For example, although early in the encounter medical students tended to utilize tasks related to framing the encounter and generating a differential diagnosis, they would often move back and forth between these tasks interchangeably. As the encounter progressed, participants employed additional diagnosis tasks related to identifying risk factors, determining a most likely diagnosis, and selecting diagnostic studies, but also continued to intermittently employ tasks related to framing the encounter or generating a differential diagnosis. Use of management tasks was not common, but when it did occur, participants verbalized these tasks toward the end of the encounter. The small number of verbalized management tasks and the absence of self-reflection tasks may reflect that participants were asked, at a minimum, to arrive at a diagnosis. In addition, given that the participants were medical students, expecting a large number of management or self-reflection tasks may be unrealistic, as they have not advanced far enough in their medical education to reliably do this well [12]. Furthermore, self-reflection may be more likely to occur during a live encounter, when one can reflect on one’s own actions with knowledge of the outcome of the clinical encounter.

Prior research with resident physicians also supported that the use of clinical reasoning tasks occurs in a purposeful but non-sequential manner [3]. Together, these findings suggest that the clinical reasoning processes needed for Case 1 are not necessarily the same processes needed for Case 2 or 3. Participants tended to use tasks recurrently, with some verbalizing more than 40 task utterances in one encounter. For example, one participant used the following sequence in Case 2 (diagnosis of colorectal cancer): Task 1 (Identify active issues) eight times, then Task 4 (Consider and prioritize differential diagnoses) four times, then Task 8 (Identify modifiable and non-modifiable risk factors), Task 1, Task 8, Task 1, Task 6 (Select diagnostic investigations), and Task 4, before reaching a final diagnosis (Task 7). As this example illustrates, the participant progressed through the encounter in a purposeful fashion, but reasoning tasks related to framing the encounter and diagnosis were used not in succession but interchangeably.

Consistent with ecological psychology, this interchangeability highlights the use of clinical reasoning tasks as they relate to each participant’s unique, dynamic set of affordances and effectivities. This may also explain, in part, context specificity – the phenomenon that when presented with two patients with the same complaint, symptoms, and findings, different physicians might not arrive at the same diagnosis [13]. In each clinical encounter, clinical information (i.e. affordances) is perceived and processed uniquely, which leads any given physician to use different strategies or clinical reasoning tasks (i.e. effectivities) for each clinical encounter, and to do so in a nonlinear fashion.

We observed one reasoning behavior that was not explicitly captured by the current framework of reasoning tasks. Forty-one percent of participants verbalized thoughts on which additional physical exam maneuvers would help establish a diagnosis or exclude other potential diagnoses. Task 1 (Identify active issues), Task 2 (Assess priorities), and the diagnosis tasks did not capture this behavior completely. In specialties such as internal medicine, the physical exam is often central to establishing a diagnosis, and deciding which components of the physical exam to use or omit appeared to be an integral part of medical students’ decision-making. Some participants commented on how the physical exam did or did not support the development of their differential diagnosis, which relates to Task 4 (Consider and prioritize differential diagnoses). In previous research with resident physicians, this behavior was not observed [3]. It may be that resident physicians, with more sophisticated illness scripts, did not verbalize this process because it had been well developed in previous training. Alternatively, resident physicians may have anticipated that diagnostic testing beyond the physical exam would be required to reach a final diagnosis in many clinical cases.

This study expands on previous work to develop a consolidated framework of reasoning tasks thought to occur during a clinical encounter [1], [2], [3]. Flexibility in problem solving, a phenomenon documented in the expertise literature, may also apply to clinical reasoning [14], [15]; we provide additional empirical data to support this in more novice learners. When teaching clinical reasoning, this may mean that multiple tasks need to be taught in numerous ways in order to account for variation in context. In addition, when assessing clinical reasoning, multiple observations would be required, as the path to diagnosis or treatment may vary widely. In particular, this study highlights that although experts employ numerous reasoning tasks [2], medical students may use reasoning tasks in a less comprehensive fashion. If a student has not yet developed the ability to utilize a particular reasoning task, such as disease management, it cannot be employed. From the perspective of ecological psychology, this implies that although the cases in this study provided the affordances for more extensive reasoning processes, the medical students’ effectivities were too limited to allow for greater reasoning. This suggests that teaching successful clinical reasoning may involve encouraging or demonstrating multiple pathways through a problem. Students need repeated practice to enrich their perception of affordances within clinical cases and to develop a wide array of effectivities for approaching these cases. It is both the recognition of affordances and the agile use of effectivities that allows the student or clinician to navigate the problem effectively.

This study has important limitations. First, the number of participants is relatively small (n=17), although coding saturation was reached by the ninth participant, suggesting an adequate sample size. Nonetheless, the generalizability of the findings may be limited by the small, male-predominant sample drawn from a single center. Second, video-recorded cases may not capture all clinical reasoning processes that occur during a clinical encounter [10]; a live encounter, for example, could generate different cues to stimulate clinical reasoning. Third, think-aloud protocols may not fully capture non-analytic (System 1, or pattern-recognition) reasoning; specifically, not all reasoning tasks used may be consciously verbalized by participants. Fourth, information processing may depend in part on information delivery; how the information was presented to participants may have affected the perceived linearity of their clinical reasoning.


Conclusions

Research exploring the association between the use and sequencing of clinical reasoning tasks and clinical decision-making performance could enhance medical educators’ understanding of variance in medical student performance. We believe an understanding of this variance is essential to helping mitigate diagnostic and therapeutic reasoning errors. Furthermore, questions regarding how the use of reasoning tasks develops, how diagnostic and management reasoning tasks are used in clinical practice, and how reasoning tasks vary as expertise develops could help optimize educational initiatives aimed at improving clinical reasoning.


References

1. Goldszmidt M, Minda JP, Bordage G. Developing a unified list of physicians’ reasoning tasks during clinical encounters. Acad Med 2013;88:390–7.
2. Juma S, Goldszmidt M. What physicians reason about during admission case review. Adv Health Sci Educ 2017;22:691–711.
3. McBee E, Ratcliffe T, Goldszmidt M, Schuwirth L, Picho K, Artino AR, et al. Clinical reasoning tasks and resident physicians: what do they reason about? Acad Med 2016;91:1022–8.
4. Gibson J. The theory of affordances. In: Shaw R, Bransford J, editors. Perceiving, acting, knowing. Hillsdale, NJ: Lawrence Erlbaum Associates, 1977.
5. Young MF, Barab SA, Garrett S. Agent as detector: an ecological psychology perspective on learning by perceiving-acting systems. In: Jonassen DH, Land SM, editors. Theoretical foundations of learning environments. Mahwah, NJ: Lawrence Erlbaum Associates, 2000:147–72.
6. Durning SJ, Artino AR, Dorrance K, van der Vleuten C, Schuwirth L. The impact of selected contextual factors on experts’ clinical reasoning performance: does context impact clinical reasoning performance in experts? Adv Health Sci Educ 2012;17:65–79.
7. Durning SJ, Artino A, Boulet J, La Rochelle J, van der Vleuten C, Arze B, et al. The feasibility, reliability, and validity of a post-encounter form for evaluating clinical reasoning. Med Teach 2012;34:30–7.
8. Ericsson K, Simon H. Verbal reports as data. Psychol Rev 1980;87:215–51.
9. Ericsson K. Protocol analysis and expert thought: concurrent verbalizations of thinking during experts’ performance on representative tasks. In: Ericsson K, Charness N, Feltovich P, Hoffman R, editors. The Cambridge handbook of expertise and expert performance. New York: Cambridge University Press, 2007.
10. Ericsson K, Simon H. Protocol analysis: verbal reports as data. Cambridge, MA: MIT Press, 1993.
11. Russo JE, Johnson EJ, Stephens DL. The validity of verbal protocols. Mem Cognition 1989;17:759–69.
12. Pangaro L, ten Cate O. Frameworks for learner assessment in medicine: AMEE Guide No. 78. Med Teach 2013;35:e1197–210.
13. Eva K, Neville A, Norman G. Exploring the etiology of content specificity: factors influencing analogic transfer and problem solving. Acad Med 1998;73(Suppl.):1–5.
14. Ark TK, Brooks LR, Eva KW. The benefits of flexibility: the pedagogical value of instructions to adopt multifaceted diagnostic reasoning strategies. Med Educ 2007;41:281–7.
15. Eva KW. What every teacher needs to know about clinical reasoning. Med Educ 2005;39:98–106.

About the article

Corresponding author: Dr. Elexis McBee, DO, MPH, Department of Medicine, F. Edward Hébert School of Medicine, Uniformed Services University, Naval Medical Center San Diego, 34800 Bob Wilson Drive, San Diego, CA 92134, USA, Phone: +1-619-532-6678

Received: 2018-08-20

Accepted: 2019-02-08

Published Online: 2019-03-09

Published in Print: 2019-06-26

Author contributions: All of the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

Disclaimer: The views expressed in this paper are those of the authors and do not necessarily represent the views of the Uniformed Services University of the Health Sciences, the Department of Defense, or other federal agencies.

Research funding: This project was supported, in part, by an unrestricted educational grant from MedU/iInTime as well as local intramural grant funding.

Employment or leadership: None declared.

Honorarium: None declared.

Competing interests: The funding organizations played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.

Citation Information: Diagnosis, Volume 6, Issue 2, Pages 127–135, ISSN (Online) 2194-802X, ISSN (Print) 2194-8011, DOI: https://doi.org/10.1515/dx-2018-0077.

©2019 Walter de Gruyter GmbH, Berlin/Boston.