Corpus Linguistics and Linguistic Theory
Founded by Gries, Stefan Th. / Stefanowitsch, Anatol
Ed. by Wulff, Stefanie
This paper tackles the issue of the validity and reliability of coding discourse phenomena in corpus-based analyses. On the basis of a sample analysis of coherence relation annotation that resulted in a poor kappa score, we describe the problem and put it into the context of recent literature from the field of computational linguistics on required intercoder agreement. We describe our view on the consequences of the current state of the art and suggest three routes to follow in the coding of coherence relations: double coding (including discussion of disagreements and explicitation of the coding decisions), single coding (including the risk of coder bias and a lack of generalizability), and enriched kappa statistics (including observed and specific agreement, and a discussion of the possible reasons for disagreement). We end with a plea for complementary methods for testing the robustness of our data with the help of automatic (text mining) techniques.
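To make the kappa statistic discussed above concrete, the sketch below computes Cohen's kappa for two coders, alongside the raw observed agreement that the authors argue should be reported with it. The coder labels and coherence-relation names are hypothetical illustrations, not data from the paper.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # observed agreement: proportion of items given identical labels
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # expected chance agreement, from each coder's marginal label distribution
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(counts_a[lab] * counts_b[lab] for lab in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# hypothetical coherence-relation annotations by two coders
a = ["causal", "causal", "additive", "causal", "contrastive", "additive"]
b = ["causal", "additive", "additive", "causal", "contrastive", "causal"]
observed = sum(x == y for x, y in zip(a, b)) / len(a)
print(f"observed agreement: {observed:.3f}")   # prints 0.667
print(f"kappa: {cohens_kappa(a, b):.3f}")      # prints 0.455
```

Reporting both values, as the authors recommend, shows how a seemingly decent raw agreement can shrink once chance agreement over a skewed label distribution is factored out.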