Published by De Gruyter Mouton, November 4, 2010

Coding coherence relations: Reliability and validity

Wilbert Spooren and Liesbeth Degand
Abstract

This paper tackles the issue of the validity and reliability of coding discourse phenomena in corpus-based analyses. On the basis of a sample analysis of coherence relation annotation that resulted in a poor kappa score, we describe the problem and place it in the context of recent computational linguistics literature on required intercoder agreement. We describe our view on the consequences of the current state of the art and suggest three routes to follow in the coding of coherence relations: double coding (including discussion of disagreements and making the coding decisions explicit), single coding (with the attendant risk of coder bias and a lack of generalizability), and enriched kappa statistics (including observed and specific agreement, and a discussion of the possible reasons for disagreement). We end with a plea for complementary techniques for testing the robustness of our data with the help of automatic (text-mining) techniques.
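The kappa score the abstract refers to can be illustrated with a minimal sketch of Cohen's kappa alongside observed agreement, the two quantities contrasted above. This is a generic illustration, not code from the paper; the coherence-relation labels and the two coders' annotations below are invented for the example.

```python
# Minimal sketch (assumed example, not from the paper): observed agreement
# and Cohen's kappa for two coders annotating the same items.
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Return (observed agreement p_o, Cohen's kappa) for two label sequences."""
    assert len(coder1) == len(coder2)
    n = len(coder1)
    # Observed agreement: proportion of items both coders label identically.
    p_o = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Expected chance agreement from each coder's marginal label distribution.
    c1, c2 = Counter(coder1), Counter(coder2)
    p_e = sum((c1[label] / n) * (c2[label] / n) for label in set(c1) | set(c2))
    # Kappa corrects observed agreement for agreement expected by chance.
    return p_o, (p_o - p_e) / (1 - p_e)

# Invented annotations of six coherence relations by two coders.
coder1 = ["causal", "causal", "additive", "contrastive", "causal", "additive"]
coder2 = ["causal", "additive", "additive", "contrastive", "causal", "causal"]
p_o, kappa = cohens_kappa(coder1, coder2)
# → p_o ≈ 0.667, kappa ≈ 0.455
```

The gap between the two numbers shows why the abstract argues for reporting more than a single kappa value: observed agreement here looks moderate, while chance-corrected kappa is considerably lower.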

Published Online: 2010-11-04
Published in Print: 2010-October

© 2010 Walter de Gruyter GmbH & Co. KG, Berlin/New York
