Significance tests were originally developed to make the evaluation of research results more objective. Yet the strong orientation towards statistical significance encourages biased results, a phenomenon termed “publication bias”. Publication bias occurs whenever the likelihood or time lag of publication, the prominence, language, or impact factor of the publishing journal, or the citation rate of a study depends on the direction and significance of its findings. Although there is ample evidence of publication bias across scientific disciplines, and although its detrimental consequences for scientific progress have long been known, attempts to eliminate the bias have so far failed. The present article reviews the history and logic of significance testing, the state of research on publication bias, and existing practical recommendations. After demonstrating that more systematic research on the risk factors of publication bias is needed, the paper suggests two new directions for publication bias research. First, a more comprehensive theoretical model is sketched out, drawing on rational choice theory and economics as well as the sociology of science; publication bias is framed as the outcome of a social dilemma that cannot be overcome by moral appeals alone. Second, detection methods that go beyond meta-analysis and are better suited to testing causal hypotheses are discussed. In particular, the “caliper test” seems well suited for theoretically motivated comparisons across heterogeneous research fields such as sociology. Its potential is demonstrated by testing two hypotheses on the incidence of publication bias in 50 papers published in leading German sociology journals: (a) that incidence differs between explicitly and implicitly stated research propositions, and (b) that it varies with the number of authors.
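
The caliper test, as introduced by Gerber and Malhotra, compares how many reported test statistics fall just above versus just below the critical value (e.g., z = 1.96 at the 5% level); absent publication bias, both counts should be roughly equal, so a surplus of just-significant results can be assessed with an exact binomial test. The following minimal Python sketch illustrates this logic; the function name, the caliper width of 0.10, and the sample z-values are illustrative assumptions, not taken from the article:

```python
from scipy.stats import binomtest

def caliper_test(z_values, critical=1.96, caliper=0.10):
    """Caliper test for publication bias.

    Counts z-statistics falling just above vs. just below the
    critical value. Under the null of no publication bias, both
    counts should be about equal; a surplus of just-significant
    results indicates bias.
    """
    over = sum(critical <= z < critical + caliper for z in z_values)
    under = sum(critical - caliper <= z < critical for z in z_values)
    n = over + under
    # One-sided exact binomial test: H0 is p = 0.5 (no bias),
    # H1 is an excess of just-significant results.
    result = binomtest(over, n, p=0.5, alternative="greater")
    return over, under, result.pvalue

# Hypothetical z-statistics harvested from published coefficient tables
zs = [1.70, 1.88, 1.97, 1.99, 2.01, 2.03, 2.04, 1.91, 2.05, 1.87]
print(caliper_test(zs))  # e.g., (6, 3, p-value)
```

Because the test only requires counting reported statistics within a narrow band, it can be applied uniformly to studies from heterogeneous fields, which is what makes it attractive for the cross-field and author-count comparisons described above.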