O. S. Miettinen

Understanding the Research Needed to Inform Public Policies on ‘Screening’ for a Cancer

De Gruyter | Published online: November 18, 2015

Tyler VanderWeele, one of the two Editors of this journal, in response to my suggestion, agreed to arrange in this journal a critical discussion of the research intended to serve quantification of the mortality benefits from public policies promoting ‘screening’ for breast cancer (VanderWeele 2015).

This discussion was opened by, and intended to be focused on, a keynote article by myself on this topic (VanderWeele 2015; Miettinen 2015). That article first outlined the theoretical fundamentals of professional thinking about ‘screening’ for a cancer in community medicine – in (the practice of) epidemiology, that is – with focus on the benefit from it. It then sought to make plain the fundamental misguidedness of the still-orthodox research purportedly addressing the magnitude of the benefit that in this context is of epidemiological and, hence, public-policy concern. And finally, sketched in that article also was the nature of the heterodox research, both experimental and non-experimental, that, when appropriate in its particulars, would be truly relevant for that purpose.

That lead article is, in the foregoing, followed by six commentaries on it (Steurer 2015; Huwiler et alii 2015; Petitti 2015; Franco 2015; Weiss 2015; Robins 2015), invited by the Editor (VanderWeele 2015). And now, on his invitation, I round out the discussion by this rejoinder to those commentaries on that article of mine. In this, I first – and in the main – address those commentaries one-by-one. Then, in closing, I briefly sketch the implications of those commentaries as for the need to modify what I said in the opening disquisition, and comment on what might be done as a follow-up to this discussion.

Steurer

In the article of mine that the commentators were to address (VanderWeele 2015; Miettinen 2015), I said that “It would be, I suggest, most instructive to have these questions [of highest-priority focus] initially addressed by some Swiss expert(s).” And the Editor indeed solicited, successfully, two commentaries from there, the country whose recent public-policy concerns about ‘screening’ for breast cancer were the point of departure in that article of mine. Very appropriately, one of these Swiss commentaries was by Johann Steurer (Steurer 2015), given that he is the founder and current director of the Center for Patient-Oriented Research and Knowledge Transfer, located in the University of Zurich.

Steurer notes, quite significantly, that “The findings of the Swiss Medical Board published in December 2013 [ref.], noted in [the opening article of this discussion], have so far had no impact on the screening policies in the various cantons of the country” (italics added). And equally stagnant has been the centrally policy-relevant research: “[Even] in the most recent Cochrane review on the topic [ref.], the thinking and evidence put forward by Miettinen et alii, a decade earlier [on the ‘centrally relevant measure’ of the mortality-related benefit from screening for breast cancer] were left without any comment, as though obviously irrelevant or worthless.” Steurer himself had learned these things, from that source, years ago.

Setting out to learn from the lead article (Miettinen 2015) that was to be the focus of this discussion, Steurer remarks, for orientation, that “Clearly, the topic is quite complex and understanding the issues quite demanding” (italics added). So: “To facilitate the readers’ understanding of [that] article, I put some questions to [its author].” In his commentary he reports the questions and answers. He then notes that “even upon these clarifications, comprehension of that article of Professor Miettinen requires mental exertion, except perhaps by those to whom it is directly addressed, namely epidemiological researchers.” But he himself grasps the core of my teaching in it: “Lack of appreciation of the (profound) difference between the epidemiological and clinical concepts of mortality from a cancer is, so I have come to understand, at the root of the confusion” (italics added). And he also understands that: “While clinical in nature, the relevant measure is not subject to quantification by any practicable clinical trials [but] requires epidemiological research, addressing the clinical-level etiology of community-level deaths from the cancer –” (italics added).

Steurer evidently understands, and agrees with, everything I intended to convey in my article on the topic (Miettinen 2015), and his commentary amounts to a much-needed introduction to, and explication of, that admittedly “dense” opener of this discussion, the need for which I only now realize. I strongly advise readers of this rejoinder to the commentaries to read what Steurer says in his.

Huwiler et alii

The other Swiss commentary comes from an equally illustrious source: it is by Karin Huwiler, Beat Thürlimann, Thomas Cerny, and Marcel Zwahlen (Huwiler et alii 2015), representing, respectively, the Swiss Cancer League, the Breast Center of a cantonal hospital, the union (Oncosuisse) of five Swiss anticancer organizations (incl. the Swiss Cancer League), and the Institute of Social and Preventive Medicine – of community medicine, that is – at the University of Bern. Their input to the discussion here has “two main parts.” One of these addresses “some of the methodological points raised by Professor Miettinen,” while the other is directed to “more specific aspects of the Swiss Medical Board statement on mammography screening for early detection of breast cancer.” I’ll focus on the former of these two parts.

The first point of these commentators, specific to my article, is that I consider “well-known difficulties” with the usual measure of mortality reduction – incidence-density ratio of deaths from the cancer, contrasting a ‘screened’ cohort with one receiving ‘usual care’ – deployed in randomized trials on ‘screening’ for a cancer. But actually, I did not consider any well-known difficulties in these trials. Instead, I deplored a remarkable routine in them: the treatment of that ratio as though it were constant (apart from chance variation) over the duration of the follow-up, whatever this is, and also independent of the duration of the screening, whatever this is – treating this ratio as though it were a parameter of Nature and, as such, relevant to policies about the ‘screening.’

These authors’ second remark on my article is that “Professor Miettinen proposes a new metric for determining whether to screen for a cancer or not.” But again: I did nothing of the sort. Nowhere in my article did I posit the existence of a metric, whether old or new, that by its magnitude would provide for “determining whether to screen for a cancer or not.” In the Abstract already, I alluded to the necessity to distinguish among three concepts of mortality from the cancer at issue: the epidemiological one, the clinical one, and the one used in the now-orthodox trials on ‘screening’ for a cancer. And in the very first section of the article proper I explained that, while the aim of ‘screening’-promoting public policies is reduction in mortality from the cancer in the epidemiological – community-medicine – meaning of ‘mortality,’ the reduction actually derives from reduction in the clinical counterpart of this: the cancer’s case-fatality rate. To an epidemiologist there should be nothing new in this. But new may be the idea that the only parameter of Nature relevant to estimation of the policy’s epidemiological benefit is its consequent reduction in the cancer’s case-fatality rate, and that nothing meaningful is addressed by the mortality results of the now-orthodox trials on ‘screening’ for a cancer (cf. above).

Then this: “it is surprising that Professor Miettinen argues that it should be possible to validly estimate the reduction in the cancer’s case-fatality rate resulting from screening and earlier treatment for cases that would have become clinically manifest.” To that purported arguing of mine they posit this counterpoint: “As much as it would be interesting to know whether and by how much [the case-fatality rate is reduced] it is an elusive metric which by no study design, even if ideally conducted, can be validly estimated.” Later, they add this remark: “As discussed above, an RCT [randomized controlled trial] in mammography screening evaluating case-fatality rate, taking into account overdiagnosis, as proposed by Professor Miettinen, in our opinion would be welcome but is not feasible.” But in point of fact, I addressed two trial designs that in principle would provide for quantification of the reduction in the case-fatality rate; and regarding one of these I concluded that, “as a practical matter, the proportional reduction in the cancer’s case-fatality rate by its early diagnoses and treatments is not subject to proper quantification by clinical research involving only a few rounds of the ‘screening’” (italics original). And I said the same about the other design-in-principle.

These commentators ignore, completely, the non-experimental, etiologic-type study design I sketched as a realistic way of estimating that clinical parameter of critical importance for the epidemiological purpose at issue here. And besides, they ignore my plea for a high-priority focus, in these commentaries, “on the question whether the relevant parameter has been, as I argued, seriously misrepresented by the results of such studies as now are viewed as the only source of policy-relevant information on ‘screening’ for a cancer” (italics original).

To say what would go without saying, I did not expect so low a level of discipline in high officials’ commentary on my serious criticisms of the research that they take to be of importance to their work. And in particular, I didn’t expect their unfailing misrepresentation of what I said.

Petitti

Eminence in matters surrounding public policies on ‘screening’ for breast cancer characterizes also Diana Petitti as a participant in this discussion (Petitti 2015). For she was the vice-chairperson of the U. S. Preventive Services Task Force that produced the most recent, 2009 update of the ‘guidelines’ for these practices in the U. S., and has subsequently been an eminent apologist for these.

In the bulk of her commentary she “highlights” various points in my article (Miettinen 2015) on which she agrees with me. “But,” she says, “I disagree with the main point he tries to make” (italics mine).

Now, the main point I was trying to make was, already, in the title of my article, which stated that misguided research is misleading public policies on ‘screening’ for breast cancer. And in line with this, I ended with this wish in respect to this public discussion on the matter:

The initial focus in this, I suggest, would best be the question of what is the parameter of Nature whose estimation really is relevant to decisions about public policies promoting a cancer’s early diagnosis and treatment; and in particular, is it, as I argue, the reduction in the cancer’s case-fatality rate resulting from its early clinical care replacing the late counterpart of this? And a related focus in this discourse needs to be, I suggest, on the question of whether the relevant parameter has been, as I argue, seriously misrepresented by the results of such studies as now are viewed as the only source of policy-relevant information on ‘screening’ for a cancer. [Cf. above; italics original.]

Petitti read this main point of mine to mean that “Dr. Miettinen criticizes policy-makers for judging the benefit of breast cancer screening programs based on analyses that assess the effect of the program on breast cancer mortality in the population of women who have been the target of the program.” In truth, though, I presented no criticism whatsoever of epidemiologists – practitioners of community medicine – in respect to their concern to control the rate (incidence density) of death from a cancer in their cared-for populations by advocacy of ‘screening’ for it, nor did I in any way criticise policy-makers who act on the ‘screening’s’ availability – in the light of expert inputs to their understanding of the magnitude of the mortality reduction that is being, or would be, achieved in the population of their concern. My criticism was focused on those who, like Petitti herself, conduct or review purportedly policy-relevant research, original or derivative, on ‘screening’ for a cancer and, on this basis, (mis)inform makers of public policies on such ‘screening’ regarding the mortality benefit from the policies in question.

Petitti continues: “He champions adoption of a ‘clinical’ perspective on screening and use of a ‘clinical’ metric to assess the benefit of screening. He suggests that screening – not programs but screening – should be judged based on [the reduction in the cancer’s case-fatality rate when screening-associated early treatments replace the treatments in the absence of screening].” But in truth, again, I was championing something quite different, and strictly from the perspective of epidemiology (community medicine) and public policy relevant to it. I was arguing that the research relevant to those community concerns had to address a clinical parameter, but in non-clinical, epidemiological terms:

So the only mortality parameter (of Nature) relevant to policy decisions about population-level programs of screening for the cancer is that clinical parameter Q [the screening-associated proportional reduction in the cancer’s case-fatality rate], which is not subject to quantification by clinical research (cf. above). But if epidemiological research – on etiology/etiogenesis of death from the cancer – addresses the causal incidence-density ratio, IDR, contrasting the index history of ideal diagnostics and treatment (above) with the reference history of no screening, these in a defined domain, then [that parameter Q is estimable from the data, while the other inputs to the estimate relevant to the policy-makers are not parameters of Nature]. [Italics original.]
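Read in minimal symbols – the formalization here is mine, a sketch not spelled out in the passage just quoted – the relation at issue is simply

$$
Q \;=\; 1 \;-\; \mathrm{IDR}, \qquad \mathrm{IDR} \;=\; \frac{\mathrm{ID}_{\text{index}}}{\mathrm{ID}_{\text{reference}}},
$$

where $\mathrm{ID}_{\text{index}}$ and $\mathrm{ID}_{\text{reference}}$ denote the incidence densities of death from the cancer under, respectively, the index history of ideal diagnostics-and-treatment and the reference history of no ‘screening,’ both in the defined domain. On this reading, a valid estimate of that causal IDR from etiologic research translates directly into an estimate of the clinical parameter Q.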

Her commentary on my “main point” Petitti summarizes in her closing paragraph:

It is not that policy-makers are using an incorrect metric. They are using a metric that permits them to answer a question that is not Dr. Miettinen’s question.

Rather than parsing this summary of Petitti’s disagreement with what she takes to have been my main point, I rephrase it yet again: Misguided original research – still-orthodox trials on the ‘screening’ at issue – and its derivative research – reviews by the U.S. Preventive Services Task Force, inter alia – continue to mislead makers of public policies on the extent to which a program of ‘screening’ – commonly the reimbursement of the cost of it – serves to reduce mortality from the cancer in the population of the policy-makers’ concern. This reduction in mortality is not quantified by the mortality reductions in those now-orthodox trials, contrary to the misconception propagated by adherents of this doctrine.

All in all, thus, Petitti’s purported disagreement with me, like that of Huwiler et alii (above), actually is a quarrel with what she falsely imputes to me.

Birtwhistle et alii

I was keen for the Editor to ask for a commentary from the Canadian counterpart of the U.S. Preventive Services Task Force (from which he had secured Petitti’s participation). This is the Canadian Task Force on Preventive Health Care. As for screening for breast cancer, its most recent “recommendations for clinicians and policy-makers” were published in 2011. The next update of these is expected to appear in 2016.

The Editor approached two members of the CTFPHC: Richard Birtwhistle of Queen’s University and Michel Joffres of Simon Fraser University. They agreed to produce a commentary, in collaboration with Marcello Tonelli of the University of Calgary. But in the end, in lieu of an actual commentary they e-mailed a pithy comment to the Editor (Birtwhistle et alii 2015).

We have carefully reviewed Dr. Miettinen’s paper and have concluded that there is nothing we can add in a commentary. Thank you for the opportunity to read this work and will [sic] look forward to how other Task Forces’ [sic] respond. [Italics added.]

Franco

Expertise somewhat different from that in the two Task Forces on preventive healthcare (which these groups take ‘screening’ to represent) is that of Eduardo Franco (Franco 2015). For, in his capacity as the Chairman of the Department of Oncology and also Professor in the Department of Epidemiology, Biostatistics and Occupational Health at McGill University, he represents expertise in both the clinical and epidemiological aspects of the practice of ‘screening’ for a cancer and, also, in the theory of directly practice-relevant research, oncologic and other.

The title of Franco’s commentary is not, alas, promising with respect to focus on my article and, especially, on the points in it that I suggested for highest-priority consideration in the commentaries (see Petitti above). And indeed, it is a bit of a challenge to find in his commentary anything that directly bears on whether he agrees with the contents of my article (Miettinen 2015), its main point (see Petitti above) in particular.

I here, for a final time, rephrase my main point: The mortality reduction that policy-makers on ‘screening’ for a cancer are concerned to have an estimate of (from relevant experts, epidemiological more than clinical) is not manifest, even in principle, in such experimental studies (‘screening trials’) as now are commonly held as the best – and solely sufficient – basis for that estimation. These trials, thus used, mislead public policies on ‘screening’ for a cancer, insofar as any estimate is synthesized from them. (Recall the meaninglessness of the synthetic results I cited in my article.) My sense is that Franco disagrees with this.

Franco begins with the commonly-held notion that most compelling evidence – that of greatest ‘strength’ – is provided by randomized trials (even when not otherwise specified). But, disappointingly, he does not make the point that this ranking, if justifiable at all, has to do with research on the effects of interventions. And he seems to share the common view that ‘screening’ for a cancer is an intervention. I much prefer to think of the entire algorithm of the use of diagnostics in the pursuit of early diagnosis (rule-in) rather than of the initial test in this; but either way, the use of diagnostics has no effect on the risk of dying from the cancer, and it thus is not an intervention (intended to be preventive of that outcome).

Franco evidently agrees that a ‘screening’-related randomized trial would ideally be one in the domain of cases diagnosed under the ‘screening,’ contrasting undelayed (early) treatment with clinical-stage (late) treatment, were such a trial not impracticable on account of overdiagnoses, inter alia. This I take to represent agreement that the core question in the scientific knowledge-base of decisions and policies about ‘screening’ for a cancer is the gain in the cancer’s curability rate or, correspondingly, the reduction in its case-fatality rate.

Given the impracticability of the theoretically ideal trial, Franco seems to think that in the transition from the “perfect”-but-impracticable to the practicable-and-still-“good” (cf. title of his commentary) one need not compromise the use of a randomized trial but, instead, change the nature of the domain and the contrast: in the domain of the ‘worried well,’ one contrasts not early treatment with late treatment (of cases diagnosed under the ‘screening’) but ‘screening’ – the initial diagnostic testing – with no ‘screening.’ This presumably is the meaning of Franco’s saying, in the context, that “Trialists have merely executed the best research money could buy based on era-specific standards of study design and ethical boundaries.”

I have sought to argue that this commonly-adopted practicable succedaneum to the scientifically “perfect” but impracticable is not “good” at all; that it is very bad: As I explain in my article (Miettinen 2015), the proportional reduction in mortality that policy-makers want to be estimated (by experts) is very different from the proportional reduction in the cancer’s case-fatality rate resulting from diagnoses-cum-treatments under ‘screening’; and the proportional reductions in mortality from the cancer estimated in those now-orthodox trials – with their arbitrary durations of the ‘screening’ and of the follow-up too – have no merit at all as estimates of what the policy-makers are concerned to know, nor as estimates of the reduction in the cancer’s case-fatality rate. These trials represent, I say, RCTism run amok.

Given Franco’s high regard for randomized trials as a means of learning about the effectiveness of interventions (per his competence as a methodologist in research to advance the knowledge-base of medicine) and for the concept of a cancer’s case-fatality rate (per his competence as an oncologist), it is astonishing to me that he seems not to have the concept of how the ‘screening’-associated proportional reduction in the cancer’s case-fatality rate can be estimated from the ratio of the incidence-density of deaths from the cancer in a suitably defined segment of follow-up time in ‘screening’ trials of sufficiently long-term ‘screening’ and follow-up. This is explained in his reference #8, which Steurer (Steurer 2015) evidently had actually studied and learned from; and I allude to it in my article (Miettinen 2015). That trial design is not marred by overdiagnoses nor by “ethical boundaries,” but its implementation with suitably close adherence to the protocol is rather impractical on account of the requisite durations of the ‘screening’ and follow-up (for deaths from the cancer). (In my article I allude also to an alternative to this design, involving only short-term screening in conjunction with long-term follow-up, the theory of which is more demanding to understand.)
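To spell out that idea in minimal notation – again a sketch only, with the symbols mine and the particulars of the ‘suitably defined segment’ of follow-up left aside – the estimation amounts to

$$
\hat{Q} \;=\; 1 \;-\; \widehat{\mathrm{IDR}}_{w}, \qquad
\widehat{\mathrm{IDR}}_{w} \;=\; \frac{\mathrm{ID}_{\text{screened},\,w}}{\mathrm{ID}_{\text{control},\,w}},
$$

where $w$ denotes a segment of the trial’s follow-up time – attainable only with sufficiently long-term ‘screening’ and follow-up – such that the deaths from the cancer occurring in it derive from cases whose early-versus-late diagnosis was governed by the assigned ‘screening’ (or the absence of it).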

My “totem pole of medical and public health decision-making” is quite different from Franco’s as for “study design and lines of evidence” in the context at issue here. I sought to argue that, in research on ‘screening’ for a cancer, highest on that “totem pole” should be non-experimental studies on the ‘screening’-related etiology/etiogenesis of death from the cancer in question, suitably designed as to their source populations, inter alia. Somewhat disappointingly to me, Franco says nothing about this difference in the ranking of evidence between him and me. But: feedback on this matter was not a high-priority concern of mine.

Franco says that “Miettinen has few (but sharp) criticisms for those involved with systematic reviews, meta-analyses, and policy decisions” (italics added), in the context of ‘screening’ for a cancer. Yes, my criticisms are few, but perhaps more fundamental than sharp. And they are directed to those “trialists” in their original research and not only to the reviewers of this. And yes, they are directed, also, to policy decisions, but specifically in the sense of their being misinformed – on account of their advisers’ failure to understand the profound disconnect, conceptual and quantitative, between the mortality reduction of policy-makers’ concern and that quantified in the trials and in the syntheses (“meta-analyses”) of their results.

Franco advances a sharp (sic) criticism of what he calls my utopian logic, saying that it “has no place in policy decisions.” His logic for this criticism immediately follows:

[Policy decisions on screening] take into account the balance of risks to benefits from screening, costs, utilities, the political risk of inaction, the societal tolerance to risk, healthcare providers’ preferences, patient choices, and other imponderables of subjective variables. The Swiss report that was the focus of Miettinen’s commentary [ref. to that report] cannot be faulted for lack of clarity and sincerity of purpose. … [Its authors’] choice with breast cancer screening is a resolute interpretation of the evidence base reconciled with their perception that women should be better informed about the risk-benefit balance [ref. to that same Swiss report].

To this I say, simply, that in my article, as in the Swiss report at issue and the trials it addressed, the policy-relevant focus was, solely, on the benefit from ‘screening’ (for breast cancer) and this, specifically, in the epidemiological – community-medicine rather than clinical – framework of reduction in mortality from the cancer in the population of the policy-makers’ concern (cf. Petitti, above).

As is evident, Franco and I remain quite far apart in our respective ways of thinking about the epidemiologically very important topic of ‘screening’-based controlling of the cared-for population’s levels of mortality from particular forms of cancer – and, especially, about the scientific input into policy-relevant quantification of a ‘screening’ program’s effectiveness in reducing this mortality. But this is not irremediable: In yet another here-relevant capacity, Franco is the Editor-in-Chief of the journal Preventive Medicine, and he has agreed to organize, in that journal, a follow-up to this discussion of the fundamentals of policy-relevant research on screening for a cancer.

Weiss

Among the commentators on my keynote article (Miettinen 2015) – on research bearing on the mortality-benefit inputs to public-policy decisions on ‘screening’ for a cancer – Noel Weiss (Weiss 2015), of the University of Washington (in Seattle, WA), is special in an important sense: He has been active in thinking, and writing, about the theory of this research, non-experimental as well as experimental. After all, this is what my article was about and, consequently, what the commentaries were expected to be focused on.

As the title of his commentary indicates, Weiss focuses on experimental research – randomized trials – designed to serve the end at issue (above). This is consistent with what I suggested as the highest-priority concerns in these commentaries. And about the purpose of the research at issue he says that “it is important for studies that seek to assess the efficacy of cancer screening to document as accurately as possible not just the presence of a benefit to be weighed against the negatives, but the size of that benefit” (italics added). He points out that those randomized trials “generally are needed when the expected benefit of screening is relatively small or moderate in magnitude.”

The need to accurately address the size of the benefit Weiss justifies in the usual, rational way: “There are always some downsides to cancer screening,” mentioning costs inter alia. For accurate quantification of the benefit, he outlines three ways of “Enhancing the validity and generalizability of randomized trials of cancer screening” (title). And the need to here address these enhancements he formulates thus: “as Miettinen reminds us, there are features of many randomized trials of screening that can lead to a falsely low estimate of the mortality reduction associated with the introduction of a long-term screening program.”

So, Weiss implies that what I took to be the problem with the now-orthodox trials in the “evaluation of the efficacy of the screening intervention” was their biases – which he specifies as arising from non-adherence to the assigned “intervention” (experimental ‘screening’ or ‘usual care’), from changes over time in the diagnostics and treatments, and from the ‘screening’ being of too short duration. But: the criticism of those trials that I presented actually was very different from, and much more serious than, that; I restate it above in the contexts of the commentaries by others, most notably those of Petitti and Franco, and need not repeat it here.

The only other point that Weiss makes about my article is this:

Miettinen bemoans the relative dearth of involvement of epidemiological researchers in randomized trials of screening for cancer, implying (to me) that he believes that the planning [sic] of the design and analysis of future trials would do well to consider issues such as [the three I address]. I would add to this the recommendation that when seeking to estimate the benefits of recommending or providing screening to members of a given population, persons INTERPRETING the results of randomized trials of cancer screening (past and future) also would do well to consider these issues.

Now, what I actually lamented about epidemiological researchers was this:

We epidemiological researchers and, especially, theoreticians thereof have mostly been missing in action on the ‘screening’ front of the ‘war on cancer,’ inexplicably and unjustifiably. We, therefore, are largely responsible for misguided research continually misinforming public policies on ‘screening’ for cancers; …

And as for interpreting the results of the now-orthodox types of trial, my concerns are, as I noted above, very different from, and much more serious than, the biases Weiss addresses.

I presume Weiss would agree that the persons interpreting the trial results are, first, (some of) the trialists themselves, and then, commonly, (some of) those who ‘systematically’ – meaning under a protocol, à la original studies – review the original research; but it is somewhat unclear whether Weiss envisions the need for a third level of interpretation: that by members of the scientific community at large that is concerned with the truths of the matters at issue in the research, original and derivative. This third level of interpretation was at issue in my article, following the Swiss derivative study based on a set of original studies – studies its authors criticised only in terms of saying that they were “predominantly outdated” in their clinical aspects; and in this discussion about that article of mine, notable members of the relevant scientific community were invited to express and justify their agreement/disagreement principally as for my main interpretive point (cf. Petitti above), dramatically different from those of the Swiss “expert panel” that appraised the evidence. I said that “experts should have a consensus about the meaninglessness of the evidence that the panel (only superficially) reviewed and (only minimally) appraised” (italics original). Weiss, very different from me, seems to be comfortable with this type of research in principle, concerned only to optimize the “validity and generalizability” of its results.

Robins

The last commentary (Robins 2015) is from Jamie Robins, a colleague of the Editor at the Harvard School of Public Health and an eminent theoretician of epidemiological research.

Much to my surprise and disappointment, even Robins, in his first two paragraphs already, seriously misrepresents what I was saying (Miettinen 2015), giving his particular personal version of the misattributions.

While I wrote about epidemiologists’ expert inputs, scientific and particularistic, into policy-makers’ decisions about ‘screening’ for a cancer, with special reference to the scientific input into estimation of the mortality benefit from the policy at issue, Robins orients his readers thus:

From my perspective, one’s goal should be to discover a near optimal screening strategy for the prevention of breast cancer mortality to be pursued from birth until the time of the subject’s death.

And from this (utopian) perspective he proceeds to elaboration – extensive and quite recondite – of what he takes to be involved in the pursuit of this discovery.

Only after all of this does Robins turn to what was expected of him (VanderWeele 2015): critical commentary on my article (Miettinen 2015). He says, for a start, this:

Both of us agree that, in principle, a near optimal policy could be estimated from ideal (but difficult to implement) randomized trials of alternative strategies based on closed cohorts.

But: I did not say, nor do I believe, anything like this. Instead, I hold, firmly, that scientific research can only produce inputs to decisions, and that these inputs are never determinative of rational decisions. While this is the case quite generally, it is so in particular when the research in question, as here, addresses only a select one of the decision’s consequences and this, even, only in conjunction with ad-hoc, particularistic inputs.

Another one of Robins’ comments is this:

To estimate the community effect of a given near optimal strategy, Miettinen needs to assume a steady state population to obtain simple formulae that relate the trial parameters to the distribution of breast cancer mortality in the community. As discussed earlier, I believe that through microsimulation the unrealistic assumption of steady state can be eliminated.

But: I did not write about the estimation of the effect of any defined “strategy” of early care for the cancer in question, specifically on mortality from that cancer in a particular community; I wrote about the mortality effect of a public policy to promote such care in the community in question, with the particulars of the care and its utilization not stipulated by the policy. And I did not say, nor imply, that the policy advisor needs to “assume” (presume) a steady state of the population (dynamic) in question. What I said instead – and continue to hold – is that the magnitude (proportional) of the reduction in that mortality (incidence-density of death from the cancer) attributable to early diagnosis-and-treatment of the cancer at any given time (calendar) is determined by the pattern of histories for that care in that population at that time.

That retrospective explanation of a population’s level of whatever mortality (or morbidity) at whatever point in (calendar) time was one of the reasons I proposed primacy for etiologic/etiogenetic/etiognostic studies – non-experimental, with dynamic/open study populations – for addressing the relevant parameter(s) of Nature. To this Robins presents a counterpoint:

As I have argued over many years, in the face of time-dependent confounders affected by prior treatment, estimation of the incidence density ratio using standard epidemiologic approaches of the sort alluded to by professor Miettinen fail to have an interpretation as a causal incidence density ratio [ref.].

I disagree with this wholesale dismissal of “standard epidemiologic approaches” in etiologic/etiogenetic research but take discussion on the fundamentals of epidemiological research (on the etiogenesis of phenomena of health) to be out of place at this stage of this discussion.

A productive discussion on the article at issue here (Miettinen 2015) is characterized by a sense of priorities in what to focus on, and for this I suggested “the question of what is the parameter of Nature whose estimation really is relevant to decisions about public policies promoting a cancer’s early diagnosis and treatment [and that of] whether the relevant parameter has been, as I argue, seriously misrepresented by the results of such studies as now are viewed as the only source of policy-relevant information on ‘screening’ for a cancer.” Even Robins, to my chagrin, failed to focus on, and answer, these questions.

The implications of the commentaries

The reader concerned to understand the research needed to inform public policies on screening for a cancer would do well, I suggest, to first (re)read the commentary by Johann Steurer (Steurer 2015) and then the article of mine (Miettinen 2015) that all of the commentators were to provide feedback on. The conclusion from this, mine and presumably the reader’s, is that the lead article is dense and, thereby, not sufficiently instructive to non-experts on all the theoretical issues the understanding of which is needed in proper counseling of policy-makers on the magnitude of the benefit from a program of ‘screening’ in the population of their concern.

Reading the rest of the commentaries (Huwiler et alii 2015; Petitti 2015; Franco 2015; Weiss 2015; Robins 2015) leads to the conclusion that the lead article is not sufficiently instructive even to experts on the issues. For, in none of those commentaries was there evidence of the commentators’ having apprehended the message intended to be clear from the title of the lead article already, as there was no expression of agreement or disagreement with this.

So, one implication of the commentaries is that the lead article, directed to epidemiologists and epidemiological researchers (but not to makers of public policies) and concerned with ‘screening’ for a cancer, requires rewriting to more effectively serve advancement of the understandings at issue in it.

And another, related implication of most of the commentaries is that they are prone to be unfocused and also otherwise undisciplined and, thereby, unproductive. I learned nothing that called for substantive revision of my article (Miettinen 2015).

All in all, thus, the principal implication of this discussion on its very important, challenging topic I take to be the need for a follow-up discussion, though perhaps in a different format. Most productive might be a process organized around a sequence of (hierarchical) propositions put forward against the backdrop of an updated version of my lead article here. In respect to any given one of these propositions, the members of an expert panel would first give their initial comments on it independently of the others. These comments would then be shared among all members of the panel and discussed among them, with a chairperson maintaining order and a proper sense of purpose – consensus-seeking – in the process. Such a ‘consensus conference’ could be implemented in various ways, communication by e-mail one of the possibilities.

Failure to vigorously seek experts’ consensus on these matters, of great importance to public health as they are, is not an ethically-justifiable option. I therefore look for further leadership in this, from Editors of epidemiology-related scientific journals in particular.

References

Birtwhistle, R., Joffres, M. and Tonelli, M. (2015). Written communication.

Franco, E. L. (2015). Perfect is the enemy of good: Going to the war on cancer with less evidence than we could have. Epidemiologic Methods, 4(1).

Huwiler, K., Thürlimann, B., Cerny, T. and Zwahlen, M. (2015). Comment on: ‘Screening’ for breast cancer: Misguided research misinforming public policies by O. S. Miettinen. Epidemiologic Methods, 4(1).

Miettinen, O. S. (2015). ‘Screening’ for breast cancer: Misguided research misleading public policies. Epidemiologic Methods, 4(1).

Petitti, D. B. (2015). Comment on misguided research misleading public policies. Epidemiologic Methods, 4(1).

Robins, J. M. (2015). Discussion of a paper by professor Miettinen. Epidemiologic Methods, 4(1).

Steurer, J. (2015). Comments on “‘Screening’ for a cancer: Misguided research misleading public policies” by O.S. Miettinen. Epidemiologic Methods, 4(1).

VanderWeele, T. J. (2015). Editorial: Research to inform public policies on screening for a cancer: A critical disquisition followed by invited commentaries. Epidemiologic Methods, 4(1).

Weiss, N. (2015). Enhancing the validity and generalizability of randomized trials of cancer screening. Epidemiologic Methods, 4(1).
