Anonymous ID: 434ccb Aug. 30, 2020, 10:17 a.m. No.10473200   >>3303

Why are politicians and media pundits claiming to be smarter than physicians? What gives them the right to control medicine?

Medicine’s Fundamentalists

The randomized control trial controversy: Why one size doesn’t fit all and why we need observational studies, case histories, and even anecdotes if we are to have personalized medicine

 

https://www.tabletmag.com/sections/science/articles/randomized-control-tests-doidge

 

If the study was not randomized, we would suggest that you stop reading it and go on to the next article.

—Quote from Evidence-Based Medicine: How to Practice and Teach EBM

 

Why is it we increasingly hear that we can only know a new treatment is useful if we have a large randomized control trial, or “RCT,” with positive results? Why is it so commonly said that individual case histories are “mere anecdotes” and count for nothing, even if a patient who has had a chronic disease suddenly gets better with a new treatment after all others have failed for years—an assertion that seems, to many people, to run counter to common sense?

 

Indeed, some version of the statement “only randomized control trials are useful” has become boilerplate during the COVID-19 crisis. It is uttered as though it were self-evidently the mainstream medical position. When other kinds of studies come out, we are told they are “flawed,” or “fatally flawed,” if they are not RCTs (especially if the commentator doesn’t like the result; if they like the result, not so often). The implication is that the RCT is the sole reliable methodological machine that can uncover truths in medicine, or expose untruths. But if this is so self-evident, why, then, do major medical journals continue to publish other study designs, and often praise them as good studies, and why do medical schools teach other methods?

 

They do because, as extraordinary an invention as the RCT is, RCTs are not superior in all situations, and are inferior in many. The assertion that “only the RCTs matter” is not the mainstream position in practice, and if it ever was, it is fading fast, because, increasingly, the limits of RCTs are being more clearly understood. Here is Thomas R. Frieden, M.D., former head of the CDC, writing in the New England Journal of Medicine, in 2017, in an article on the kind of thinking about evidence that normally goes into public health policy now:

 

Although randomized, controlled trials (RCTs) have long been presumed to be the ideal source for data on the effects of treatment, other methods of obtaining evidence for decisive action are receiving increased interest, prompting new approaches to leverage the strengths and overcome the limitations of different data sources. In this article, I describe the use of RCTs and alternative (and sometimes superior) data sources from the vantage point of public health, illustrate key limitations of RCTs, and suggest ways to improve the use of multiple data sources for health decision making. … Despite their strengths, RCTs have substantial limitations.

 

That, in fact, is the “mainstream” position now, and it is a case where the mainstream position makes very good sense. A former head of the CDC is about as “mainstream” as it gets.

 

The idea that “only RCTs can decide” is still the defining attitude, though, of what I shall describe as the RCT fundamentalist. By fundamentalist I here mean someone evincing an unwavering attachment to a set of beliefs and a kind of literal-mindedness that lacks nuance—and that, in this case, sees the RCT as the sole source of objective truth in medicine (as fundamentalists often see their own core belief). Like many a fundamentalist, the RCT fundamentalist often poses as the purveyor of the authoritative position, when in fact the position may not be authoritative at all. As well, the core belief is repeated, like a catechism, at times ad nauseam, and contrasting beliefs are treated like heresies. What the RCT fundamentalist is peddling is not a scientific attitude, but rather the insistence that a tool designed for one particular kind of problem become the only tool we use. In this case, RCT is best understood as standing not for Randomized Control Trials, but rather for “Rigidly Constrained Thinking” (a phrase coined by the statistician David Streiner in the 1990s).

Anonymous ID: 434ccb Aug. 30, 2020, 10:33 a.m. No.10473303

>>10473200

Evidence for Health Decision Making — Beyond Randomized, Controlled Trials

Thomas R. Frieden, M.D., M.P.H. — August 3, 2017 N Engl J Med

 

https://www.nejm.org/doi/full/10.1056/NEJMra1614394

 

A core principle of good public health practice is to base all policy decisions on the highest-quality scientific data, openly and objectively derived.1 Determining whether data meet these conditions is difficult; uncertainty can lead to inaction by clinicians and public health decision makers. Although randomized, controlled trials (RCTs) have long been presumed to be the ideal source for data on the effects of treatment, other methods of obtaining evidence for decisive action are receiving increased interest, prompting new approaches to leverage the strengths and overcome the limitations of different data sources.2-8 In this article, I describe the use of RCTs and alternative (and sometimes superior) data sources from the vantage point of public health, illustrate key limitations of RCTs, and suggest ways to improve the use of multiple data sources for health decision making.

 

In large, well-designed trials, randomization evenly distributes known and unknown factors among control and intervention groups, reducing the potential for confounding. Despite their strengths, RCTs have substantial limitations. Although they can have strong internal validity, RCTs sometimes lack external validity; generalizations of findings outside the study population may be invalid.2,4,6 RCTs usually do not have sufficient study periods or population sizes to assess duration of treatment effect (e.g., waning immunity of vaccines) or to identify rare but serious adverse effects of treatment, which often become evident during postmarketing surveillance and long-term follow-up but could not be practically assessed in an RCT. The increasingly high costs and time constraints of RCTs can also lead to reliance on surrogate markers that may not correlate well with the outcome of interest. Selection of high-risk groups increases the likelihood of having adequate numbers of end points, but these groups may not be relevant to the broader target populations. These limitations and the fact that RCTs often take years to plan, implement, and analyze reduce the ability of RCTs to keep pace with clinical innovations; new products and standards of care are often developed before earlier models complete evaluation. These limitations also affect the use of RCTs for urgent health issues, such as infectious disease outbreaks, for which public health decisions must be made quickly on the basis of limited and often imperfect available data. RCTs are also limited in their ability to assess the individualized effect of treatment, as can result from differences in surgical techniques, and are generally impractical for rare diseases.
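
The first sentence of that paragraph (randomization evenly distributing known and unknown factors between arms) can be made concrete with a small simulation. The sketch below is mine, not from either article; the 30% confounder prevalence, the trial sizes, and the function names are all illustrative assumptions. It shows the average imbalance in an unmeasured confounder between arms shrinking as the trial grows, which is exactly the strength (and, for small trials, the limitation) being described.

    # Minimal simulation sketch (not from either article): randomizing patients
    # to control and intervention arms tends to balance an unmeasured confounder,
    # but any single small trial can still end up imbalanced by chance.
    # All names and numbers below are illustrative assumptions.
    import random

    def confounder_gap(n_patients, rng):
        """Randomize n_patients and return the confounder-rate gap between arms."""
        # Each patient carries an unmeasured binary confounder (say, frailty),
        # present with 30% probability -- an arbitrary illustrative value.
        confounder = [rng.random() < 0.30 for _ in range(n_patients)]
        arm = [rng.random() < 0.5 for _ in range(n_patients)]  # True = intervention

        def rate(in_intervention):
            group = [c for c, a in zip(confounder, arm) if a == in_intervention]
            return sum(group) / len(group) if group else 0.0

        return abs(rate(True) - rate(False))

    rng = random.Random(0)
    for n in (40, 400, 4000):
        gaps = [confounder_gap(n, rng) for _ in range(2000)]
        print(f"n={n:5d}  mean confounder gap between arms = {sum(gaps)/len(gaps):.3f}")

Run as written, the expected gap shrinks roughly in proportion to one over the square root of the arm size, which is why the quoted passage stresses “large, well-designed trials.”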

 

Many other data sources can provide valid evidence for clinical and public health action. Observational studies, including assessments of results from the implementation of new programs and policies, remain the foremost source, but other examples include analysis of aggregate clinical or epidemiologic data. In the late 1980s, the high rate of the sudden infant death syndrome (SIDS) in New Zealand led to a case–control study comparing information on 128 infants who died from SIDS and 503 control infants.9 The results identified several risk factors for SIDS, including prone sleeping position, and led to the implementation of a program to educate parents to avoid putting their infants to sleep on their stomachs — well before back-sleeping was definitively known to reduce the incidence of SIDS. The substantial reduction in the incidence of SIDS that resulted from this program became strong evidence of efficacy; implementation of an RCT for SIDS would have presented ethical and logistic difficulties. Similarly, the evidence base for tobacco-control interventions has depended heavily on analysis of the results of policies, such as taxes, smoke-free laws, and advertising campaigns that have generated robust evidence of effectiveness — that is, practice-based evidence.
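
To make the case-control design in that paragraph concrete: the core arithmetic reduces to an odds ratio computed from a 2x2 table of exposure (here, prone sleeping) by outcome (SIDS case or control). The sketch below is my own illustration, not the New Zealand study’s actual data; only the group totals (128 cases, 503 controls) come from the quoted text, and the split by sleeping position is invented purely to show the calculation.

    # Sketch of the arithmetic behind a case-control analysis like the SIDS study
    # described above. Only the totals (128 cases, 503 controls) come from the
    # quoted passage; the split by sleeping position is invented for illustration.
    import math

    # Hypothetical 2x2 table: exposure = prone (stomach) sleeping position
    cases_exposed, cases_unexposed = 90, 38          # assumed split of 128 cases
    controls_exposed, controls_unexposed = 200, 303  # assumed split of 503 controls

    # Odds ratio: (odds of exposure among cases) / (odds of exposure among controls)
    odds_ratio = (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

    # Approximate 95% confidence interval on the log scale (Woolf's method)
    se_log_or = math.sqrt(1 / cases_exposed + 1 / cases_unexposed +
                          1 / controls_exposed + 1 / controls_unexposed)
    low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

    print(f"odds ratio = {odds_ratio:.2f}  (95% CI {low:.2f} to {high:.2f})")

An odds ratio well above 1, with a confidence interval excluding 1, is the kind of signal that, as the passage notes, justified acting on back-sleeping advice without waiting for an RCT that would have been ethically and logistically difficult to run.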

 

Current evidence-grading systems are biased toward RCTs, which may lead to inadequate consideration of non-RCT data.10