Sunday, November 7, 2010

Stopping a trial early in oncology: for patients or for industry?

F. Trotta1, G. Apolone2, S. Garattini2 & G. Tafuri1,3*
1Italian Medicines Agency (AIFA), Rome; 2Mario Negri Institute for Pharmacological Research, Milan, Italy; 3Utrecht University, Utrecht Institute for Pharmaceutical
Sciences, Utrecht, The Netherlands
Received 18 December 2007; revised 25 January 2008; accepted 28 January 2008
Background: The aim of this study is to assess the use of interim analyses in randomised controlled trials (RCTs)
testing new anticancer drugs, focussing on oncological clinical trials stopped early for benefit.
Materials and methods: All clinical trials of anticancer drugs that were stopped early for benefit, contained an interim analysis, and were published in the last 11 years were assessed.
Results: Twenty-five RCTs were analysed. Efficacy evaluation was planned in the protocol through time-related primary end points, with overall survival as the primary end point in >40% of trials. In 95% of studies, efficacy at the interim analysis was evaluated using the same end point planned for the final analysis. As a consequence of stopping early after the interim analysis, 3300 patients/events were spared across all studies. More than 78% of the RCTs published in the last 3 years were used for registration purposes.
Conclusion: Although criticism of the poor quality of oncological trials seems out of place here, early termination unfortunately raises new concerns. The relation between sparing patients and saving time and trial costs suggests a market-driven intent. We believe that only untruncated trials can provide the full level of evidence needed to translate results into clinical practice without further confirmatory trials.

Safety Results of Randomized Controlled Trials May Be Inconsistently Reported

Laurie Barclay, MD


October 26, 2009 — Safety results of randomized controlled trials (RCTs) may be inconsistently reported, according to the results of a review in the October 26 issue of the Archives of Internal Medicine.

"Reports of clinical trials usually emphasize efficacy results, especially when results are statistically significant," write Isabelle Pitrou, MD, MSc, from Université Denis Diderot, INSERM, in Paris, France, and colleagues. "Poor safety reporting can lead to misinterpretation and inadequate conclusions about the interventions assessed. Our aim was to describe the reporting of harm-related results from [RCTs]."

The reviewers searched the MEDLINE database for reports of RCTs published from January 1, 2006, through January 1, 2007, in 6 widely read and respected general medical journals. A standardized form used for data extraction allowed evaluation of how safety results were presented in the text and tables of published reports.

Among the 133 reports identified, 88.7% mentioned adverse events. However, 27.1% of reports gave no information concerning severe adverse events, and 47.4% of reports gave no information concerning withdrawal of patients because of an adverse event.

The reviewers noted restrictions in the reporting of harm-related data in 43 articles (32.3%): 17 described only the most common adverse events, 16 only severe adverse events, and 5 only statistically significant events, while 5 had more than 1 restriction. Nearly two-thirds of articles (65.6%) clearly reported the population considered for safety analysis.
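The percentages above can be sanity-checked against the stated total of 133 identified reports. A minimal sketch, assuming every percentage uses the full 133-report denominator (the published tables may use different denominators for some items); the category labels are paraphrases of the article's wording:

```python
# Back-of-the-envelope check: convert the review's reported percentages into
# whole-report counts, assuming a common denominator of 133 reports, and
# confirm the one count the article states explicitly (43 restricted articles).
TOTAL_REPORTS = 133  # RCT reports identified in the MEDLINE search

percentages = {
    "mentioned adverse events": 88.7,
    "no information on severe adverse events": 27.1,
    "no information on withdrawals due to an adverse event": 47.4,
    "restrictions in reporting harm-related data": 32.3,
    "clearly reported the safety-analysis population": 65.6,
}

# Round each implied count to the nearest whole report.
counts = {label: round(pct / 100 * TOTAL_REPORTS)
          for label, pct in percentages.items()}

for label, n in counts.items():
    print(f"{n:>4} / {TOTAL_REPORTS} reports {label}")
```

The stated count of 43 restricted articles is recovered exactly (43/133 = 32.3%), which supports the shared-denominator assumption for that item.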

"Our review reveals important heterogeneity and variability in the reporting of harm-related results in publications of RCTs," the study authors write."Despite the CONSORT statement extension for harm-related data, efforts should still be made to describe safety results with accuracy in reports of RCTs and to standardize practices for reporting."

Limitations of this review include exclusion of specialized medical journals or those with lower impact factors, exclusion of specific study designs, and extraction of all the data by a single reviewer.

"Perhaps conflicts of interest and marketing rather than science have shaped even the often accepted standard that randomized trials study primarily effectiveness, whereas information on harms from medical interventions can wait for case reports and nonrandomized studies," John P. A. Ioannidis, MD, from the University of Ioannina School of Medicine in Greece, writes in an accompanying editorial. "Nonrandomized data are very helpful, but they have limitations, and many harms will remain long undetected if we just wait for spontaneous reporting and other nonrandomized research to reveal them. In an environment where effectiveness benefits are small and shrinking, the randomized trials agenda may need to reprogram its whole mission, including its reporting, toward better understanding of harms."

Dr. Pitrou was supported by a grant from the Ministry of Higher Education and Research, France. The study authors and Dr. Ioannidis have disclosed no relevant financial relationships.

Arch Intern Med. 2009;169:1737–1739, 1756–1761.