Laurie Barclay, MD
October 26, 2009 — Safety results of randomized controlled trials (RCTs) may be inconsistently reported, according to the results of a review in the October 26 issue of the Archives of Internal Medicine.
"Reports of clinical trials usually emphasize efficacy results, especially when results are statistically significant," write Isabelle Pitrou, MD, MSc, from Université Denis Diderot, INSERM, in Paris, France, and colleagues. "Poor safety reporting can lead to misinterpretation and inadequate conclusions about the interventions assessed. Our aim was to describe the reporting of harm-related results from [RCTs]."
The reviewers searched the MEDLINE database for reports of RCTs published from January 1, 2006, through January 1, 2007, in 6 widely read and respected general medical journals. A standardized form used for data extraction allowed evaluation of how safety results were presented in the text and tables of published reports.
Among the 133 reports identified, 88.7% mentioned adverse events. However, 27.1% of reports gave no information concerning severe adverse events, and 47.4% of reports gave no information concerning withdrawal of patients because of an adverse event.
The reviewers noted restrictions in the reporting of harm-related data in 43 articles (32.3%), with 17 describing the most common adverse events only, 16 describing severe adverse events only, 5 describing statistically significant events only, and 5 having more than 1 restriction. Nearly two thirds of articles (65.6%) clearly reported the population considered for safety analysis.
"Our review reveals important heterogeneity and variability in the reporting of harm-related results in publications of RCTs," the study authors write."Despite the CONSORT statement extension for harm-related data, efforts should still be made to describe safety results with accuracy in reports of RCTs and to standardize practices for reporting."
Limitations of this review include the exclusion of specialized medical journals and journals with lower impact factors, the exclusion of specific study designs, and extraction of all data by a single reviewer.
"Perhaps conflicts of interest and marketing rather than science have shaped even the often accepted standard that randomized trials study primarily effectiveness, whereas information on harms from medical interventions can wait for case reports and nonrandomized studies," John P. A. Ioannidis, MD, from the University of Ioannina School of Medicine in Greece, writes in an accompanying editorial. "Nonrandomized data are very helpful, but they have limitations, and many harms will remain long undetected if we just wait for spontaneous reporting and other nonrandomized research to reveal them. In an environment where effectiveness benefits are small and shrinking, the randomized trials agenda may need to reprogram its whole mission, including its reporting, toward better understanding of harms."
Dr. Pitrou was supported by a grant from the Ministry of Higher Education and Research, France. The study authors and Dr. Ioannidis have disclosed no relevant financial relationships.
Arch Intern Med. 2009;169:1737–1739, 1756–1761.