NEW YORK (Reuters Health) - In an analysis of over 40,000 clinical trials registered in a government database, researchers found that many of those studies -- looking at the effects of drugs, devices or behavioral interventions -- were small and of inconsistent quality.
Those are the studies doctor groups rely on when setting guidelines on the best-supported ways to prevent and treat a given disease, according to a report led by Dr. Robert Califf at the Duke Translational Medicine Institute in Durham, North Carolina.
But if the evidence comes from small groups of patients in trials with less-than-reliable methods, doctors are left without a lot to work with when developing recommendations and making decisions in everyday care.
"What's at stake for the public is, you would want your doctor to know what he or she is doing as opposed to just guessing or having an opinion," Califf told Reuters Health.
The majority of guidelines and major medical decisions, he added, aren't supported by high-quality published evidence, which requires large, rigorous trials.
Separately from the new study, Califf has been involved in reviewing the work of cancer researcher Dr. Anil Potti, who resigned from Duke in late 2010 amid an investigation into his work and has had to retract or correct 16 papers.
In 1997, the United States Congress mandated that researchers register trials on a website, ClinicalTrials.gov, so that information about ongoing studies would be available in a single database.
The site requires information on each trial's lead researcher and collaborators, funding source and purpose.
Study organizers also report the number of people they have enrolled or expect to enroll, along with whether participants are randomly assigned to different interventions and if they and the doctors treating them are "blinded" to what type of prevention or treatment they're receiving.
Califf and his colleagues looked specifically at the records of 40,970 trials on medications, medical devices or lifestyle interventions for heart disease, cancer and mental health conditions registered on ClinicalTrials.gov between 2007 and 2010.
The majority of those trials -- 62 percent -- reported recruiting 100 or fewer patients. Two-thirds were conducted at a single research site.
Depending on the type of disease they covered, between 36 and 80 percent of studies randomly split patients between different interventions -- such as an active drug and a placebo.
So-called randomized controlled trials are considered the "gold standard" for determining how effective a drug or device is because random assignment makes the results less likely to be skewed by underlying differences between the groups of participants on each prevention or treatment regimen.
About 7 percent of entries were missing information about the study's primary purpose.
Of the trials that cited a funding source, 44 percent were supported by a drug or device company, Califf's team reported Tuesday in the Journal of the American Medical Association. Nine percent were funded by the National Institutes of Health, and almost half had another funding source, typically individual academic medical centers.
That suggests an opportunity for researchers now working at single sites with their own funding to join forces on larger studies that produce more clinically useful evidence, according to Califf.
"What I am concerned about is that we organize (trials) better," he said.
MISSING DATA A CONCERN
Kay Dickersin from the Johns Hopkins Bloomberg School of Public Health in Baltimore said the web registry is an important way to keep track of the many studies that are started but never end up getting published, sometimes because the results weren't what researchers were expecting.
She cited one case where there was plenty of unpublished data showing anti-arrhythmic drugs were potentially harmful for some patients -- but because doctors didn't have easy access to that information, they kept on prescribing them.
Patients "want to believe that the treatment that's being recommended to them is based on evidence," Dickersin, who wrote a commentary published with the study, told Reuters Health.
"If the (published) evidence is only half the story and the positive half of the story, then in fact the evidence isn't something solid, it's not a solid foundation on which a decision is being made."
Because of that, she added, the missing data in ClinicalTrials.gov is a big concern, and suggests that some researchers may not appreciate the scientific importance of the records.
What's more, patients who participate in clinical trials should know if the study will ever see the light of day -- and if it's designed in a way that could inform future patient care, researchers said.
"Everyone who participates in a study should say, ‘Are you planning on publishing these results and, can I get a copy of what you find?'" Dickersin said.
SOURCE: bit.ly/4HWZ7 Journal of the American Medical Association, online May 1, 2012.