The C.A.M. Report
Complementary and Alternative Medicine: Fair, Balanced, and to the Point
  • About this web log

    This blog ran from 2006 to 2016 and was intended as an objective and dispassionate source of information on the latest CAM research. Since my background is in pharmacy and allopathic medicine, I view all CAM as advancing through the development pipeline to eventually become integrated into mainstream medical practice. Some will succeed while others fail. But all are treated fairly here.

  • About the author

    John Russo, Jr., PharmD, is president of The MedCom Resource, Inc. Previously, he was senior vice president of medical communications at www.Vicus.com, a complementary and alternative medicine website.

  • Common sense considerations

    The material on this weblog is for informational purposes. It is not medical advice or counsel. Be smart; consult your health professional before using CAM.


    What does it take to design a “good” study?

     When I criticize studies for their poor design, I sometimes receive emails from readers defending the researchers.

    The arguments are predictable. “They don’t have the money.” “Only big bad pharmaceutical companies can afford to do ‘good’ studies,” etc.

    Baloney.

    Regardless of where you come down on this question, here are the 10 criteria for a well-designed clinical trial, courtesy of PEDro, the Physiotherapy Evidence Database.

    Random allocation

    • This ensures that the treatment and control groups are comparable.
    • Coin tossing and dice rolling are considered random.
    • Allocation by hospital record number or birth date is not.
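    As an illustration, truly random allocation can be as simple as a seeded shuffle. This is a minimal sketch (the function name and participant IDs are hypothetical; real trials typically use pre-generated randomization schedules):

    ```python
    import random

    def randomly_allocate(participants, seed=None):
        """Shuffle participants and split them evenly between a
        treatment group and a control group. Illustrative only:
        a real trial would use a pre-generated randomization list."""
        rng = random.Random(seed)
        shuffled = list(participants)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        return shuffled[:half], shuffled[half:]

    # 20 hypothetical participant IDs
    treatment, control = randomly_allocate(range(1, 21), seed=42)
    print(len(treatment), len(control))  # 10 10
    ```

    Because every participant has the same chance of landing in either group, known and unknown prognostic factors tend to balance out as the sample grows.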

    Concealed allocation

    • The person determining if a participant is eligible for the study must be unaware of which group the participant will be assigned to.
    • If group assignment is not concealed, bias can creep into an otherwise random allocation.

    Baseline similarity

    • Random assignment prevents selection bias, but doesn’t ensure that groups are equivalent at the start of the study.
    • The study report must present baseline data that compare important demographic variables, at least 1 measure of the severity of the condition, AND 1 key outcome measure.
    • p-values alone are not sufficient; the descriptive baseline data themselves must show that the groups are similar.

    Blinding participants, therapists, and assessors

    • These people must not know the group the participants were assigned to.
    • Participants and therapists are “blind” if they are unable to distinguish between the treatments applied to different groups.
    • Where key outcomes are self-reported (visual analogue scale, pain diary), the assessor is blind if the participant was blind.
    • Blinding the therapists ensures that observed effects were not influenced by the therapists’ enthusiasm, or lack of it.

    Measure key outcomes from more than 85% of participants

    • The study report must state the number of participants starting the study and the number from whom key outcomes were measured.
    • Where outcomes are measured several times, a key outcome must have been measured in more than 85% of participants following at least 1 treatment.

    Intention to treat analysis

    • Analyzing data according to the treatment participants actually received (rather than the treatment they were assigned) can introduce bias.
    • Data must be analyzed as if each participant received the treatment or control condition as originally allocated.
    • Dropouts are analyzed in the group to which they were randomized, as if they had completed treatment as planned.
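    The distinction can be sketched in a few lines: grouping by the assigned arm gives an intention-to-treat analysis, while grouping by the treatment actually received gives a potentially biased “as-treated” analysis. The records and helper below are hypothetical:

    ```python
    from statistics import mean

    # Hypothetical participant records: assigned group, group actually
    # received (one participant crossed over), and an outcome score.
    records = [
        {"assigned": "treatment", "received": "treatment", "outcome": 4},
        {"assigned": "treatment", "received": "control",   "outcome": 6},  # crossover
        {"assigned": "control",   "received": "control",   "outcome": 7},
        {"assigned": "control",   "received": "control",   "outcome": 6},
    ]

    def group_means(records, key):
        """Mean outcome per group, grouping by the given key:
        'assigned' -> intention-to-treat, 'received' -> as-treated."""
        groups = {}
        for r in records:
            groups.setdefault(r[key], []).append(r["outcome"])
        return {g: mean(v) for g, v in groups.items()}

    itt = group_means(records, "assigned")        # intention-to-treat
    as_treated = group_means(records, "received")  # may be biased
    print(itt, as_treated)
    ```

    Note how the crossover participant stays in the treatment arm under intention-to-treat but moves to the control arm in the as-treated analysis, shifting both group means.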

    Between-group statistical comparisons

    • Comparisons can be made between 2 or more treatments, or between a treatment and a control condition.
    • The comparison can involve changes from the start of the study within each group, or a comparison of those changes between groups.
    • Significance is often expressed as a p-value (e.g., p < 0.05), meaning that if there were truly no difference between the groups, results at least this extreme would occur by chance less than 5% of the time.

    Point measures and measures of variability

    • Point measures (means, medians) describe the size of the treatment effect; measures of variability (standard deviations, standard errors, confidence intervals, etc.) describe how precise that estimate is.
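    These quantities are straightforward to compute. A minimal sketch with hypothetical scores (the 1.96 multiplier assumes a normal approximation; small samples would use a t critical value instead):

    ```python
    from math import sqrt
    from statistics import mean, stdev

    def summarize(scores):
        """Point measure (mean) plus measures of variability:
        sample standard deviation, standard error of the mean,
        and an approximate 95% confidence interval."""
        m = mean(scores)
        sd = stdev(scores)            # sample SD (n - 1 denominator)
        se = sd / sqrt(len(scores))   # standard error of the mean
        ci = (m - 1.96 * se, m + 1.96 * se)  # normal approximation
        return m, sd, se, ci

    # Hypothetical outcome scores
    m, sd, se, ci = summarize([4, 5, 6, 5, 4, 6, 5, 5])
    print(m, round(sd, 2), round(se, 2))
    ```

    A confidence interval is often more informative than a bare p-value because it shows both the size of the effect and the uncertainty around it.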

    The bottom line?
    None of the criteria for a “good” study costs anything.

    It’s OK to conduct a poorly designed study as long as you don’t pretend it’s anything more than a collection of anecdotal experiences.

    The last 2 criteria require specialized knowledge. Budding researchers can start here with the statistical significance calculator.

    1/6/09 20:01 JR
