Monday, May 05, 2014

Countering the fog of research

Brenda Goodman (@GoodmanBrenda), an Atlanta-based freelancer, is AHCJ’s topic leader on medical studies, curating related material at healthjournalism.org. She welcomes questions and suggestions on medical study resources and tip sheets at brenda@healthjournalism.org.
Image by Mark Robinson via flickr.
The military uses the phrase “the fog of war” to describe the miscalculations and botched decisions that get made in the heat of combat.
But you need not sign up for active duty to run into foggy thinking. Just call a scientist and interview them about their own research.
One of my favorite examples of this is when researchers conduct observational studies that can’t show cause and effect, yet interpret their findings to reporters as if they do.
John Gever of MedPage Today recently noticed this in an MRI study of the brains of 20 marijuana smokers and 20 people the same age who didn’t smoke pot.
The MRI scans showed differences between the brains of the pot smokers and those of the non-smokers.
While the study text was mostly careful to say all the right things (that the findings couldn’t prove cause and effect, and that the study, as a snapshot in time, couldn’t determine whether the brain changes or the pot smoking happened first), the study authors were less cautious when they talked about the work.
From Gever’s post about the study:
“That, however, didn’t stop senior author Hans Breiter, MD, of Northwestern from opining in the SfN press release that the study ‘raises a strong challenge to the idea that casual marijuana use isn’t associated with bad consequences.’
“Um, no, it doesn’t – not without before-and-after MRI scans showing brain structure changes in users that differ from nonusers and documentation of functional impairments associated with those changes.”
Reporters aren’t the only ones who have picked up on this phenomenon. Researchers themselves know it happens. A study published in 2010 in the journal Obesity Facts looked at the “inappropriate use of causal language” in 525 peer-reviewed obesity and nutrition papers.
Researchers found misleading causal language in the titles or abstracts of 161 papers – nearly a third of all the studies they surveyed. And many of those studies show that somebody actually knew better: in 31 cases, the papers also included language, albeit way down in the discussion sections, correctly alerting readers that their findings didn’t show cause and effect.
Which is all very strange, isn’t it? Presumably, the person who writes a study’s title or abstract is the same person who pens its discussion. And, presumably, the person who adds those cool-headed cautions to the study text is the same one who tells you that, based on their study, it certainly looks like pot smoking changes the brain.
I’ve always wondered what happens between writing the study and talking about it that alters the nature of the truth researchers see in their findings.
Is there a mental shift from scientist to salesman when a study is published? Does peer review force the inclusion of more cautious language than the study authors really want to use? Or does the fog creep in even earlier? Do scientists conduct research already believing so strongly in their own theories that the limitations inherent in their methods seem trivial, like the boilerplate legalese that gets added to the end of contracts?
But those cautions are far from trivial. Time and time again, popular theories built on less definitive studies have been shot down because the science supporting them was shaky. (For a nice review of these medical reversals, see this research letter from JAMA Internal Medicine.)
That’s why it is important not to venture onto that slippery slope. If researchers give you quotes that use causal language where it isn’t warranted, don’t use them.
That’s not the easiest thing in the world to do, I realize. Those quotes are often good ones. They’re usually easy to understand. They’re bold. They offer an action someone can take to change an outcome.
But, sadly, they’re just not true.
There are some ways you can help pierce the fog and get accurate quotes.
  • First, read the study so you know its limits.
  • Second, ask questions about those limits. Researchers often take their cues from the tone of the person who’s interviewing them. If you’re cautious in what you say about a study, they will be, too.
  • And third, always – let me say that again – always contact at least one other source who knows the subject but wasn’t involved in the research. That’s usually where the clearest view will be.
