Half Of All Medical Reporting Is Subject To Spin
I've long been disturbed that the LA Times "Health" section simply moves over general interest reporters to report on science. They have no idea of the body of work and no idea of how to vet studies, so the reporters I've read there basically act as credulous stenographers. As do "health" reporters at many publications and outlets.
From NHS.uk, a study found that 51% of news items reporting on medical trials -- randomized controlled trials, considered the "gold standard" of studies -- distorted the findings:
The piece first explains spin as it relates to reporting on medical studies:
To spin information is to distort the true picture to fulfil an agenda, often by presenting information in a way that creates a positive or favourable impression. The researchers defined spin for the purposes of the study as "specific reporting strategies (intentional or unintentional) emphasising the beneficial effect of the experimental treatment."
Examples of medical spin cited by the researchers are:
Reporting positive effects that were not statistically significant - so that the effects could have been the result of chance.
Focusing on an outcome that the trial was not designed to study - for example, a trial that aimed, without success, to use acupuncture to treat hot flushes found incidentally that the treatment produced a slight improvement in sex drive. So the trial was spun with headlines such as "Acupuncture perks up sex drive."
Focusing on inappropriate sub-groups - for example, a trial of a new type 2 diabetes drug might be a total failure in the population at large but show a slight improvement in women in their twenties. This can be spun as an important breakthrough. However, type 2 diabetes is rare in women in their twenties, so the new drug would not actually be of great use.
Ignoring safety data - we need to be sure that the potential benefits of treatment outweigh the risks but research summaries and press releases routinely omit mention of risks, side effects and so forth, and thus give an overly positive impression of results.
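The first pitfall in that list - reporting a "positive effect" that isn't statistically significant - is easy to see with a quick simulation. This is a hypothetical sketch (not from the NHS piece): both trial arms draw from the exact same "no effect" distribution, yet the treatment arm still comes out ahead about half the time.

```python
import random
import statistics

random.seed(1)

def fake_trial(n=20):
    """Both arms draw from the SAME distribution: the true effect is zero."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# Run 1,000 trials of a completely useless treatment and count how many
# happen to show a "positive effect" anyway.
diffs = [fake_trial() for _ in range(1000)]
positive = sum(d > 0 for d in diffs)
print(f"{positive} of 1000 no-effect trials still favored the treatment")
```

Roughly half of genuinely useless treatments look "positive" in any single small trial, which is exactly why a raw direction-of-effect claim, without a significance test, tells you nothing.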
Why is this important?
It is estimated that 90% of the public get information on developments in medicine and healthcare from the mainstream media. So the quality and reliability (or lack of it) of medical and health journalism is vitally important in determining whether we get an accurate idea of medical advances. At best, unreliable medical journalism can lead people to waste time and money on treatments for which there is no evidence of them being effective. At worst, it can kill.
For example, the unfounded link between the MMR vaccine and autism became a "health scare" perpetuated by large sections of the mainstream media from the late 1990s. Despite the lack of credible evidence to back up the link, frightened parents understandably avoided letting their children have the MMR jab. Official statistics show that this led to a sharp rise in measles cases. While in most cases measles is simply unpleasant, in a small number of cases it can be fatal. Between 1998 and 2008 there were 15 measles-related deaths reported to the Health Protection Agency in England and Wales. All of these deaths may have been prevented by MMR vaccination.
Things to consider
When you read a news report about a medical study, you may find it useful to consider:
Was the research in humans? Headlines that talk of a "miracle cure" often relate to research conducted on, say, mice - and the results may not apply to people.
How many people did the study involve? Small studies involving just a handful of people are more likely than large studies to reach conclusions that could simply be the result of chance.
Did the study actually assess what's in the headline? As mentioned, a headline saying acupuncture boosts your sex life was actually based on a study into whether acupuncture could treat hot flushes.
Who paid for the study? While most commercially funded studies are reliable, it is always worth checking if there could be any potential conflict of interest, for example where a company funds research into its own products.
More on how to read health news.
via @medskep








I have found mainstream media coverage of nutritional science to be so universally awful that the only way to get any useful info is to dig up the actual scientific study and read that.
I've literally seen cases where the media blithely reported the opposite of what a study found.
Lobster at June 30, 2014 8:33 AM
so the reporters I've read there basically act as credulous stenographers
That is often true, no matter the subject at hand. And is something to keep in mind whilst ingesting news.
Long ago I learned to ignore the tech reporters, as there was a strong tendency to either regurgitate press releases or overblow the significance of technologies.
I was asked by a younger colleague in the early 2000s how I kept up with the tech revolution. I said I didn't - that any tech that persisted would be of interest, but I'd seen too much vapor that never amounted to anything come and go, so it wasn't worth my time or effort to keep up.
I R A Darth Aggie at June 30, 2014 9:12 AM
"so the reporters I've read there basically act as credulous stenographers..."
If they were stenographers, that would be an improvement. Far more often these days, they are simply cherry-picking the material and taking quotes and numbers out of context to reinforce their preconceived notions.
Cousin Dave at June 30, 2014 9:21 AM
Spin journalism and eager-beaver scientists are mutually corrupting.
A favorite critic is Ioannidis:
He's cited in this one as well:
And so...
Crid [CridComment at Gmail] at June 30, 2014 12:39 PM
…And so you zoom out a little bit, and tenure is an important part of this problem: Americans have weird ideas about what it means to be smart.
We can't afford to be wrong. No one else is going to carry this load for humanity.
Crid [CridComment at Gmail] at June 30, 2014 12:39 PM
Crid [CridComment at Gmail] at June 30, 2014 12:40 PM
Totes sorry
Crid [CridComment at Gmail] at June 30, 2014 12:41 PM
Scientific reporting in mainstream media has always been terrible. While the WSJ is an excellent business paper, their science reporting isn't fit to go next to the toilet.
As an example, my dad actually had some of his research reported on in the WSJ. One morning he read about all this wonderful cutting-edge research his company was doing. He was completely unaware that anything close to that was being done. When he got into work he asked who was working on those projects - who the head researcher was. After all, he really wanted to know who was able to do what the paper claimed he could do. His boss looked him in the eye and said, "You are."
It wasn't even that there was a spin on the research. It was just badly reported by someone who had no idea what they were talking about.
Ben at June 30, 2014 1:14 PM
Oh, and to sorta agree with Crid. The Chinese threat is the same as the post WW2 Japanese threat, and I guess the soon to be Indonesian threat.
Ben at June 30, 2014 1:17 PM
The Goddess quotes: It is estimated that 90% of the public get information on developments in medicine and healthcare from the mainstream media. So the quality and reliability (or lack of it) of medical and health journalism is vitally important in determining whether we get an accurate idea of medical advances.
Harsh as it may sound, the media doesn't owe us a reliable picture. Fox News once sued for the right to lie about the news. They won, too.
Because people choose to rely upon the media, that's their stupidity. But if the media decides to shirk its responsibility, what can you do about it? Nothing.
You simply will have to do better. But the Almighty Media doesn't owe you a damned thing.
As for medical advances, I regard them with suspicion. They are not interested in cures for AIDS, for instance, when controlling it keeps you coming back for more. A cure for AIDS would automatically make AIDS patients ex-customers. A person being treated for AIDS is a continuing customer.
And with the longevity that AIDS patients have nowadays, you've got a good cash cow for twenty years, or more.
Patrick at June 30, 2014 5:22 PM
Actually, they are now able to cure Hepatitis C because of the AIDS research. But they haven't been able to cure AIDS because they can kill all the active virus in the bloodstream, but haven't found a way to track it down in the marrow.
Jim P. at June 30, 2014 6:56 PM
Here's another good article about Ioannidis and his research.
Grey Ghost at July 1, 2014 7:48 AM
In reply to the comment from Grey Ghost: Here are Sander Greenland and Steven Goodman vetting Ioannidis.
http://users.stat.umn.edu/~sandy/courses/8801/articles/IOANNIDIS/goodmangreenland.pdf
Amy Alkon at July 1, 2014 7:59 AM
Amy's feelings have been hurt!
Science needs cheerleaders!
Crid [CridComment at Gmail] at July 1, 2014 11:23 AM
Also "net."
Clever, right?
Fuckin' powerful.
I've been meaning to try it myself with all sorts of arguments in all sorts of contexts.
Except that's a lie, because I haven't been meaning to do so or even thought about it, 'cause it's dumb.
Crid [CridComment at Gmail] at July 1, 2014 11:47 AM
Wrong thread! That was for Dave B!
I feel bad!
Crid [CridComment at Gmail] at July 1, 2014 12:07 PM
In reply to the comment from Crid [CridComment at Gmail]: Um, Crid, what science needs is reporters who understand science and what is and isn't good research methodology. Goodman and Greenland's paper has a few more suggestions. I suggest you, you know, read it instead of just snarking.
Amy Alkon at July 1, 2014 1:20 PM
Don't forget: Is this correlation between A & B something that the researchers were specifically looking for because there was a theory that suggested it, or was it something they stumbled upon? Quite often, some ballyhooed link between A & B turns out to be from a research project that measured (say) ten items over a sample group, calculated all 1/2 * 10 * 9 = 45 possible correlation coefficients between ten items, and "discovered" two or three correlations at the 95% confidence level. That is, at the level where the chance of this correlation being simply a coincidence is 1 in 20...
If you already had reason to think A and B are linked, a result significant at the 95% level in a small study provides partial confirmation. For real confirmation, you need a much bigger study - much too big and expensive for a typical PhD thesis. It's also often beyond the reach of a tenured professor unless he can get one of the few large research grants - which nowadays usually means that it's a politically hot issue, so the prof had better not come up with the wrong answer...
The other side of this problem is that there are thousands of small-scale studies that go unreported because they didn't come up with any positive result. (In a well-run graduate program you _can_ get a PhD thesis from a null result - what is supposed to matter is that you did the work, and understand and can explain the results. But don't expect a science journal to publish that inconclusive result.) So behind that one small study that found an unexpected and interesting result at 95% confidence, there may be 19 similar studies that fell flat, and it's quite likely that the result was just coincidental.
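That "1 hit, 19 unreported misses" picture can be sketched directly. In this hypothetical simulation, 20 labs each run a small trial of a treatment with no real effect; only the studies whose z-score clears the conventional |z| > 1.96 (p < 0.05) bar get "published."

```python
import math
import random
import statistics

random.seed(3)

def null_study(n=30):
    """One small trial of a treatment with NO real effect; return the z-score
    of the difference in group means."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    return (statistics.mean(b) - statistics.mean(a)) / se

# 20 labs run the same pointless experiment; only "significant" results
# make it out of the file drawer.
z_scores = [null_study() for _ in range(20)]
published = [z for z in z_scores if abs(z) > 1.96]  # the ~5% that clear p < 0.05
print(f"{len(published)} of 20 null studies produced a publishable 'finding'")
```

The literature then contains only the lucky outlier, with no hint of the 19 null results sitting unpublished behind it.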
And then the results are supposed to be replicated. However, they don't wait for replication before publishing. Pre-internet, there was a good reason for that - until it was published, there would be too few people who knew about it to provide independent replication. The science journals have not really adjusted to the internet era - and they'll avoid that as long as possible, because if the internet were fully utilized to share experimental protocols and results, there would be few reasons for most journals to exist. And so, you have too many editors that are just looking for something _interesting_ to fill their pages that isn't obviously wrong. (All peer review provides is correcting the obviously wrong, and often it even misses that.)
Finally, after a new result or coincidental link has been published in a narrowly-focused scientific journal (and perhaps misinterpreted and overblown in the mass media), many small teams at Podunk U. will attempt to replicate it. And unless these attempts _consistently_ fail, eventually someone will do a "metastudy", pooling all the small experiments and trying to combine them into the statistical equivalent of one big experiment for a much better confidence level. But remember how negative results are often unreported? It's possible for diligent researchers to find out about the unreported experiments and track down someone who still has the data backed up - but it's putting a LOT of faith in the metastudy researchers to assume that they worked so hard and effectively to track down evidence AGAINST their hypothesis.
Then throw in actual fraud, i.e. researchers who thought failure to replicate results might reflect badly on them, so they fudged their results. I'm pretty sure that in high school and undergraduate science labs, half of my classmates fudged results whenever the experiment didn't work as expected. Hopefully most of those people never completed a BS degree in science - but I'm sure some did... And if you're working in certain fields, including climate and nutrition, the pressure to get the "right" result is far worse than just a grade on a lab.
markm at July 10, 2014 4:08 PM