"Brain Training" Bullshittery
Ed Yong writes at The Atlantic of a study led by Daniel Simons -- "reviewing every single scientific paper cited by leading brain-training companies in support of their products--374 in total":
Their review was published today, and it makes for stark reading. The studies, they concluded, suffer from a litany of important weaknesses, and provide little or no evidence that the games improve anything other than the specific tasks being trained. People get better at playing the games, but there are no convincing signs that those improvements transfer to general mental skills or to everyday life. "If you want to remember which drugs you have to take, or your schedule for the day, you're better off training those instead," says Simons. ...

Brain-training certainly makes intuitive sense, which partly explains its charm. But it butts up against 100 years of psychological studies showing that practice only makes perfect in limited ways. "The things you train will improve, but they don't generalize," says Simons. Chess grandmasters can recall the position of every piece on a board, but aren't better at remembering anything else, for example. Or, "if you search baggage scans for a knife, you don't get better at spotting guns--or maybe not even other knives."
How crappy are the studies?
For example, to show that brain-training works, you need a good control group--that is, you need to compare people who play the games against others who didn't. More than that, the people who aren't being trained need to do something of equal time and effort--otherwise, you couldn't say if the brain-training group was benefiting because of those specific games, or just because they were playing any game at all. Very few studies met these standards. Some had no control group at all. Some asked the control volunteers to just sit around passively. Some gave them tasks that weren't really comparable, like watching educational DVDs.

And very few studies accounted for expectations. People who play brain-training games might reasonably expect to become smarter, so they might do better in later tests simply because they were more motivated or confident. These effects are a problem for psychological studies; unlike medical trials, where placebo pills can be made indistinguishable from actual drugs, brain-training players will always know what they're doing. You can't get rid of expectation effects, but you can at least measure and adjust them--and no study did. "I don't fault studies for not doing this perfectly," says Simons, "but most didn't come close."
The best part is this guy, Henry Mahncke, a neuroscientist and CEO of Posit Science, who calls the Simons team "moral monsters."
It was over this:
For example, brain-training proponents note that ACTIVE volunteers who trained their brain speed were half as likely to experience a car crash. That sounds incredible, but based on the absolute figures from the study, Simons' team calculated that someone who did the training could expect one fewer crash every 200 years.
Mahncke argues that this would be worth it.
Simons has a more practical idea:
"If you want to improve driving in the elderly, you're probably going to get a more efficient benefit practicing the things that actually get impaired like making left turns, rather than doing it indirectly," says Simons.
Simons is the co-author of a terrific popular science book, published in 2011, called The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us. I had his co-author, Dr. Christopher Chabris, on my podcast to discuss it.