I am at the annual meeting of the Drug Information Association (DIA), a group of drug development/clinical trial professionals, in Philadelphia. Among the sessions are discussions about how to get a more rigorous assessment of whether one medical treatment works better than another—called comparative effectiveness analysis (here, CEA)—and whether the risks of some things like new drugs are worth it (called risk-benefit or harm-benefit analysis). Both are of great interest, and involve challenging efforts that can be easily misunderstood in places like the general press. Some key points:
This is hard—the techniques are still being worked out and involve sophisticated mathematical analysis to do things like weighting (as in, is a 50% chance that your migraines will get better in a few hours worth a 1% greater chance you’ll have a heart attack over the next 10 years? How relatively important are those two?) and sensitivity analysis (do we get the same results if the weightings, or the chances of different outcomes, change?).
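To make the weighting-plus-sensitivity idea concrete, here is a minimal, purely illustrative sketch—not any standard or published method. The probabilities echo the hypothetical migraine example above, and the harm weights are made-up values chosen only to show how the conclusion can flip:

```python
def net_benefit(p_benefit, p_harm, w_benefit, w_harm):
    """Weighted difference between the chance of benefit and the chance of harm.

    A positive score means the weighted benefit outweighs the weighted harm
    under the chosen (value-laden!) weights.
    """
    return w_benefit * p_benefit - w_harm * p_harm

# Hypothetical numbers from the example: 50% chance migraines improve,
# 1% added chance of a heart attack over 10 years.
p_benefit, p_harm = 0.50, 0.01

# Sensitivity analysis: sweep the relative weight placed on the harm and
# see whether the sign of the answer changes.
for w_harm in (1, 10, 25, 50, 100):
    score = net_benefit(p_benefit, p_harm, w_benefit=1.0, w_harm=w_harm)
    verdict = "benefit wins" if score > 0 else "harm wins or it's a wash"
    print(f"harm weight {w_harm:>3}: net score {score:+.2f} -> {verdict}")
```

Note how the verdict flips as the harm weight grows past 50—the same data, different values, different answer. That is exactly why the weightings themselves, and who supplies them, matter so much.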
You can get a different answer depending on whom you ask—one great idea in benefit-harm analysis is to ask patients with a disease what they would prefer, and some studies work on that principle by surveying patients. Patients tend to be more willing to take risks than doctors (“first, do no harm,” and by the way don’t get sued) or regulators (avoid the nasty Congressional inquiry and press coverage if someone decides you approved a risky drug). But all of these groups need to be consulted.
The judgments are value-laden, so a single, precise mathematical answer will be elusive if not impossible, but the information gained can help inform reflection and decision-making. There will be a risk of concluding too much. But I bet that much enlightening information can be gleaned.
Techniques are not standardized, so it’s hard to reproduce findings. But every new “study” will be trumpeted by the press as definitive. Take your medical reporting with a grain of salt.
What you find depends on what you look for—which means that taking time to define the research question carefully is important. Also, once you find something, you tend to assume you are right in future inquiries.
Note that none of this brings cost into the equation, yet. In fact, some of the work is funded by the “stimulus package” and the “Affordable Care Act,” both of which prohibit using the funds for cost-effectiveness analysis. That will come, but it’s not the whole schmear.
This work will go forward, and provide a lot of useful data, I bet. There will also be a real risk that results will be overinterpreted, that people will jump to conclusions, and that decision-makers will do too much with the results. Still, this work is to be followed, viewed soberly, and not overly feared.