Back in December, I recapped an issues panel titled "Should ICER be NICE (or Not)?" from ISPOR's 20th Annual European Congress, which compared the use of ICER's value assessment framework with NICE's guidelines for conducting and interpreting cost-effectiveness (CE) analyses. In that post, I summarized the ideas expressed during the panel session, specifically that most of the observed procedural and technical differences between the two approaches are minor. However, the ISPOR discussion piqued my interest, so I decided to dig a bit deeper into ICER's framework and NICE's guidelines.
One of the primary uses of health technology assessments (HTAs) is to inform reimbursement and coverage decisions for health technologies, based in part on the relative costs and benefits of interventions (i.e., their cost effectiveness). In my research, I discovered that the precise manner in which ICER and NICE perform their HTAs does, in fact, differ. The differences are subtle, yet they can have a substantial impact on the final recommendations.
ICER and NICE share largely consistent methodologies when conducting CE analyses, but there are three substantial differences in their approaches: whether and how a CE threshold is used, the magnitude of that threshold, and the evaluation and circulation of value-based prices.
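To make those three differences concrete, here is a minimal sketch of the arithmetic behind a CE threshold and a value-based price. All of the numbers, the dollar-denominated thresholds, and the simplifying cost assumptions are hypothetical and for illustration only; they are not drawn from any ICER or NICE assessment, and real HTAs model costs and outcomes in far more detail.

```python
def icer(cost_new, cost_comp, qaly_new, qaly_comp):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_comp) / (qaly_new - qaly_comp)


def value_based_price(cost_comp, other_costs_new, qaly_new, qaly_comp, threshold):
    """Highest drug price at which the ICER stays at or below the threshold.

    Assumes total cost of the new intervention = drug price + other_costs_new,
    a simplifying assumption made purely for this illustration.
    """
    return threshold * (qaly_new - qaly_comp) + cost_comp - other_costs_new


# Hypothetical new intervention vs. standard of care
ratio = icer(cost_new=120_000, cost_comp=60_000, qaly_new=4.5, qaly_comp=3.5)
print(f"ICER: ${ratio:,.0f} per QALY gained")  # $60,000 per QALY

# Whether that counts as "cost effective" depends entirely on the threshold
# applied, which is where the magnitude difference between frameworks matters.
for threshold in (50_000, 100_000, 150_000):
    verdict = "within" if ratio <= threshold else "above"
    print(f"At ${threshold:,} per QALY: {verdict} the threshold")

# A value-based price works the threshold backwards into a maximum price.
max_price = value_based_price(cost_comp=60_000, other_costs_new=20_000,
                              qaly_new=4.5, qaly_comp=3.5, threshold=100_000)
print(f"Value-based price at $100,000/QALY: ${max_price:,.0f}")
```

In this stylized example, the same intervention looks cost effective at one threshold and not at another, and the value-based price moves directly with the threshold chosen, which is why the seemingly small methodological differences between ICER and NICE can matter so much.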
This is a high-level summary of the main differences between ICER and NICE. For a more in-depth review, stay tuned for the rest of this ICER vs. NICE blog series: over the next few weeks, I plan to address each difference in its own post. The dissimilarities between ICER and NICE may be subtle, but the variations in methodology can lead to very different recommendations for coverage, reimbursement, and pricing, which makes them quite impactful.