By Matthew Sussman, MA

Top 3 Questions Raised by ICER’s Proposed Changes to its Value Assessment Framework

Just 2 weeks ago, the Institute for Clinical and Economic Review (ICER) proposed changes to its 2017-2019 Value Assessment Framework, which guides the analytic structure of its evaluations across 4 primary domains: comparative clinical effectiveness; long-term cost-effectiveness (CE); potential other benefits or disadvantages and contextual considerations; and short-term affordability. Why now?

Every 2 years, ICER attempts to improve its methods for evaluating interventions based, in part, on stakeholder feedback and the latest methodological trends in the industry. ICER requests public comments as part of this process and pledges to incorporate those comments into its final framework, as it deems appropriate. The public comment period is now open through October 11, and ICER will release the final 2020 Value Assessment Framework on December 18, 2019.

In reviewing the proposed changes, I found a number of key areas that raise more questions than they answer. Below, I have highlighted a few notable questions I plan to submit during the public comment period.

How Will ICER Incorporate RWE?

In its proposal, ICER states its commitment to use existing real-world evidence (RWE), as well as to generate new RWE. This commitment aims to support its assessments of comparative clinical effectiveness, potential other benefits or disadvantages and contextual considerations, and CE. ICER also indicates its intention to form collaborations with organizations to conduct de novo analyses of real-world data (RWD) and to ensure transparency throughout the process. While I applaud its intention to use RWE given the richness that RWD now offers, I think further clarification is warranted, specifically:

  • Which organizations will ICER partner with to conduct RWE analyses, and what are their credentials?
  • How will ICER ensure transparency?
    • Will this entail the development of research protocols? Will these research protocols be reviewed and critiqued, and by whom? Would ICER consider publishing the protocol through a peer-review process?
  • At what cost will ICER try to include RWE?
    • In its current evaluation of interventions for rheumatoid arthritis, ICER proposed using an analysis of the CORRONA registry to calculate discontinuation rates beyond treatment response. It referenced a study based on data collected between 2002 and 2011, which therefore could not have included the newer class of oral JAK inhibitors (tofacitinib, the first oral JAK inhibitor, was approved in 2012), limiting the study’s usefulness and applicability. In this evaluation, ICER should consider analyses of alternative data sources that contain all recently launched treatments. Otherwise, ICER risks applying erroneous projections of discontinuation rates to the JAK inhibitors under study, which may skew model results.
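To illustrate how discontinuation rates from an older treatment era can skew a CE model, the sketch below compares projected drug costs under two assumed annual discontinuation probabilities. All rates and costs are hypothetical and are not drawn from ICER’s rheumatoid arthritis evaluation or the CORRONA registry; this is just the underlying arithmetic.

```python
# Illustrative sketch: why applying discontinuation rates estimated from an
# older treatment era to a newer drug class can skew CE model results.
# All rates and costs below are hypothetical.

def expected_years_on_treatment(annual_discontinuation_prob):
    """Mean time on treatment under a constant annual discontinuation
    probability (a simple geometric persistence assumption)."""
    return 1.0 / annual_discontinuation_prob

annual_drug_cost = 50_000   # hypothetical annual cost of a JAK inhibitor ($)

# Hypothetical rates: one estimated from 2002-2011 data (an era with no
# oral JAK inhibitors), one from data that includes the newer class.
rate_old_era = 0.30
rate_current = 0.20

cost_old = annual_drug_cost * expected_years_on_treatment(rate_old_era)
cost_new = annual_drug_cost * expected_years_on_treatment(rate_current)

print(f"Projected drug cost using old-era rate: ${cost_old:,.0f}")
print(f"Projected drug cost using current rate: ${cost_new:,.0f}")
```

Under these assumed numbers, the old-era rate understates projected time on treatment, and therefore treatment cost, by roughly a third, which would carry directly into the incremental CE ratio.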

Will ICER Really Consider Alternative Economic Model Assumptions?

ICER indicates that it will add a “Controversies and Uncertainties” section to the CE section of its final evidence report, the goal of which will be to introduce alternative model structures and assumptions suggested by key stakeholders (e.g., manufacturers, patient advocacy groups). This new section will qualitatively discuss how alterations to the model might affect model results, but there is still ambiguity around how ICER will incorporate these findings. More specifically, I am left wondering:

  • Will ICER adopt stakeholder recommendations and quantitatively test the different assumptions and scenarios in their model?
  • If not, how will ICER know which assumptions and scenarios have the greatest impact on model results?

How Will ICER Incorporate Outcomes-based Payment Models?

ICER proposes including analyses of the potential impact of outcomes-based contracts in its CE modeling efforts. While ICER acknowledges that outcomes-based contracts may reduce the overall cost associated with an expensive intervention and thus decrease its incremental CE ratio, it has provided few details regarding the implementation of this practice, raising questions such as the following:

  • Will manufacturers and payers be willing to share sensitive and proprietary information on their outcomes-based contracting arrangements?
  • If outcomes-based contracting arrangements are not made publicly available, how will ICER decide which outcomes/measures, time points, and arrangements to focus on?
    • For instance, in its assessment of CAR-T therapies in 2018, ICER assessed incremental CE ratios based on payment at infusion, payment for responders at 1 month, and payment for responders at 1 year. It could also have included alternative time points based on expected response (e.g., 2, 3, 6, or 9 months after treatment) and alternative arrangements, such as amortization or loan payments over a defined time period.
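To make the arithmetic concrete, the effect of a pay-for-responders arrangement on an incremental CE ratio can be sketched as follows. All prices, response rates, and QALY gains here are hypothetical and are not taken from ICER’s CAR-T assessment; the point is only the mechanism by which contingent payment lowers expected cost.

```python
# Illustrative sketch: how an outcomes-based ("pay for responders") contract
# can lower the effective cost, and hence the incremental CE ratio, of an
# expensive one-time therapy. All numbers below are hypothetical.

def icer_ratio(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

list_price = 400_000        # hypothetical one-time therapy price ($)
response_rate_1yr = 0.60    # hypothetical share of patients responding at 1 year
comparator_cost = 150_000   # hypothetical cost of standard care ($)
qaly_gain = 2.0             # hypothetical incremental QALYs vs standard care

# Arrangement 1: full payment at infusion, regardless of outcome.
cost_at_infusion = list_price

# Arrangement 2: payment only for patients still responding at 1 year,
# so the expected cost per treated patient scales with the response rate.
cost_pay_for_response = list_price * response_rate_1yr

icer_infusion = icer_ratio(cost_at_infusion - comparator_cost, qaly_gain)
icer_response = icer_ratio(cost_pay_for_response - comparator_cost, qaly_gain)

print(f"ICER, payment at infusion:    ${icer_infusion:,.0f}/QALY")
print(f"ICER, payment for responders: ${icer_response:,.0f}/QALY")
```

Under these assumed numbers, tying payment to 1-year response cuts the incremental CE ratio substantially, which is why the choice of outcome measures, time points, and arrangements that ICER models matters so much.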

ICER’s Next Steps

I applaud ICER for continually updating its framework and weaving in current methodological trends. Perhaps more important is its request for stakeholder feedback. In this post, I identified a number of questions regarding ICER’s proposed changes, and I look forward to hearing ICER’s answers to these and other questions.

Given national scrutiny of rising drug prices, health technology assessment (HTA) organizations such as ICER that independently evaluate the clinical and economic value of interventions are primed to impact formulary decision-making processes in the US. It is therefore imperative that ICER’s methodology for conducting CE analyses is appropriate and relevant and meets industry standards. Egregious, or even minor, flaws in ICER’s methodology could have an important downstream impact on drug pricing and market access.

As a result, it is crucial that ICER use RWE judiciously; otherwise, it runs the risk of incorporating inappropriate RWE into its models, which could skew the CE model structure and impact results. Equally important is the need for ICER to quantitatively test alternative assumptions, scenarios, and input values that may lead to more appropriate model results.

I intend to take advantage of this feedback period by submitting my questions, and I invite you to do the same. The final framework is due in a few short months, and it’s up to all of us to have a voice in what it looks like.