
The status of modeled claims

Pages 991-992 | Accepted 13 Oct 2015, Published online: 07 Nov 2015

Abstract

The only acceptable modeled claims for costs and outcomes are those that are testable and can be validated in a timeframe that is acceptable to a formulary committee. This issue provides four papers which explore the methodological issues in validation, the UK experience with NICE, the questions a formulary committee should ask of modeled claims, and the role of Big Data in validating modeled claims.

If we accept the need for ‘reliable, credible, and scientific evidence’ to support formulary decisions, the focus has to be on claims for products or devices that are evaluable in a timeframe that makes sense to decision-makers. While product claims might be modeled and put forward as indicative ‘thought experiments’, the only acceptable claims are those that generate evaluable predictions. They should be capable of being falsified. Otherwise, a formulary committee or similar entity is left not knowing whether claims are right or even whether they are wrong. This is the argument put forward in the first paper of this special supplement: decisions should be based on claims that are evaluable in a timeframe that is relevant to a formulary committee [1]. Claims that are non-evaluable, or testable only in an impracticable timeframe, should be rejected, as they can potentially mislead formulary decision-makers.

The issue is one of feedback to formulary committees: feedback that includes not only standards for adverse events and safety, but also standards for clinical effectiveness and direct medical cost claims; standards that support a quality metric in healthcare delivery and recognize the fiduciary responsibilities of health system directors and managers. Modeled claims, however elegant, persuasive, or mathematically rigorous, are not acceptable if there is no possibility of the claims being evaluated and reported back to a formulary committee in a meaningful timeframe. An analogy here is with string theory: rigorous, elegant, and theoretically attractive, yet incapable of generating testable predictions, it fails to meet the required standards for experimental evaluation. All assumptions are ultimately negated or validated by experiment.

The rules of science are just as applicable to health technology assessment as they are to particle physics. Predictions as to product or device impact should be evaluable. Of course, a formulary committee is not bound to meet this standard; it may be content to accept non-evaluable claims as ‘thought experiments’. If so, it risks basing formulary decisions on information that, as evidence for comparative product performance emerges, may prove misleading. The argument put forward in the second paper of this special supplement is that formulary committees should set the ground rules for ‘reliable, credible, and scientific’ evidence [2]. The suggestion is that a formulary committee should require all submissions for new products, and for products subject to ongoing disease area and therapeutic class reviews, to be accompanied by a protocol for evaluating comparative product claims. If the protocol is accepted and implemented, the results of the evaluation would be available as feedback: as inputs to ongoing formulary decisions. In this context, initial claims would be considered provisional, subject to ongoing assessment and recontracting.

The third paper in this special supplement focuses on the standards established by the National Institute for Health and Care Excellence (NICE) in the UK to evaluate product claims for listing for the NHS [3]. The paper points to the absence of standards by which claims might be empirically assessed. Although the NICE reference case has received a favorable reception in academic circles, it should be seen not as a framework for developing testable predictions of product impact in the NHS, but rather as a device for pricing negotiations and pricing decisions. The few attempts at genuine external validation of claims have been less than successful, as exemplified by the program intended to assess disease-modifying therapies in multiple sclerosis. Similarly, patient access schemes, rather than exploring the opportunity to evaluate the validity of the underlying modeled claim for willingness to pay, have been used to manipulate the acquisition cost of inputs to achieve a desired numerical result.

Manufacturers may object to a request for a protocol to support claims evaluation on the grounds that it is unreasonable and impractical. They may argue that the timeframe for assessment is too short for data to be accessed or generated, and that it imposes yet another cost on a company that has already invested heavily in the product. This is unacceptable. As the fourth paper in this special supplement demonstrates, there is a plethora of datasets available, under the generic name of ‘Big Data’, that can be accessed relatively quickly and inexpensively and that can generate comparative data for claims evaluation [4]. Access to these data would go a long way to meeting formulary needs for data to evaluate claims in a timeframe for feedback, say 2–3 years, that contributes to ongoing disease area and therapeutic class reviews. Formulary committees are not necessarily asking for high-cost, long-term pragmatic RCTs. The evidence suggests that such an approach is fraught with administrative and other practical obstacles; results, if and when they appear, provide feedback only when the horse has well and truly bolted. Claims should be put forward that recognize the ready access to administrative records, linked laboratory values, and electronic medical records, as well as the ability to undertake short-term observational studies. Hence the importance of submission protocols that set out the data and study design required to evaluate claims.

Standards for evaluation, it should be remembered, are set by the formulary committee. The committee requests a protocol, and the protocol is agreed between the parties. The results of the evaluation are reported, by request, as feedback to the formulary committee. There is no presumption that the protocol meet FDA standards for product promotion or standards that have been proposed to support comparative effectiveness research. Certainly, evaluations would be compatible with an ongoing program of comparative effectiveness research, and publishing the claims would be encouraged, but that is a decision for the parties concerned.

If we accept the message of this special supplement, there is the potential for an evidence-driven research agenda in health technology assessment that focuses on the contribution of evaluable claims to formulary decisions: an agenda which, through its contribution to ongoing disease area and therapeutic class reviews, may support more effective, higher-quality, and more efficient healthcare delivery; the establishment of a quality metric. Even so, adoption of a robust, empirically focused research agenda does not automatically disqualify modeled yet untestable claims. Whether these are to be considered worthwhile lies with the decision-makers.

Transparency

Declaration of funding

This paper has received no funding.

Declaration of financial/other relationships

None reported. JME peer reviewers on this manuscript have no relevant financial or other relationships to disclose.

References

  1. Langley P. Validation of modeled pharmacoeconomic claims in formulary submissions. J Med Econ 2015;18(12):993-999
  2. Schommer J, Carlson A, Rhee G. Validating pharmaceutical product claims: questions a formulary committee should ask. J Med Econ 2015;18(12):1000-1006
  3. Belsey J. Predictive validation of modeled health technology assessment claims: lessons from NICE. J Med Econ 2015;18(12):1007-1012
  4. Wasser T, Haynes K, Barron J, Cziraky M. Using ‘big data’ to validate claims made in the pharmaceutical approval process. J Med Econ 2015;18(12):1013-1019
