Validation of modeled pharmacoeconomic claims in formulary submissions

Pages 993-999 | Accepted 05 Oct 2015, Published online: 07 Nov 2015

Abstract

Modeled or simulated claims for costs and outcomes are a key element in formulary submissions and comparative assessments of drug products and devices; however, all too often these claims are presented in a form that is either unverifiable or verifiable only in a time frame that is of no practical use to formulary committees and others committed to ongoing disease area and therapeutic class reviews. On the assumption that formulary committees are interested in testable predictions of product performance in target populations and in ongoing disease area and therapeutic class reviews, the methodological standards that should be applied are those accepted in the natural sciences. Claims should be presented in a form that is amenable to falsification; if not, they have no scientific standing. Certainly one can follow ISPOR-SMDM standards for validating the assumptions underpinning a model or simulation, and there is clearly an important role for simulations as an input to policy initiatives and to developing claims and testable hypotheses for healthcare interventions; however, such claims should not be evaluated on the realism or otherwise of the model. The only acceptable standard is the model's ability to predict outcomes successfully in a time frame that is practical and useful. This sets the stage for an active research agenda.

Introduction

The past 20 years have witnessed increasing attention being placed on cost-outcomes claims to support formulary submissions for new pharmaceutical products. Typically, however, the predicted cost-outcomes claims are not presented in a form that allows them to be validated. Claims are presented that are either practically impossible to verify or, if verifiable, would require a timeline that effectively precludes any meaningful data collection.

At the same time, guidelines for formulary submissions are conspicuous by the absence of any explicit requirement for pharmaceutical manufacturers to put claims in a meaningful, predictive form that would allow validation as part of an ongoing process of disease area and therapeutic class reviews. Such a requirement is absent from the Academy of Managed Care Pharmacy (AMCP) guidelines in the US [1], the guidelines for the National Institute for Health and Care Excellence (NICE) [2] and the Scottish Medicines Consortium (SMC) [3] in the UK, and those for the Pharmaceutical Benefits Advisory Committee (PBAC) [4] in Australia.

In contrast, the WellPoint guidelines for formulary submissions, issued in 2005, were explicit as to the need for predictive claims, including budget impacts, as an input to regular and ongoing disease area and therapeutic class reviews, the underlying concept being that of an outcomes-based formulary [5,6]. The guidelines were in two parts: (i) a guideline for submissions for new products, indications and formulations; and (ii) a guideline to support the re-evaluation of products, indications and formulations. Under (i) it was made clear that WellPoint was interested in cost and outcomes predictions that could be validated in the short term. To this end, those making a submission were asked to submit a protocol describing how the predictive claims were to be validated. Under (ii) it was emphasized that WellPoint was committed to a process of monitoring and validating claims. Initial claims were to be treated as provisional, subject to ongoing review. This was seen in the framework of product life cycles, where ongoing re-assessments could capture the entry of new products in the disease or therapeutic area, ongoing comparative effectiveness reviews, modifications to clinical guidelines, and changes in the delivery and treatment environment.

Of course, a formulary committee may not be interested in testable predictions or even in the monitoring of product performance. The reimbursement submission may require a 'modeled' claim, but the standards set for the submission, for example a 'reference case', ensure that the predictions for a new product are not, in practice, testable. On this view the 'reference case' model is not simply a 'rite of passage'; it may be nothing more than a threshold pricing device [7].

It is difficult to believe that formulary committees are not interested in cost-outcomes claims that could be evaluated at reasonable cost and in a reasonable timeframe as an input to ongoing disease area and therapeutic class reviews. It may be, however, that the issue of generating predictive claims for new products is seen as too difficult; the formulary committee instead relies on the apparent 'strength' of the modeled case, with only cursory ongoing review of any claims.

If we accept the need for testable predictive claims, this does not mean that we put models and simulations to one side. There is certainly a role for simulations as 'thought experiments' in, for example, public policy reviews to evaluate the potential long-term benefits of intervention strategies. The Archimedes model [8], for example, has been applied to a number of public health intervention strategies, ranging from adherence and initial treatment decisions to screening for Lynch syndrome and improved cardiovascular risk interventions [9-12]. Even so, if a particular strategy is implemented, we would expect the predicted outcomes to be in a form that can be evaluated as part of ongoing project reviews. The standards for validation would be no different from those that apply to new drug products.

Credibility of modeled claims

As part of the ongoing commitment by the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) to encourage good practices in outcomes research, some 48 consensus good practice reports as well as checklists for reporting have been published. Notably, these include the CHEERS checklist and the ISPOR-SMDM series of reports for modeling good practice [13-15]. The CHEERS statement sees economic evaluations as being conducted to inform resource allocation decisions: 'the comparative analysis of alternative courses of action in terms of both their costs and consequences' [16]. The CHEERS checklist does not, however, attempt to address the issue of the choice of model, stating that 'we have made every effort to be neutral about the conduct of economic evaluation, allowing analysts the freedom to choose different methods'. The question of predictive validation is not considered.

The ISPOR-SMDM reports also see the purpose of healthcare models as providing decision-makers with quantitative information about the consequences of the options being considered. Models are used to estimate outcomes for specified scenarios and for particular interventions in specified populations; as such, they are seen as helping decision-makers anticipate outcomes. Mathematical models are seen as occupying a special place when there are no empirical studies that address the decision problem or where it is not feasible to undertake studies in a reasonable time frame. Indeed, it is argued, mathematical models may be the only tool available to give decision-makers the information they need for an informed decision. The fundamental issue is one of credibility.

For a model simulation to be credible and, hence, useful, the ISPOR-SMDM standard points to the need for confidence in the model results. Credibility is enhanced through (i) transparency and (ii) validation. Five forms of validation are described:

  • face validity: model structure, data sources, problem formulation and results;

  • internal validity: consistency;

  • cross validity: comparing different models;

  • external validity: comparisons with actual event data; and

  • predictive validity.

Modeled claims take center stage; assessing the cost and health implications of agreeing to cover a new technology is, it is argued, best accomplished by using a mathematical model that combines inputs from various sources with assumptions about how these fit together and what might happen in reality. To ensure good practice in meeting these standards, a detailed checklist is provided.

As well as modeled cost-outcomes claims, a formulary committee may also be interested in a manufacturer providing a budget impact claim for a new product. This question has also been addressed by ISPOR, in statements of good practice for budget impact analysis issued initially in 2007 and updated in 2014 [17,18]. The budget impact standards recognize the need for short-term predictions that can be validated by formulary committees or budget holders. Where a budget impact analysis is undertaken, it is seen as standing alongside, rather than replacing, a cost-effectiveness analysis; this is seen as unavoidable because the two embody different structural considerations and parameter estimates, often with different perspectives and time horizons.

The practice standards ask for the results of the budget impact analysis to be presented in a disaggregated manner, with the budget impact, including resource use and costs, reported for each budget period. The standards also mention that 'changes in annual health outcomes may also be reported'. It is not clear from the document how these might be reported or how they might relate to modeled cost-outcomes claims.

Even so, it is of interest to speculate whether the two modeled approaches could be integrated to generate testable predictions. After all, they share substantive common elements, including comparative clinical performance, the rate of product uptake and the intervention mix, as well as the impact of the changing mix on resource utilization and costs. It may be that the framework proposed for budget impact assessment is the appropriate one for accommodating modeled cost-effectiveness scenarios to generate testable predictions, with modeled cost-outcomes claims as one element in the budget impact analysis.
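
To make the point concrete, the sketch below illustrates one way the two approaches might be folded together: a simple per-period budget impact projection that also carries a modeled outcomes element, so that each budget period yields predictions a committee could check against its own claims data. The function name, eligible population, uptake path, costs and events-avoided rate are all hypothetical placeholders for illustration, not figures from the ISPOR standards or from any submission.

```python
# A minimal sketch (illustrative only) of a per-period budget impact projection
# that carries a modeled outcomes element, so each budget period yields a
# short-term, checkable prediction. All inputs are invented placeholders.

def budget_impact(eligible_pop, uptake_by_year, annual_cost_new,
                  annual_cost_current, events_avoided_per_patient):
    """Return per-year predicted incremental cost and events avoided."""
    projections = []
    for year, uptake in enumerate(uptake_by_year, start=1):
        treated_new = eligible_pop * uptake  # patients moved to the new product
        incremental_cost = treated_new * (annual_cost_new - annual_cost_current)
        events_avoided = treated_new * events_avoided_per_patient
        projections.append({
            "year": year,
            "treated_new": round(treated_new),
            "incremental_cost": round(incremental_cost),
            "events_avoided": round(events_avoided, 1),
        })
    return projections


if __name__ == "__main__":
    # Hypothetical inputs: 10,000 eligible patients, uptake rising from 10% to 30%,
    # $6,000 vs $4,500 annual cost, 0.02 events avoided per treated patient-year.
    for row in budget_impact(10_000, [0.10, 0.20, 0.30], 6_000, 4_500, 0.02):
        print(row)
```

Each yearly row is, in effect, a provisional claim; at the end of the budget period the committee can set observed utilization, spend and event counts against it.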

Predictive validity

While meeting standards for face validity, internal validity, cross-validity and external validity may be considered important in establishing a 'degree of belief' in a cost-outcomes model, the main point is that the ultimate and only 'test' of any model is its ability to generate predictions that are evaluable. In the context of healthcare decisions, evaluable predictions should be (i) practical and (ii) useful. It is worth emphasizing that the authors of the ISPOR reports not only recognize the importance of testable predictions, but point to predictive validation as 'the most desirable type as it corresponds best to modeling's purpose' (p. 849). In a similar vein, the authors of the recent ISPOR reports on dynamic simulation modeling also recognize the potential contribution of testing predictions as part of the model validation process, although the term validation is there used to include comparison of estimates against outcomes from prospective interventions or natural experiments as well as evaluation of model assumptions [19,20].

For a new product, a formulary committee is presumably interested in assessing claims that the product is a useful or effective addition to the drugs presently on formulary for a target population or disease area, and that the claims made can be evaluated in a reasonable or relatively short period of time. Rather than being merely one form of validation, however, predictive validation stands alone. Unless a model can be demonstrated to generate practical and useful predictions of outcomes that are as yet unknown, we cannot fall back on 'degree of belief' to support claims for new interventions. The WellPoint guidelines, noted above, in requesting manufacturers to submit a protocol for evaluating their claims post-formulary listing, saw the provision of testable claims as an integral part of an ongoing process of product review over the life cycle, linked to a budget impact analysis. This requirement is re-emphasized in this supplement by Schommer et al. [21].

Decision models have long been seen as an integral if not the central element in the tool box of the pharmacoeconomist. Whether they merit the attention that has been given to them is another question. If decision modeling as a framework for evaluating costs and benefits (e.g., QALYs) is seen as a necessary tool for supporting healthcare interventions, in particular the addition of new products to a formulary, a reasonable question is whether or not the claims for costs and benefits are verifiable. If the claims made are not verifiable then we are in a quandary. Do we accept, for example, modeled lifetime cost per QALY claims for a new product set alongside a notional ‘standard of care’ on the basis that the model ‘looks realistic’ and is ‘persuasive’ or do we reject the model on the grounds that the predictions are either not in practice verifiable or that any attempt to verify lies quite outside any time frame that a formulary committee would consider reasonable for ongoing product assessment and formulary placement? In a lifetime simulation, the timeframe may extend for decades – a model horizon that is difficult to take seriously.
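
The sketch below is a deliberately minimal and entirely hypothetical Markov cohort model (three invented health states, invented transition probabilities, costs and utility weights, a 40-year horizon and 3% discounting, none of it drawn from any actual submission), included only to illustrate how a lifetime cost-per-QALY figure of this kind is assembled cycle by cycle, and why the headline claim it produces cannot be observed, let alone verified, within any timeframe useful to a formulary committee.

```python
# Illustrative-only Markov cohort sketch: hypothetical states and inputs,
# not any published model. Shows how a "lifetime" cost-per-QALY claim is
# built up cycle by cycle over a multi-decade horizon.

# States: well, progressed, dead; annual transition probabilities per arm.
TRANSITIONS = {
    "new":      {"well": {"well": 0.90, "progressed": 0.07, "dead": 0.03},
                 "progressed": {"progressed": 0.85, "dead": 0.15}},
    "standard": {"well": {"well": 0.86, "progressed": 0.10, "dead": 0.04},
                 "progressed": {"progressed": 0.85, "dead": 0.15}},
}
ANNUAL_COST = {"new": {"well": 9000, "progressed": 15000},
               "standard": {"well": 6000, "progressed": 15000}}
UTILITY = {"well": 0.85, "progressed": 0.60}   # QALY weights per year in state
DISCOUNT = 0.03

def lifetime_run(arm, years=40):
    """Run the cohort for `years` annual cycles; return discounted cost and QALYs per patient."""
    cohort = {"well": 1.0, "progressed": 0.0, "dead": 0.0}
    cost = qalys = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + DISCOUNT) ** t
        cost += d * sum(cohort[s] * ANNUAL_COST[arm].get(s, 0) for s in cohort)
        qalys += d * sum(cohort[s] * UTILITY.get(s, 0) for s in cohort)
        nxt = {"well": 0.0, "progressed": 0.0, "dead": cohort["dead"]}  # dead is absorbing
        for s, probs in TRANSITIONS[arm].items():
            for s2, p in probs.items():
                nxt[s2] += cohort[s] * p
        cohort = nxt
    return cost, qalys

c_new, q_new = lifetime_run("new")
c_std, q_std = lifetime_run("standard")
print(f"Incremental cost per QALY: {(c_new - c_std) / (q_new - q_std):,.0f}")
```

Nothing in the arithmetic is difficult; the problem is that the quantity it reports is a decades-long construct, not a prediction that can be set against observed data within the short review cycles discussed below.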

Or are we missing the point? Should we argue that once we move away from the 'hard' sciences such as physics, the rules for scientific rigor and validation cease to be relevant? Do our methodological standards 'soften' as we retreat to evaluating claims on the basis of the conformity of the model structure and assumptions to the real world, a simulacrum of the treating environment? If a testable prediction emerges, that is a bonus. Otherwise we put the more difficult predictive issues in simulation to one side, focusing instead on looking for our car keys under the lamp post because that is where the light is shining.

Perhaps these notions or standards don’t apply because practitioners believe it is impossible to capture within a modeled framework the sheer complexity of treatment behavior and treatment response. We are forced back on ad hoc approaches to evaluating treatment effect with claims that are not intended to be subject to rigorous scientific standards. Decision-makers, faced with claims for new interventions in chronic conditions, may be prepared to accept as second-best the lifetime models where, on the assumptions of appropriate prescribing and adherence behaviors, and absent any consideration of the entry of new drugs, cost-per-QALY claims drive formulary acceptance.

Is the modeling of pharmacoeconomic claims, dressed up to meet standards of scientific rigor, nothing more than a marketing device? Under judiciously selected assumptions and techniques, claims for treatment effect are all too often designed to convince formulary committees and external assessors that product adoption is a 'good thing'. Modeling a claim may be simply a smokescreen obscuring the fact that the product is a 'me too' at a price incommensurate with its potential benefits.

Comparing models

A model should not be assessed or compared to another model just on the ‘realism’ of its assumptions. This does not mean that the assumptions, whether they relate to the structure of the model or parameter choice, should be put to one side. Even if a model is seen as nothing more than a ‘thought experiment’, the questions addressed in comparing competing therapies necessarily involve a choice and justification of assumptions as to model structure and its parameters. Presumably, model developers and the recipients of modeled claims must have a ‘belief’ in whether or not the exercise is worthwhile and that resultant claims are ‘sensible’.

Even so, the point remains: however much the model structure and assumptions are justified by appeals to internal and external validity, a model is still a model. The ultimate test is one of the ability of the model to generate predictions as to unknown consequences; predictions which are hopefully reproducible in a variety of healthcare settings.

Whether you look upon a model as an attempt to generate testable predictions or as a pricing exercise, transparency is a concern. Decision-makers in healthcare systems have long been wary of 'black box' simulations, notably those funded or presented by manufacturers to support submissions for formulary acceptance. Issues of transparency become more critical the more complex the model structure and the more abstruse the techniques employed to generate outcome claims. As formulary committee members typically lack the skills required to unravel the model, the understandable and real likelihood is that the model will be put to one side. Increasing complexity, even with checklists, is probably inversely related to acceptability.

If the attempt is to mimic the 'real' world and to account for the impact of interventions on disparate target groups, then the number of cost-outcomes scenarios reported could easily overwhelm the audience. If, as typically advocated, allowance is made for parameter uncertainty, the reporting (and claims for validation) become even more complex. The situation becomes still more unmanageable once the model is shipped from one healthcare system to another, systems which may have different pricing structures, availability of comparator products and even different treatment guidelines. A recent editorial by Frappier et al. [22] raises the issue of costing bias in economic evaluations, noting that there is little guidance available on how to cost clinical events and resource utilization; the available evidence suggests that different costing methods can generate substantially different cost estimates. All will require 'customized' simulations, a feature that is emphasized in the practice standards for budget impact claims. This casts doubt on published claims for cost-effectiveness where only limited scenarios are explored.

Feedback

If a new product is introduced to a formulary, or a new preventive intervention is introduced into a healthcare system, decision-makers presumably want to know, within a relatively short period of time (within, say, 3 years), whether or not the claims made and the resources allocated are justified. Claims should be evaluable within a timeframe that makes sense to the decision-maker. If these claims are intended to be seen as bona fide hypotheses, then they need to be presented in a form that facilitates validation.

The turnover of products and devices in the delivery of healthcare underscores the importance of feedback with a timeframe that is useful to formulary committees. A reference case, such as that required by NICE, is of little if any practical benefit to a formulary committee. If a product is not performing against metrics consistent with clinical trial results and claims for cost-effectiveness then it should be re-assessed. After all, the FDA asks manufacturers to set up programs for risk assessment and safety. Products are monitored continually. Why should outcome claims be treated any differently?
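
As a purely illustrative sketch of what such ongoing monitoring might look like (the products, figures and 20% tolerance below are invented for the example, not drawn from any guideline or program), a committee could set each submitted prediction against what its own claims or registry data show at the end of a budget period and flag for re-assessment any product that has drifted beyond an agreed tolerance, or for which no follow-up data were ever collected:

```python
# A minimal sketch (hypothetical thresholds and data) of a short-cycle
# monitoring check: compare each submitted claim with observed performance
# and flag products whose results drift beyond a tolerance for re-assessment.

def flag_for_review(claims, observations, tolerance=0.20):
    """Return claims whose observed value misses the prediction by more than `tolerance`."""
    flagged = []
    for product, predicted in claims.items():
        observed = observations.get(product)
        if observed is None:
            flagged.append((product, predicted, None, "no follow-up data collected"))
            continue
        relative_gap = abs(observed - predicted) / predicted
        if relative_gap > tolerance:
            flagged.append((product, predicted, observed, f"off by {relative_gap:.0%}"))
    return flagged

# Hypothetical predicted vs observed annual cost per treated patient after year 1.
claims = {"product_a": 5200, "product_b": 7800}
observed = {"product_a": 5400, "product_b": 10100}
for item in flag_for_review(claims, observed):
    print(item)
```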

Of course, feedback is not restricted to the experience of a single health system. From this perspective there is an advantage in having a 'fragmented' health system such as that of the US, where there are multiple players and where a new product may enter the market in stages, depending on the rate of adoption and diffusion among practice locations and individual physicians. Late adopters may look to the experience of early adopters and require manufacturers to provide evidence of treatment effects.

Claims that are qualified by statements pointing to the difficulty of validation in the short term, insisting that the only option is retrospective validation at some time in the future, are unacceptable. Of course, a manufacturer may wish to sit on its hands, pointing to the practical and financial obstacles to, for example, underwriting a patient registry. But this evades the question. Manufacturers and groups advocating the introduction of new products and programs should not be allowed to take refuge in the 'too difficult' redoubt. There are all too many examples of products or programs that should, in retrospect, have been withdrawn, cancelled or smothered at birth; a denial of the ability to generate short-term, assessable predictions is not acceptable.

A further issue is whether or not we should accept that a model is valid, in the sense of giving decision-makers the information needed for informed decisions, in the absence of any predictive evaluation of claims. One scenario, for example, is where two manufacturers with competing products produce models which both meet these standards but which come to opposite conclusions as to the predicted benefits of adopting their therapy. In the absence of assessing modeled claims on the basis of testable predictions, the decision-maker is forced back on a comparison of the competing models. This seems an odd way of establishing the merits of competing claims, yet the literature is replete with modeled claims that are best seen as marketing exercises, with no pretension to being comprehensive in capturing competing products or to generating testable predictions. All too often, manufacturer-commissioned models are treated with suspicion, and reputable journals may refuse publication to the more obvious 'market access' support pieces.

Validation and falsification

The hallmark of the scientific method, as it is applied in both the natural and social sciences, is a research program that rests on the concept of falsification. If we accept this standard, then where a model or simulation is built with its assumptions verified by observation or comparison, but is unable to generate testable predictions (or predictions that can be tested within a reasonable time frame), the probability of falsification is zero. We are forced back on attempting to compare models on the basis of the 'realism' of their assumptions, on their representation of 'reality', not on an independent assessment of the validity of testable predictions. The failure or inability to generate testable predictions means it is impossible to apply the methodological standards that underpin natural or normal science and that support research programs producing new facts [23].

The fact that a model may create an illusion of confidence is not a basis for accepting the model and its predictions. If we believe in the importance of a demarcation between falsifiable and unfalsifiable statements, between science and pseudoscience, then we must require a model to generate predictions that are falsifiable in practice, not simply falsifiable in principle. The question is epitomized in the saying famously attributed to Wolfgang Pauli about an argument that fails to be scientific because it cannot be falsified by experiment: 'This isn't right. This isn't even wrong!' [24] (p. 350).

It is of interest to note that the issue of experimental verification versus the inherent elegance and consistency of a theory is now a topic of debate in physics, long held to be the exemplar of the Popperian view that a theory must be falsifiable to be scientific [25,26]. As Ellis and Silk [25] point out, it is moving the goal posts to argue not that belief in a theory increases when observational evidence arises to support it, but that theoretical discoveries bolster belief; that a theory is so good, so inherently elegant, that it supplants the need for data and testing. Unfortunately, conclusions arising logically from mathematics need not apply to the real world. The authors conclude: 'In our view the issue boils down to clarifying one question: what potential observational evidence is there that would persuade you that the theory is wrong and lead you to abandoning it? If there is none, it is not a scientific theory.'

However, we should not fall into the trap of what Lakatos [23] has described as naïve falsificationism. For the naïve falsificationist, any theory that can be interpreted as experimentally falsifiable is acceptable or scientific; for Lakatos, the acceptable research program is one of sophisticated falsificationism. For the sophisticated falsificationist, a theory is acceptable or scientific only if it has corroborated excess empirical content over its predecessor (or rival), that is, only if it leads to the discovery of novel facts.

Provisional claims

Formulary committees and similar entities need to set standards for manufacturers in submitting claims for their products. A committee may eschew the more elegant and mathematically rigorous models that present non-testable lifetime cost-per-QALY claims generated with Markov or other simulation frameworks, settling instead for more prosaic short-term claims that can be validated in a meaningful timeframe.

Establishing standards for submitting and evaluating what may be described as short-term claims puts the ball firmly in the manufacturer's court. Unless claims meet the standards requested, it seems pointless to underwrite studies that present modeled claims bound to be rejected or ignored by formulary committees. Unfortunately, as noted, the health technology assessment literature is replete with such studies. Will manufacturers continue to underwrite them, seeing them as a necessary rite of passage, an attempt to get the attention of decision-makers, in setting the stage for market access claims? Or will manufacturers, reflecting on internal budget constraints, eschew underwriting non-testable modeled claims and focus instead on meeting the standards set by formulary committees, presenting evidence that supports claims for their products in a meaningful timeframe? Recognizing the need for evidence as an input to disease area and therapeutic class reviews, leading in turn to reviews of pricing agreements and formulary placement, would likely dominate other considerations.

If a manufacturer submits a modeled claim but indicates that it is provisional and subject to validation under a proposed protocol, this sets the stage for claims framed in empirically verifiable terms. A manufacturer's 'degree of belief' in its product's performance may well temper the initial claim if it knows that, within a 2-3 year timeframe, evidence assessing that claim has to be presented. Indeed, a manufacturer may well challenge a competitor who rests its case for formulary listing on claims that are non-testable, even if the competitor seeks refuge in the defense that, while non-testable, the modeled claims have been peer reviewed, published and meet notional validation standards for modeling.

A commitment to a robust research agenda

To argue that testing predictive claims is impractical and should be put to one side because of the timelines involved is unacceptable. Assuming we want to continue subscribing to the cost-outcomes modeling meme, and believe that decision-makers in healthcare systems see merit in it, the task facing us is to develop modeling frameworks, within a research strategy, that generate predictions which can be tested in timelines agreeable to decision-makers. Focusing on generating testable predictions has the potential to set the stage for a research program that could make pragmatic and clinically meaningful contributions to healthcare decision-making; a research program that recognizes the need to generate testable hypotheses.

The lack of predictive content in modeled claims for cost-effectiveness points, therefore, to a disconnect between modeled claims and comparative effectiveness research. In the absence of testable predictions, the link between modeled claims and comparative effectiveness studies is, at best, tenuous. If comparative effectiveness studies are divorced from initial claims as to relative product performance, it is difficult to see what is achieved by formulary submission guidelines for cost-outcomes claims that accept unverifiable claims. In the absence of testable claims for a product's impact on the comparative performance of drugs in disease and therapy areas, there are no reference points for subsequent comparative effectiveness studies. There are, in effect, no hypotheses to direct the research; there is no progressive research agenda focused on developing predictions and claims for drug impacts that might guide formulary decisions.

To return to the starting point of this discussion, being predictive of unknown facts is essential to the empirical testing of hypotheses [27]. This is the only basis on which the accretion of knowledge in modeling can conform to the robust standards of normal science: robust in the sense of providing comparative claims about the future that are non-trivial, testable, refutable and reproducible, all within a timeframe that makes sense to the decision-maker and supports ongoing re-assessment.

If we accept this robust standard for a research program, then we have a long way to go. Surely we are aiming at an accretion of knowledge. If so, then we need to focus on unknown treatment outcomes, notably for new products and interventions, and our ability to understand and predict claims in both clinical as well as resource utilization terms. Or are we locked into a static view of the role of economic evaluations? Are we subjecting claims for costs and outcomes, in the words of Thomas Kuhn, to 'maximum strain' [28]? Are we astronomers or astrologers?

Transparency

Declaration of funding

The author has not received any funding for this study.

Declaration of financial/other relationships

The author has not received any financial support and has no relationships or conflict of interest with regard to the content of this article. JME peer reviewers on this manuscript have no relevant financial or other relationships to disclose.

References

  1. Academy of Managed Care Pharmacy. The AMCP format for formulary submissions (version 3.1). Alexandria, VA: AMCP, 2012
  2. National Institute for Health and Care Excellence. Guide to the methods of technology appraisal. London: NICE, 2013
  3. Scottish Medicines Consortium. Submission guidance and templates for submission. Glasgow: Scottish Medicines Consortium, 2014
  4. Australian Government, Department of Health. Guidelines for preparing submissions to the Pharmaceutical Benefits Advisory Committee (version 4.4). Canberra: Australian Government, 2013
  5. Langley PC. Recent developments in the health technology assessment process. In: Fulda TR, Wertheimer AI, eds. Handbook of pharmaceutical public policy. New York: Pharmaceutical Products Press, 2007
  6. Sweet B, Tadlock CG, Waugh W, et al. The WellPoint outcomes based formulary: enhancing the health technology assessment process. J Med Econ 2005;8:13-25
  7. Belsey J. Predictive validation of modeled health technology assessment claims: lessons from NICE. J Med Econ 2015;18:1007-12
  8. Schlessinger L, Eddy DM. Archimedes: a new model for simulating health care systems - the mathematical formulation. J Biomed Inform 2002;35:37-50
  9. Schlender A, Alperin PE, Grossman HL, Sutherland ER. Modeling the impact of increased adherence to asthma therapy. PLoS One 2012;7:e51139
  10. Van Herick A, Schuetz CA, Alperin P, et al. The impact of initial statin treatment decisions on cardiovascular outcomes in clinical care settings: estimates using the Archimedes Model. ClinicoEcon Outcomes Res 2012;4:337-47
  11. Shum K, Alperin P, Shalnova S, et al. Simulating the impact of improved cardiovascular risk interventions on clinical and economic outcomes in Russia. PLoS One 2014;9:e103280
  12. Dinh TA, Rosner BI, Atwood J, et al. Health benefits and cost-effectiveness of primary genetic screening for Lynch syndrome in the general population. Cancer Prev Res (Phila) 2011;4:9-22
  13. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. J Med Econ 2013;16:713-19
  14. Caro JJ, Briggs A, Siebert U, et al. Modeling good research practices - overview: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-1. Value Health 2012;15:796-803
  15. Eddy DM, Hollingworth W, Caro JJ, et al. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-7. Value Health 2012;15:843-50
  16. Drummond MF, Sculpher MJ, Torrance GW, et al. Methods for the economic evaluation of health care programmes, 3rd edn. Oxford: Oxford University Press, 2005
  17. Mauskopf JA, Sullivan SD, Annemans L, et al. Principles of good practice for budget impact analysis: report of the ISPOR Task Force on Good Research Practices - Budget Impact Analysis. Value Health 2007;10:336-47
  18. Sullivan SD, Mauskopf JA, Augustovski F, et al. Budget impact analysis - principles of good practice: report of the ISPOR 2012 Budget Impact Analysis Good Practice II Task Force. Value Health 2014;17:5-14
  19. Marshall DA, Burgos-Liz L, IJzerman MJ, et al. Applying dynamic simulation modeling methods in health care delivery research - the SIMULATE checklist: report of the ISPOR Simulation Modeling Emerging Good Practices Task Force. Value Health 2015;18:5-16
  20. Marshall DA, Burgos-Liz L, IJzerman MJ, et al. Selecting a dynamic simulation modeling method for health care delivery research - Part 2: report of the ISPOR Dynamic Simulation Modeling Emerging Good Practices Task Force. Value Health 2015;18:147-60
  21. Schommer J, Carlson A, Rhee G. Validating pharmaceutical product claims: questions a formulary committee should ask. J Med Econ 2015;18:1000-6
  22. Frappier J, Tremblay G, Charny M, et al. Costing bias in economic evaluations. J Med Econ 2015;18:596-9
  23. Lakatos I. Falsification and the methodology of scientific research programmes. In: Lakatos I, Musgrave A, eds. Criticism and the growth of knowledge, 3rd impression. London: Cambridge University Press, 1974
  24. Prochnow HV. The successful toastmaster: a treasure chest of introductions, epigrams, humor and quotations. New York: Harper Collins, 1966
  25. Ellis G, Silk J. Defend the integrity of physics. Nature 2014;516:321-3
  26. Frank A, Gleiser M. A crisis at the edge of physics. New York Times, June 7, 2015
  27. Ayala F. The candle and the darkness. Science 1996;273:442-4
  28. Kuhn TS. Logic of discovery or psychology of research. In: Lakatos I, Musgrave A, eds. Criticism and the growth of knowledge, 3rd impression. London: Cambridge University Press, 1974
