TUFTS’ ORDINAL OVERSIGHT
It is unusual, after 46 years of database development, to point out a basic flaw that unfortunately renders the entire exercise irrelevant. Yet this is the situation in which the Tufts University Cost-Effectiveness Analysis (CEA) database finds itself once questions about the axioms of fundamental measurement are raised. The concern rests on two considerations: first, the measurement properties of the direct and indirect multiattribute preference instruments that have been widely accepted and applied in modeling for 30 or more years and, second, the nature of the quality-adjusted life year (QALY) construct. The two are inextricably linked and, taken together, spell the demise of the core product in health technology assessment: the commitment to assumption-driven simulation models that produce approximate information.
The decision taken in the early 1990s, given the limited information available to support claims for cost-effectiveness at product launch, was to reject hypothesis testing in favor of inventing evidence for non-evaluable lifetime value claims. This clearly violated the standards of normal science. Yet, whether through ignorance or design, analysts persevered, deliberately rejecting the demarcation between science and non-science in favor of the latter. This is now recognized as a major error, one that built a meme or belief system that is an analytical dead end.
To support the creation of imaginary claims, you need a metric that captures in one place the required basis for cost-effectiveness claims: enter the QALY as the gold standard for evaluating therapy benefits and supporting health care resource allocation decisions. But to create a QALY you require a preference score to discount time spent in the stages of a disease to its healthy-time equivalent. This implies, although it is rarely articulated, a bounded ratio scale, capped at unity with a true zero. Unfortunately, those developing multiattribute instruments to capture this will-o'-the-wisp did not appreciate the need to produce scores with these properties. The instruments they built failed to meet that standard, and the inevitable result was a range of instruments yielding only ordinal scores: scores that cannot support the standard arithmetic operations of classical parametric statistics, only nonparametric ones.
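To see what the construct demands, consider a minimal sketch of the arithmetic (the health states, durations, and preference weights below are hypothetical, not drawn from any instrument or model): a QALY total is a sum of durations multiplied by preference weights, an operation that presupposes weights on a ratio scale anchored at dead = 0 and full health = 1.

```python
# Minimal sketch of QALY arithmetic; all values are hypothetical.
# A QALY total is the sum, over health states, of years spent in the
# state multiplied by its preference weight. The sum is only defined if
# the weights sit on a ratio scale anchored at dead = 0, full health = 1.

health_states = [
    # (label, years in state, preference weight)
    ("progression-free", 3.0, 0.80),
    ("progressed",       2.0, 0.50),
    ("palliative",       1.0, 0.20),
]

qalys = sum(years * weight for _label, years, weight in health_states)
print(f"QALYs: {qalys:.2f}")  # 3.0*0.80 + 2.0*0.50 + 1.0*0.20 = 3.60
```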
This failure went either unrecognized or simply ignored. The result is the current situation, in which analysts believe that the various ordinal preference measures are mystical ratio scales in disguise, with a true zero (despite negative preference values) and interval scoring properties. It is a complete mess.
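The defining property of an ordinal scale is that only the ranking of scores is information; any strictly increasing relabeling of the scores is equally legitimate. The hypothetical sketch below shows why that rules out parametric arithmetic: an order-preserving transform can reverse a comparison of treatment means.

```python
import math

# Hypothetical illustration: on an ordinal scale only the ranking of
# scores is information, so any strictly increasing relabeling is just
# as legitimate. Such a relabeling can reverse a comparison of means.

arm_a = [0.20, 0.20, 0.90]  # invented preference scores, treatment A
arm_b = [0.40, 0.40, 0.40]  # invented preference scores, treatment B

def mean(xs):
    return sum(xs) / len(xs)

print(mean(arm_a) > mean(arm_b))  # True: A "beats" B (0.433 vs 0.400)

# Square root is strictly increasing on [0, 1]: every rank is preserved.
a_t = [math.sqrt(x) for x in arm_a]
b_t = [math.sqrt(x) for x in arm_b]

print(mean(a_t) > mean(b_t))  # False: B now "beats" A (0.614 vs 0.632)
```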
Enter Tufts Medical Center and its Center for the Evaluation of Value and Risk in Health, home of the Cost-Effectiveness Analysis (CEA) Registry: a database devoted to the evaluation of cost-per-QALY models, summarizing their (mathematically impossible) claims and producing a file of some 36,000 health state preference scores. What was not appreciated, and still is not, is that the Registry is promoting not merely ordinal scores but ordinal scores with negative values. A recent review of the first hundred ordinal scores available through the Tufts CEA website found that 47% of the health states had negative values. This ensures that the hoped-for mystical ratio scale with a true zero is a true will-o'-the-wisp. This is the flaw, unrecognized for 46 years.
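Why the negative values matter: ratios are only invariant, and hence meaningful, on a scale with a true zero. Once negative scores force the zero point to be an arbitrary anchor, re-anchoring the scale (a legitimate move on an interval scale) changes every ratio, as the hypothetical sketch below shows; the two scores are invented, not taken from the Registry.

```python
# Hypothetical illustration: ratios are only invariant on a scale with a
# true zero. With negative preference scores the zero is an anchor choice,
# and re-anchoring (legitimate on an interval scale) changes every ratio.

state_x = 0.60    # invented score for health state X
state_y = -0.30   # invented score for health state Y ("worse than dead")

print(state_x / state_y)  # -2.0: a "ratio" with no interpretation

shift = 0.40      # move the anchor; order and intervals are unchanged
print((state_x + shift) / (state_y + shift))  # 10.0: the "ratio" is arbitrary
```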
REFERENCES
Langley P. An Ordinal Oversight: Abandoning the Tufts Medical Center Cost-Effectiveness Analysis (CEA) Database. Maimon Working Papers No. 4, January 2022. https://maimonresearch.com/an-ordinal-oversight-abandoning-the-tufts-medical-center-cost-effectiveness-analysis-cea-database
Langley P. Nothing to Cheer About: Endorsing Imaginary Economic Evaluations and Value Claims with CHEERS 22. Maimon Working Papers No. 2, January 2022. https://maimonresearch.com/nothing-to-cheer-about-endorsing-imaginary-economic-evaluations-and-value-claims-with-cheers-22