Letter to the Editor

Hospital discharge register data on non-ST-elevation and ST-elevation myocardial infarction in Finland; terminology and statistical issues on validity and agreement to avoid misinterpretation

Pages 336–337 | Received 08 Apr 2020, Accepted 05 Jun 2020, Published online: 17 Jun 2020

I was interested to read the paper by Okkonen and colleagues published in the April 2020 issue of the Scandinavian Cardiovascular Journal [Citation1]. The authors aimed to examine the validity of ST-elevation myocardial infarction (STEMI) and non-ST-elevation myocardial infarction (NSTEMI) diagnoses in the Finnish nationwide hospital discharge register (HDR). In the first stage, they sampled 180 patients treated for MI in 1996–2012 in three different hospitals (60 patients in each hospital). A cardiology resident classified the patients on the basis of the ECG findings into the following categories: NSTEMI, STEMI, or not classifiable myocardial infarction (NCMI). In the second stage, they sampled 270 additional patients, i.e. 90 patients per hospital, treated for STEMI, NSTEMI, or NCMI in 2012–2014. The ECGs of these patients were evaluated independently by the cardiology resident and a senior cardiologist and compared with the HDR diagnosis. The authors reported that, in the first stage, the agreement between the ECG coding of the cardiology resident and the HDR diagnoses was poor (Cohen's kappa coefficient = 0.38). In the second stage, the agreement remained at the same poor level (Cohen's kappa = 0.22). The agreement between the cardiology resident and the senior cardiologist was, however, good (Cohen's kappa = 0.75).

There are methodological issues that can affect the main message of the study. First, validity means the degree to which the result of a measurement, calculation, or specification conforms to the correct value or a standard. In other words, accuracy (validity) is the most important criterion for the quality of a test and refers to whether or not the test measures what it claims to measure. The core design for assessing the accuracy of a test is a comparison between an index test and a reference standard, with both applied to the same patients suspected of having the target condition of interest [Citation2]. Although the authors reported the sensitivity and specificity of the HDR, validity studies should report the absolute numbers of true positives, false positives, false negatives, and true negatives, or provide enough information to allow their calculation, so that at least one further diagnostic performance indicator [e.g. predictive values or likelihood ratios (LR)] can be derived [Citation2–5].
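
To make these indicators concrete, the following short Python sketch derives sensitivity, specificity, predictive values, and likelihood ratios from the four absolute counts of a 2 × 2 table. The counts are hypothetical and chosen for illustration only; they are not taken from Okkonen et al. [Citation1].

    # Hypothetical 2x2 counts: index test (e.g., HDR diagnosis) against a
    # reference standard (e.g., expert ECG adjudication). Illustrative only.
    tp, fp, fn, tn = 80, 15, 20, 85

    sensitivity = tp / (tp + fn)               # P(test+ | condition present)
    specificity = tn / (tn + fp)               # P(test- | condition absent)
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio

    print(f"Se={sensitivity:.2f} Sp={specificity:.2f} "
          f"PPV={ppv:.2f} NPV={npv:.2f} LR+={lr_pos:.2f} LR-={lr_neg:.2f}")

Reporting the four counts, or enough information to recover them, lets readers compute any of these indicators themselves, which is precisely what the cited reporting guidance asks for.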

Second, what is critically important is agreement (precision, repeatability, reliability), which is conceptually different from validity (accuracy); consequently, the methodological and statistical approach to assessing agreement should also be different. Agreement refers to the degree of refinement in a measurement, calculation, or specification, especially as represented by the number of digits given. For validity, a global average approach is usually appropriate; for agreement, however, the approach should be based on individuals. Applying Cohen's kappa coefficient is not an appropriate way to assess agreement, because the kappa coefficient depends on the prevalence in each category. It is possible for the concordant cells to hold 90% of the observations and the discordant cells 10%, yet to obtain quite different kappa values [0.44, moderate, vs. 0.81, very good] depending on the marginal distributions (Table 1; see also the sketch after Table 1). The value of Cohen's kappa coefficient also depends on the number of categories [Citation2,Citation6,Citation7]. I should mention that the weighted kappa would be a good choice to assess intra-rater agreement, whereas Fleiss' kappa is suggested for inter-rater agreement [Citation8]. The authors concluded that the division of MI diagnoses into STEMI and NSTEMI is not reliable in the Finnish HDR. Such a conclusion can be a misleading message owing to the inappropriate use of statistical tests to assess validity and agreement. In brief, any conclusion on validity and agreement should take the above-mentioned methodological issues into account; otherwise, misinterpretation may occur.

Table 1. Limitation of Cohen’s kappa coefficient to assess agreement between two observers with different prevalence in the two categories.
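
Because the limitation shown in Table 1 is a purely arithmetic property of kappa, it can be reproduced with a short Python sketch. The two 2 × 2 tables below are my own illustrative reconstruction of the scenario the caption describes (the letter's exact cell counts are not reproduced here): both show identical 90% observed agreement, yet the balanced table yields kappa ≈ 0.80 (close to the 0.81 cited above) while the prevalence-skewed table yields kappa ≈ 0.44.

    def cohens_kappa(table):
        """Cohen's kappa for a square agreement table (lists of counts)."""
        k = len(table)
        n = sum(sum(row) for row in table)
        p_obs = sum(table[i][i] for i in range(k)) / n        # observed agreement
        row_marg = [sum(row) / n for row in table]
        col_marg = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
        p_exp = sum(r * c for r, c in zip(row_marg, col_marg))  # chance agreement
        return (p_obs - p_exp) / (1 - p_exp)

    balanced   = [[45, 5], [5, 45]]   # both categories equally prevalent
    unbalanced = [[85, 5], [5,  5]]   # one category has 90% prevalence

    print(cohens_kappa(balanced))     # ~0.80: "very good" agreement
    print(cohens_kappa(unbalanced))   # ~0.44: "moderate" agreement

The observed agreement is 90% in both cases; only the marginal prevalence differs, which is exactly the dependence on category prevalence that this letter warns against.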

Disclosure statement

No potential conflict of interest relevant to this article was reported.

References

  • Okkonen M, Havulinna AS, Ukkola O, et al. The validity of hospital discharge register data on non-ST-elevation and ST-elevation myocardial infarction in Finland. Scand Cardiovasc J. 2020;54(2):108–114.
  • Szklo M, Nieto FJ. Epidemiology beyond the basics. 3rd ed. Manhattan (NY): Jones and Bartlett; 2014. p. 313–343.
  • Sabour S, Ghassemi F. Accuracy, validity, and reliability of the infrared optical head tracker (IOHT). Invest Ophthalmol Vis Sci. 2012;53(8):4776.
  • Sabour S, Farzaneh F, Peymani P. Evaluation of the sensitivity and reliability of primary rainbow trout hepatocyte vitellogenin expression as a screening assay for estrogen mimics: methodological issues. Aquat Toxicol. 2015;164:175–176.
  • Pirouzpanah S, Taleban FA, Mehdipour P, et al. The biomarker-based validity of a food frequency questionnaire to assess the intake status of folate, pyridoxine and cobalamin among Iranian primary breast cancer patients. Eur J Clin Nutr. 2014;68(3):316–323.
  • Naderi M, Sabour S. Reproducibility of diagnostic criteria associated with atypical breast cytology: a methodological issue. Cytopathology. 2018;29(4):396.
  • Sabour S. Reproducibility of semi-automatic coronary plaque quantification in coronary CT angiography with sub-mSv radiation dose; common mistakes. J Cardiovasc Comput Tomogr. 2016;10:21–22.
  • Rücker G, Schimek-Jasch T, Nestle U. Measuring inter-observer agreement in contour delineation of medical imaging in a dummy run using Fleiss’ kappa. Methods Inf Med. 2012;51(6):489–494.
