Abstract
The problem of quantifying the distance between distributions arises in various fields, including cryptography, information theory, communication networks, machine learning, and data mining. In this article, by analogy with the cumulative Jensen–Shannon divergence defined in Nguyen and Vreeken (2015), we propose a new divergence measure based on the cumulative distribution function and call it the cumulative α-Jensen–Shannon divergence. Properties of this divergence are studied in detail, and two upper bounds for it are obtained. Simplified results under the proportional reversed hazard rate model are given, and various illustrative examples are analyzed.
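For orientation only, one natural candidate for such a measure replaces the densities in the classical α-weighted Jensen–Shannon divergence with cumulative distribution functions F and G, in the spirit of Nguyen and Vreeken (2015); the notation CJS_α and the exact form below are assumptions for illustration and need not coincide with the definition used in the article:
\[
\mathrm{CJS}_{\alpha}(F,G)
  = \alpha \int_{0}^{\infty} F(x)\,\log\!\frac{F(x)}{\alpha F(x)+(1-\alpha)G(x)}\,dx
  + (1-\alpha) \int_{0}^{\infty} G(x)\,\log\!\frac{G(x)}{\alpha F(x)+(1-\alpha)G(x)}\,dx,
  \qquad 0<\alpha<1.
\]
At α = 1/2 this sketch reduces to a cumulative analogue of the usual (symmetric) Jensen–Shannon divergence.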
Authors' contributions
Riyahi: Conceptualization, Methodology, Software, Validation, Writing – original draft preparation, Writing – reviewing and editing. M. Baratnia: Conceptualization, Software, Validation, Reviewing and editing. M. Doostparast: Conceptualization, Methodology, Supervision, Writing – original draft preparation, Writing – reviewing and editing.
Disclosure statement
No potential conflict of interest was reported by the authors.