Short Communications

Review of sparse sufficient dimension reduction: comment

Page 134 | Received 17 Sep 2020, Accepted 23 Sep 2020, Published online: 12 Oct 2020

High-dimensional data are frequently collected in a wide variety of areas, such as biomedical imaging, functional magnetic resonance imaging, tomography, tumor classification, and finance. With the recent explosion of scientific data of unprecedented size and complexity, feature ranking and screening play an increasingly important role in many scientific studies, and there have been many significant breakthroughs in this field. Starting from Zhu et al. (Citation2011), the first paper on model-free marginal screening, Li et al. (Citation2020) provided a thorough review of sparse sufficient dimension reduction.

Both the marginal Dantzig selector (Yu, Dong, & Shao, Citation2016) and forward trace pursuit (Yu, Dong, & Zhu, Citation2016) can be used for feature screening. Marginal screening is, in fact, the first step of forward trace pursuit: trace pursuit connects marginal screening with forward regression. It is also worth mentioning that forward trace pursuit can serve as an initial screening step to speed up computation in the case of ultrahigh dimensionality. Compared with existing screening methods in the literature, trace pursuit has many advantages; for details, one can refer to Yu, Dong, and Zhu (Citation2016).
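To make the two-stage idea concrete, the following is a minimal Python sketch of "screen marginally, then select forward". It is only an illustration, not the authors' method: the trace-based utility statistic of Yu, Dong, and Zhu is replaced here by plain marginal correlation, and the function names are hypothetical.

```python
import numpy as np

def marginal_screen(X, y, d):
    """Rank predictors by absolute marginal correlation with y and keep
    the top d (in the spirit of sure independence screening; trace
    pursuit would use a trace-based utility here instead)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.argsort(corr)[::-1][:d]

def forward_select(X, y, candidates, k):
    """Greedy forward regression over the screened candidate set: at each
    step, add the predictor that most reduces the residual sum of squares."""
    active, remaining = [], list(candidates)
    for _ in range(k):
        best, best_rss = None, np.inf
        for j in remaining:
            cols = active + [j]
            coef, rss, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            # lstsq returns an empty residual array in degenerate cases
            rss = rss[0] if rss.size else np.sum((y - X[:, cols] @ coef) ** 2)
            if rss < best_rss:
                best, best_rss = j, rss
        active.append(best)
        remaining.remove(best)
    return active
```

The screening step reduces the candidate set from ultrahigh dimension to a manageable size, after which the forward stage is computationally cheap, which is exactly the speed-up role mentioned above.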

Tan et al. (Citation2020) established the minimax lower bound for sparse sliced inverse regression (SIR for short), which had never been done in the area of sparse sufficient dimension reduction. Since an optimal estimator of sparse SIR is computationally intractable, they proposed a computationally feasible counterpart, which, however, cannot attain the optimal rate. To overcome this issue, Tan et al. (Citation2020) proposed a refined sparse SIR estimate that is rate-optimal yet computationally intractable, and proved that its computationally feasible counterpart, based on an adaptive estimation procedure, is nearly rate-optimal. Lasso-SIR (Lin et al., Citation2019) was shown to be rate-optimal only when p = o(n^2), whereas the sparse SIR approach is rate-optimal even when log p = o(n). Therefore, the sparse SIR estimator possesses a much wider range of applications, which, in my view, is a big breakthrough in sparse sufficient dimension reduction.
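For readers less familiar with SIR itself, a minimal numpy sketch of the classical, non-sparse estimator of Li (1991) may help fix ideas: slice the response, average the standardized predictors within each slice, and eigendecompose the between-slice covariance. The sparse variants discussed above add penalization on top of this basic construction; the function name and defaults here are illustrative only.

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_dirs=1):
    """Classical sliced inverse regression: estimate dimension-reduction
    directions via the leading eigenvectors of Cov(E[X | Y]), computed
    on the standardized scale."""
    n, p = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    # Whiten X using the inverse square root of its covariance
    evals, evecs = np.linalg.eigh(Sigma)
    Sigma_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ Sigma_inv_sqrt
    # Slice the sample into roughly equal-sized groups ordered by y
    slices = np.array_split(np.argsort(y), n_slices)
    # Weighted covariance of the within-slice means of Z
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original scale
    w, v = np.linalg.eigh(M)
    B = Sigma_inv_sqrt @ v[:, ::-1][:, :n_dirs]
    return B / np.linalg.norm(B, axis=0)
```

In the sparse, high-dimensional regime that Tan et al. (Citation2020) study, the eigendecomposition step is replaced by a penalized or thresholded estimator, which is where the computational-versus-statistical trade-off discussed above arises.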

Disclosure statement

No potential conflict of interest was reported by the author(s).
