An Interactive Greedy Approach to Group Sparsity in High Dimensions

Pages 409-421 | Received 27 Jul 2017, Accepted 15 Sep 2018, Published online: 22 Mar 2019
 

Abstract

Sparsity learning with known grouping structure has received considerable attention due to wide modern applications in high-dimensional data analysis. Although the advantages of using group information have been well studied for shrinkage-based approaches, the benefits of group sparsity have not been well documented for greedy-type methods, which greatly limits our understanding and use of this important class of methods. In this paper, generalizing from a popular forward-backward greedy approach, we propose a new interactive greedy algorithm for group sparsity learning and prove that the proposed greedy-type algorithm attains the desired benefits of group sparsity in high-dimensional settings. An estimation error bound that refines those of existing methods and a guarantee for group support recovery are also established simultaneously. In addition, we incorporate a general M-estimation framework and introduce an interactive feature to allow extra algorithmic flexibility without compromising the theoretical properties. The promising use of our proposal is demonstrated through numerical evaluations, including a real industrial application in human activity recognition at home. Supplementary materials for this article are available online.

Supplementary Materials

Supplement to “An Interactive Greedy Approach to Group Sparsity in High Dimension” We provide the proofs of Theorems 4.1 and 4.2 in Supplement A. The proofs for the sparse linear model and the sparse logistic regression are given in Supplement B and Supplement C, respectively. The useful lemmas for intermediate steps of these proofs are relegated to Supplement D. (supplement.pdf)

MATLAB package The MATLAB package demonstrates the implementation and use of the IGA and GIGA algorithms for both the sparse linear model and sparse logistic regression. It is available at https://github.com/weiqian1/IGA.
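For readers unfamiliar with forward-backward greedy selection over groups, the MATLAB sketch below gives a rough, self-contained illustration of the general idea for the sparse linear model: a forward step adds the group yielding the largest loss reduction, and a backward step drops groups whose removal barely increases the loss. This is only a conceptual sketch under our own assumptions; the function names, deletion rule, and tolerance delta are illustrative and do not reproduce the authors' IGA/GIGA algorithms, their interactive feature, or the code in the package above.

function [S, beta] = group_greedy_step(X, y, groups, S, delta)
% Illustrative group-wise forward-backward greedy step (not the authors' IGA/GIGA).
% X      : n-by-p design matrix
% y      : n-by-1 response
% groups : cell array; groups{g} holds the column indices of group g
% S      : current vector of selected group indices
% delta  : backward-deletion tolerance (illustrative choice)

    % Forward step: add the group giving the largest reduction in loss.
    bestGain = 0; bestG = 0;
    lossS = group_loss(X, y, groups, S);
    for g = setdiff(1:numel(groups), S)
        gain = lossS - group_loss(X, y, groups, [S g]);
        if gain > bestGain, bestGain = gain; bestG = g; end
    end
    if bestG > 0, S = [S bestG]; end

    % Backward step: drop any group whose removal increases the loss
    % by less than a fraction of the most recent forward gain.
    for g = S
        if group_loss(X, y, groups, setdiff(S, g)) - group_loss(X, y, groups, S) < delta * bestGain
            S = setdiff(S, g);
        end
    end

    % Refit least squares on the columns of the selected groups.
    idx = [groups{S}];
    beta = zeros(size(X, 2), 1);
    beta(idx) = X(:, idx) \ y;
end

function L = group_loss(X, y, groups, S)
% Least-squares loss after refitting on the columns of the selected groups.
    idx = [groups{S}];
    if isempty(idx), L = 0.5 * norm(y)^2; return; end
    b = X(:, idx) \ y;
    L = 0.5 * norm(y - X(:, idx) * b)^2;
end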

Acknowledgement

We sincerely thank the Editor, the Associate Editor, and two anonymous reviewers for their valuable and insightful comments, which helped to improve this manuscript significantly. Ji Liu's research is partially supported by NSF CCF-1718513, an IBM faculty award, and an NEC fellowship.
