Nonsmooth Low-rank Matrix Recovery: Methodology, Theory and Algorithm

Tu, Wei and Liu, Peng and Liu, Yi and Yao, Hengshuai and Jiang, Bei and Li, Guodong and Kong, Linglong (2019) Nonsmooth Low-rank Matrix Recovery: Methodology, Theory and Algorithm. Working paper. Submitted to the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20). (Access to this publication is currently restricted. You may be able to access a copy if URLs are provided.)

PDF (main article) - Pre-print
Restricted to Repository staff only
Download (4MB)

PDF - Supplemental Material
Restricted to Repository staff only
Download (178kB)

Abstract

Many interesting problems in machine learning can be formulated as \(\min_{x} F(x) = f(x) + g(x)\), where \(x\) is the model parameter, \(f\) is the loss and \(g\) is the regularizer. Examples include regularized regression in high-dimensional feature selection and low-rank matrix/tensor factorization. The loss function and/or the regularizer may be nonsmooth due to the nature of the problem; for example, \(f(x)\) could be the quantile loss, used to induce robustness or to focus on parts of the distribution other than the mean. In this paper we propose a general framework for problems with a nonsmooth loss or regularizer, using low-rank matrix recovery as a running example to explain the main idea. The framework consists of two steps: an optimal smoothing of the loss function or regularizer, followed by a gradient-based algorithm applied to the smoothed objective. The proposed smoothing pipeline is highly flexible, computationally efficient, easy to implement and well suited to problems with high-dimensional data. In the numerical studies we use the \(L_{1}\) loss to illustrate the practicality of the proposed pipeline: its smoothed approximation is the well-studied Huber loss, and gradient-based algorithms such as Adam, NAG and YellowFin all show promising results on the smoothed objective.
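To make the two-step pipeline concrete, here is a minimal sketch of the \(L_{1}\) example mentioned in the abstract: the absolute-value loss \(|r|\) is smoothed into the Huber loss with smoothing parameter \(\mu\), and the smoothed objective is then minimized with a first-order method. The function names, the choice of \(\mu\), and the use of plain gradient descent are illustrative assumptions, not the paper's implementation; any first-order method (Adam, NAG, YellowFin) could replace the gradient step.

```python
import numpy as np

def huber(r, mu):
    # Smoothed |r|: quadratic near zero, linear in the tails.
    # huber_mu(r) = r^2 / (2*mu)  if |r| <= mu,  |r| - mu/2 otherwise.
    return np.where(np.abs(r) <= mu, r**2 / (2 * mu), np.abs(r) - mu / 2)

def huber_grad(r, mu):
    # Derivative of the smoothed loss: clip(r / mu, -1, 1).
    return np.clip(r / mu, -1.0, 1.0)

def fit_lad_smoothed(X, y, mu=0.1, lr=0.01, n_iter=1000):
    """Minimize (1/n) * sum_i huber(y_i - x_i^T w) by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = y - X @ w
        w += lr * X.T @ huber_grad(r, mu) / len(y)  # descent step
    return w

# Illustrative usage: least-absolute-deviation regression under
# heavy-tailed (Laplace) noise, where the L1 loss is a natural choice.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ w_true + rng.laplace(scale=0.5, size=200)
print(fit_lad_smoothed(X, y))
```

As \(\mu \to 0\) the Huber loss converges to \(|r|\), so \(\mu\) trades smoothness (and hence suitability for gradient methods) against fidelity to the original nonsmooth loss.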

Item Type: Monograph (Working paper)
Subjects: Q Science > Q Science (General) > Q335 Artificial intelligence
Divisions: Faculties > Sciences > School of Mathematics Statistics and Actuarial Science > Statistics
Depositing User: Peng Liu
Date Deposited: 06 Sep 2019 11:27 UTC
Last Modified: 10 Sep 2019 08:41 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/76243 (The current URI for this page, for reference purposes)
Liu, Peng: https://orcid.org/0000-0002-0492-0029
