# Nonsmooth low-rank matrix recovery: methodology, theory and algorithm

Tu, Wei and Liu, Peng and Liu, Yi and Kong, Linglong and Li, Guodong and Jiang, Bei and Yao, Hengshuai and Jui, Shangling (2019) Nonsmooth low-rank matrix recovery: methodology, theory and algorithm. Working paper. TBA (Submitted) (KAR id:78761)

Many interesting problems in statistics and machine learning can be written as $$\min_x F(x) = f(x) + g(x)$$, where $$x$$ is the model parameter, $$f$$ is the loss and $$g$$ is the regularizer. Examples include regularized regression in high-dimensional feature selection and low-rank matrix/tensor factorization. Sometimes the loss function and/or the regularizer is nonsmooth due to the nature of the problem; for example, $$f(x)$$ could be the quantile loss, used to induce robustness or to focus on parts of the distribution other than the mean. In this paper we propose a general framework for problems in which the loss or the regularizer is nonsmooth. Specifically, we use low-rank matrix recovery as an example to demonstrate the main idea. The framework involves two main steps: an optimal smoothing of the loss function or regularizer, followed by a gradient-based algorithm that minimizes the smoothed objective. The proposed smoothing pipeline is highly flexible, computationally efficient, easy to implement, and well suited to problems with high-dimensional data. Strong theoretical convergence guarantees are also established. In the numerical studies, we use the $$L_1$$ loss as an example to illustrate the practicality of the proposed pipeline. Various state-of-the-art algorithms such as Adam, NAG and YellowFin all show promising results on the smoothed loss.
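To make the two-step pipeline concrete, below is a minimal sketch in Python/NumPy, assuming a Huber-type (Nesterov) smoothing of the $$L_1$$ loss and plain gradient descent on a factorized matrix-completion objective. The function names, hyperparameters (the smoothing level $$\mu$$, step size, ridge penalty) and the toy data are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def smoothed_l1(r, mu):
    """Huber-type smoothing of |r|: quadratic for |r| <= mu, linear beyond.
    This is the (assumed) smoothed objective being minimized below."""
    return np.where(np.abs(r) <= mu, r**2 / (2 * mu), np.abs(r) - mu / 2)

def smoothed_l1_grad(r, mu):
    """Gradient of the smoothed L1 loss; Lipschitz with constant 1/mu."""
    return np.clip(r / mu, -1.0, 1.0)

def recover(Y, mask, rank, mu=0.1, lr=0.01, n_iter=3000, lam=1e-3):
    """Factorized low-rank recovery: gradient descent on the smoothed-L1
    fit over the observed entries, plus a small ridge penalty on the
    factors U (m x rank) and V (n x rank)."""
    m, n = Y.shape
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(m, rank))
    V = rng.normal(scale=0.1, size=(n, rank))
    for _ in range(n_iter):
        R = mask * (U @ V.T - Y)        # residuals on observed entries only
        G = smoothed_l1_grad(R, mu)     # gradient of the smoothed loss
        gU = G @ V + lam * U            # joint gradient step in (U, V)
        gV = G.T @ U + lam * V
        U -= lr * gU
        V -= lr * gV
    return U @ V.T

# Toy usage: rank-2 truth, half the entries observed, 5% gross outliers.
rng = np.random.default_rng(1)
m, n, r = 40, 30, 2
X = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
mask = rng.random((m, n)) < 0.5
Y = X + 5.0 * (rng.random((m, n)) < 0.05) * rng.normal(size=(m, n))
X_hat = recover(mask * Y, mask, rank=r)
print("relative recovery error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```

Because the smoothed loss has a $$1/\mu$$-Lipschitz gradient, the plain gradient step above can be swapped for any first-order optimizer such as Adam or NAG, which is the setting the numerical studies consider.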