# Nonsmooth Low-rank Matrix Recovery: Methodology, Theory and Algorithm

Tu, Wei and Liu, Peng and Liu, Yi and Yao, Hengshuai and Jiang, Bei and Li, Guodong and Kong, Linglong (2019) Nonsmooth Low-rank Matrix Recovery: Methodology, Theory and Algorithm. Working paper. Submitted to The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20).

Many interesting problems in machine learning can be formulated as $$\min_{x} F(x) = f(x) + g(x)$$, where $$x$$ is the model parameter, $$f$$ is the loss and $$g$$ is the regularizer. Examples include regularized regression for high-dimensional feature selection and low-rank matrix/tensor factorization. The loss function and/or the regularizer may be nonsmooth due to the nature of the problem; for example, $$f(x)$$ could be the quantile loss, chosen to induce robustness or to target specific parts of the response distribution rather than the mean. In this paper we propose a general framework for problems with a nonsmooth loss or regularizer, using low-rank matrix recovery as a running example to explain the main idea. The framework involves two steps: an optimal smoothing of the loss function or regularizer, followed by a gradient-based algorithm applied to the smoothed objective. The proposed smoothing pipeline is highly flexible, computationally efficient, easy to implement and well suited to problems with high-dimensional data. In the numerical studies, we use the $$L_{1}$$ loss to illustrate the practicality of the proposed pipeline. The resulting smoothed approximation is in fact the well-studied Huber loss, and algorithms such as Adam, NAG and YellowFin all show promising results on the smoothed Huber loss.
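The two-step pipeline described in the abstract can be illustrated with a minimal sketch (not the paper's implementation): the $$L_{1}$$ loss $$|r|$$ is replaced by its Moreau envelope with smoothing parameter $$\mu$$, which is exactly the Huber function, and plain gradient descent is then run on the smoothed objective. The problem sizes, step size and smoothing parameter below are illustrative choices, not values from the paper.

```python
import numpy as np

def huber(r, mu=0.5):
    # Moreau envelope of |r| with parameter mu (the Huber function):
    # quadratic r^2 / (2*mu) near zero, linear |r| - mu/2 in the tails.
    # As mu -> 0 it converges to the nonsmooth L1 loss |r|.
    a = np.abs(r)
    return np.where(a <= mu, r**2 / (2 * mu), a - mu / 2)

def huber_grad(r, mu=0.5):
    # Gradient of the Huber function: r/mu in the quadratic zone,
    # sign(r) in the linear zone; equivalently a clipped scaled residual.
    return np.clip(r / mu, -1.0, 1.0)

# Toy robust regression: smooth the L1 loss, then use a gradient method.
rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=n)

beta = np.zeros(p)
lr, mu = 0.3, 0.5
for _ in range(500):
    r = X @ beta - y
    # Gradient of the mean smoothed loss (1/n) * sum_i huber(r_i, mu).
    beta -= lr * X.T @ huber_grad(r, mu) / n
```

Any smooth-loss optimizer (here vanilla gradient descent; Adam or NAG would slot in the same way) can be applied once the nonsmooth term has been smoothed.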