Kent Academic Repository

Sample Average Approximation for Stochastic Optimization with Dependent Data: Performance Guarantees and Tractability

Wang, Yafei; Pan, Bo; Tu, Wei; Liu, Peng; Jiang, Bei; Gao, Chao; Lu, Wei; Jui, Shangling; Kong, Linglong (2022) Sample Average Approximation for Stochastic Optimization with Dependent Data: Performance Guarantees and Tractability. In: 36th AAAI Conference on Artificial Intelligence. (KAR id:91884)

PDF (Author's Accepted Manuscript)
Language: English

Restricted to Repository staff only
Official URL: https://aaai.org/Symposia/Spring/sss22.php

Abstract

Sample average approximation (SAA), a popular method for tractably solving stochastic optimization problems, enjoys strong asymptotic performance guarantees in settings with independent training samples. However, these guarantees are not known to hold generally with dependent samples, such as in online learning with time series data or distributed computing with Markovian training samples. In this paper, we show that SAA remains tractable when the distribution of unknown parameters is only observable through dependent instances and still enjoys asymptotic consistency and finite-sample guarantees. Specifically, we provide a rigorous probabilistic error analysis to derive $1 - \beta$ confidence bounds for the out-of-sample performance of SAA estimators and show that these estimators are asymptotically consistent. We then, using monotone operator theory, study the performance of a class of stochastic first-order algorithms trained on a dependent source of data. We show that the approximation error for these algorithms is bounded and concentrates around zero, and we establish deviation bounds for the iterates when the underlying stochastic process is $\phi$-mixing. The algorithms presented can handle numerically inconvenient loss functions, such as the sum of a smooth and a non-smooth function or a sum of non-smooth functions with constraints. To illustrate the usefulness of our results, we present stochastic versions of popular algorithms, including stochastic proximal gradient descent (S-PGD) and the stochastic relaxed Peaceman-Rachford splitting algorithm (S-rPRS), together with numerical experiments.
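
To make the setting concrete, the following is a minimal sketch, not the paper's method or experiments, of the kind of problem the abstract describes: SAA for a lasso-type loss (a smooth term plus a non-smooth l1 term) where the training samples are dependent, here drawn from an AR(1) process as a simple instance of a $\phi$-mixing sequence, and the estimator is computed by a stochastic proximal gradient step per sample. All dimensions, step sizes, and the regularization weight are illustrative assumptions.

    import numpy as np

    # Hedged sketch: SAA with dependent (AR(1)) samples, solved by
    # stochastic proximal gradient descent (S-PGD). The problem sizes,
    # step-size schedule, and regularization weight below are assumptions
    # for illustration, not the paper's experimental setup.

    rng = np.random.default_rng(0)
    d, n = 5, 2000                     # dimension, number of dependent samples
    w_true = np.array([1.0, -2.0, 0.0, 0.5, 0.0])

    # Dependent covariates: rows follow a stationary AR(1) process,
    # a simple example of a phi-mixing stochastic process.
    phi = 0.8
    X = np.zeros((n, d))
    X[0] = rng.normal(size=d)
    for t in range(1, n):
        X[t] = phi * X[t - 1] + np.sqrt(1 - phi**2) * rng.normal(size=d)
    y = X @ w_true + 0.1 * rng.normal(size=n)

    lam = 0.05                         # l1 regularization weight (assumed)

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1 (handles the non-smooth term)
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    # S-PGD: one proximal-gradient update per dependent sample, in arrival order.
    w = np.zeros(d)
    for t in range(n):
        eta = 1.0 / np.sqrt(t + 1)             # diminishing step size
        grad = (X[t] @ w - y[t]) * X[t]        # gradient of the smooth loss term
        w = soft_threshold(w - eta * grad, eta * lam)

    # Report the empirical (SAA) objective of the final iterate.
    saa_obj = 0.5 * np.mean((X @ w - y) ** 2) + lam * np.sum(np.abs(w))
    print("S-PGD iterate:", np.round(w, 3), " SAA objective:", round(saa_obj, 4))

The paper's guarantees concern exactly this kind of pipeline: consistency and confidence bounds for the SAA objective, and deviation bounds for the S-PGD iterates under $\phi$-mixing data; the AR(1) generator above merely stands in for any such dependent source.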

Item Type: Conference or workshop item (Paper)
Subjects: Q Science > QA Mathematics (inc Computing science) > QA 75 Electronic computers. Computer science
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Mathematics, Statistics and Actuarial Science
Depositing User: Peng Liu
Date Deposited: 01 Dec 2021 16:27 UTC
Last Modified: 23 Nov 2022 13:58 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/91884
