Article overview
Privacy Preserving Online Convex Optimization
Authors: Prateek Jain; Pravesh Kothari; Abhradeep Thakurta
Date: 1 Sep 2011
Source: arXiv, 1109.0105

Abstract: In this paper, we consider the problem of preserving privacy for online convex programming (OCP), an important online learning paradigm. We use the notion of differential privacy as our privacy measure. For this problem, we distill two critical attributes that a private OCP algorithm should have, namely, linearly decreasing sensitivity and a sub-linear regret bound. Assuming these two conditions, we provide a general framework for OCP that preserves privacy while guaranteeing a sub-linear regret bound. We then analyze the Implicit Gradient Descent (IGD) algorithm for OCP in our framework, and show an $\tilde{O}(\sqrt{T})$ regret bound while preserving differential privacy for Lipschitz continuous, strongly convex cost functions. We also analyze the Generalized Infinitesimal Gradient Ascent (GIGA) method, a popular OCP algorithm, in our privacy preserving framework to obtain an $\tilde{O}(\sqrt{T})$ regret bound, albeit for a slightly more restricted class of strongly convex functions with Lipschitz continuous gradients. We then consider the practically important problem of online linear regression and show $O(\log^{1.5} T)$ regret for the Follow The Leader (FTL) method, while preserving differential privacy. Finally, we empirically demonstrate the effectiveness of our privacy preserving algorithms on the problems of online linear regression and online logistic regression.
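The framework's core recipe, as described in the abstract, is to run a low-sensitivity online learner and release only noise-perturbed iterates. The Python snippet below is a minimal sketch under stated assumptions, not the paper's exact algorithm: it performs a GIGA-style online gradient descent update for strongly convex losses and adds Gaussian noise calibrated to a per-round sensitivity decaying as O(1/t), mirroring the "linearly decreasing sensitivity" attribute. The helper name `private_ogd`, all constants, and the noise calibration are illustrative assumptions, and composition of privacy loss across the T released iterates is not accounted for here.

```python
import numpy as np

def private_ogd(grad, dim, T, L=1.0, lam=1.0, eps=1.0, delta=1e-5, seed=0):
    """Minimal sketch of differentially private online gradient descent.

    grad(t, w) returns the gradient of the round-t loss at w; L bounds
    the gradient norm and lam is the strong-convexity parameter. Each
    released iterate is perturbed by Gaussian noise scaled to a
    sensitivity that shrinks like O(1/t). Constants are illustrative,
    not the paper's.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    released = []
    for t in range(1, T + 1):
        eta = 1.0 / (lam * t)            # standard step size for strongly convex losses
        w = w - eta * grad(t, w)         # non-private GIGA-style update
        sens = 2.0 * L / (lam * t)       # per-round iterate sensitivity, decaying in t
        sigma = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps  # Gaussian mechanism scale
        released.append(w + rng.normal(0.0, sigma, size=dim))
    return released

# Toy usage: private updates for online ridge regression on random data.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(500, 5)), rng.normal(size=500)
g = lambda t, w: (X[t - 1] @ w - y[t - 1]) * X[t - 1] + 0.1 * w
iterates = private_ogd(g, dim=5, T=500, lam=0.1)
```

Note that the strongly convex regularizer (the 0.1*w term in the toy gradient) is what justifies both the 1/t step size and the shrinking per-round sensitivity in this sketch.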