Seminar: Differentially Private Empirical Risk Minimization with Sparsity-Inducing Norms

Speaker: Sesh Kumar
Affiliation: Imperial College
Date: Friday, 18 Jan 2019
Time: 13:00 - 14:00
Location: Roberts 421
Event series: DeepMind CSML Seminar Series
Description

Abstract. Differential privacy is concerned with preserving prediction quality while bounding the privacy impact on individuals whose information is contained in the data. We consider differentially private risk minimization problems with regularizers that induce structured sparsity. These regularizers are convex but often non-differentiable. We analyze standard differentially private algorithms, such as output perturbation and objective perturbation. Output perturbation is a differentially private algorithm that is known to perform well for minimizing risks that are strongly convex, and previous work has derived dimensionality-independent excess risk bounds for these cases. In this paper, we consider a particular class of convex but non-smooth regularizers that induce structured sparsity, together with loss functions for generalized linear models. We derive an excess risk bound for output perturbation that is independent of the dimensionality of the problem. We also show that the existing analysis for objective perturbation may be extended to these risk minimization problems.
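For readers unfamiliar with output perturbation, the following is a minimal sketch of the basic recipe (in the style of Chaudhuri et al., 2011), not the talk's method: it uses a smooth, strongly convex objective (logistic loss plus L2 regularization) for concreteness, whereas the talk extends the analysis to non-smooth, sparsity-inducing regularizers. The function names, step sizes, and the Gaussian-mechanism variant shown here are illustrative assumptions.

```python
# Illustrative sketch of output perturbation for DP-ERM.
# Assumptions (not from the talk): logistic loss + (lam/2)||w||^2,
# feature vectors with ||x_i||_2 <= 1, Gaussian mechanism for (eps, delta)-DP.
import numpy as np


def train_erm(X, y, lam, steps=500, lr=0.1):
    """Minimize (1/n) sum_i log(1 + exp(-y_i w.x_i)) + (lam/2)||w||^2
    by plain gradient descent (for illustration only)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        # Per-example gradient of the logistic loss is -y_i x_i / (1 + e^{m_i}).
        grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0) + lam * w
        w -= lr * grad
    return w


def output_perturbation(X, y, lam, eps, delta):
    """Release a noisy minimizer satisfying (eps, delta)-DP.

    With a lam-strongly convex objective and a 1-Lipschitz per-example loss
    (guaranteed here by ||x_i|| <= 1), the L2 sensitivity of the minimizer
    is at most 2 / (n * lam); Gaussian noise calibrated to that sensitivity
    gives (eps, delta)-DP.
    """
    n = X.shape[0]
    w_star = train_erm(X, y, lam)
    sensitivity = 2.0 / (n * lam)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w_star + np.random.normal(0.0, sigma, size=w_star.shape)


# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # enforce ||x|| <= 1
y = np.sign(X @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) + 0.1 * rng.normal(size=1000))
w_private = output_perturbation(X, y, lam=0.1, eps=1.0, delta=1e-5)
print(w_private)
```

The key design point the sketch illustrates is that the noise scale shrinks as 1/(n * lam): strong convexity bounds how much any single record can move the minimizer, which is exactly the property the talk's analysis must recover when the regularizer is non-smooth.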
