rahulnk09/DataReconstructionAttack_and_DPSGD_FER2013

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

5 Commits
 
 
 
 
 
 
 
 
 
 

Repository files navigation

Differentially Private Data-Reconstruction Attacks on FER2013

This repository explores data-reconstruction attacks and the efficacy of Differentially Private Stochastic Gradient Descent (DP-SGD) in mitigating privacy risks on the FER2013 dataset. Our project assesses how neural networks can leak private information through implicit bias and examines the impact of DP-SGD on model accuracy and privacy protection.

Project Overview

Research Objectives

  1. Data-Reconstruction Attack: Investigate the feasibility of reconstructing training samples from a neural network trained on the FER2013 dataset.
  2. Differential Privacy (DP) Evaluation: Apply DP-SGD with privacy parameters ranging from $(\epsilon, \delta) = (2, 10^{-5})$ to $(500, 10^{-5})$ and measure the impact on model accuracy.
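The repository's training code is not reproduced here, but the core DP-SGD update is standard: clip each per-example gradient to a fixed norm, average, and add Gaussian noise calibrated to the clip norm. The sketch below illustrates one such step in NumPy; `clip_norm`, `noise_multiplier`, and the toy gradients are illustrative values, not the project's actual settings.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD update: clip each example's gradient to clip_norm,
    sum, add Gaussian noise with std noise_multiplier * clip_norm,
    then average over the batch and take a gradient step."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down gradients whose norm exceeds clip_norm; leave others as-is.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch = len(clipped)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (np.sum(clipped, axis=0) + noise) / batch
    return params - lr * noisy_mean

# Toy usage: two per-example gradients for a 3-parameter model.
params = np.zeros(3)
grads = [np.array([3.0, 4.0, 0.0]), np.array([0.1, 0.1, 0.1])]
new_params = dp_sgd_step(params, grads)
```

The privacy guarantee $(\epsilon, \delta)$ is then obtained by accounting over the number of steps, sampling rate, and noise multiplier (e.g. via a moments accountant), which is omitted here.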

Key Findings

  • Reconstruction Success: Data-reconstruction attacks on FER2013 can approximate training data, with reconstructed images reaching an SSIM of roughly 0.4 against the original samples.
  • DP-SGD Accuracy: Using DP-SGD with a batch size of 256 and weight standardization, we achieved a maximum test accuracy of 47.5% on FER2013, compared to the non-private baseline accuracy of 56.98%.
  • Privacy-Utility Trade-off: Larger batch sizes and weight standardization significantly improve accuracy under privacy constraints.
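The findings credit weight standardization for much of the accuracy gained under DP constraints. How the repository applies it is not shown; the NumPy sketch below illustrates the usual formulation (zero mean, unit variance per output channel of a convolutional filter bank), with an assumed small `eps` for numerical stability.

```python
import numpy as np

def standardize_weights(w, eps=1e-5):
    """Weight standardization: for each output channel (first axis),
    subtract the mean and divide by the std taken over all remaining axes."""
    axes = tuple(range(1, w.ndim))
    mean = w.mean(axis=axes, keepdims=True)
    std = w.std(axis=axes, keepdims=True)
    return (w - mean) / (std + eps)

# Toy conv filter bank: 4 output channels, 3 input channels, 3x3 kernels.
w = np.random.default_rng(0).normal(size=(4, 3, 3, 3))
ws = standardize_weights(w)
```

Normalizing the weights rather than the activations avoids batch normalization, whose cross-example statistics conflict with DP-SGD's per-example gradient clipping.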

Results

Below are key figures illustrating our findings:

Figure 1: Privacy-Utility Tradeoff for FER2013

Test accuracy versus privacy parameter $\epsilon$ on FER2013. The highest accuracy under DP constraints (47.5%) is achieved with a batch size of 256 and weight standardization.

Figure 2: Data-Reconstruction Attack Results

Top: original training images from FER2013; bottom: images recovered by the data-reconstruction attack. SSIM scores indicate visual similarity between each pair.
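The repository's SSIM computation is not shown here; the sketch below implements a minimal single-window SSIM over whole images, using the standard constants $C_1 = (0.01 L)^2$ and $C_2 = (0.03 L)^2$ for dynamic range $L$. Library implementations (e.g. scikit-image's `structural_similarity`) use a sliding window and will report somewhat different numbers.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM over whole images (no sliding window),
    comparing luminance, contrast, and structure in one expression."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Identical images score 1.0; a noisy "reconstruction" scores lower.
img = np.random.default_rng(0).integers(0, 256, size=(48, 48))  # FER2013 images are 48x48
noisy = np.clip(img + np.random.default_rng(1).normal(0, 60, size=(48, 48)), 0, 255)
```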

Repository Structure
