Differentially private training algorithms provide protection against one of
the most popular attacks in machine learning: the membership inference attack.
However, these privacy algorithms incur a loss in the model’s classification
accuracy, thereby creating a privacy-utility trade-off. The amount of noise
that differential privacy requires to provide strong theoretical protection
guarantees in deep learning typically renders models unusable in practice, but
prior work has observed that even lower noise levels provide acceptable
empirical protection against existing membership inference attacks.
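To make the noise mechanism concrete, the clip-then-noise step at the heart of differentially private training (as in DP-SGD) can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the function name and default parameters are our own choices.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD aggregation step (illustrative sketch).

    Clip each per-example gradient to L2 norm <= clip_norm, sum the
    clipped gradients, add Gaussian noise with standard deviation
    noise_multiplier * clip_norm, and return the noisy average.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

Raising `noise_multiplier` strengthens the theoretical guarantee but degrades the gradient signal, which is exactly the privacy-utility trade-off described above.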

In this work, we look for alternatives to differential privacy that
empirically protect against membership inference attacks. We study the
protection that simply following good machine learning practices (not designed
with privacy in mind) offers against membership inference. We evaluate the
performance of state-of-the-art techniques, such as pre-training and
sharpness-aware minimization, alone and with differentially private training
algorithms, and find that, when using early stopping, the algorithms without
differential privacy can provide both higher utility and higher privacy than
their differentially private counterparts. These findings challenge the belief
that differential privacy is a good defense against existing membership
inference attacks.
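For readers unfamiliar with the attack class evaluated here, a common baseline membership inference attack simply thresholds the per-example loss: training members tend to have lower loss than non-members. The sketch below is illustrative; the function names and the advantage metric (true-positive rate minus false-positive rate) are our framing, not taken from the paper.

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    # Predict "member" whenever the model's loss on an example falls
    # below the threshold (members tend to be fit more closely).
    return losses < threshold

def attack_advantage(member_losses, nonmember_losses, threshold):
    # Membership advantage: true-positive rate minus false-positive rate.
    # An advantage near 0 means the attack cannot tell members apart.
    tpr = loss_threshold_attack(member_losses, threshold).mean()
    fpr = loss_threshold_attack(nonmember_losses, threshold).mean()
    return tpr - fpr
```

Defenses that shrink the train-test loss gap (e.g. early stopping, as studied above) reduce this advantage even without any formal privacy guarantee.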
