Execution Privacy (as a Side-Channel Defense)
In many contexts, security requires that nothing beyond an execution's intended outputs can be inferred from the execution. Unfortunately, the last two decades have seen dramatic advances in side-channel attacks, by which an adversary infers information beyond what it is permitted to learn directly, simply by monitoring observable features of an execution. The concept of side-channel attacks was introduced in the mid-1990s to refer to timing or power analysis against cryptographic keys in smartcards or embedded systems. Follow-up studies therefore mostly focused on cryptographic side channels, exploring how secret keys can be extracted from cryptographic systems through externally observable characteristics such as encryption/decryption time, power consumption, or electromagnetic emanations.
More recently, security researchers have broadened the scope of side channels and demonstrated attacks against a variety of sensitive information. For instance, users' secret inputs (e.g., passwords and PINs) typed on keyboards can be leaked through inter-keystroke timing analysis; messages transmitted in encrypted form can be inferred when the sizes of the corresponding network packets are revealed to the attacker; and users' browsing privacy can be violated when the attacker spies on the browser's internal memory footprint, which in some cases is released as public information within the same operating system. Despite their vastly different goals and techniques, these attacks are all considered side channels because they allow an adversary to infer sensitive data in an unexpected way.
A typical approach to limiting this information leakage is to render the adversary's observations of the features less accurate through the addition of noise. Though this approach has been explored for decades, its application has typically been application-specific and heuristic, in the sense of being ungrounded in any formal design that quantifies its effectiveness. In our projects, we aim to place this heretofore heuristic approach to side-channel defense on a firm footing. Specifically, we design an execution privacy framework that provably mitigates side channels that permit the introduction of noise, by leveraging statistical privacy protections. Our framework draws on advances in a domain that has developed in parallel with side-channel attacks, namely privacy in statistical databases. At its core is a notion called d-privacy, a generalization of differential privacy: informally, d-privacy ensures a property akin to differential privacy for databases that are within a specified distance of one another, and differential privacy is the special case in which the databases are allowed to differ in one element (i.e., are at Hamming distance one). We extend this notion to side channels, showing that it serves as a general paradigm for quantifying how much noise to add to mitigate them.
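To make the intuition concrete, here is a minimal sketch (our own illustration, not a mechanism from the papers below) of the classic Laplace mechanism viewed through the d-privacy lens. Releasing a numeric value with Laplace noise of scale 1/epsilon satisfies d-privacy for the metric d(x, x') = epsilon * |x - x'|: the output distributions on any two inputs differ by a factor of at most exp(d(x, x')).

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. Exp(1) draws is Laplace(0, 1)-distributed.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def d_private_release(true_value, epsilon):
    """Release true_value + Lap(1/epsilon).

    This satisfies d-privacy for the metric d(x, x') = epsilon * |x - x'|:
    the densities of the outputs on inputs x and x' differ by a factor of
    at most exp(d(x, x')).  Standard epsilon-differential privacy is the
    special case where x and x' are databases at Hamming distance one.
    """
    return true_value + laplace_noise(1.0 / epsilon)
```

In a side-channel setting, `true_value` might be an observable feature such as a storage counter or a traffic volume; the smaller the epsilon, the noisier the release and the less an adversary can distinguish nearby secrets.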
This is an ongoing joint research effort with Prof. Michael Reiter from UNC-Chapel Hill. So far we have made the following progress:
- (NDSS'19) Statistical Privacy for Streaming Traffic
This project explores the adaptation of techniques previously used in the domains of adversarial machine learning and differential privacy to mitigate machine-learning-powered analysis of streaming traffic. Our findings are twofold. First, constructing adversarial samples effectively confounds an adversary armed with a predetermined classifier, but is less effective when the adversary can adapt to the defense by using alternative classifiers or by training the classifier with adversarial samples. Second, differential-privacy guarantees are very effective against such statistical-inference-based traffic analysis, while remaining agnostic to the machine-learning classifiers used by the adversary. We propose two mechanisms for enforcing differential privacy on encrypted streaming traffic and evaluate their security and utility.
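The paper's mechanisms are not reproduced here, but a toy sketch (our own, with hypothetical parameter names) illustrates the basic utility cost of noising traffic: each burst of ciphertext is padded up to a noisy multiple of a fixed unit. Real traffic can only be padded, never shrunk, and one-sided noise by itself does not yield a full differential-privacy guarantee; the sketch only shows how noise inflates observable sizes and costs bandwidth.

```python
import math
import random

def noisy_pad(burst_size, epsilon, unit=512):
    """Pad a traffic burst up to a noisy multiple of `unit` bytes.

    Toy sketch only: we draw one-sided geometric noise so the padded
    size never falls below the true size.  Smaller epsilon means more
    expected padding (more noise, more bandwidth overhead).
    """
    p = 1.0 - math.exp(-epsilon)      # success probability of the geometric
    extra_units = 0
    while random.random() > p:        # Geometric(p) count of extra units
        extra_units += 1
    return math.ceil(burst_size / unit) * unit + extra_units * unit

def overhead(true_sizes, padded_sizes):
    """Fractional bandwidth overhead incurred by the padding."""
    return sum(padded_sizes) / sum(true_sizes) - 1.0
```

A defense along these lines trades bandwidth for indistinguishability; quantifying that trade-off precisely is exactly what the statistical-privacy framing above is for.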
- (INFOCOM'18) Differentially Private Access Patterns for Searchable Symmetric Encryption
In this project, we propose a framework to protect systems using searchable symmetric encryption from access-pattern leakage. Our technique is based on d-privacy, a generalized version of differential privacy that provides provable security guarantees against adversaries with arbitrary background knowledge.
- (CCS'15) Mitigating Storage Side Channels Using Statistical Privacy Mechanisms
In this project, we bring advances in privacy for statistical databases to bear on side-channel defense, and specifically demonstrate the feasibility of applying differentially private mechanisms to mitigate side channels in procfs, a pseudo file system broadly used in Linux and Android kernels.