Execution Privacy (as a Side-Channel Defense)

In many contexts, security requires that no more than an execution's intended outputs can be inferred from the execution. Unfortunately, the last two decades have seen dramatic advances in side-channel attacks, by which an adversary infers additional information beyond what it is permitted to learn directly, simply by monitoring observable features of an execution. The concept of side-channel attacks was introduced in the mid-1990s to refer to timing or power analysis against cryptographic keys in smartcards or embedded systems. Follow-up studies on side channels therefore mostly focused on cryptographic side channels, exploring how secret keys can be extracted from cryptographic systems through externally observable characteristics such as encryption/decryption time, power consumption, or electromagnetic emanations.

More recently, security researchers have broadened the scope of side channels and demonstrated attacks against a variety of sensitive information. For instance, users' secret inputs (e.g., passwords and PINs) typed on keyboards can be leaked through inter-keystroke timing analysis; messages transmitted in encrypted form can be inferred when the sizes of the corresponding network packets are revealed to the attacker; and users' browsing privacy can be violated when the attacker spies on the browser's internal memory footprint, which in some cases is released as public information within the same operating system. Despite their vastly different goals and techniques, these attacks are all considered side channels because they allow an adversary to infer sensitive data in an unexpected way.

A typical approach to limiting this information leakage is to render the adversary's observations of the features less accurate through the addition of noise. Though this approach has been explored for decades, its application has typically been application-specific and heuristic, in the sense of being ungrounded in any formal design that quantifies its effectiveness. In our projects, we aim to build a foundation on which this heretofore heuristic approach to side-channel defense can be placed on a firm footing. Specifically, we aim to design an execution privacy framework that provably mitigates side channels permitting the introduction of noise, by leveraging statistical privacy protections. Our framework derives from advances in another domain that has developed in parallel to side-channel attacks, namely privacy in statistical databases. At its core is a notion called d-privacy, which is a generalization of differential privacy. Informally, d-privacy ensures a property akin to differential privacy for databases that are within a specified distance of one another; differential privacy is the special case in which the databases are allowed to differ in one element (i.e., a Hamming distance of one). We extend this notion to side channels, showing that it serves as a general paradigm for quantifying how to add noise to mitigate them.
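To make the idea concrete: a randomized mechanism M is d-private if, for all inputs x, x' and output sets S, Pr[M(x) in S] <= exp(d(x, x')) * Pr[M(x') in S]. The sketch below (a hypothetical illustration, not the project's actual implementation) shows a standard way to achieve this for a real-valued observable, such as an execution time: adding Laplace noise with scale 1/epsilon yields epsilon*d-privacy under the metric d(x, x') = |x - x'|. The function name `d_private_release` and its parameters are our own for this example.

```python
import random

def d_private_release(value, epsilon, rng=random):
    # Laplace mechanism: adding Laplace(0, 1/epsilon) noise to a real-valued
    # observable satisfies (epsilon * d)-privacy for d(x, x') = |x - x'|.
    # A Laplace(0, b) variate is the difference of two independent
    # Exponential(rate = 1/b) variates; here b = 1/epsilon, so rate = epsilon.
    noise = rng.expovariate(epsilon) - rng.expovariate(epsilon)
    return value + noise

# Example: noisily release an execution time of 10.0 ms with epsilon = 1.
noisy_time = d_private_release(10.0, 1.0)
```

Smaller values of epsilon add more noise and thus make any two executions whose true observables are close harder to distinguish; in a real timing defense, the released value would additionally be clamped or padded so it never falls below the true time.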

This is a joint research effort with Prof. Michael Reiter from UNC-Chapel Hill and is still ongoing. So far, we have made the following progress:
