Research Projects

Differential Privacy, Machine Learning, and Information Theory

Istanbul, Turkey
Ongoing
Potential Privacy Risk in Machine Learning

December 2024 – Present

We consider the problem of estimating, in advance, the privacy risk of adding a data point to the training set of a machine learning model trained with standard optimisation methods. The goal is to develop principled tools that help practitioners assess the privacy cost of a data point before committing to training decisions.

Differential Privacy · Machine Learning · Privacy Risk
Completed
Secure Data Scoring and Selection for ML Models with Zero-Knowledge Proofs

December 2024 – April 2025

We describe secure protocols that use a small subset of the data to perform an assessment and output a score that can be used to determine the final price of a data transaction. The protocols are based on zero-knowledge proofs of properties of the data and of model inference, ensuring both correctness and confidentiality of the underlying data.

Zero-Knowledge Proofs · Data Markets · Secure Computation
Completed
Continual Observation with Differential Privacy

March 2024 – September 2024

We consider the problem of counting under continual observation and present a new generalization of the binary tree mechanism that uses a k-ary number system with negative digits to improve the privacy-accuracy trade-off. This leads to improved error bounds compared to previous approaches.
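For context, the classical binary tree mechanism that this work generalizes can be sketched as follows. This is a minimal illustration of the baseline, not the project's k-ary construction with negative digits; the function names are illustrative, and the noise calibration shown is the standard one (one Laplace sample per tree node, with scale proportional to the tree depth):

```python
import math
import random

def laplace(scale):
    # Inverse-CDF Laplace sampler using only the standard library.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(max(1.0 - 2.0 * abs(u), 1e-12))

def tree_prefix_sums(stream, eps):
    """Release all prefix sums of a 0/1 stream under eps-DP via the
    binary tree mechanism: each dyadic-interval count receives Laplace
    noise, and every prefix sum is assembled from at most about
    log2(T) noisy intervals, so error grows polylogarithmically in T."""
    T = len(stream)
    levels = max(1, math.ceil(math.log2(T + 1)))
    scale = levels / eps  # each item affects one node per level
    cache = {}

    def noisy_interval(lo, hi):  # half-open dyadic interval [lo, hi)
        if (lo, hi) not in cache:
            cache[(lo, hi)] = sum(stream[lo:hi]) + laplace(scale)
        return cache[(lo, hi)]

    out = []
    for t in range(1, T + 1):
        # Decompose [0, t) into dyadic intervals given by t's binary digits.
        total, lo = 0.0, 0
        for lvl in range(levels, -1, -1):
            size = 1 << lvl
            if lo + size <= t:
                total += noisy_interval(lo, lo + size)
                lo += size
        out.append(total)
    return out
```

The project's generalization replaces the binary decomposition with a k-ary number system that allows negative digits, which changes how many noisy nodes each prefix sum touches and thereby improves the privacy-accuracy trade-off.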

Differential Privacy · Continual Observation · Counting Mechanisms
Completed
Sparse Multi-Decoder Recursive Projection Aggregation for Reed-Muller Codes

March 2022 – September 2022

The Sparse Recursive Projection Aggregation (SRPA) decoder for Reed-Muller (RM) codes achieves performance close to that of the maximum-likelihood decoder for short-length RM codes. We simulated a neural-network-based algorithm that lowers the computational budget while keeping performance close to that of the SRPA decoder, by better selecting the projections used in each sparsified decoder.
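As a hedged illustration of the building block being sparsified (not the project's code), the projection step of RPA decoding can be sketched for hard decisions: a received word of length 2^m is projected onto the quotient by a one-dimensional subspace {0, b}, combining each coset pair by XOR. The function name and the hard-decision setting are illustrative assumptions:

```python
def project(y, b, m):
    """RPA projection of a hard-decision word y of length 2^m onto the
    quotient by the subspace {0, b} (b != 0): each coset {z, z ^ b}
    contributes the XOR of its two coordinates, producing a word of
    length 2^(m-1) that is a codeword of an RM code of lower order."""
    seen, out = set(), []
    for z in range(1 << m):
        if z in seen:
            continue
        seen.update({z, z ^ b})
        out.append(y[z] ^ y[z ^ b])
    return out
```

A full RPA decoder applies such projections for many choices of b, recursively decodes each projected word, and aggregates the results; sparsification reduces cost by decoding only a subset of projections, which is the selection the neural network learns to make.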

Reed-Muller Codes · Neural Networks · Coding Theory
Completed
Algorithms and Differential Privacy via Graphs

March 2021 – February 2024

In this project, we generalized the previous framework for designing utility-optimal differentially private (DP) mechanisms via graphs in two main directions. First, we studied heterogeneous mechanisms, in which the partial mechanism can have different probability distributions at the boundary. Second, we solved the problem in a general heterogeneous privacy setting on neighboring datasets, providing a different level of privacy for each data point.

Differential Privacy · Graph Theory · Utility Optimization · Heterogeneous Privacy