Privacy-Preserving AI

Building cutting-edge technologies in privacy, AI and decentralization.

How it works

Advancing responsible AI — assessing fairness in AI models

Oasis Labs, in partnership with Meta, designed and built a groundbreaking system for measuring fairness and potential bias in AI models. The system brings together highly sensitive demographic information from users on one side and AI model predictions (both classification and regression outputs) on the other, in such a way that user data is never revealed to any party, while model outputs and fairness measurements are known only to the organization that owns the models.
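As an illustrative sketch only (not Oasis Labs' actual protocol), additive secret sharing, a basic building block of MPC, shows how two non-colluding servers can compute an aggregate over sensitive demographic data without either server ever seeing an individual's values. The data and group labels below are hypothetical:

```python
import random

P = 2**61 - 1  # large prime modulus for the share arithmetic (illustrative choice)

def share(x, n=2):
    """Split integer x into n additive shares that sum to x mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares; only the sum (the secret) is revealed."""
    return sum(shares) % P

# Hypothetical users: (sensitive group membership bit, model prediction bit).
users = [(1, 1), (1, 0), (0, 1), (0, 1), (1, 1)]

# Each user secret-shares group * prediction, so no single server
# learns any individual's group membership or prediction.
server_a, server_b = 0, 0
for group, pred in users:
    a, b = share(group * pred)
    server_a = (server_a + a) % P
    server_b = (server_b + b) % P

# Servers publish only their aggregated shares; recombining them
# reveals just the group-level count needed for a fairness metric.
positives_in_group1 = reconstruct([server_a, server_b])  # → 2
```

In a real deployment the per-group counts would feed a fairness metric such as demographic parity, and each share would travel to a different operator so that no one holds both.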

The system uses a combination of technologies, including Secure Multi-Party Computation (MPC), Homomorphic Encryption, Differential Privacy, and Zero-Knowledge range proofs, to provide strong privacy protection while still enabling measurement at the scale of millions of users.
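To illustrate one of these layers: Differential Privacy is typically applied by adding calibrated noise to aggregate statistics before release. The following is a minimal, hypothetical sketch (not the deployed system) of the standard Laplace mechanism for a counting query, whose sensitivity is 1:

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise as the difference of two
    exponential variates, each with mean `scale`."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(true_count, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when one user is added or
    removed (sensitivity 1), so the noise scale is 1 / epsilon.
    """
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical release: at million-user scale the noise needed for a
# strong epsilon barely perturbs the aggregate.
released = dp_count(1_000_000, epsilon=0.5)
```

Smaller epsilon means stronger privacy but noisier measurements; at the scale of millions of users, the relative error of such a noisy count is negligible.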