Secure Multi-Party Computation for ML
Building MPC frameworks for privacy-preserving deep learning at JD.com.
Overview
During my internship at JD.com, I worked on building secure multi-party computation (MPC) frameworks for privacy-preserving deep learning. This project enables multiple parties to jointly train machine learning models without revealing their private data.
Technical Approach
MPC Framework
Developed an MPC framework that supports:
- Secret Sharing: Splitting data across multiple parties
- Secure Computation: Performing computations directly on secret-shared data, without any party seeing the underlying values
- Deep Learning Operations: Supporting neural network training and inference
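The building block behind the first two items is additive secret sharing: a value is split into random shares that individually look uniform but sum to the original. The sketch below illustrates the idea over the ring of 64-bit integers — a common choice in MPC frameworks, though the original does not specify the exact scheme used at JD.com. Note how addition of shared values needs no communication at all.

```python
import secrets

MOD = 2**64  # shares live in the ring Z_{2^64} (an illustrative choice)

def share(value, n_parties=3):
    """Split `value` into additive shares that sum to it mod 2^64."""
    shares = [secrets.randbelow(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    """Recombine all shares; any strict subset reveals nothing about the value."""
    return sum(shares) % MOD

def add_shared(x_shares, y_shares):
    """Secure addition is purely local: each party adds its own two shares."""
    return [(x + y) % MOD for x, y in zip(x_shares, y_shares)]
```

Because each share is uniformly random on its own, a party holding one share of a training example learns nothing about it, yet the parties can still jointly compute sums (and, with more machinery, products) of their private inputs.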
Privacy Guarantees
The framework provides:
- Cryptographic security against semi-honest adversaries
- No party learns any information beyond what is implied by the computation output
- Efficient protocols for common ML operations
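Efficient protocols for multiplication (the core of matrix products in neural networks) typically rely on precomputed correlated randomness. The sketch below shows the classic Beaver-triple technique over 64-bit arithmetic shares; it is a standard textbook construction, offered here as an assumed illustration rather than the specific protocol used in the JD.com framework. For simplicity the triple is generated in the clear, standing in for an offline phase run by a trusted dealer or a separate protocol.

```python
import secrets

MOD = 2**64  # arithmetic shares over the ring Z_{2^64}

def share(v, n=2):
    """Additively secret-share v: n random-looking shares summing to v mod 2^64."""
    s = [secrets.randbelow(MOD) for _ in range(n - 1)]
    s.append((v - sum(s)) % MOD)
    return s

def reconstruct(shares):
    return sum(shares) % MOD

def beaver_multiply(x_sh, y_sh):
    """Multiply shared x and y using a Beaver triple (a, b, c = a*b)."""
    n = len(x_sh)
    # Offline phase (simulated by a dealer here): random a, b and shares of a*b.
    a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
    a_sh, b_sh, c_sh = share(a, n), share(b, n), share((a * b) % MOD, n)
    # Online phase: the parties open the masked values d = x - a and e = y - b.
    # d and e are uniformly random, so opening them leaks nothing about x or y.
    d = reconstruct([(x_sh[i] - a_sh[i]) % MOD for i in range(n)])
    e = reconstruct([(y_sh[i] - b_sh[i]) % MOD for i in range(n)])
    # Locally compute shares of z = xy = c + d*b + e*a + d*e.
    z_sh = [(c_sh[i] + d * b_sh[i] + e * a_sh[i]) % MOD for i in range(n)]
    z_sh[0] = (z_sh[0] + d * e) % MOD  # one designated party adds the public term
    return z_sh
```

The online phase costs one round of opening two masked values per multiplication, which is why batching and communication optimization matter so much for deep learning workloads.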
Applications
- Federated Learning: Training models across organizations without data sharing
- Financial Services: Joint risk modeling with customer privacy
- Healthcare: Collaborative medical research with patient protection
Experience
Research Intern, JD.com (Summer 2022)
- Designed and implemented MPC protocols for deep learning
- Optimized computation and communication efficiency
- Contributed to production-ready privacy infrastructure