Privacy-Preserving Federated Learning
Advancing trustworthy AI through federated learning frameworks that enable collaboration without centralizing sensitive data
A major focus of my recent work has been developing privacy-preserving federated learning (FL) frameworks that enable collaborative AI model development without centralizing sensitive data. I led the creation of FL capabilities that integrate differential privacy, secure aggregation, and policy-driven governance.
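To illustrate the two privacy mechanisms named above, here is a minimal, self-contained sketch in plain NumPy. It is not APPFL's actual API: the function names `clip_and_noise` and `masked_updates` are hypothetical, and real secure aggregation derives pairwise masks from key agreement rather than shared seeds.

```python
import numpy as np

def clip_and_noise(grad, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Differentially private update (Gaussian mechanism): clip the
    gradient to a norm bound, then add noise scaled to that bound."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)

def masked_updates(updates, seed=42):
    """Pairwise-mask secure aggregation: each client pair (i, j) shares a
    mask; client i adds it, client j subtracts it, so the masks cancel in
    the sum and the server never sees an individual update in the clear."""
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = np.random.default_rng(seed + i * n + j).normal(size=updates[0].shape)
            masked[i] += mask   # client i's contribution to the shared mask
            masked[j] -= mask   # client j's cancelling contribution
    return masked

# The server sums the masked updates; the pairwise masks cancel exactly,
# so the aggregate equals the sum of the raw updates.
updates = [np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.5, 0.5])]
agg = sum(masked_updates(updates))
```

In a deployment, each client would apply `clip_and_noise` locally before masking, so the server only ever observes a noised, masked aggregate.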
Key Innovation
By allowing models to be trained directly where data reside, this work has enabled new forms of collaboration while reducing privacy risks and improving the trustworthiness of AI systems. These methods have been applied to biomedical and clinical data, where privacy, bias, and model robustness are critical concerns.
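The core idea of training where data reside can be sketched with federated averaging (FedAvg) on a toy linear-regression task. This is an illustrative sketch, not code from APPFL; the helper names and hyperparameters are assumptions.

```python
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One local gradient step of linear regression on a client's data.
    The raw data (X, y) never leave the client; only weights move."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(client_weights, client_sizes):
    """Server-side FedAvg: average client models weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two simulated clients, each holding a private shard of data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
clients = []
for n in (50, 80):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(200):  # communication rounds
    local_models = [local_step(w, X, y) for X, y in clients]
    w = fed_avg(local_models, [len(y) for _, y in clients])
# w converges toward true_w without pooling any raw data
```

Real deployments add multiple local epochs per round, and the privacy and aggregation mechanisms above sit between the local step and the server average.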
Software & Services
APPFL - Argonne Privacy Preserving Federated Learning
An open-source framework with 141+ pull requests, 38 GitHub stars, and active community contributions. Successfully applied across diverse scientific domains, from smart-grid analytics to COVID-19 prediction.
APPFLx - Federated Learning as a Service
A privacy-preserving federated learning platform deployed on AWS, used by the NIH-funded Bridge2AI program and multiple research groups.
Impact
This contribution provides a practical foundation for deploying federated AI in real-world biomedical and healthcare settings, including collaborations with national laboratories, the VA, and academic institutions.
Resources
- APPFL GitHub
- APPFLx Service
- Certified robustness for LLMs
- Model attack frameworks