The Human Analysis Lab (HAL) in the Computer Science and Engineering Department at Michigan State University works on trustworthy AI systems. As these systems become increasingly powerful, their deployment raises fundamental alignment questions:
- How do we ensure AI models respect data privacy (both input and output)?
- How do we align AI outputs with human values across diverse populations?
- How do we verify safety properties of AI models?
HAL addresses these challenges through three interconnected research thrusts:
- We develop privacy-preserving inference methods that provide cryptographic guarantees rather than merely empirical privacy, enabling encrypted computation at practical speeds for sensitive applications ranging from healthcare to biometrics (see the first sketch after this list).
- We create value-alignment frameworks that move beyond ad hoc bias mitigation to characterize fundamental fairness-utility trade-offs, with tools for detecting and erasing harmful stereotypes in foundation models (see the second sketch below).
- We advance robust and verifiable AI through red-teaming approaches that expose failures in safety mechanisms, compositional methods for controllable generation, and physics-informed models for scientific applications where undetected failures can be catastrophic, from structural collapse to patient harm (see the third sketch below).
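
To give a concrete flavor of cryptographic (rather than merely empirical) privacy, here is a minimal sketch of two-party additive secret sharing, one of the standard building blocks behind encrypted computation. This is an illustration, not HAL's actual system: the names (`share`, `linear_score`), the field size, and the toy linear model are all hypothetical.

```python
import secrets

P = 2**61 - 1  # large prime; all arithmetic happens in the field mod P

def share(x: int) -> tuple[int, int]:
    """Split a secret integer into two additive shares: x = s0 + s1 (mod P)."""
    s0 = secrets.randbelow(P)
    s1 = (x - s0) % P
    return s0, s1

def linear_score(shares: list[int], weights: list[int]) -> int:
    """Each server computes a dot product on its shares of the input.
    Because the model is linear, the per-server results sum to w . x."""
    return sum(w * s for w, s in zip(weights, shares)) % P

# Client side: encode features as field elements and split them into shares.
features = [3, 1, 4, 1, 5]
weights = [2, 7, 1, 8, 2]  # public model weights, known to both servers

shares0, shares1 = zip(*(share(x) for x in features))

# Each server sees only random-looking shares, never the raw features.
partial0 = linear_score(list(shares0), weights)
partial1 = linear_score(list(shares1), weights)

# The client reconstructs the score; neither server learned the input.
score = (partial0 + partial1) % P
assert score == sum(w * x for w, x in zip(weights, features)) % P
print(score)
```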
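
The fairness-utility trade-off can likewise be made concrete with a toy experiment (synthetic data, not a real benchmark or HAL's framework): sweeping a shared decision threshold over scores drawn from two demographic groups traces an empirical frontier between accuracy and the demographic-parity gap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scores and labels for two groups; group 1's score distribution
# is shifted, a common source of disparate impact under a shared threshold.
n = 5000
group = rng.integers(0, 2, size=n)
scores = rng.normal(loc=0.5 * group, scale=1.0, size=n)
labels = (scores + rng.normal(scale=0.8, size=n) > 0.4).astype(int)

def evaluate(threshold: float) -> tuple[float, float]:
    """Return (accuracy, demographic-parity gap) for a shared threshold."""
    pred = (scores > threshold).astype(int)
    acc = (pred == labels).mean()
    # Demographic-parity gap: difference in positive-prediction rates.
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return acc, gap

# Sweep thresholds to trace the empirical fairness-utility frontier.
for t in np.linspace(-1.0, 1.5, 6):
    acc, gap = evaluate(t)
    print(f"threshold={t:+.2f}  accuracy={acc:.3f}  parity gap={gap:.3f}")
```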
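
Finally, a minimal sketch of the physics-informed idea (again illustrative, assuming PyTorch; not HAL's models): a small network is trained so that the residual of a governing equation, here the ODE u'(x) = -u(x) with u(0) = 1 and exact solution exp(-x), is driven to zero at random collocation points. The same residual then doubles as a built-in check that flags regions where the surrogate violates the physics.

```python
import torch

torch.manual_seed(0)

# Surrogate for u(x) on [0, 2].
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    # Random collocation points in [0, 2], tracked for differentiation.
    x = (torch.rand(128, 1) * 2.0).requires_grad_()
    u = net(x)
    du_dx = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = du_dx + u  # physics residual of u' + u = 0
    x0 = torch.zeros(1, 1)
    loss = (residual ** 2).mean() + (net(x0) - 1.0).pow(2).mean()  # u(0) = 1
    opt.zero_grad()
    loss.backward()
    opt.step()

# Large residuals would flag where the surrogate breaks the equation.
x_test = torch.linspace(0.0, 2.0, 5).reshape(-1, 1)
print(net(x_test).detach().squeeze())  # should approximate exp(-x)
print(torch.exp(-x_test).squeeze())
```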
Our work provides both rigorous theoretical foundations and practical tools for building AI systems with safety guarantees that scale with their capabilities.