ML Resilience & Open-World Recognition
Quantifying and engineering the resilience of ML systems — particularly intrusion detectors — under distribution shift, novel classes, and adversarial inputs.
Our lab works at the intersection of AI for Cybersecurity, Cybersecurity for AI, and software systems. We design methods for threat and intrusion detection in critical systems, build models that stay reliable under distribution shift and adversarial pressure, and develop tools that make security and performance properties measurable across the software lifecycle.
Our current projects center on AI for Cybersecurity. For more details, see Research.
SQL query intent modeling, query similarity, and detection of data exfiltration and insider attacks against enterprise databases.
Continuous performance prediction (PACE), microarchitectural side-channel detection, and tooling that catches regressions before they ship.
Image attribute estimation and reconstruction from fragments, and post-incident forensic protocols for autonomous and connected systems.
NSF-funded programs through UMassD's NSA Center of Academic Excellence — including the Building Blocks of Microprocessors SFS supplement.