Located at Miami University and led by Dr. Samer Khamaiseh, the LAiSR research group is passionate about exploring and addressing the evolving challenges in AI security. Our AI Research Laboratory is at the forefront of cutting-edge research to fortify AI models against adversarial attacks, enhance their robustness, and ensure their reliability in real-world scenarios.
- Emerging Threats: As AI systems become more pervasive, they introduce new vulnerabilities, ranging from adversarial attacks to privacy breaches. Our research group plays a pivotal role in uncovering these vulnerabilities and devising robust defenses.
- Safeguarding Critical Systems: AI is increasingly integrated into critical infrastructure, healthcare, finance, and defense. Ensuring the security of these systems is non-negotiable. Rigorous research helps prevent catastrophic failures and protects lives and livelihoods.
- Ethical Implications: AI decisions impact individuals and societies, making bias, fairness, and transparency central ethical concerns. Research informs guidelines and policies that promote responsible AI deployment, minimizing harm and maximizing benefits.
- Adversarial Attacks: we explore new adversarial attacks against AI models. The LAiSR group has introduced three novel adversarial attacks: Target-X, Fool-X, and T2I-Nightmare (a minimal sketch of how such perturbations are crafted appears after this list).
- Adversarial Training: we explore defense methods against adversarial attacks. Recently, LAiSR introduced VA (Various Attacks Framework for Robust Adversarial Training) and ADT++ (Advanced Adversarial Distributional Training with Class Robustness), adversarial training methods that improve clean accuracy, robust accuracy, and robustness generalization beyond the baseline defense methods.
- GEN-AI: we investigate state-of-the-art methods to protect user images from being edited by diffusion models. For example, the LAiSR group proposed ImagePatriot (under review), which prevents diffusion models from maliciously editing images.
- GEN-AI Robustness: we explore pre- and post-generation filters to prevent diffusion models from generating Not-Safe-for-Work (NSFW) content with T2I-Vanguard: Post Generation Filter for Safe Text-2-Image Diffusion Models Contents.
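As a concrete illustration of what these attacks do, the sketch below crafts an adversarial perturbation with the classic Fast Gradient Sign Method (FGSM) in PyTorch. It is a generic baseline for intuition only, not the Target-X, Fool-X, or T2I-Nightmare algorithm; `model`, `images`, and `labels` are assumed to come from the user's own pipeline.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """Craft untargeted FGSM adversarial examples (generic baseline, for illustration)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then clip back to the valid pixel range.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0, 1).detach()
```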
Below, we list some of the published and ongoing research projects at the LAiSR lab. Please note that some project repositories are still private because the corresponding papers are under review.
AI Robustness Testing Kit (AiR-TK) is an AI testing framework built on PyTorch that enables the AI security community to evaluate existing AI image recognition models against adversarial attacks easily and comprehensively. AiR-TK supports adversarial training, the de facto technique for improving the robustness of AI models against adversarial attacks. Having easy access to state-of-the-art adversarial attacks and the baseline adversarial training methods in one place will help the AI security community replicate, reuse, and improve upcoming attack and defense methods.
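For context on what adversarial training involves, the following hedged sketch shows a baseline Madry-style training step on PGD examples in PyTorch. It illustrates the technique AiR-TK supports rather than the AiR-TK API itself; `model`, `loader`, and `optimizer` are assumed to be supplied by the user.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Generate PGD adversarial examples (baseline attack, not an AiR-TK API)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the epsilon-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    """One epoch of Madry-style adversarial training: fit the model on PGD examples."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```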
By adding a carefully crafted perturbation to just a few pixels, ImagePatriot protects your images from being manipulated by diffusion models.
JPA, NightShade, and MMA are recent attacks against Text-2-Image diffusion models that generate Not-Safe-for-Work (NSFW) images despite pre/post-generation filters. T2I-Vanguard is an ongoing project that aims to shield T2I models from being compromised by such attacks.
Fool-X is an algorithm that generates effective adversarial examples with minimal perturbations and is able to fool state-of-the-art image classification neural networks. More details are available on the project site. (Under review at IEEE BigData 2024.)
Target-X is a novel and fast method for constructing targeted adversarial images on large-scale datasets that can fool state-of-the-art image classification neural networks. More information is available on the project site.
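"Targeted" means the attacker chooses the class the model should predict. The hedged sketch below shows a simple iterative targeted attack that steps against the gradient of the loss toward a chosen target class; it is a baseline for intuition only, not the Target-X algorithm described in the paper.

```python
import torch
import torch.nn.functional as F

def targeted_attack(model, image, target_class, eps=8 / 255, alpha=1 / 255, steps=20):
    """Iterative targeted attack (generic baseline, not Target-X).

    `image` is a batched tensor of shape (1, C, H, W) with values in [0, 1].
    """
    target = torch.tensor([target_class])
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step *against* the gradient so the prediction moves toward the target class.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, image - eps), image + eps).clamp(0, 1)
    return x_adv
```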
ADT++ is a fast adversarial training method for AI models that increases their robustness generalization against adaptive adversarial attacks such as Target-X and AutoAttack (AA).
VA can be used to increase the robustness of AI models against a variety of gradient-based adversarial attacks by exploring class robustness.
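Clean accuracy and robust accuracy are the two headline metrics behind claims like the ones above. The sketch below shows one common way to measure both, assuming an `attack(model, x, y)` callable such as the FGSM or PGD examples earlier on this page; it is a generic evaluation template, not the VA or ADT++ code.

```python
import torch

@torch.no_grad()
def num_correct(model, x, y):
    """Count correct predictions on a batch."""
    return (model(x).argmax(dim=1) == y).sum().item()

def evaluate(model, loader, attack, device="cpu"):
    """Report (clean accuracy, robust accuracy) under a given attack."""
    model.eval()
    clean, robust, total = 0, 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        clean += num_correct(model, x, y)
        x_adv = attack(model, x, y)  # gradients are needed here, so no torch.no_grad()
        robust += num_correct(model, x_adv, y)
        total += y.size(0)
    return clean / total, robust / total
```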
Interested researchers and scientists can refer to the following resources to start their journey in AI security.
- Adversarial Deep Learning: A Survey on Adversarial Attacks and Defense Mechanisms on Image Classification
- Target-X: An Efficient Algorithm for Generating Targeted Adversarial Images to Fool Neural Networks
- Adversarial Robustness - Theory and Practice
- Dr. Samer Khamaiseh - Director of LAiSR Research Group
- Deirdre Jost - Research Assistant
- Steven Chiacchira - Research Assistant
- Aibak Aljadayah - Research Assistant
- Azib Farooq - Research Assistant
This GitHub organization serves as a hub for our ongoing projects, publications, and collaborations. We welcome your engagement and encourage you to explore the exciting frontiers of AI security with us! Contact us here