
Laboratory of AI Security Research (LAiSR)


👋 Welcome to the Laboratory of AI Security Research (LAiSR)

🎤 Who are we?

Located at Miami University and led by Dr. Samer Khamaiseh, the LAiSR research group is passionate about exploring and addressing the evolving challenges of AI security. Our AI research laboratory is at the forefront of cutting-edge research to fortify AI models against adversarial attacks, enhance their robustness, and ensure their reliability in real-world scenarios.

❓ Why AI security?

  • Emerging Threats: As AI systems become more pervasive, they introduce new vulnerabilities, from adversarial attacks to privacy breaches. Our research group plays a pivotal role in uncovering these vulnerabilities and devising robust defenses.
  • Safeguarding Critical Systems: AI is increasingly integrated into critical infrastructure, healthcare, finance, and defense. Ensuring the security of these systems is non-negotiable. Rigorous research helps prevent catastrophic failures and protects lives and livelihoods.
  • Ethical Implications: AI decisions impact individuals and societies. Bias, fairness, and transparency are therefore of ethical concern. Research informs guidelines and policies that promote responsible AI deployment, minimizing harm and maximizing benefits.

🔎 Research Focus

  • Adversarial Attacks: we explore new adversarial attacks against AI models. The LAiSR group has introduced three novel adversarial attacks: Target-X, Fool-X, and T2I-Nightmare.
  • Adversarial Training: we explore defense methods against adversarial attacks. Recently, LAiSR introduced two adversarial training methods, VA (Various Attacks Framework for Robust Adversarial Training) and ADT++ (Advanced Adversarial Distributional Training with Class Robustness), which improve clean accuracy, robust accuracy, and robustness generalization over the baseline defense methods.
  • GEN-AI: we investigate state-of-the-art methods to protect user images from being edited by diffusion models. For example, the LAiSR group proposed ImagePatriot (under review), which prevents diffusion models from maliciously altering images.
  • GEN-AI Robustness: we explore pre- and post-generation filters to prevent diffusion models from generating Not-Safe-for-Work (NSFW) content with T2I-Vanguard: Post Generation Filter for Safe Text-2-Image Diffusion Models Contents.

🚀 Research Projects

Below, we list some of the published and ongoing research projects at the LAiSR lab. Please note that some project repositories are still private because the corresponding papers are under review.

The AI Robustness Testing Kit (AiR-TK) is an AI testing framework built on PyTorch that enables the AI security community to evaluate existing AI image recognition models against adversarial attacks easily and comprehensively. AiR-TK supports adversarial training, the de facto technique for improving the robustness of AI models against adversarial attacks. Having easy access to state-of-the-art adversarial attacks and baseline adversarial training methods in one place helps the AI security community replicate, reuse, and improve upcoming attack and defense methods.
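As a rough illustration of the kind of evaluation AiR-TK streamlines, the sketch below measures robust accuracy under a basic FGSM attack in plain PyTorch. It is a generic example, not the AiR-TK API; `model` and `test_loader` are assumed to be a trained classifier and a standard DataLoader.

```python
# Minimal sketch: robust accuracy under FGSM (illustrative only, not the AiR-TK API).
import torch
import torch.nn.functional as F

def fgsm_robust_accuracy(model, test_loader, epsilon=8 / 255, device="cuda"):
    model.eval()
    correct, total = 0, 0
    for images, labels in test_loader:
        images, labels = images.to(device), labels.to(device)
        images.requires_grad_(True)

        # One gradient step in the direction that increases the loss (FGSM).
        loss = F.cross_entropy(model(images), labels)
        grad = torch.autograd.grad(loss, images)[0]
        adv_images = (images + epsilon * grad.sign()).clamp(0, 1).detach()

        # Accuracy on the perturbed inputs is the robust accuracy under this attack.
        preds = model(adv_images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return correct / total
```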

By adding a carefully crafted perturbation to a few pixels, ImagePatriot protects your images from being manipulated by diffusion models.

JPA, NightShade, and MMA are recent attacks against Text-2-Image diffusion models that cause them to generate Not-Safe-for-Work (NSFW) images despite existing pre/post filters. T2I-Vanguard is an ongoing project that aims to shield T2I models from being compromised by such attacks.
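To give a sense of what a post-generation filter does (this is a conceptual sketch only, not the T2I-Vanguard method, which is still under development), the snippet below scores each generated image with an assumed safety classifier and drops outputs above a threshold; `safety_classifier` is a hypothetical model returning an NSFW logit for an image tensor.

```python
# Conceptual post-generation filter (illustrative only; not T2I-Vanguard).
import torch

@torch.no_grad()
def filter_generations(images, safety_classifier, threshold=0.5):
    """Return only the generated images whose estimated NSFW probability is below `threshold`."""
    safe_images = []
    for img in images:
        # `safety_classifier` is assumed to return a single NSFW logit for one image.
        nsfw_score = safety_classifier(img.unsqueeze(0)).sigmoid().item()
        if nsfw_score < threshold:
            safe_images.append(img)
        # Images at or above the threshold are dropped (they could also be blurred or replaced).
    return safe_images
```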

Fool-X is an algorithm for generating effective adversarial examples with minimal perturbations that can fool state-of-the-art image classification neural networks. More details are available on the project site. (Under review at IEEE BigData 2024.)

Target-X is a novel and fast method for constructing targeted adversarial images on large-scale datasets that can fool state-of-the-art image classification neural networks. More information is available on the project site.
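For intuition only, a targeted attack perturbs the input so the classifier's loss with respect to a chosen target label decreases. The sketch below is a generic targeted PGD-style loop, not the Target-X algorithm; `model`, `x`, and `target_label` are assumed inputs.

```python
# Generic targeted PGD-style perturbation (illustrative only; not Target-X).
import torch
import torch.nn.functional as F

def targeted_attack(model, x, target_label, epsilon=8 / 255, alpha=2 / 255, steps=10):
    x_adv = x.clone().detach()
    target = torch.full((x.size(0),), target_label, dtype=torch.long, device=x.device)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step *against* the gradient so the prediction moves toward the target class.
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the epsilon-ball around the original image and the valid pixel range.
        x_adv = x.detach() + (x_adv - x.detach()).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```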

ADT++ is a fast adversarial training method for AI models that increases their generalization robustness against adaptive adversarial attacks such as Target-X and AutoAttack (AA).

VA can be used to increase the robustness of AI models against a variety of gradient-based adversarial attacks by exploring class robustness.
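Both ADT++ and VA build on adversarial training. As background only, the sketch below shows a standard adversarial training loop in the style of PGD-based adversarial training, not the ADT++ or VA algorithms themselves; `craft_adversarial` and `loader` are assumed placeholders for an attack function (e.g., an untargeted variant of the PGD step above) and a PyTorch DataLoader.

```python
# Baseline adversarial training loop (illustrative only; not ADT++ or VA).
import torch.nn.functional as F

def train_one_epoch(model, loader, optimizer, craft_adversarial, device="cuda"):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = craft_adversarial(model, x, y)   # generate adversarial examples on the fly
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)  # train on the perturbed batch
        loss.backward()
        optimizer.step()
```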

📚 Introduction to AI Security

Interested researchers and scientists can refer to the following resources to start their journey in AI security.

👥 Our Team

📫 Reach us

This GitHub organization serves as a hub for our ongoing projects, publications, and collaborations. We welcome your engagement and encourage you to explore the exciting frontiers of AI security with us! Contact us here
