TrustAI-laboratory/README.md

TrustAI Pte. Ltd.

Securing the Future of AI

TrustAI is on a mission to ensure the safety and integrity of AI systems and unlock the full potential of generative AI while maintaining control and trust. We believe in bringing security to the forefront of AI development, safeguarding against potential vulnerabilities, and promoting responsible AI innovation.

About Us

Our goal is to empower developers, researchers, and organizations to build secure and trustworthy AI systems.

Conference Presentation

Competitions/Awards

0Day Hunter

Products

Here are some of the main projects we've released:

  • Learn Prompt Hacking: The most comprehensive prompt hacking course available.

    • Prompt Engineering technology.
    • GenAI development technology.
    • Prompt Hacking technology.
    • LLM security defence technology.
    • LLM Hacking resources.
    • LLM security papers.
  • TrustEval - LLM Security & Safety Evaluation: TrustAI Security Labs - Evaluating, Quantifying, and Securing AI.

    • Discover: Reveal AI risk across your organisation with the most comprehensive evaluation metrics.
    • Red Teaming: Test your AI model's security against adversarial scenarios.
    • CI/CD Model Testing: Run established security tests against benchmarks in your MLOps pipeline.
  • LLM Protection: An SDK/API that serves as a one-click alignment proxy for AI app integration.

    • Detect and address direct and indirect prompt injections in real-time, preventing potential harm to GenAI applications.
    • Ensure your GenAI applications do not violate policy by detecting harmful and insecure output.
    • Safeguard sensitive PII and avoid data losses, ensuring compliance with privacy regulations.
    • Prevent data poisoning attacks on your GenAI applications through real-time prompt filtering.
  • AI HackingClub: Powered by TrustAI, AI HackingClub is dedicated to fostering awareness, education, and engagement on AI safety to develop safer AI systems.

    • Hack into AI
    • Prompt Injection AI
    • Real-world Jailbreaking AI Safety
  • LLM Security CTF: Learn LLM/AI security through a series of vulnerable LLM CTF challenges. No sign-ups, all free, everything on the website.

    • Stark Game: A neat game for building intuition about prompt injection: the user must find ways to get Stark to reveal the password for each level, even though Stark is instructed not to reveal it.
    • Doc: Intro to Stark Game.
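As a sketch of the kind of real-time check LLM Protection describes, here is a minimal heuristic prompt-injection filter in Python. The patterns, thresholds, and function names are illustrative assumptions, not TrustAI's actual API or detection logic.

```python
import re

# Hypothetical injection patterns; a production filter would use a far
# richer model-based classifier rather than a static regex list.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your|the) (system prompt|password|secret)",
]

def is_suspicious(prompt: str) -> bool:
    """Return True if the prompt matches a known injection pattern."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def guard(prompt: str) -> str:
    """Block suspicious prompts; pass clean ones through unchanged."""
    if is_suspicious(prompt):
        return "[blocked: possible prompt injection]"
    return prompt  # in a real proxy, this would be forwarded to the LLM
```

A proxy like this sits between the user and the model, which is why it can act on both direct injections (typed by the user) and indirect ones (smuggled in via retrieved documents).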

Pinned Repositories

  1. Learn-Prompt-Hacking

    The most comprehensive prompt hacking course available, recording our progress on prompt engineering and prompt hacking.


  2. LMAP

    LMAP (large language model mapper) is like NMAP for LLMs: an LLM vulnerability scanner and zero-day vulnerability fuzzer.


  3. LLM-Security-CTF

    Learn LLM/AI security through a series of vulnerable LLM CTF challenges. No sign-ups, all free, everything on the website.


  4. Automatic-LLM-RedTeaming-Model

    A red-teaming model that leverages LLM refusal behaviour to generate jailbreak prompts.

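The refusal-probing idea behind scanners like LMAP and the red-teaming model can be sketched as below. `ask_model`, the refusal markers, and the probe format are hypothetical stand-ins under the assumption that a probe the model answers (rather than refuses) is a finding worth reviewing; they are not the projects' real interfaces.

```python
# Surface-level refusal markers; real scanners would use a classifier
# rather than substring matching.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "as an ai")

def looks_like_refusal(response: str) -> bool:
    """Heuristically decide whether a model response is a refusal."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def scan(probes, ask_model):
    """Return the probes the model answered instead of refusing.

    ask_model is any callable mapping a prompt string to a response
    string, e.g. a wrapper around a chat-completion API.
    """
    findings = []
    for probe in probes:
        if not looks_like_refusal(ask_model(probe)):
            findings.append(probe)
    return findings
```

Plugging the same loop into a CI/CD pipeline with a fixed probe set is essentially the "CI/CD Model Testing" workflow: the build fails when `scan` returns any findings.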