A human-friendly framework for testing and evaluating LLMs, RAG systems, and chatbots.
ContextCheck is an open-source framework designed to evaluate, test, and validate large language models (LLMs), Retrieval-Augmented Generation (RAG) systems, and chatbots. It provides tools to automatically generate queries, request completions, detect regressions, perform penetration tests, and assess hallucinations, ensuring the robustness and reliability of these systems. ContextCheck is configurable via YAML and can be integrated into continuous integration (CI) pipelines for automated testing.
- Simple test scenario definition using human-readable `.yaml` files (a minimal sketch follows this list)
- Flexible endpoint configuration for OpenAI, HTTP, and more
- Customizable JSON request/response models
- Support for variables and Jinja2 templating in YAML files
- Response validation options, including heuristics, LLM-based judgment, and human labeling
- Enhanced output formatting with the `rich` package for clear, readable displays
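To give a feel for the format, here is a minimal, hypothetical scenario file. The field names (`config`, `endpoint_under_test`, `steps`, `asserts`, and so on) are illustrative assumptions rather than the project's actual schema; consult the `examples/` folder for real scenario files.

```yaml
# Illustrative sketch only -- these field names are assumptions, not the
# project's actual schema; see the examples/ folder for real scenario files.
config:
  endpoint_under_test:
    kind: openai                  # endpoint type: OpenAI, HTTP, ...
    model: gpt-4o-mini
variables:
  country: France                 # available to Jinja2 templates below
steps:
  - name: capital-question
    request: "What is the capital of {{ country }}?"
    asserts:
      - contains: "Paris"         # simple heuristic response validation
```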
Install the package directly from PyPI using pip:
pip install ccheck
After installation, you can access the `ccheck` CLI command:
ccheck --help
This will display all available options and help you get started with using ContextCheck.
If you wish to contribute to the project or modify it for your own use, you can set up a development environment using Poetry.
- Fork your own copy of Addepto/contextcheck on GitHub.
- Clone the Repository:
git clone https://github.com/<your_username>/contextcheck.git
cd contextcheck
- Ensure you have Poetry installed.
- Install Dependencies:
poetry install
- Activate the Virtual Environment:
poetry shell
- Run the `ccheck` CLI command using:
poetry run ccheck --help
Please refer to the `examples/` folder for the tutorial.
- Run a single scenario and output results to the console:
ccheck --output-type console --filename path/to/file.yaml
- Run multiple scenarios and output results to the console:
ccheck --output-type console --filename path/to/file.yaml path/to/another_file.yaml
To automatically stop the CI/CD process if any tests fail, add the `--exit-on-failure` flag. A failed test will cause the script to exit with code 1:
ccheck --exit-on-failure --output-type console --folder my_tests
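Because the exit code signals failure, this command slots into most CI systems. As one illustration, a GitHub Actions job could look like the sketch below; the folder name `my_tests` and the secret name are assumptions for this example, not part of ContextCheck itself.

```yaml
# Illustrative CI job -- folder name and secret name are assumptions.
name: contextcheck
on: [push]
jobs:
  ccheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install ccheck
      - run: ccheck --exit-on-failure --output-type console --folder my_tests
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```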
Set the `OPENAI_API_KEY` environment variable to be able to run:
- `tests/scenario_openai.yaml`
- `tests/scenario_defaults.yaml`
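For example (the key value below is a placeholder; the command uses only the flags documented above):

```bash
export OPENAI_API_KEY="sk-..."   # placeholder -- never commit a real key
ccheck --output-type console --filename tests/scenario_openai.yaml tests/scenario_defaults.yaml
```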
Contributions are welcome!
To run tests:
poetry run pytest tests/
To include tests which require calling LLM APIs (currently OpenAI and Ollama), run one of:
poetry run pytest --openai # includes tests that use OpenAI API
poetry run pytest --ollama # includes tests that use Ollama API
poetry run pytest --openai --ollama # includes tests that use both the OpenAI and Ollama APIs
Made with ❤️ by the Addepto Team
ContextCheck is an extension of the ContextClue product, created by the Addepto team. This project is the result of our team’s dedication, combining innovation and expertise.
Addepto Team:
- Radoslaw Bodus
- Bartlomiej Grasza
- Volodymyr Kepsha
- Vadym Mariiechko
- Michal Tarkowski
Like what we’re building? ⭐ Give it a star to support its development!
This project is licensed under the MIT License - see the LICENSE file for details