Implementation of Chain-Of-Verification
- Clone the Repository
- Install Dependencies:
python3 -m pip install -r requirements.txt
- Set Up OpenAI API Key:
export OPENAI_API_KEY='sk-...'
- Run the Program:
cd src/
python3 main.py --question "Who are some politicians born in Boston?"
python3 main.py --question "Name all countries in Asia starting with K" --llm-name "gpt-3.5-turbo-0613" --temperature 0.1 --max-tokens 500 --show-intermediate-steps
- --question: The original query/question asked by the user.
- --llm-name: The OpenAI model name to use.
- --temperature: Controls the randomness of the output.
- --max-tokens: The maximum number of tokens to generate.
- --show-intermediate-steps: When set, prints intermediate results such as the baseline response and the verification questions and answers.
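The flags above map naturally onto a standard `argparse` parser. The sketch below is a hypothetical illustration of how `main.py` might define them; the actual implementation may differ:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical CLI definition mirroring the documented flags.
    parser = argparse.ArgumentParser(description="Chain-of-Verification (CoVe) runner")
    parser.add_argument("--question", required=True, help="Original user question")
    parser.add_argument("--llm-name", default="gpt-3.5-turbo-0613", help="OpenAI model name")
    parser.add_argument("--temperature", type=float, default=0.1, help="Sampling temperature")
    parser.add_argument("--max-tokens", type=int, default=500, help="Max tokens to generate")
    parser.add_argument("--show-intermediate-steps", action="store_true",
                        help="Print baseline response and verification Q&A")
    return parser

# Example invocation matching the command shown above.
args = build_parser().parse_args(
    ["--question", "Who are some politicians born in Boston?", "--show-intermediate-steps"]
)
```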
This guide offers a robust foundation for customization. Here are key areas for potential improvements:
- Prompt Engineering: Enhance performance by optimizing the prompts. Refer to [prompts.py] for examples.
- External Tools: The final output relies heavily on the answers to the verification questions. Consider using advanced search tools such as Google Search or a SERP API for factual Q&A. For custom scenarios, consider retrieval techniques or RAG methods.
- Chain Expansion: The current implementation includes three chains (Wiki Data, Multi-Span QA, Long-Form QA). Expand this by creating chains for other QA types to increase variability.
- Human-in-the-Loop (HIL): Incorporate a human in the pipeline for generating or answering verification questions to enhance the CoVe pipeline.
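For orientation when customizing, the overall CoVe flow can be sketched as four LLM calls: draft a baseline answer, plan verification questions, answer them independently, then revise. The snippet below is a minimal illustration only; `call_llm` is a stand-in for the real OpenAI call in `src/main.py`, and the prompts are illustrative, not the ones in `prompts.py`:

```python
def call_llm(prompt: str) -> str:
    # Stand-in for an actual OpenAI chat-completion call.
    return f"<llm answer to: {prompt[:40]}>"

def chain_of_verification(question: str) -> str:
    # 1. Baseline response to the original question.
    baseline = call_llm(question)
    # 2. Plan verification questions that probe the baseline's claims.
    plan = call_llm(f"List verification questions for: {baseline}")
    verification_questions = [q for q in plan.splitlines() if q.strip()]
    # 3. Answer each verification question independently
    #    (this is where an external search tool or RAG could plug in).
    verification_answers = [call_llm(q) for q in verification_questions]
    # 4. Produce the final, revised answer from the collected evidence.
    evidence = "\n".join(
        f"Q: {q}\nA: {a}" for q, a in zip(verification_questions, verification_answers)
    )
    return call_llm(
        f"Question: {question}\nDraft: {baseline}\n"
        f"Verification:\n{evidence}\nRevised answer:"
    )

print(chain_of_verification("Name all countries in Asia starting with K"))
```

Each of the improvement areas above targets one of these steps: prompt engineering shapes the planning and revision prompts, external tools replace step 3, and HIL can intervene at steps 2 or 3.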