Example code that uses the langchain library to run queries against Ollama models.
See Ollama examples for reference code.
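As a minimal sketch of what such a query looks like (assuming a local Ollama server with the mistral model pulled; the Ollama import path varies across langchain versions):

```python
# Minimal sketch: query a local Ollama model through langchain.
# Assumes `ollama pull mistral` has been run and the server is up.
from langchain.llms import Ollama  # newer releases: langchain_community.llms

llm = Ollama(model="mistral")
print(llm("In one sentence, what is a large language model?"))
```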
To use OpenAI (note: Ollama's mistral model seems to produce better results):
- Register with OpenAI: https://platform.openai.com/
- Create an API key
- export the key:
export OPENAI_API_KEY=<KEY>
- export the flag:
export USE_OPEN_AI=true
(to clear the OpenAI flag, run unset USE_OPEN_AI)
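The examples can then pick a backend from these variables. A rough sketch of that switch, with an illustrative helper name (the repo's actual selection logic may differ, and langchain import paths vary by version):

```python
import os

from langchain.chat_models import ChatOpenAI
from langchain.llms import Ollama

def get_llm():
    """Hypothetical helper: choose the backend from the USE_OPEN_AI flag."""
    if os.environ.get("USE_OPEN_AI", "").lower() == "true":
        return ChatOpenAI()  # picks up OPENAI_API_KEY from the environment
    return Ollama(model="mistral")
```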
Once the above steps have been completed, move to the Setup Env steps
To use Ollama:
- See the install instructions for the Ollama app: https://ollama.ai/download
- From the command line, run:
ollama pull mistral
(4GB on disk, ~8GB in RAM)
ollama pull llama2
(4GB on disk, ~8GB in RAM)
- The first time you run this it will be slow, as the model needs to be downloaded.
- Models are stored under ~/.ollama/models/blobs/
- Ollama runs a local API: you can browse to
http://localhost:11434/
- To stop Ollama, use the icon at the top right of the Mac menu bar, near the clock: ollama/ollama#690 (comment)
- To restart Ollama, run:
ollama serve
(note: ollama serve starts the server without a model; use ollama run mistral or ollama run llama2 to interact with a specific model)
Once the setup steps above have been run, you should see a server running on http://localhost:11434/
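You can sanity-check the server with a direct call to Ollama's REST API, independent of langchain:

```python
import requests

# One-off request against Ollama's REST API to confirm the server is up.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Say hello", "stream": False},
)
print(resp.json()["response"])
```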
- create new venv:
python3 -m venv env
- enable venv:
source env/bin/activate
- install requirements:
pip install -r requirements.txt
- if you install any more packages, run:
pip freeze > requirements.txt
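To confirm the venv picked up the dependencies, a quick check from inside it:

```python
# Verify langchain is importable and see which version was installed.
import langchain

print(langchain.__version__)
```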
Next, move to the Run Examples steps
- Instruction processing:
- Run the UI from the command line:
streamlit run src/instruct/ui.py
- Or run the same prompts via the command line:
python src/instruct/cli.py
- Document Summaries (a sketch of the underlying chain follows this list):
- Run the UI from the command line:
streamlit run src/summarise/ui.py
- Document Comparison:
- Run the UI from the command line:
streamlit run src/comparison/ui.py
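As noted above, here is a hedged sketch of the kind of summarisation chain the summarise example could be built on (the repo's actual chains may be configured differently; the src paths above are the authoritative entry points):

```python
# Sketch of a map-reduce style document summary with langchain.
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
from langchain.llms import Ollama

llm = Ollama(model="mistral")
# "map_reduce" summarises chunks independently, then merges the summaries.
chain = load_summarize_chain(llm, chain_type="map_reduce")

docs = [
    Document(page_content="Ollama serves local LLMs behind a small REST API."),
    Document(page_content="langchain wraps LLM calls in reusable chains."),
]
print(chain.run(docs))
```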