This repository contains utility scripts to help you start the Ollama server and interact with Llama models.
Repository layout:

- `src/ollama_utils.py`: Utility functions for starting the Ollama service and checking whether it is running (see the sketch after this list).
- `src/llama_utils.py`: Functions for interacting with Llama models.
- `src/main.py`: Demonstrates how to use these utilities.
- `Dockerfile`: Docker configuration for the project.
- `docker-compose.yml`: Docker Compose configuration for the project.
- `requirements.txt`: Python dependencies for the project.
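How these utilities are implemented is defined by the source files themselves; purely as a rough sketch, assuming `src/ollama_utils.py` shells out to the `ollama` CLI through the built-in `subprocess` module (the `is_ollama_running` helper name below is hypothetical), it might look something like this:

```python
# Rough sketch only; the actual src/ollama_utils.py may differ.
import subprocess
import time


def is_ollama_running() -> bool:
    """Return True if an Ollama server answers CLI queries."""
    try:
        # `ollama list` only succeeds when a server is reachable.
        subprocess.run(
            ["ollama", "list"],
            check=True,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            timeout=5,
        )
        return True
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired, FileNotFoundError):
        return False


def start_ollama_service() -> bool:
    """Start `ollama serve` in the background and report whether it came up."""
    if is_ollama_running():
        return True
    # Launch the server without blocking the current process.
    subprocess.Popen(
        ["ollama", "serve"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    time.sleep(2)  # Give the server a moment to bind its port.
    return is_ollama_running()
```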
Usage:

- Build and run the Docker container:

  ```bash
  docker-compose up --build
  ```

- Stop the Docker container:

  ```bash
  docker-compose down
  ```

- Check and start the Ollama service:

  ```python
  from src.ollama_utils import start_ollama_service

  if start_ollama_service():
      print("Ollama service is running.")
  else:
      print("Failed to start Ollama service.")
  ```

- Use Llama models (a sketch of one possible `ask_llama` implementation follows this list):

  ```python
  from src.llama_utils import ask_llama

  llama_model = "llama2"  # Replace with your specific Llama model
  prompt = "Please provide an example prompt for the Llama model."
  response = ask_llama(prompt, llama_model)
  print("Llama Model Response:")
  print(response)
  ```

- Run the example:

  ```bash
  python src/main.py
  ```
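The real `ask_llama` lives in `src/llama_utils.py` and may work differently; a minimal sketch, assuming it wraps the `ollama run` CLI command via `subprocess`, could look like this:

```python
# Minimal sketch of an ask_llama helper; the actual implementation may use
# the Ollama HTTP API or pass additional options.
import subprocess


def ask_llama(prompt: str, model: str = "llama2") -> str:
    """Send a prompt to the given model via `ollama run` and return its reply."""
    result = subprocess.run(
        ["ollama", "run", model, prompt],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
        check=True,
    )
    return result.stdout.strip()
```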
Requirements:

- Python 3.6+
- `subprocess` module (built-in)
- Docker and Docker Compose installed
- Ollama installed and configured on your system, with the Llama model you want to use already pulled (for example, `ollama pull llama2`)
To get started:

- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/ollama-llama-repo.git
  ```

- Navigate to the repository directory:

  ```bash
  cd ollama-llama-repo
  ```

- Install the Python dependencies with `pip install -r requirements.txt`, then run the example locally (a sketch of what `src/main.py` might contain is shown below):

  ```bash
  python src/main.py
  ```

- Build and run with Docker:

  ```bash
  docker-compose up --build
  ```
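What `src/main.py` actually contains is defined by the repository; purely as an illustrative sketch, it presumably ties the two utility modules together roughly like this:

```python
# Illustrative sketch only; the actual src/main.py may differ.
from src.ollama_utils import start_ollama_service
from src.llama_utils import ask_llama


def main() -> None:
    # Make sure the Ollama server is up before sending any prompts.
    if not start_ollama_service():
        print("Failed to start Ollama service.")
        return

    llama_model = "llama2"  # Replace with your specific Llama model
    prompt = "Please provide an example prompt for the Llama model."
    response = ask_llama(prompt, llama_model)
    print("Llama Model Response:")
    print(response)


if __name__ == "__main__":
    main()
```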