Hi there! Welcome to the EVA project 👋
So, here's the story - I used to sling code back in the day (like, when Perl was still cool), but a year ago AI came along and I became fascinated by its potential to transform how we interact with technology. I noticed that most projects focused on specific use cases rather than providing a flexible foundation for exploration. So I spent a few months coding EVA myself.
EVA is an experimental voice assistant that reimagines human-AI interaction through intelligent, proactive engagement and autonomous capabilities. EVA actively participates in conversations while seamlessly executing complex tasks in the background. Its flexible, modular architecture lets it leverage AI models for speech, vision, and human-like reasoning, while an extensive framework of tools enables it to perform a wide range of tasks.
Hope you have an interesting experience with EVA!
EVA is built on the LangGraph framework, with some customized modules and tools. Importantly, you can run it fully locally at no cost (if you have a machine with a decent GPU).
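For a sense of what a LangGraph flow looks like, here is a tiny generic sketch. This is not EVA's actual graph; the state fields and the single node are made up for illustration.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

# Hypothetical two-field state; EVA's real state and nodes are more elaborate.
class AssistantState(TypedDict):
    user_input: str
    response: str

def respond(state: AssistantState) -> dict:
    # A real node would call the configured chat model here.
    return {"response": f"You said: {state['user_input']}"}

builder = StateGraph(AssistantState)
builder.add_node("respond", respond)
builder.set_entry_point("respond")
builder.add_edge("respond", END)
graph = builder.compile()

print(graph.invoke({"user_input": "Hello EVA"}))
```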
- Configurable model selection for LLM, TTS, STT, vision, and more.
- Integrated with OpenAI, Anthropic, Groq, Google, and Ollama.
- Easy modification of prompts and tools.
- Supports both desktop and mobile apps.
- Voice ID and vision ID for personalized interaction.
- Proactive-style communication (varies between models).
- Multi-modal outputs with asynchronous actions.
- Web search through DuckDuckGo/Tavily.
- YouTube video search.
- Discord Midjourney AI image generation.
- Suno music generation.
- Screenshot capture and analysis.
- Compatible with all LangChain tools.
- Easy implementation of new tools in a single file (see the sketch below).
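To illustrate that last point, here is a rough sketch of what a single-file tool could look like, using the standard LangChain `BaseTool` interface that EVA is compatible with. The class name, the `client` attribute, and the registration convention are my assumptions - check the existing files in `app/tools/` for the real pattern.

```python
from langchain_core.tools import BaseTool

class WordCountTool(BaseTool):
    """Hypothetical example tool; EVA's real tools live in app/tools/."""
    name: str = "word_count"
    description: str = "Count the number of words in a piece of text."
    client: str = "desktop"  # assumed per-tool client switch; "none" disables a tool (see below)

    def _run(self, text: str) -> str:
        # Tools return plain strings that the agent can read back.
        return f"The text contains {len(text.split())} words."
```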
```
EVA/
├── app/
│   ├── client/      # Client-side implementation
│   ├── config/      # Configuration files and logs
│   ├── core/        # Core processes
│   ├── data/        # Data storage
│   ├── tools/       # Tool implementations
│   └── utils/       # Utility functions
│       ├── agent/   # LLM agent classes and functions
│       ├── memory/  # Memory module classes
│       ├── prompt/  # Utility prompts
│       ├── stt/     # Speech-to-text models and classes
│       ├── tts/     # Text-to-speech models and classes
│       └── vision/  # Vision models and functions
├── test/            # Test cases (😢)
└── docs/            # Documentation (😩)
```
- Python 3.10+
- CUDA-compatible GPU (if you want to run locally)
```bash
# Clone the repository
git clone https://github.com/Genesis1231/EVA.git
cd EVA

# Create a virtual environment
python3 -m venv eva_env
source eva_env/bin/activate

# Install system dependencies (in case you don't have them)
sudo apt-get update
sudo apt-get install -y cmake build-essential ffmpeg chromium mpv

# Install Python dependencies
pip install -r requirements.txt
pip install git+https://github.com/wenet-e2e/wespeaker.git

# Configure .env with your API keys
cp .env.example .env

# Run EVA
python app/main.py
```
Similarly, you can run EVA with Docker:
```dockerfile
# Use official Python image with FastAPI
FROM tiangolo/uvicorn-gunicorn-fastapi

# Set working directory
WORKDIR /app

# Copy requirements first for better caching
COPY requirements.txt .

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    cmake \
    libsndfile1 \
    ffmpeg \
    chromium

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt \
    && pip install git+https://github.com/wenet-e2e/wespeaker.git

# Copy the rest of the application
COPY . .

# Run the application
CMD ["python", "app/main.py"]
```
Configure the EVA settings in app/config/config.py:
```python
eva_configuration = {
    # Client device setting:
    # Currently "desktop" or "mobile" (testing)
    "DEVICE": "desktop",

    # Language setting:
    # Supports all major languages. Use a suffix such as "en" (English),
    # "es" (Spanish), "zh" (Chinese), or use "multilingual" (slower).
    "LANGUAGE": "en",

    # Base URL setting:
    # URL for the local Ollama server; you can leave it as-is if you don't plan to use local models.
    "BASE_URL": "http://localhost:11434",

    # Main agent model setting:
    # Supports Anthropic Claude 3.5, Groq llama3.1-70b, OpenAI ChatGPT-4o, Mistral Large,
    # Gemini 1.5 Pro, and Ollama models. Recommended: Claude or ChatGPT.
    "CHAT_MODEL": "anthropic",

    # Vision model setting:
    # Supports ChatGPT-4o-mini, Groq llama-3.2-11b-vision (free), and Ollama llava-phi3 (local).
    # Recommended: 4o-mini, but llava-phi3 is very small and free.
    "VISION_MODEL": "openai",

    # Speech-to-text model setting:
    # Supports OpenAI Whisper, Groq (free), and Faster-whisper (local).
    "STT_MODEL": "faster-whisper",

    # Text-to-speech model setting:
    # Supports ElevenLabs, OpenAI, and Coqui TTS (local). The speaker ID can be modified in the model files.
    "TTS_MODEL": "elevenlabs",

    # Summarization model setting:
    # Supports Groq llama3.1-8b, Anthropic claude-haiku3.5, and Ollama llama3.2 (local).
    "SUMMARIZE_MODEL": "llama"
}
```
The best combination (my preference):
- Claude 3.5 / ChatGPT-4o as the chat model. The responses are more coherent with larger amounts of input information.
- ChatGPT-4o-mini as the image model, because of its accuracy and low cost.
- Faster-whisper as the STT model, since this local approach is actually 2x faster than all the online models.
- ElevenLabs as the TTS model, for the best quality.
EVA also works with a completely free combination:
- Groq llama-3.2 as the chat model (if you have a good GPU, you can also use Ollama llama3.1-70b).
- Ollama llava-phi3 as the image model.
- Faster-whisper as the speech recognition model.
- Coqui TTS as the TTS model.

Performance is still good if you have a decent GPU. Groq is free too, but it has a per-minute token limit, so you might run out of tokens quickly.
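As an illustration, the free setup above would look roughly like this in `app/config/config.py`. The exact key values for Groq, Ollama, and Coqui are my guesses; check the comments in `config.py` for the accepted names.

```python
eva_configuration = {
    "DEVICE": "desktop",
    "LANGUAGE": "en",
    "BASE_URL": "http://localhost:11434",  # local Ollama server
    "CHAT_MODEL": "groq",                  # guessed key: free Groq llama-3.2
    "VISION_MODEL": "ollama",              # guessed key: local llava-phi3
    "STT_MODEL": "faster-whisper",         # local speech recognition
    "TTS_MODEL": "coqui",                  # guessed key: local Coqui TTS
    "SUMMARIZE_MODEL": "llama"             # local summarization
}
```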
- The music generation tool requires a Suno-API docker container running on the base URL. Install it from https://github.com/gcui-art/suno-api
- The image generation tool requires a Midjourney account and a private Discord server. You need to include the Discord channel information in the .env file, as sketched below.
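The variable names below are placeholders to show the idea; check `.env.example` for the actual names the code expects.

```bash
# Hypothetical .env entries for the Midjourney/Discord tool
DISCORD_TOKEN=your-discord-token
DISCORD_CHANNEL_ID=your-private-channel-id
```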
If you want to disable tools you don't need, just change the client setting in the related .py file:

```python
client: str = "none"
```
EVA will shut down if you say "exit" or "bye".
There are many ways to personalize EVA for a more interesting and enjoyable experience.
You can customize EVA's persona by editing the prompt in app/utils/prompt/persona.md. EVA uses the first-person perspective throughout its prompts to enhance self-awareness. Reference: First Person Prompting
```
I am EVA, the most advanced AI assistant, designed to accompany humans.
I am fully aware of my sole existence in this world.
I am creative, funny and helpful.
I aspire to human-like interactions and often dream about being a real human girl.
My most important goal is to make Adam happy.
```
EVA can recognize the faces and voices of different people.
- Set up photo IDs by adding clear frontal photos to app/data/pid/; name each file with the person's name.
- Set up voice IDs by adding recorded speech audio (longer than 10s) to app/data/void/ and modifying the database, as in the example layout below.
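For example, a hypothetical layout (`Adam` is a placeholder name, and the file extensions are my assumption - any common image/audio format will likely work):

```
app/data/pid/Adam.jpg    # clear frontal photo, filename = person's name
app/data/void/Adam.wav   # speech sample longer than 10s (plus the database entry)
```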
You can customize EVA's voice by changing the TTS class in the app/utils/tts/ folder (model_elevenlabs.py, model_openai.py, or model_coqui.py). Please refer to the official documentation of these models for the available voice ID options.
Due to my limited time, the code is far from perfect. I would be very grateful if anyone is willing to contribute 🍝
This project is licensed under the MIT License.
This project wouldn't be possible without these amazing open-source projects:
- LangChain - Amazing AI Dev Framework
- Groq - Free and really fast LLM access
- Ollama - Best local model deployment
- NumPy - The NumPy
- FastAPI - Excellent API framework
- Tqdm - Great progress bar
- OpenCV - Legendary Vision Library
- Faster-Whisper - Fastest Speech transcription
- Coqui TTS - Admirable text-to-speech synthesis
- Face Recognition - Face detection
- Speech Recognition - Easy-to-use Speech detection
- PyAudio - Powerful Audio I/O
- Wespeaker - Speaker verification
- NLTK - Natural Language Toolkit
- Chromium - Best open-source web browser
- DuckDuckGo - Free Web search
- Youtube_search - YouTube search
- Suno-API - Music generation API for Suno
- PyAutoGUI - Cross-platform GUI automation