Your own voice personal assistant: Voice to Text to LLM to Speech, displayed in a web interface.
- 🎤 The user speaks into the microphone
- ⌨️ Voice is converted to text using Deepgram
- 🤖 Text is sent to OpenAI's GPT-3 API to generate a response
- 📢 Response is converted to speech using ElevenLabs
- 🔊 Speech is played using Pygame
- 💻 Conversation is displayed in a webpage using Taipy
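Conceptually, the assistant repeats these steps in a loop. The sketch below illustrates the flow with placeholder callables (`record_audio`, `transcribe`, `generate_reply`, `synthesize`, `play_audio`); these names are illustrative and do not match the actual functions in this repository:

```python
import time
from typing import Callable

def assistant_loop(
    record_audio: Callable[[], bytes],     # microphone capture until silence
    transcribe: Callable[[bytes], str],    # speech-to-text (Deepgram)
    generate_reply: Callable[[str], str],  # chat completion (OpenAI)
    synthesize: Callable[[str], bytes],    # text-to-speech (ElevenLabs)
    play_audio: Callable[[bytes], None],   # playback (Pygame)
) -> None:
    """Run the voice-assistant pipeline in a loop, timing each stage."""
    while True:
        print("Listening...")
        audio = record_audio()
        print("Stopped listening")

        start = time.time()
        text = transcribe(audio)
        print(f"Finished transcribing in {time.time() - start:.2f} seconds.")

        start = time.time()
        reply = generate_reply(text)
        print(f"Finished generating response in {time.time() - start:.2f} seconds.")

        start = time.time()
        speech = synthesize(reply)
        print(f"Finished generating audio in {time.time() - start:.2f} seconds.")

        print("Speaking...")
        play_audio(speech)
        print(f"--- USER: {text}")
        print(f"--- JARVIS: {reply}")
```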
Python 3.8 - 3.11 is required.

Make sure you have the following API keys:

- Deepgram
- OpenAI
- ElevenLabs
- Clone the repository:

  ```
  git clone https://github.com/AlexandreSajus/JARVIS.git
  ```

- Install the requirements:

  ```
  pip install -r requirements.txt
  ```
- Create a `.env` file in the root directory and add the following variables:

  ```
  DEEPGRAM_API_KEY=XXX...XXX
  OPENAI_API_KEY=sk-XXX...XXX
  ELEVENLABS_API_KEY=XXX...XXX
  ```
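For reference, these keys can be read at startup with `python-dotenv`. The snippet below is a minimal sketch; the variable names match the `.env` file above, but the actual loading code in this repository may differ:

```python
import os
from dotenv import load_dotenv  # from the python-dotenv package

# Read the variables defined in .env into the process environment
load_dotenv()

REQUIRED_KEYS = ["DEEPGRAM_API_KEY", "OPENAI_API_KEY", "ELEVENLABS_API_KEY"]

# Fail early with a clear error if a key is missing
for name in REQUIRED_KEYS:
    if not os.getenv(name):
        raise RuntimeError(f"Missing {name} in .env")

DEEPGRAM_API_KEY = os.getenv("DEEPGRAM_API_KEY")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
ELEVENLABS_API_KEY = os.getenv("ELEVENLABS_API_KEY")
```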
- Run `display.py` to start the web interface (a rough sketch of a Taipy page is included at the end of this section):

  ```
  python display.py
  ```
- In another terminal, run `main.py` to start the voice assistant:

  ```
  python main.py
  ```
- Once ready, both the web interface and the terminal will show `Listening...`
- You can now speak into the microphone
- Once you stop speaking, it will show `Stopped listening`
- It will then start processing your request
- Once the response is ready, it will show `Speaking...`
- The response will be played and displayed in the web interface.
Here is an example:

```
Listening...
Done listening
Finished transcribing in 1.21 seconds.
Finished generating response in 0.72 seconds.
Finished generating audio in 1.85 seconds.
Speaking...
--- USER: good morning jarvis
--- JARVIS: Good morning, Alex! How can I assist you today?
Listening...
...
```
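As a rough illustration of the display side (not the actual `display.py`, which is more elaborate), a minimal Taipy page can bind a Python string to a Markdown template and render it in the browser:

```python
from taipy.gui import Gui

# Conversation text shown in the browser; the real app updates this string
# as requests are transcribed and responses are generated.
conversation = (
    "--- USER: good morning jarvis\n"
    "--- JARVIS: Good morning, Alex! How can I assist you today?"
)

# Taipy Markdown page: <|{conversation}|> renders the bound variable as text.
page = """
# JARVIS
<|{conversation}|>
"""

if __name__ == "__main__":
    Gui(page).run()
```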