This is the code for our paper EDEN: Empathetic Dialogues for English learning.
Our paper improves upon our previous work, Using Adaptive Empathetic Responses for Teaching English. The following figure details our specific improvements, highlighted in green:
Our repository is organized as follows:
- `eden_api/`: The API for EDEN, for you to run on your own GPU server. Since the grammar correction code is proprietary, we do not include the grammar correction component in this repository. Instead, you can check out the `grammar_model` repository for how we trained our grammar model.
- `local_ui/`: A barebones UI for using EDEN locally! It is a streaming chatbot UI that you can adapt for your own purposes as well!
- `dialogue_model/`: Code and data for training our Llama-2 conversation model.
- `grammar_model/`: Data for training our Llama-2 spoken grammar correction model (under construction).
- `experimental_data/`: Data from our user study. Specifically, we include the informed consent form, as well as the data we used for analyses, for reproduction purposes.
Please refer to `eden_api/` for how to get your EDEN backend spun up on a GPU server; just supply your OpenAI API key!
Check out `local_ui/` for running our minimalistic local user interface; make sure that you update the files with the link to your GPU server so that everything connects correctly!
More specifically:

Everything in this section is done within the `eden_api/` directory.
- Replace all instances of `<OPENAI_API_KEY>` with your own API key
- Create a conda environment from the `.yml` file
- Create two directories, `audio_cache` and `model_storage`, since they are needed when running the server
- Download the `pytorch_model.bin` file from this Hugging Face wav2vec2 model, and place it in the `model_storage` directory
Then start the server:

```shell
python3 app.py --serving_port=<PORT_NUMBER>
```
Make sure to update the GPU server URL and the Flask application port in the front-end UI code, located under `local_ui/`.
Here, we move to the `local_ui/` directory.
- Install all dependencies by creating a conda environment from the `environment_ui.yml` file
- Write your preferences for Mandarin translations and for the style of adaptive empathetic feedback in `pre_survey.json`:
  - `mandarin_translation`: do you want Mandarin translations of chatbot utterances? If yes, put down `true`, otherwise `false`
  - `feedback_pref`:
    - `short` - do you prefer short and succinct utterances?
    - `example` - do you want your feedback to contain specific examples?
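Based on the fields described above, a `pre_survey.json` might look like the following. Note that the exact schema (in particular, whether `feedback_pref` nests one boolean per style) is our assumption; check the file shipped in `local_ui/` for the authoritative format:

```json
{
  "mandarin_translation": true,
  "feedback_pref": {
    "short": true,
    "example": false
  }
}
```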
- Write your desired topic and empathy mode in `settings.json`:
  - `empathy_mode`: `0` - no empathetic feedback, `1` - fixed empathetic feedback, `2` - adaptive empathetic feedback
  - `topic`: see the appendix in our paper for a complete list of topics that the chatbot can discuss; defaults to "Favorite movie"
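Putting those two fields together, a `settings.json` might look like this (the exact schema is our sketch; the shipped file in `local_ui/` is authoritative):

```json
{
  "empathy_mode": 2,
  "topic": "Favorite movie"
}
```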
Then launch the app:

```shell
python3 app_stream.py
```
The app defaults to running on port 5023, but you can set it to run on any port by modifying `app_stream.py` and `static/js/recorder.js`, as long as the two ports stay consistent so the socket connection works. Then you can go to `localhost:5023` to access the UI!
Make sure to have your API (`eden_api/`) running on your GPU server first, and make sure you have updated the corresponding URLs in your `app_stream.py` file.
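Before launching, you could sanity-check both configuration files with a short script like the following. The field names follow the descriptions above, but the script itself is our own sketch and not part of the repo:

```python
import json
import os

def validate_configs(pre_survey: dict, settings: dict) -> list[str]:
    """Return a list of problems found in the two config dicts (empty if OK)."""
    problems = []
    # pre_survey.json: mandarin_translation must be a JSON boolean.
    if not isinstance(pre_survey.get("mandarin_translation"), bool):
        problems.append("mandarin_translation must be true or false")
    # We assume feedback_pref is an object of per-style booleans (see above).
    if "feedback_pref" in pre_survey and not isinstance(pre_survey["feedback_pref"], dict):
        problems.append("feedback_pref should be an object")
    # settings.json: empathy_mode is 0, 1, or 2; topic is a string.
    if settings.get("empathy_mode") not in (0, 1, 2):
        problems.append("empathy_mode must be 0, 1, or 2")
    if not isinstance(settings.get("topic", "Favorite movie"), str):
        problems.append("topic must be a string")
    return problems

if __name__ == "__main__":
    # Assumed layout: both files sit in the current (local_ui/) directory.
    if os.path.exists("pre_survey.json") and os.path.exists("settings.json"):
        with open("pre_survey.json") as f:
            pre = json.load(f)
        with open("settings.json") as f:
            cfg = json.load(f)
        for problem in validate_configs(pre, cfg):
            print("config problem:", problem)
```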
We supply a spoken grammar correction model and a conversation model tailored for English-as-a-second-language learners. To use these models, please refer to their respective directories for usage examples. If you have any questions, please don't hesitate to open an issue!
Thank you for your interest :D