The config file tells the WhatsApp AI which models to use, what their prefixes are, and whether they are enabled.

- This file is located in the `src` directory and is named `whatsapp-ai.config.ts`.
## Default config
By default, the configuration file contains six models: `ChatGPT`, `DALLE`, `StableDiffusion`, `GeminiVision`, `Gemini`, and `Custom`. Each model has a `prefix` field that indicates the prefix a message must start with to get a reply from that model, and an `enable` field that determines whether the model is enabled or not. Additionally, each `Custom` model has a `context` field, which specifies the context the model uses to generate responses.
```ts
/* Models config file */
import { Config } from './types/Config';

const config: Config = {
  chatGPTModel: 'gpt-3.5-turbo', // learn more about GPT models: https://platform.openai.com/docs/models
  sendWelcomeMessage: false, // whether to send a welcome message to the user (located at /src/services/welcomeUser.ts)
  models: {
    ChatGPT: {
      prefix: '!chatgpt', // prefix for the ChatGPT model
      enable: true // whether the ChatGPT model is enabled or not
    },
    DALLE: {
      prefix: '!dalle', // prefix for the DALLE model
      enable: true // whether the DALLE model is enabled or not
    },
    StableDiffusion: {
      prefix: '!stable', // prefix for the StableDiffusion model
      enable: true // whether the StableDiffusion model is enabled or not
    },
    GeminiVision: {
      prefix: '!gemini-vision', // prefix for the GeminiVision model
      enable: true // whether the GeminiVision model is enabled or not
    },
    Gemini: {
      prefix: '!gemini', // prefix for the Gemini model
      enable: true // whether the Gemini model is enabled or not
    },
    Custom: [
      {
        /** Custom model */
        modelName: 'whatsapp-ai-bot', // name of the custom model
        prefix: '!bot', // prefix for the custom model
        enable: true, // whether the custom model is enabled or not
        /**
         * context: "file-path (.txt, .text, .md)",
         * context: "text url",
         * context: "text"
         */
        context: './static/whatsapp-ai-bot.md' // context for the custom model
      }
    ]
  },
  enablePrefix: {
    /** if enabled, reply only to messages that start with a prefix */
    enable: true, // whether prefix matching is enabled or not
    /** default model to use when prefix matching is disabled and the message has no prefix */
    defaultModel: 'ChatGPT' // default model to use if no prefix is present in the message
  },
  sessionStorage: {
    /** enable or disable session storage */
    enable: true, // whether session storage is enabled or not
    /** session storage path */
    wwjsPath: './' // path for the session storage
  },
  selfMessage: {
    /** skip prefix for self messages */
    skipPrefix: false // whether to skip the prefix for self messages or not
  }
};

export default config;
```
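The `enablePrefix` block controls routing: a message that starts with an enabled model's prefix goes to that model, and when prefix matching is disabled, messages without a prefix fall back to `defaultModel`. A minimal sketch of that lookup (the `resolveModel` helper and its small model table are hypothetical illustrations, not part of the project's source):

```typescript
// Hypothetical sketch of prefix-based routing; `models` and `resolveModel`
// are illustrative, not the bot's actual API.
type ModelEntry = { prefix: string; enable: boolean };

const models: Record<string, ModelEntry> = {
  ChatGPT: { prefix: '!chatgpt', enable: true },
  DALLE: { prefix: '!dalle', enable: true },
  StableDiffusion: { prefix: '!stable', enable: false } // disabled: never matches
};

// Return the first enabled model whose prefix matches, else the default.
function resolveModel(message: string, defaultModel = 'ChatGPT'): string {
  for (const [name, entry] of Object.entries(models)) {
    if (entry.enable && message.startsWith(entry.prefix)) return name;
  }
  return defaultModel;
}

console.log(resolveModel('!dalle a cat in space')); // DALLE
console.log(resolveModel('hello there')); // ChatGPT (fallback)
```

Note that disabled models are skipped even when their prefix matches, which is why `!stable ...` above would fall through to the default.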
To create a custom model, add a new entry to the `models.Custom` array. The `modelName` field specifies the name of the model, and the `prefix` field specifies the prefix the model responds to. The `enable` field determines whether the model is enabled or not, and the `context` field specifies the context the model uses to generate responses. The context can be a string of text, a file path, or a URL.
- `modelName`: a string that represents the name of your custom model
- `prefix`: a string that represents the prefix messages must start with to get a reply from your custom model
- `enable`: a boolean that indicates whether your custom model is enabled or disabled
- `context`: a string that represents the context of your custom model. This can be one of the following:
  - `"your_context"`: the context provided directly as a string
  - `"path to file (.md, .txt)"`: the path to a file containing the context
  - `"url"`: the URL of a website containing the context
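The three context forms could be told apart roughly like this (the `resolveContext` helper below is a hypothetical illustration of that rule, not the project's actual loader):

```typescript
// Hypothetical sketch of how a context value could be resolved;
// not the project's actual implementation.
import { readFileSync } from 'node:fs';

async function resolveContext(context: string): Promise<string> {
  if (/^https?:\/\//.test(context)) {
    const res = await fetch(context); // URL: download the text
    return res.text();
  }
  if (/\.(md|txt|text)$/i.test(context)) {
    return readFileSync(context, 'utf8'); // file path: read from disk
  }
  return context; // otherwise: treat it as the literal context string
}
```

For example, `await resolveContext('./static/whatsapp-ai-bot.md')` would read the file from disk, while a plain string is returned unchanged.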
```ts
{
  modelName: "your_model_name",
  prefix: "!your_prefix",
  enable: true,
  context: "your_context" | "path to file (.md,.txt)" | "url"
}
```
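For instance, a filled-in entry for a custom model that answers from a markdown file might look like this (the model name, prefix, and file path here are made-up placeholders, not part of the repository):

```ts
Custom: [
  {
    modelName: 'faq-bot', // made-up example name
    prefix: '!faq', // messages starting with !faq reach this model
    enable: true,
    context: './static/faq.md' // hypothetical markdown file holding the context
  }
]
```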
## Test your model

- Run the server:

```sh
yarn dev
```

- Send a message starting with `!your_prefix`.
Note that AI-generated content may not be 100% accurate, so it is advisable to provide more context text to help the AI better understand the conversation.