pnpm i @charisma-ai/sdk
// main.js
import {
  Playthrough,
  createPlaythroughToken,
  createConversation,
} from "@charisma-ai/sdk";

let conversation;

async function start() {
  // Get a unique token for the playthrough.
  const { token } = await createPlaythroughToken({ storyId: 4 });
  // Create a new conversation.
  const { conversationUuid } = await createConversation(token);
  // Create a new playthrough.
  const playthrough = new Playthrough(token);
  // Join the conversation.
  conversation = playthrough.joinConversation(conversationUuid);
  // Handle messages in the conversation.
  conversation.on("message", (message) => {
    console.log(message.message.text);
  });
  conversation.on("problem", console.warn);
  // Prepare the listener to start the conversation when the playthrough is connected.
  playthrough.on("connection-status", (status) => {
    if (status === "connected") {
      conversation.start();
    }
  });
  await playthrough.connect();
}

// Send the player's reply to Charisma.
function reply(message) {
  conversation.reply({ text: message });
}
There are two ways to use the API directly: either import `api`, which includes all the API methods, or import API methods individually, like `createPlaythroughToken`.
import { api, createPlaythroughToken } from "@charisma-ai/sdk";
api.createPlaythroughToken();
createPlaythroughToken();
Most API methods are also callable using an instance of the Playthrough
class, which automatically scopes the API calls to the playthrough token
passed when creating the instance:
const playthrough = new Playthrough(token);
// No need to pass `token` here!
playthrough.createConversation();
Use this to set up a new playthrough.
- `storyId` (`number`): The `id` of the story that you want to create a new playthrough for. The story must be published, unless a Charisma.ai user token has been passed and the user matches the owner of the story.
- `version` (`number`, optional): The `version` of the story that you want to create a new playthrough for. If omitted, it will default to the most recent published version. To get the draft version of a story, pass `-1` and an `apiKey`.
- `apiKey` (`string`, optional): To access draft, test or unpublished versions of your story, pass an `apiKey`. The API key can be found on the story overview page.
- `languageCode` (`string`, optional): To play a story in a language other than English (`en`, the default), pass a BCP-47 `languageCode`. For example, to play in Italian, use `it`.
Returns a promise that resolves with the token.
const { token } = await createPlaythroughToken({
  storyId: 12,
  version: 4,
  apiKey: "...",
  languageCode: "en",
});
A playthrough can have many simultaneous conversations. In order to start interacting, a conversation needs to be created, which can then be joined.
- `playthroughToken` (`string`): The token generated with `createPlaythroughToken`.
const { conversationUuid } = await createConversation(token);
Create a new Playthrough
instance to connect to a playthrough and interact with the chat engine.
- `playthroughToken` (`string`): The `token` generated in `createPlaythroughToken`.
This makes the `Playthrough` instance listen for events for a particular conversation, and returns a `Conversation` on which events can be sent and event listeners attached.
- `conversationUuid` (`string`): The conversation UUID generated with `createConversation`.
Returns a `Conversation`, which can be used to send and receive events bound to that conversation.
playthrough.joinConversation(conversationUuid);
This is what kicks off the connection to the chat engine. Call this once you're ready to start sending and receiving events.
Returns an object with a `playerSessionId` property.
await playthrough.connect();
If you want to end the connection to the playthrough, you can call `playthrough.disconnect()`.
playthrough.disconnect();
To interact with the story, events are sent to and from the server that the WebSocket is connected to.
{
  // For Pro stories, start the story at a particular subplot with the `startGraphReferenceId`.
  // It can be found by clicking '...' next to the subplot in the sidebar, and clicking 'Edit details'.
  // For Web Comic stories, do not provide `startGraphReferenceId`; the story will start automatically from the first scene.
  "startGraphReferenceId": "my-id", // Optional, default undefined
}
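As a sketch (assuming `conversation` is the object returned by `playthrough.joinConversation`), starting a story with or without the optional `startGraphReferenceId` might look like this:

```javascript
// Start the story once the playthrough is connected. For Pro stories, pass
// the optional `startGraphReferenceId`; Web Comic stories start without it.
function startStory(conversation, startGraphReferenceId) {
  if (startGraphReferenceId) {
    conversation.start({ startGraphReferenceId });
  } else {
    conversation.start();
  }
}
```
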
{
  "text": "Please reply to this!"
}
This event has no fields.
{
  "action": "pick-up-book"
}
This event has no fields.
{
  "message": {
    "text": "Greetings and good day.",
    "character": {
      "id": 20,
      "name": "Ted Baker",
      "avatar": "https://s3.charisma.ai/..."
    },
    "speech": {
      "duration": 203,
      "audio": /* either a buffer, or a URL */
    },
    "metadata": {
      "myMetadata": "someValue"
    },
    "media": null
  },
  "endStory": false,
  "path": [{ "id": 1, "type": "edge" }, { "id": 2, "type": "node" }]
}
This event has no additional data.
This event has no additional data.
When another player sends specific events to a Charisma playthrough, they are sent back to all other connected players, so those players can act on them, for example by displaying the messages in their UI.
The events that are currently echoed to all clients are `action`, `reply`, `resume`, `start` and `tap`.
Important: These events are not emitted for the player that sent the original corresponding event!
Each event includes its committed `eventId` and `timestamp`, as well as the original payload (excluding the `speechConfig`).
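For instance, to mirror other players' messages in the UI you might attach a listener for the echoed `reply` event. This is a sketch: the exact payload shape depends on the original event, and `conversation` is the object returned by `playthrough.joinConversation`:

```javascript
// Attach a handler for `reply` events echoed from other players in the same
// playthrough. The echoed event carries the committed `eventId` and
// `timestamp` plus the original payload.
function handleEchoedReplies(conversation, onOtherPlayerReply) {
  conversation.on("reply", (event) => {
    onOtherPlayerReply(event);
  });
}
```
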
If a problem occurs during a conversation, such as a pathway not being found after submitting a player message, a `problem` event will be emitted.
This sets the speech configuration to use for all events in the conversation until set otherwise:
{
  "encoding": ["ogg", "mp3"],
  "output": "buffer"
}
`encoding` is the file format of the resulting speech: `mp3`, `ogg`, `wav` or `pcm`. If an array is passed, Charisma will use the first encoding that the voice supports. This is useful for cases where the voice synthesis service behind a particular voice does not support the "default" encoding you wish to use.
`output` determines whether the speech received back is a `buffer` (a byte array) or a `url` pointing to the audio file.
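The fallback behaviour of an `encoding` array can be modelled like this. This is an illustrative sketch of the described behaviour, not the SDK's actual implementation:

```javascript
// Hypothetical model of encoding selection: given the preferred encodings
// (a string or an array) and the encodings a voice supports, return the
// first preferred encoding the voice supports, falling back to the first
// preference if none match.
function pickEncoding(preferred, supportedByVoice) {
  const list = Array.isArray(preferred) ? preferred : [preferred];
  const match = list.find((encoding) => supportedByVoice.includes(encoding));
  return match !== undefined ? match : list[0];
}
```
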
The audio manager will handle the audio from characters, media and speech-to-text functionality.
import { AudioManager } from "@charisma-ai/sdk";
const audio = new AudioManager({
  // AudioManager options
  handleTranscript: (transcript: string) => {
    console.log(transcript);
  },
});
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `duckVolumeLevel` | `number` | `0` | Volume level when ducking (0 to 1). |
| `normalVolumeLevel` | `number` | `1` | Regular volume level (0 to 1). |
| `sttService` | `"charisma/deepgram" \| "browser"` | `"charisma/deepgram"` | Speech-to-text service to use (see below). |
| `sttUrl` | `string` | `"https://stt.charisma.ai"` | Speech-to-text service URL. |
| `streamTimeslice` | `number` | `100` | The number of milliseconds to record into each `Blob`. See https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder/start#timeslice |
| `handleTranscript` | `(transcript: string) => void` | | Callback to handle transcripts. |
| `handleStartSTT` | `() => void` | | Callback to handle when speech-to-text starts. Can be used to update the UI. |
| `handleStopSTT` | `() => void` | | Callback to handle when speech-to-text stops. |
| `handleError` | `(error: string) => void` | `console.error(error)` | Callback to handle errors. |
| `handleDisconnect` | `(message: string) => void` | `console.error(message)` | Callback to handle when the transcription service disconnects. |
| `handleConnect` | `(message: string) => void` | `console.log(message)` | Callback to handle when the transcription service connects. |
There are currently two speech-to-text services available:
- `charisma/deepgram`: Deepgram is a neural-network-based speech-to-text service that can be accessed through Charisma.ai.
- `browser`: Some browsers have built-in speech recognition, which can be used to provide speech-to-text functionality. This is only available in browsers that support `SpeechRecognition`. Please refer to this browser compatibility table for more details.
Starts listening for speech. This will call `handleStartSTT()` when the speech-to-text service starts.
Stops listening for speech. This will call `handleStopSTT()` when the speech-to-text service stops.
Connects to the speech-to-text service, using the playthrough token and player session ID for validation. This is only needed when using the `charisma/deepgram` speech-to-text service.
The `playerSessionId` is returned from `playthrough.connect()`. See the `deepgram-stt` demo for an example.
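The connection sequence can be sketched as follows (an assumption based on the description above: the method is taken to be `audio.connect(token, playerSessionId)`, as used in the `deepgram-stt` demo):

```javascript
// Connect the playthrough first, then use the returned playerSessionId to
// validate the speech-to-text connection before listening for speech.
async function connectWithSpeechToText(playthrough, audio, token) {
  const { playerSessionId } = await playthrough.connect();
  audio.connect(token, playerSessionId);
  return playerSessionId;
}
```
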
Resets the timeout for the speech-to-text service to `timeout` milliseconds. If this is not run, the speech-to-text service will default to a timeout of 10 seconds. After the timeout, the speech-to-text service will automatically stop listening.
Returns `true` if the browser supports the `browser` speech recognition service.
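One way to use this check is to fall back to the hosted service when the browser lacks `SpeechRecognition`. A sketch, assuming the support check is callable on the manager as described:

```javascript
// Pick the speech-to-text service based on browser support: prefer the
// built-in recogniser where available, otherwise use the hosted service.
function chooseSttService(audio) {
  return audio.browserIsSupported() ? "browser" : "charisma/deepgram";
}
```
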
Initialises the audio for characters and media. This method must be called before attempting to play audio from media nodes or character speech.
This method must also be called from a user interaction event, such as a click or a keypress, due to a security restriction in some browsers. We recommend adding it to the "start" button that sets up your playthrough. See the demos for an example.
This plays the generated speech in the message event. Typically, you would want to use this in combination with a `message` conversation handler.
Returns a Promise that resolves once the speech has ended.
`options` is an object with two properties:
type SpeakerPlayOptions = {
  /**
   * Whether to interrupt the same track as the `trackId` passed (`track`),
   * all currently playing audio (`all`), or not to interrupt anything (`none`).
   * Default is `none`.
   */
  interrupt?: "track" | "all" | "none";
  /**
   * To prevent a particular character from speaking over themselves, a
   * `trackId` can be set to a unique string. When playing another speech clip,
   * if the same `trackId` is passed and `interrupt` is set to `"track"`, then
   * the previous clip will stop playing. Default is unset.
   */
  trackId?: string;
};
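Combined with a `message` handler, the wiring might look like the following sketch. The method name `playCharacterSpeech` and the per-character track IDs are assumptions for illustration; `conversation` and `audio` are the conversation and audio manager instances:

```javascript
// Play each character's speech as messages arrive, interrupting any clip
// still playing on that character's track so a character never talks over
// themselves.
function attachSpeechPlayback(conversation, audio) {
  conversation.on("message", (event) => {
    const speech = event.message && event.message.speech;
    if (!speech) return;
    const character = event.message.character;
    audio.playCharacterSpeech(speech.audio, {
      trackId: character ? String(character.id) : "narrator",
      interrupt: "track",
    });
  });
}
```
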
Sets the volume of the character speech. Must be a number between 0 and 1.
Plays the audio tracks in a message event. An empty array can also be passed, so it can safely be called on every message event.
Sets the volume of all media audio tracks. Must be a number between 0 and 1.
The volume set here will be multiplied by the volume set in the graph editor for each track. For example, if you set the graph editor volume to 0.5 and the SDK volume to 1, the final volume will be 0.5. If you set the graph editor volume to 0.5 and the SDK volume to 0.5, the final volume will be 0.25.
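The combined volume is a straightforward product:

```javascript
// The final track volume is the graph editor volume for that track
// multiplied by the SDK-wide media volume.
function finalTrackVolume(graphEditorVolume, sdkVolume) {
  return graphEditorVolume * sdkVolume;
}
```
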
Will mute and unmute all media audio tracks.
Will stop all media audio tracks.
For further details or any questions, feel free to get in touch at hello@charisma.ai, or head to the Charisma docs!