
groq-go

Powered by Groq for fast inference.

Features

  • Supports all models from Groq in a type-safe way.
  • Supports streaming.
  • Supports moderation.
  • Supports audio transcription.
  • Supports audio translation.
  • Supports Tool Use.
  • Supports Function Calling.
  • JSON Schema Generation from structs.
  • Supports Toolhouse function calling. Extension
  • Supports E2b function calling. Extension
  • Supports Composio function calling. Extension
  • Supports Jigsaw Stack function calling. Extension

Installation

go get github.com/conneroisu/groq-go

Examples

For introductory examples, see the examples directory.

External Repositories using groq-go:

Development

To run the tests:

Make sure you have a Groq API key set in the environment variable GROQ_KEY.

task test

To run the linter:

task lint

If you fork the repository, you should set up the following environment variables in CI/CD:

export E2B_API_KEY=your-e2b-api-key
export GROQ_KEY=your-groq-key
export TOOLHOUSE_API_KEY=your-toolhouse-api-key

Documentation

The following documentation is generated from the source code using gomarkdoc.

groq

import "github.com/conneroisu/groq-go"

Package groq provides an unofficial client for the Groq API.

With specially designed hardware, the Groq API is a very fast way to query open-source LLMs.

API Documentation: https://console.groq.com/docs/quickstart

Index

Constants

const (
    // ChatMessageRoleSystem is the system chat message role.
    ChatMessageRoleSystem Role = "system"
    // ChatMessageRoleUser is the user chat message role.
    ChatMessageRoleUser Role = "user"
    // ChatMessageRoleAssistant is the assistant chat message role.
    ChatMessageRoleAssistant Role = "assistant"
    // ChatMessageRoleFunction is the function chat message role.
    ChatMessageRoleFunction Role = "function"
    // ChatMessageRoleTool is the tool chat message role.
    ChatMessageRoleTool Role = "tool"

    // ImageURLDetailHigh is the high image url detail.
    ImageURLDetailHigh ImageURLDetail = "high"
    // ImageURLDetailLow is the low image url detail.
    ImageURLDetailLow ImageURLDetail = "low"
    // ImageURLDetailAuto is the auto image url detail.
    ImageURLDetailAuto ImageURLDetail = "auto"

    // ChatMessagePartTypeText is the text chat message part type.
    ChatMessagePartTypeText ChatMessagePartType = "text"
    // ChatMessagePartTypeImageURL is the image url chat message part type.
    ChatMessagePartTypeImageURL ChatMessagePartType = "image_url"
)

func AudioMultipartForm(request AudioRequest, b builders.FormBuilder) error

AudioMultipartForm creates a form with audio file contents and the name of the model to use for audio processing.

AudioRequest represents a request structure for audio API.

type AudioRequest struct {
    // Model is the model to use for the transcription.
    Model models.AudioModel
    // FilePath is either an existing file in your filesystem or a
    // filename representing the contents of Reader.
    FilePath string
    // Reader is an optional io.Reader when you do not want to use
    // an existing file.
    Reader io.Reader
    // Prompt is the prompt for the transcription.
    Prompt string
    // Temperature is the temperature for the transcription.
    Temperature float32
    // Language is the language for the transcription. Only for
    // transcription.
    Language string
    // Format is the format for the response.
    Format Format
}

AudioResponse represents a response structure for audio API.

type AudioResponse struct {
    // Task is the task of the response.
    Task string `json:"task"`
    // Language is the language of the response.
    Language string `json:"language"`
    // Duration is the duration of the response.
    Duration float64 `json:"duration"`
    // Segments is the segments of the response.
    Segments Segments `json:"segments"`
    // Words is the words of the response.
    Words Words `json:"words"`
    // Text is the text of the response.
    Text string `json:"text"`

    Header http.Header // Header is the header of the response.
}

func (*AudioResponse) SetHeader

func (r *AudioResponse) SetHeader(header http.Header)

SetHeader sets the header of the response.

ChatCompletionChoice represents the chat completion choice.

type ChatCompletionChoice struct {
    Index int `json:"index"` // Index is the index of the choice.
    // Message is the chat completion message of the choice.
    Message ChatCompletionMessage `json:"message"`
    // FinishReason is the finish reason of the choice.
    FinishReason FinishReason `json:"finish_reason"`
    // LogProbs is the log probs of the choice.
    //
    // This is basically the probability of the model choosing the
    // token.
    LogProbs *LogProbs `json:"logprobs,omitempty"`
}

ChatCompletionMessage represents the chat completion message.

type ChatCompletionMessage struct {
    // Name is the name of the chat completion message.
    Name string `json:"name"`
    // Role is the role of the chat completion message.
    Role Role `json:"role"`
    // Content is the content of the chat completion message.
    Content string `json:"content"`
    // MultiContent is the multi content of the chat completion
    // message.
    MultiContent []ChatMessagePart `json:"-"`
    // FunctionCall setting for Role=assistant prompts this may be
    // set to the function call generated by the model.
    FunctionCall *tools.FunctionCall `json:"function_call,omitempty"`
    // ToolCalls setting for Role=assistant prompts this may be set
    // to the tool calls generated by the model, such as function
    // calls.
    ToolCalls []tools.ToolCall `json:"tool_calls,omitempty"`
    // ToolCallID is setting for Role=tool prompts this should be
    // set to the ID given in the assistant's prior request to call
    // a tool.
    ToolCallID string `json:"tool_call_id,omitempty"`
}

func (ChatCompletionMessage) MarshalJSON

func (m ChatCompletionMessage) MarshalJSON() ([]byte, error)

MarshalJSON method implements the json.Marshaler interface.

func (*ChatCompletionMessage) UnmarshalJSON

func (m *ChatCompletionMessage) UnmarshalJSON(bs []byte) (err error)

UnmarshalJSON method implements the json.Unmarshaler interface.

ChatCompletionRequest represents a request structure for the chat completion API.

type ChatCompletionRequest struct {
    // Model is the model of the chat completion request.
    Model models.ChatModel `json:"model"`
    // Messages is the messages of the chat completion request.
    //
    // These act as the prompt for the model.
    Messages []ChatCompletionMessage `json:"messages"`
    // MaxTokens is the max tokens of the chat completion request.
    MaxTokens int `json:"max_tokens,omitempty"`
    // Temperature is the temperature of the chat completion
    // request.
    Temperature float32 `json:"temperature,omitempty"`
    // TopP is the top p of the chat completion request.
    TopP float32 `json:"top_p,omitempty"`
    // N is the n of the chat completion request.
    N   int `json:"n,omitempty"`
    // Stream is the stream of the chat completion request.
    Stream bool `json:"stream,omitempty"`
    // Stop is the stop of the chat completion request.
    Stop []string `json:"stop,omitempty"`
    // PresencePenalty is the presence penalty of the chat
    // completion request.
    PresencePenalty float32 `json:"presence_penalty,omitempty"`
    // ResponseFormat is the response format of the chat completion
    // request.
    ResponseFormat *ChatCompletionResponseFormat `json:"response_format,omitempty"`
    // Seed is the seed of the chat completion request.
    Seed *int `json:"seed,omitempty"`
    // FrequencyPenalty is the frequency penalty of the chat
    // completion request.
    FrequencyPenalty float32 `json:"frequency_penalty,omitempty"`
    // LogitBias must map token IDs (the ID assigned by the tokenizer,
    // as a string) to bias values, not words.
    // Incorrect: `"logit_bias":{"You": 6}`; correct: `"logit_bias":{"1639": 6}`.
    // Ref: https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias
    LogitBias map[string]int `json:"logit_bias,omitempty"`
    // LogProbs indicates whether to return log probabilities of the
    // output tokens or not. If true, returns the log probabilities
    // of each output token returned in the content of message.
    //
    // This option is currently not available on the
    // gpt-4-vision-preview model.
    LogProbs bool `json:"logprobs,omitempty"`
    // TopLogProbs is an integer between 0 and 5 specifying the
    // number of most likely tokens to return at each token
    // position, each with an associated log probability. Logprobs
    // must be set to true if this parameter is used.
    TopLogProbs int `json:"top_logprobs,omitempty"`
    // User is the user of the chat completion request.
    User string `json:"user,omitempty"`
    // Tools is the tools of the chat completion request.
    Tools []tools.Tool `json:"tools,omitempty"`
    // This can be either a string or an ToolChoice object.
    ToolChoice any `json:"tool_choice,omitempty"`
    // Options for streaming response. Only set this when you set stream: true.
    StreamOptions *StreamOptions `json:"stream_options,omitempty"`
    // Disable the default behavior of parallel tool calls by setting it: false.
    ParallelToolCalls any `json:"parallel_tool_calls,omitempty"`
    // RetryDelay is the delay between retries.
    RetryDelay time.Duration `json:"-"`
}

ChatCompletionResponse represents a response structure for chat completion API.

type ChatCompletionResponse struct {
    // ID is the id of the response.
    ID  string `json:"id"`
    // Object is the object of the response.
    Object string `json:"object"`
    // Created is the created time of the response.
    Created int64 `json:"created"`
    // Model is the model of the response.
    Model models.ChatModel `json:"model"`
    // Choices is the choices of the response.
    Choices []ChatCompletionChoice `json:"choices"`
    // Usage is the usage of the response.
    Usage Usage `json:"usage"`
    // SystemFingerprint is the system fingerprint of the response.
    SystemFingerprint string `json:"system_fingerprint"`
    // Header is the header of the response.
    http.Header
}

func (*ChatCompletionResponse) SetHeader

func (r *ChatCompletionResponse) SetHeader(h http.Header)

SetHeader sets the header of the response.

ChatCompletionResponseFormat is the chat completion response format.

type ChatCompletionResponseFormat struct {
    // Type is the type of the chat completion response format.
    Type Format `json:"type,omitempty"`
    // JSONSchema is the json schema of the chat completion response
    // format.
    JSONSchema *ChatCompletionResponseFormatJSONSchema `json:"json_schema,omitempty"`
}

ChatCompletionResponseFormatJSONSchema is the chat completion response format json schema.

type ChatCompletionResponseFormatJSONSchema struct {
    // Name is the name of the chat completion response format json
    // schema.
    //
    // it is used to further identify the schema in the response.
    Name string `json:"name"`
    // Description is the description of the chat completion
    // response format json schema.
    Description string `json:"description,omitempty"`
    // Schema is the schema of the chat completion response format
    // json schema.
    Schema schema.Schema `json:"schema"`
    // Strict determines whether to enforce the schema upon the
    // generated content.
    Strict bool `json:"strict"`
}

ChatCompletionStream is a stream of ChatCompletionStreamResponse.

type ChatCompletionStream struct {
    // contains filtered or unexported fields
}

ChatCompletionStreamChoice represents a response structure for chat completion API.

type ChatCompletionStreamChoice struct {
    // Index is the index of the choice.
    Index int `json:"index"`
    // Delta is the delta of the choice.
    Delta ChatCompletionStreamChoiceDelta `json:"delta"`
    // FinishReason is the finish reason of the choice.
    FinishReason FinishReason `json:"finish_reason"`
}

ChatCompletionStreamChoiceDelta represents a response structure for chat completion API.

type ChatCompletionStreamChoiceDelta struct {
    // Content is the content of the response.
    Content string `json:"content,omitempty"`
    // Role is the role of the creator of the completion.
    Role string `json:"role,omitempty"`
    // FunctionCall is the function call of the response.
    FunctionCall *tools.FunctionCall `json:"function_call,omitempty"`
    // ToolCalls are the tool calls of the response.
    ToolCalls []tools.ToolCall `json:"tool_calls,omitempty"`
}

ChatCompletionStreamResponse represents a response structure for chat completion API.

type ChatCompletionStreamResponse struct {
    // ID is the identifier for the chat completion stream response.
    ID  string `json:"id"`
    // Object is the object type of the chat completion stream
    // response.
    Object string `json:"object"`
    // Created is the creation time of the chat completion stream
    // response.
    Created int64 `json:"created"`
    // Model is the model used for the chat completion stream
    // response.
    Model models.ChatModel `json:"model"`
    // Choices is the choices for the chat completion stream
    // response.
    Choices []ChatCompletionStreamChoice `json:"choices"`
    // SystemFingerprint is the system fingerprint for the chat
    // completion stream response.
    SystemFingerprint string `json:"system_fingerprint"`
    // PromptAnnotations is the prompt annotations for the chat
    // completion stream response.
    PromptAnnotations []PromptAnnotation `json:"prompt_annotations,omitempty"`
    // PromptFilterResults is the prompt filter results for the chat
    // completion stream response.
    PromptFilterResults []struct {
        Index int `json:"index"`
    }   `json:"prompt_filter_results,omitempty"`
    // Usage is an optional field that will only be present when you
    // set stream_options: {"include_usage": true} in your request.
    //
    // When present, it contains a null value except for the last
    // chunk which contains the token usage statistics for the
    // entire request.
    Usage *Usage `json:"usage,omitempty"`
}

ChatMessageImageURL represents the chat message image url.

type ChatMessageImageURL struct {
    // URL is the url of the image.
    URL string `json:"url,omitempty"`
    // Detail is the detail of the image url.
    Detail ImageURLDetail `json:"detail,omitempty"`
}

ChatMessagePart represents the chat message part of a chat completion message.

type ChatMessagePart struct {
    // Text is the text of the chat message part.
    Text string `json:"text,omitempty"`
    // Type is the type of the chat message part.
    Type ChatMessagePartType `json:"type,omitempty"`
    // ImageURL is the image url of the chat message part.
    ImageURL *ChatMessageImageURL `json:"image_url,omitempty"`
}

ChatMessagePartType is the chat message part type.

type ChatMessagePartType string

type Client

Client is a Groq api client.

type Client struct {
    // contains filtered or unexported fields
}

func NewClient(groqAPIKey string, opts ...Opts) (*Client, error)

NewClient creates a new Groq client.

func (*Client) CreateChatCompletion

func (c *Client) CreateChatCompletion(ctx context.Context, request ChatCompletionRequest) (response ChatCompletionResponse, err error)

CreateChatCompletion method is an API call to create a chat completion.

Example:

func run(
        ctx context.Context,
) error {
        key := os.Getenv("GROQ_KEY")
        client, err := groq.NewClient(key)
        if err != nil {
                return err
        }
        response, err := client.CreateChatCompletion(
                ctx,
                groq.ChatCompletionRequest{
                        Model: models.ModelLlavaV157B4096Preview,
                        Messages: []groq.ChatCompletionMessage{
                                {
                                        Role: groq.ChatMessageRoleUser,
                                        MultiContent: []groq.ChatMessagePart{
                                                {
                                                        Type: groq.ChatMessagePartTypeText,
                                                        Text: "What is the contents of the image?",
                                                },
                                                {
                                                        Type: groq.ChatMessagePartTypeImageURL,
                                                        ImageURL: &groq.ChatMessageImageURL{
                                                                URL:    "https://cdnimg.webstaurantstore.com/images/products/large/87539/251494.jpg",
                                                                Detail: "auto",
                                                        },
                                                }},
                                },
                        },
                        MaxTokens: 2000,
                },
        )
        if err != nil {
                return err
        }
        fmt.Println(response.Choices[0].Message.Content)
        return nil
}

func (c *Client) CreateChatCompletionJSON(ctx context.Context, request ChatCompletionRequest, output any) (err error)

CreateChatCompletionJSON method is an API call to create a chat completion w/ object output.

Example:

// Responses is a response from the models endpoint.
type Responses []struct {
        Title string `json:"title" jsonschema:"title=Poem Title,description=Title of the poem, minLength=1, maxLength=20"`
        Text  string `json:"text" jsonschema:"title=Poem Text,description=Text of the poem, minLength=10, maxLength=200"`
}

func run(
        ctx context.Context,
) error {
        client, err := groq.NewClient(os.Getenv("GROQ_KEY"))
        if err != nil {
                return err
        }
        resp := &Responses{}
        err = client.CreateChatCompletionJSON(ctx, groq.ChatCompletionRequest{
                Model: models.ModelLlama3Groq70B8192ToolUsePreview,
                Messages: []groq.ChatCompletionMessage{
                        {
                                Role:    groq.ChatMessageRoleUser,
                                Content: "Create 5 short poems in json format with title and text.",
                        },
                },
                MaxTokens: 2000,
        }, resp)
        if err != nil {
                return err
        }

        jsValue, err := json.MarshalIndent(resp, "", "  ")
        if err != nil {
                return err
        }
        fmt.Println(string(jsValue))

        return nil
}

func (c *Client) CreateChatCompletionStream(ctx context.Context, request ChatCompletionRequest) (stream *ChatCompletionStream, err error)

CreateChatCompletionStream method is an API call to create a chat completion w/ streaming support.

If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.

Example:

func run(
        ctx context.Context,
        r io.Reader,
        w io.Writer,
) error {
        key := os.Getenv("GROQ_KEY")
        client, err := groq.NewClient(key)
        if err != nil {
                return err
        }
        for {
                err = input(ctx, client, r, w)
                if err != nil {
                        return err
                }
        }
}
func input(
        ctx context.Context,
        client *groq.Client,
        r io.Reader,
        w io.Writer,
) error {
        fmt.Println("")
        fmt.Print("->")
        reader := bufio.NewReader(r)
        writer := w
        var lines []string
        select {
        case <-ctx.Done():
                return ctx.Err()
        default:
                line, err := reader.ReadString('\n')
                if err != nil {
                        return err
                }
                if len(strings.TrimSpace(line)) == 0 {
                        break
                }
                lines = append(lines, line)
                break
        }
        history = append(history, groq.ChatCompletionMessage{
                Role:    groq.ChatMessageRoleUser,
                Content: strings.Join(lines, "\n"),
        })
        output, err := client.CreateChatCompletionStream(
                ctx,
                groq.ChatCompletionRequest{
                        Model:     models.ModelGemma29BIt,
                        Messages:  history,
                        MaxTokens: 2000,
                },
        )
        if err != nil {
                return err
        }
        fmt.Fprintln(writer, "\nai: ")
        for {
                response, err := output.Recv()
                if err != nil {
                        return err
                }
                if response.Choices[0].FinishReason == groq.ReasonStop {
                        break
                }
                fmt.Fprint(writer, response.Choices[0].Delta.Content)
        }
        return nil
}

func (*Client) CreateTranscription

func (c *Client) CreateTranscription(ctx context.Context, request AudioRequest) (AudioResponse, error)

CreateTranscription calls the transcriptions endpoint with the given request.

Returns transcribed text in the response_format specified in the request.

func (*Client) CreateTranslation

func (c *Client) CreateTranslation(ctx context.Context, request AudioRequest) (AudioResponse, error)

CreateTranslation calls the translations endpoint with the given request.

Returns the translated text in the response_format specified in the request.

func (*Client) Moderate

func (c *Client) Moderate(ctx context.Context, messages []ChatCompletionMessage, model models.ModerationModel) (response Moderation, err error)

Moderate performs a moderation API call over the given chat messages using the specified moderation model.

Endpoint is an endpoint for the groq api.

type Endpoint string

FinishReason is the finish reason.

type FinishReason string

const (
    // ReasonStop is the stop finish reason for a chat completion.
    ReasonStop FinishReason = "stop"
    // ReasonLength is the length finish reason for a chat completion.
    ReasonLength FinishReason = "length"
    // ReasonFunctionCall is the function call finish reason for a chat
    // completion.
    ReasonFunctionCall FinishReason = "function_call"
    // ReasonToolCalls is the tool calls finish reason for a chat
    // completion.
    ReasonToolCalls FinishReason = "tool_calls"
    // ReasonContentFilter is the content filter finish reason for a chat
    // completion.
    ReasonContentFilter FinishReason = "content_filter"
    // ReasonNull is the null finish reason for a chat completion.
    ReasonNull FinishReason = "null"
)

func (FinishReason) MarshalJSON

func (r FinishReason) MarshalJSON() ([]byte, error)

MarshalJSON implements the json.Marshaler interface.

type Format

Format is the format of a response.

type Format string

const (
    // FormatText is the text format. It is the default format of a
    // response.
    FormatText Format = "text"
    // FormatJSON is the JSON format. There is no support for streaming with
    // JSON format selected.
    FormatJSON Format = "json"
    // FormatSRT is the SRT format. This is a text format that is only
    // supported for the transcription API.
    FormatSRT Format = "srt"
    // FormatVTT is the VTT format. This is a text format that is only
    // supported for the transcription API.
    FormatVTT Format = "vtt"
    // FormatVerboseJSON is the verbose JSON format. This is a JSON format
    // that is only supported for the transcription API.
    FormatVerboseJSON Format = "verbose_json"
    // FormatJSONObject is the json object chat
    // completion response format type.
    FormatJSONObject Format = "json_object"
    // FormatJSONSchema is the json schema chat
    // completion response format type.
    FormatJSONSchema Format = "json_schema"
)

ImageURLDetail is the detail of the image at the URL.

type ImageURLDetail string

LogProbs is the top-level structure containing the log probability information.

type LogProbs struct {
    // Content is a list of message content tokens with log
    // probability information.
    Content []struct {
        // Token is the token of the log prob.
        Token string `json:"token"`
        // LogProb is the log prob of the log prob.
        LogProb float64 `json:"logprob"`
        // Omitting the field if it is null
        Bytes []byte `json:"bytes,omitempty"`
        // TopLogProbs is a list of the most likely tokens and
        // their log probability, at this token position. In
        // rare cases, there may be fewer than the number of
        // requested top_logprobs returned.
        TopLogProbs []TopLogProbs `json:"top_logprobs"`
    } `json:"content"`
}

Moderation represents the response of a moderation request.

type Moderation struct {
    // Categories is the categories of the result.
    Categories []moderation.HarmfulCategory `json:"categories"`
    // Flagged is the flagged status of the result.
    Flagged bool `json:"flagged"`
}

type Opts

Opts is a function that sets options for a Groq client.

type Opts func(*Client)

func WithBaseURL(baseURL string) Opts

WithBaseURL sets the base URL for the Groq client.

func WithClient(client *http.Client) Opts

WithClient sets the client for the Groq client.

func WithLogger(logger *slog.Logger) Opts

WithLogger sets the logger for the Groq client.

PromptAnnotation represents the prompt annotation.

type PromptAnnotation struct {
    PromptIndex int `json:"prompt_index,omitempty"`
}

RateLimitHeaders represents the Groq rate limit headers.

type RateLimitHeaders struct {
    // LimitRequests is the limit requests of the rate limit
    // headers.
    LimitRequests int `json:"x-ratelimit-limit-requests"`
    // LimitTokens is the limit tokens of the rate limit headers.
    LimitTokens int `json:"x-ratelimit-limit-tokens"`
    // RemainingRequests is the remaining requests of the rate
    // limit headers.
    RemainingRequests int `json:"x-ratelimit-remaining-requests"`
    // RemainingTokens is the remaining tokens of the rate limit
    // headers.
    RemainingTokens int `json:"x-ratelimit-remaining-tokens"`
    // ResetRequests is the reset requests of the rate limit
    // headers.
    ResetRequests ResetTime `json:"x-ratelimit-reset-requests"`
    // ResetTokens is the reset tokens of the rate limit headers.
    ResetTokens ResetTime `json:"x-ratelimit-reset-tokens"`
}

ResetTime is a time.Time wrapper for the rate limit reset time.

type ResetTime string

func (ResetTime) String

func (r ResetTime) String() string

String returns the string representation of the ResetTime.

func (ResetTime) Time

func (r ResetTime) Time() time.Time

Time returns the time.Time representation of the ResetTime.

type Role

Role is the role of the chat completion message.

type Role string

Segments is the segments of the response.

type Segments []struct {
    // ID is the ID of the segment.
    ID  int `json:"id"`
    // Seek is the seek of the segment.
    Seek int `json:"seek"`
    // Start is the start of the segment.
    Start float64 `json:"start"`
    // End is the end of the segment.
    End float64 `json:"end"`
    // Text is the text of the segment.
    Text string `json:"text"`
    // Tokens is the tokens of the segment.
    Tokens []int `json:"tokens"`
    // Temperature is the temperature of the segment.
    Temperature float64 `json:"temperature"`
    // AvgLogprob is the avg log prob of the segment.
    AvgLogprob float64 `json:"avg_logprob"`
    // CompressionRatio is the compression ratio of the segment.
    CompressionRatio float64 `json:"compression_ratio"`
    // NoSpeechProb is the no speech prob of the segment.
    NoSpeechProb float64 `json:"no_speech_prob"`
    // Transient is the transient of the segment.
    Transient bool `json:"transient"`
}

StreamOptions represents the stream options.

type StreamOptions struct {
    // IncludeUsage is the include usage option of the stream
    // options.
    //
    // If set, an additional chunk will be streamed before the data:
    // [DONE] message.
    // The usage field on this chunk shows the token usage
    // statistics for the entire request, and the choices field will
    // always be an empty array.
    //
    // All other chunks will also include a usage field, but with a
    // null value.
    IncludeUsage bool `json:"include_usage,omitempty"`
}

TopLogProbs represents the top log probs.

type TopLogProbs struct {
    // Token is the token of the top log probs.
    Token string `json:"token"`
    // LogProb is the log prob of the top log probs.
    LogProb float64 `json:"logprob"`
    // Bytes is the bytes of the top log probs.
    Bytes []byte `json:"bytes,omitempty"`
}

TranscriptionTimestampGranularity is the timestamp granularity for the transcription.

type TranscriptionTimestampGranularity string

const (
    // TranscriptionTimestampGranularityWord is the word timestamp
    // granularity.
    TranscriptionTimestampGranularityWord TranscriptionTimestampGranularity = "word"
    // TranscriptionTimestampGranularitySegment is the segment timestamp
    // granularity.
    TranscriptionTimestampGranularitySegment TranscriptionTimestampGranularity = "segment"
)

type Usage

Usage represents the total token usage per request to Groq.

type Usage struct {
    PromptTokens     int `json:"prompt_tokens"`
    CompletionTokens int `json:"completion_tokens"`
    TotalTokens      int `json:"total_tokens"`
}

type Words

Words is the words of the audio response.

type Words []struct {
    // Word is the textual representation of a word in the audio
    // response.
    Word string `json:"word"`
    // Start is the start of the words in seconds.
    Start float64 `json:"start"`
    // End is the end of the words in seconds.
    End float64 `json:"end"`
}

Generated by gomarkdoc