From 69162bfe4214d9ad35cc9e446412f112ed6b3258 Mon Sep 17 00:00:00 2001
From: liivw <164842155+liivw@users.noreply.github.com>
Date: Fri, 15 Nov 2024 11:48:45 +0800
Subject: [PATCH] Update supported media types

---
 docs/developer-guides/building-workflows.mdx       | 4 ++--
 docs/reference-docs/ai-tasks/llm-get-document.md   | 2 +-
 docs/reference-docs/ai-tasks/llm-index-document.md | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/developer-guides/building-workflows.mdx b/docs/developer-guides/building-workflows.mdx
index 259bb5d1..d10369c4 100644
--- a/docs/developer-guides/building-workflows.mdx
+++ b/docs/developer-guides/building-workflows.mdx
@@ -71,11 +71,11 @@ In most common cases, you can make use of existing Conductor features instead of
 | Orchestrate human input in the loop | [Human](../reference-docs/operators/human) |
 | Query data from Conductor Search API or Metrics | [Query Processor](../reference-docs/system-tasks/query-processor) |
 | Send alerts to Opsgenie | [Opsgenie](../reference-docs/system-tasks/opsgenie) |
-| Retrieve text or media content from a URL | [Get Document](../reference-docs/system-tasks/opsgenie) |
+| Retrieve text or JSON content from a URL | [Get Document](../reference-docs/ai-tasks/llm-get-document) |
 | Generate text embeddings | [Generate Embeddings](../reference-docs/ai-tasks/llm-generate-embeddings) |
 | Store text embeddings in a vector database | [Store Embeddings](../reference-docs/ai-tasks/llm-store-embeddings) |
 | Generate and store text embeddings in a vector database | [Index Text](../reference-docs/ai-tasks/llm-index-text) |
-| Chunk, generate, and store text or media embeddings in a vector database | [Index Document](../reference-docs/ai-tasks/llm-index-document) |
+| Chunk, generate, and store text embeddings in a vector database | [Index Document](../reference-docs/ai-tasks/llm-index-document) |
 | Retrieve data from a vector database | [Get Embeddings](../reference-docs/ai-tasks/llm-get-embeddings) |
 | Retrieve data from a vector database based on a search query | [Search Index](../reference-docs/ai-tasks/llm-search-index) |
 | Generate text from an LLM based on a defined prompt | [Text Complete](../reference-docs/ai-tasks/llm-text-complete) |
diff --git a/docs/reference-docs/ai-tasks/llm-get-document.md b/docs/reference-docs/ai-tasks/llm-get-document.md
index f7fd584b..09b3b053 100644
--- a/docs/reference-docs/ai-tasks/llm-get-document.md
+++ b/docs/reference-docs/ai-tasks/llm-get-document.md
@@ -17,7 +17,7 @@ Configure these parameters for the LLM Get Document task.
 | Parameter | Description | Required/Optional |
 | --------- | ----------- | ----------------- |
 | inputParameters.**url** | The URL of the file to be retrieved. | Required. |
-| inputParameters.**mediaType** | The media type of the file to be retrieved. Supported media types: | Optional. |
+| inputParameters.**mediaType** | The media type of the file to be retrieved. Supported media types:<ul><li>application/pdf</li><li>text/html</li><li>text/plain</li><li>application/json</li></ul> | Optional. |
 
 ## Task configuration
diff --git a/docs/reference-docs/ai-tasks/llm-index-document.md b/docs/reference-docs/ai-tasks/llm-index-document.md
index 8961eab2..fcc2d524 100644
--- a/docs/reference-docs/ai-tasks/llm-index-document.md
+++ b/docs/reference-docs/ai-tasks/llm-index-document.md
@@ -22,7 +22,7 @@ Configure these parameters for the LLM Index Document task.
 | inputParameters.**embeddingModelProvider** | The LLM provider for generating the embeddings. <br/> <br/> **Note**: If you haven’t configured your AI/LLM provider on your Orkes console, navigate to the **Integrations** tab and configure your required provider. Refer to the documentation on [how to integrate the LLM providers with Orkes Conductor](https://orkes.io/content/category/integrations/ai-llm). | Required. |
 | inputParameters.**embeddingModel** | The embedding model provided by the selected LLM provider to generate the embeddings. | Required. |
 | inputParameters.**url** | The URL of the file to be indexed. | Required. |
-| inputParameters.**mediaType** | The media type of the file to be indexed. Supported media types: | Optional. |
+| inputParameters.**mediaType** | The media type of the file to be indexed. Supported media types:<ul><li>application/pdf</li><li>text/html</li><li>text/plain</li><li>application/json</li></ul> | Optional. |
 | inputParameters.**chunkSize** | The length of each input text segment when divided for processing by the LLM. For example, if the document contains 2,000 words and the chunk size is set to 500, the document is divided into four chunks for processing. | Optional. |
 | inputParameters.**chunkOverlap** | The overlap between adjacent chunks. For example, if the chunk overlap is specified as 100, then the first 100 words of each chunk would overlap with the last 100 words of the previous chunk. | Optional. |
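
---
Reviewer note (not part of the patch): a minimal sketch of how the `mediaType` parameter documented above would appear in an LLM Index Document task definition. The parameter names `vectorDB`, `namespace`, and `index`, the provider/model values, and the URL are illustrative assumptions, not taken from this patch:

```json
{
  "name": "index_document",
  "taskReferenceName": "index_document_ref",
  "type": "LLM_INDEX_DOCUMENT",
  "inputParameters": {
    "vectorDB": "my-vector-db",
    "namespace": "docs",
    "index": "docs-index",
    "embeddingModelProvider": "my-llm-provider",
    "embeddingModel": "my-embedding-model",
    "url": "https://example.com/whitepaper.pdf",
    "mediaType": "application/pdf",
    "chunkSize": 500,
    "chunkOverlap": 100
  }
}
```

With `chunkSize: 500` and `chunkOverlap: 100`, a 2,000-word document is split into four chunks, each sharing its first 100 words with the tail of the previous chunk, as described in the table above.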