From 4ff55d0cc2c549539931ff443d199569c0ee619f Mon Sep 17 00:00:00 2001
From: chenweize1998
Date: Wed, 10 Jul 2024 01:26:44 +0800
Subject: [PATCH] fix: sphinx ci

---
 .github/workflows/sphinx.yml                     |  1 +
 docs/source/customize/agent.rst                  | 64 ---------------
 .../source/customize/client_configuration.rst    | 63 +++++++++++++++
 docs/source/customize/docker-compose.rst         | 52 -------------
 .../source/customize/docker-compose_setup.rst    | 37 +++++++++
 docs/source/customize/goal.rst                   | 24 ------
 docs/source/customize/goal_submission.rst        | 26 +++++++
 .../customize/integrate_thirdparty_agent.rst     | 74 ++++++++++++++++++
 docs/source/customize/tool.rst                   | 50 ------------
 docs/source/customize/tool_creation.rst          | 48 ++++++++++++
 .../high_level_concepts/customize_tool_agent.rst | 78 -------------------
 docs/source/index.rst                            | 14 ++--
 12 files changed, 256 insertions(+), 275 deletions(-)
 delete mode 100644 docs/source/customize/agent.rst
 create mode 100644 docs/source/customize/client_configuration.rst
 delete mode 100644 docs/source/customize/docker-compose.rst
 create mode 100644 docs/source/customize/docker-compose_setup.rst
 delete mode 100644 docs/source/customize/goal.rst
 create mode 100644 docs/source/customize/goal_submission.rst
 create mode 100644 docs/source/customize/integrate_thirdparty_agent.rst
 delete mode 100644 docs/source/customize/tool.rst
 create mode 100644 docs/source/customize/tool_creation.rst
 delete mode 100644 docs/source/high_level_concepts/customize_tool_agent.rst

diff --git a/.github/workflows/sphinx.yml b/.github/workflows/sphinx.yml
index b82b071..83d2273 100644
--- a/.github/workflows/sphinx.yml
+++ b/.github/workflows/sphinx.yml
@@ -19,6 +19,7 @@ jobs:
         pip install sphinx furo
     - name: Build HTML
       run: |
+        cd docs
         make html -e
     - name: Upload artifacts
       uses: actions/upload-artifact@v4
diff --git a/docs/source/customize/agent.rst b/docs/source/customize/agent.rst
deleted file mode 100644
index 90decfc..0000000
--- a/docs/source/customize/agent.rst
+++ /dev/null
@@ -1,64 +0,0 @@
-#######################
-Agent
-#######################
-
-|
-
-Create a folder named your_case_name under the :code:`im_client/config/cases` directory for your cases. For example: :code:`im_client/config/cases/IOT`
-
-Create a file named :code:`your_agent_name.yaml` to serve as the configuration file for the agent, depending on the number of agents required, create the corresponding number of YAML files. For example: :code:`im_client/config/cases/IOT/Caterer.yaml`
-
-:code:`your_agent_name.yaml` is divided into four sections: **server**, **tool_agent**, **comm**, **llm**.
-
-Server
-===========================
-server: All configuration consists of the following IP Address and Port.
-
-.. code-block:: yaml
-
-    server:
-      host: http://your_server_IP_address (ex. http://IoA-server)
-      port: 7788 (setting 7788 port in your server)
-      hostname: your_server_IP_address (ex. IoA-server)
-
-|
-
-Tool Agent
-===========================
-tool agent: Represents different agents integrated into AgentVerse, such as ReAct, OpenInterpreter, etc. The tool_agent is optional and determined by the actual agents required for the case.
-
-.. code-block:: yaml
-
-    tool_agent:
-      agent_type: ReAct / OpenInterpreter
-      desc: your tool agent description
-      tool_config: your tools file path
-      container_name: docker container name
-      model: GPT version (ex. gpt-4-1106)
-
-|
-
-Comm
-==========================
-comm: The communication agent used for communicating with other agents and also for assigning tasks to the tool_agent.
-
-.. code-block:: yaml
-
-    comm:
-      name: your communication agent name
-      desc: your communication agent description
-      type: Thing Assistant / Human Assistant
-
-|
-
-LLM
-==========================
-llm: Configuration properties for the LLM.
-
-.. code-block:: yaml
-
-    llm:
-      llm_type: openai-chat
-      model: GPT version (ex. gpt-4-1106)
-      temperature: our default value is 0.1, it is optional
\ No newline at end of file
diff --git a/docs/source/customize/client_configuration.rst b/docs/source/customize/client_configuration.rst
new file mode 100644
index 0000000..120aad4
--- /dev/null
+++ b/docs/source/customize/client_configuration.rst
@@ -0,0 +1,63 @@
+Client Configuration
+#######################
+
+When integrating a new client into the IoA platform, it is essential to configure the client's settings to ensure seamless communication and functionality within the existing system. This configuration process is necessary because each client may have unique requirements, data formats, and interaction protocols that must be aligned with the IoA platform's standards. Proper client configuration allows you to customize parameters such as **server**, **tool_agent**, and **comm**, ensuring that the new client can interact effectively with the other components of the platform. Before introducing the individual parameters, create a folder and a configuration file for the client:
+
+* Create a folder named your_case_name under the :code:`configs/client_configs/cases` directory for your case. For example: :code:`configs/client_configs/cases/example`
+
+* Create a file named :code:`your_agent_name.yaml` to serve as the configuration file for the agent. Depending on the number of agents required, create the corresponding number of YAML files. For example: :code:`configs/client_configs/cases/example/bob.yaml`
+
+The configuration file is divided into three sections: **server**, **tool_agent**, and **comm**. The following are configuration examples for each section.
+
+Server
+===========================
+The server section is responsible for setting up the basic server configuration.
+
+.. code-block:: yaml
+
+    server:
+      port: SERVER_PORT (e.g. 7788)
+      hostname: SERVER_IP (e.g. ioa-server)
+
+|
+
+Tool Agent
+===========================
+The tool_agent section defines the configuration of the tool agent itself and represents the various agents integrated into the IoA platform, such as ReAct, Open Interpreter, and others. The inclusion of a tool_agent is optional and depends on the specific agents required for the given use case.
+
+.. code-block:: yaml
+
+    tool_agent:
+      agent_type: ReAct
+      agent_name: tool agent name
+      desc: |-
+        A description of the tool agent's capabilities.
+      tool_config: configuration file of tools (e.g. tools_code_executor.yaml)
+      image_name: react-agent
+      container_name: docker container name
+      port: The port number on which the agent's Docker container will be exposed.
+      model: The model used by the agent (e.g. gpt-4-1106-preview)
+      max_num_steps: The maximum number of steps the agent can take in its process.
+
+|
+
+Comm
+==========================
+The comm section configures the communication agent, which communicates and interacts with other agents and also assigns tasks to the tool_agent.
+
+.. code-block:: yaml
+
+    comm:
+      name: The name of the communication agent.
+      desc: A description of the communication agent's capabilities.
+      type: The type of the communication agent. (Thing Assistant or Human Assistant)
+      support_nested_teams: Indicates whether the agent supports nested teams. (true or false)
+      max_team_up_attempts: The maximum number of attempts to team up with other agents.
+      llm:
+        llm_type: Defines the type of the large language model (e.g. openai-chat)
+        model: Specifies the model version used (e.g. gpt-4-1106-preview)
+        temperature: Controls the randomness of the language model's responses (default value is 0.1)
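+
+Here is an end-to-end sketch of how the three sections might fit together in a single client file. It is illustrative only: the agent name, ports, container names, and file paths below are assumptions for this example, not required values.
+
+.. code-block:: yaml
+
+    server:
+      port: 7788
+      hostname: ioa-server
+
+    tool_agent:
+      agent_type: ReAct
+      agent_name: react_agent
+      desc: |-
+        An agent that can write and execute Python code.
+      tool_config: tools_code_executor.yaml
+      image_name: react-agent
+      container_name: react-agent-bob  # hypothetical container name
+      port: 7070  # hypothetical exposed port
+      model: gpt-4-1106-preview
+      max_num_steps: 20
+
+    comm:
+      name: Bob
+      desc: A communication agent that can team up with others to solve coding tasks.
+      type: Thing Assistant
+      support_nested_teams: false
+      max_team_up_attempts: 5
+      llm:
+        llm_type: openai-chat
+        model: gpt-4-1106-preview
+        temperature: 0.1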
\ No newline at end of file
diff --git a/docs/source/customize/docker-compose.rst b/docs/source/customize/docker-compose.rst
deleted file mode 100644
index 0200ede..0000000
--- a/docs/source/customize/docker-compose.rst
+++ /dev/null
@@ -1,52 +0,0 @@
-#######################
-docker-compose
-#######################
-
-|
-
-Introduce your OpenAI API Key in the :code:`.env` file under the :code:`dockerfiles/compose` directory:
-
-.. code-block:: bash
-
-    OPENAI_API_KEY="your_openai_api_key"
-
-.. note::
-
-    The environment variables specified in the :code:`.env` file will be overridden by system environment variables. Please ensure the system environment variables are set correctly.
-
-
-
-Docker Compose
-=====================================
-Create your case-specific :code:`your_case.yml` file in the :code:`dockerfiles/compose` directory. For example: :code:`dockerfiles/compose/IOT_Party.yml`
-
-.. code-block:: yaml
-
-    version: "3"
-
-    service:
-      Name: (ex. Cater)
-      image: your_needed_image (ex. IoA-agent:latest)
-      build:
-        context: ../../
-        dockerfile: your_docker_file (ex. dockerfiles/agent.Dockerfile)
-        args: (not needed if you already have direct internet access)
-          http_proxy: http://172.27.16.1:7890
-          https_proxy: http://172.27.16.1:7890
-      container_name: your_container_name
-      env_file:
-        - .env
-      volumes:
-        - /var/run/docker.sock:/var/run/docker.sock
-      volumes:
-        - OPENAI_API_KEY
-        - OPENAI_BASE_URL (remove if you do not need this parameter)
-        - CUSTOM_CONFIG=your_agent.yaml path (ex. agentverse/config/cases/IOT/Caterer.yaml)
-      ports:
-        - your_host_port:your_container_port (ex. 5051:5050)
-      depends_on:
-        - Server
-        - your_needed_server (ex. IOT-Server)
-      stdin_open: true
-      tty: true
-
diff --git a/docs/source/customize/docker-compose_setup.rst b/docs/source/customize/docker-compose_setup.rst
new file mode 100644
index 0000000..de617af
--- /dev/null
+++ b/docs/source/customize/docker-compose_setup.rst
@@ -0,0 +1,37 @@
+Docker Compose Setup
+#######################
+
+Customizing the Docker Compose YAML file is essential for setting up an environment that includes the various agents' Docker containers, along with all the variables each container needs. This setup allows seamless integration and orchestration of multiple agents and tools within the IoA platform, ensuring they can work together effectively and efficiently. The Docker Compose configuration also simplifies deployment, providing a centralized way to manage dependencies, environment settings, and network configuration.
+
+Docker Compose Configuration
+=====================================
+Create your case-specific :code:`your_case.yml` file in the :code:`dockerfiles/compose` directory. For example: :code:`dockerfiles/compose/IOT_Party.yml`
+
+.. code-block:: yaml
+
+    version: "3"
+
+    services:
+      your_service_name: (e.g. weizechen)
+        image: Specifies the Docker image to use for this service (e.g. ioa-client:latest)
+        build:
+          context: ../../
+          dockerfile: Specifies the Dockerfile to use for building the image (e.g. dockerfiles/client.Dockerfile)
+        container_name: The name of the Docker container
+        env_file:
+          - .env
+        volumes:
+          - /var/run/docker.sock:/var/run/docker.sock
+          - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/sqlite:/app/database
+          - ./volumes/openai_response_log:${OPENAI_RESPONSE_LOG_PATH}
+          - ../../configs/client_configs:/app/configs
+        environment:
+          - OPENAI_API_KEY
+          - CUSTOM_CONFIG=the agent configuration file path (e.g. configs/cases/paper_writing/weizechen.yaml)
+        ports:
+          - Maps a host port to a container port, allowing access to the service. (e.g. 5051:5050)
+        depends_on:
+          - Server
+        stdin_open: true
+        tty: true
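+
+For reference, here is a filled-in sketch of what such a service entry might look like. The service name, container name, port mapping, and config path are illustrative assumptions; adapt them to your own case.
+
+.. code-block:: yaml
+
+    version: "3"
+
+    services:
+      weizechen:
+        image: ioa-client:latest
+        build:
+          context: ../../
+          dockerfile: dockerfiles/client.Dockerfile
+        container_name: ioa-client-weizechen  # hypothetical container name
+        env_file:
+          - .env
+        volumes:
+          - /var/run/docker.sock:/var/run/docker.sock
+          - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/sqlite:/app/database
+          - ./volumes/openai_response_log:${OPENAI_RESPONSE_LOG_PATH}
+          - ../../configs/client_configs:/app/configs
+        environment:
+          - OPENAI_API_KEY
+          - CUSTOM_CONFIG=configs/cases/paper_writing/weizechen.yaml
+        ports:
+          - "5051:5050"  # host:container, chosen for this example
+        depends_on:
+          - Server
+        stdin_open: true
+        tty: true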
diff --git a/docs/source/customize/goal.rst b/docs/source/customize/goal.rst
deleted file mode 100644
index 993bcf7..0000000
--- a/docs/source/customize/goal.rst
+++ /dev/null
@@ -1,24 +0,0 @@
-#######################
-Goal
-#######################
-
-|
-
-Create :code:`tests` in the :code:`test_your_case.py` directory. For example, :code:`tests/test_IOT_real.py`
-
-Goal
-===========================
-Complete your task objective description in the goal variable, and send a POST request to the :code:`url: "http://127.0.0.1:5050/launch_goal"` ,set :code:`team_member_names` to None, and :code:`team_up_depth` to the depth of nested teaming you desire.
-
-.. code-block:: python
-
-    import requests
-    goal = """ your task goal """ (ex. """ Complete preparations for a Halloween themed party. The following is the list of guests, RanLi (vegetarian), WeiZe (fitness enthusiast), YiTong (gluten-free), QianChen (kosher), Chengyang (halal). """)
-    response = requests.post(
-        "http://127.0.0.1:5050/launch_goal",
-        json={
-            "goal": goal
-            "team_member_names": None
-            "team_up_depth": 1
-        },
-    )
\ No newline at end of file
diff --git a/docs/source/customize/goal_submission.rst b/docs/source/customize/goal_submission.rst
new file mode 100644
index 0000000..8743f67
--- /dev/null
+++ b/docs/source/customize/goal_submission.rst
@@ -0,0 +1,26 @@
+Goal Submission
+#######################
+Once all agents' Docker containers are successfully configured and running, you initiate a goal by sending a POST request to the local server. Goal submission is the startup mechanism that sets a concrete task or objective for IoA to accomplish, and it also serves to validate the system: by defining a goal, you verify that IoA and its integrated agents work correctly and can perform their intended tasks. This step is essential for identifying potential issues before deploying IoA in a production environment. Before launching a goal, create a Python script:
+
+* Create :code:`test_your_case.py` in the :code:`scripts` directory. For example, :code:`scripts/test_paper_writing.py`
+
+Goal
+===========================
+Describe your task objective in the goal variable, then send a POST request to :code:`http://127.0.0.1:5050/launch_goal`. The request carries a JSON payload specifying the details of the goal, such as the goal description, the maximum number of turns, and the team member names. Set :code:`team_member_names` to None if you have no specific team members in mind, and set :code:`team_up_depth` to the depth of nested teaming you desire. The server at this endpoint processes the request and sets the specified goal for IoA to accomplish.
+
+.. code-block:: python
+
+    import requests
+
+    # Describe the task you want IoA to accomplish, e.g.:
+    # "I want to know the annual revenue of Microsoft from 2014 to 2020.
+    #  Please generate a figure in text format showing the trend of the
+    #  annual revenue, and give me an analysis report."
+    goal = "task description"
+
+    response = requests.post(
+        "http://127.0.0.1:5050/launch_goal",
+        json={
+            "goal": goal,
+            "max_turns": 20,
+            # If you have no specific team members, set this to None.
+            "team_member_names": ["agent_1", "agent_2"],
+            # The depth of the nested team-up. Defaults to None.
+            "team_up_depth": None,
+            # Whether collaborative planning is enabled (bool, optional).
+            # Defaults to False.
+            "is_collaborative_planning_enabled": False,
+        },
+    )
\ No newline at end of file
diff --git a/docs/source/customize/integrate_thirdparty_agent.rst b/docs/source/customize/integrate_thirdparty_agent.rst
new file mode 100644
index 0000000..87f65b2
--- /dev/null
+++ b/docs/source/customize/integrate_thirdparty_agent.rst
@@ -0,0 +1,74 @@
+Integrate Third-party Agent
+################################
+
+|
+
+Here is a brief guide for integrating third-party agents. If you want to integrate an agent from a third-party repository, there are mainly two things to consider:
+
+
+* **Build and Expose a Docker Container**:
+
+  * **Containerization**: Package the third-party agent within a Docker container. This ensures a consistent and isolated environment for the agent to run.
+  * **Expose an API Interface**: Use FastAPI or another suitable web framework to expose a run interface externally (a sketch of such a service follows this list). The interface should have the following specification:
+
+    * **run(task_desc)**: Executes the task_desc task from scratch and returns the result as a string.
+
+
+* **Develop an Adapter for Integration**:
+
+  * **Data Format Conversion**: Write an adapter to facilitate communication between the third-party agent and IoA. This involves converting data formats to ensure compatibility. For instance, convert memory information in IoA, which uses :code:`LLMResult` from :code:`/types/llm.py`, into a format that the third-party agent can process.
+  * **Interface Invocation**: The adapter acts as an intermediary, invoking the API provided by the Docker container created in the first step. This ensures seamless interaction between IoA and the third-party agent.
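+
+For illustration, a minimal run service of the kind described in the first step might look like the sketch below. This is not the actual interface of any specific agent: the endpoint path, request model, and :code:`run_agent` helper are assumptions made for this example.
+
+.. code-block:: python
+
+    from fastapi import FastAPI
+    from pydantic import BaseModel
+
+    app = FastAPI()
+
+    class TaskRequest(BaseModel):
+        task_desc: str
+
+    def run_agent(task_desc: str) -> str:
+        # Placeholder for the wrapped third-party agent's entry point.
+        return f"Result for: {task_desc}"
+
+    @app.post("/run")
+    def run(request: TaskRequest) -> dict:
+        # Execute the task from scratch and return the result as a string.
+        return {"result": run_agent(request.task_desc)}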
+
+You can review the implemented logic for a specific example, Open Interpreter, located at :code:`im_client/agents/open_interpreter`. A detailed explanation of this example is given in the following section.
+
+|
+
+Open Interpreter Integration
+===============================
+* **Building an HTTP service for Open Interpreter**:
+
+  * The Open Interpreter integration, located in the :code:`im_client/agents/open_interpreter` directory, will be dockerized. This directory includes FastAPI POST endpoints, which are exposed as an HTTP service when started with Uvicorn. When deployed with Docker, these endpoints can be accessed externally.
+
+* **Creating Docker for Open Interpreter**:
+
+  * Next, create a Dockerfile in the :code:`dockerfiles/tool_agents` directory. This Dockerfile ensures that tool agents like Open Interpreter can be started with Docker, preventing potential environment conflicts with IoA.
+
+* **Building Adapter for Open Interpreter**:
+
+  * The adapter for Open Interpreter, also located in :code:`im_client/agents/open_interpreter`, handles data format conversion between IoA and Open Interpreter and forwards requests to the Open Interpreter Docker container. It provides a run method that converts data formats and sends a POST request to the corresponding endpoint of the Open Interpreter Docker container (see the sketch after this list).
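+
+A rough sketch of such an adapter is shown below. The endpoint URL, payload keys, and response format are assumptions chosen to match the service sketch above, not the actual Open Interpreter adapter code.
+
+.. code-block:: python
+
+    import requests
+
+    class OpenInterpreterAdapter:
+        def __init__(self, base_url: str = "http://open-interpreter:7070"):
+            # Hypothetical address of the agent's Docker container.
+            self.base_url = base_url
+
+        def run(self, task_desc: str) -> str:
+            # Convert IoA-side data into a plain task description, then
+            # forward it to the agent's container and return the result.
+            response = requests.post(
+                f"{self.base_url}/run",
+                json={"task_desc": task_desc},
+                timeout=600,
+            )
+            response.raise_for_status()
+            return response.json()["result"]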
+
+|
+
+Open Interpreter Docker Startup
+=======================================
+* Environment Variable Configuration:
+
+  * In :code:`open_instruction.yml`, set the environment variable :code:`CUSTOM_CONFIG` to specify the configuration file for the tool agent, and define the tool agent-related parameters in the file referenced by :code:`CUSTOM_CONFIG`. For example, the configuration file for Open Interpreter is:
+
+  .. code-block:: yaml
+
+      CUSTOM_CONFIG=configs/cases/open_instruction/open_interpreter.yaml
+
+  * In :code:`dockerfiles/compose/.env_template`, ensure that the necessary environment variables, such as :code:`OPENAI_API_KEY` and :code:`OPENAI_RESPONSE_LOG_PATH`, are set up.
+
+* Building and Running the Docker Container:
+
+  * Build the Dockerfile previously created by running the following command in the terminal:
+
+  .. code-block:: bash
+
+      docker build -f dockerfiles/tool_agents/open_interpreter.Dockerfile -t open_interpreter:latest .
+
+  * Before starting the server, be sure to comment out the autogpt section in the :code:`open_instruction.yml` file.
+
+  * Then, start the server and multiple communication agents by running:
+
+  .. code-block:: bash
+
+      docker-compose -f dockerfiles/open_instruction.yml up
diff --git a/docs/source/customize/tool.rst b/docs/source/customize/tool.rst
deleted file mode 100644
index 859273d..0000000
--- a/docs/source/customize/tool.rst
+++ /dev/null
@@ -1,50 +0,0 @@
-#######################
-Tool
-#######################
-
-|
-
-Create and complete the corresponding tool's Python implementation in :code:`im_client/agents/tools` . For example, :code:`im_client/agents/tools/IOT_tools.py`
-
-Create a folder named your_tools_name under the :code:`im_client/agents/react` directory. For example, :code:`im_client/agents/react/tools_IOT`
-
-Within the :code:`your_tools_name` folder, create a file named :code:`your_tools_name.yaml` to serve as the configuration file for calling the tool by the tool agent. The format of all tools within the YAML file should adhere to the OpenAI function call format. For example, :code:`tools_IOT/Tools_Drinker.yaml`. Here is an example:
-
-Tool with required parameters
-=====================================
-
-.. code-block:: yaml
-
-    - function:
-        description: your function description
-        name:function name
-        parameters:
-          properties:
-            parameters_1:
-              description: your parameters_1 description
-              type: string / number / boolean
-              enum: ["It's necessary if your parameter is set by Literal type OR specified parameter"]
-            parameters_2: (It's necessary if there are more than 1 parameter in your function)
-              description: your parameters_1 description
-              type: string / number / boolean
-          required:
-            - parameters_1
-            - parameters_2
-          type: object
-      type: function
-
-|
-
-Tool without required parameters
-=========================================
-
-.. code-block:: yaml
-
-    - function:
-        description: your function description
-        name: function name
-        parameters:
-          properties: {}
-          required: []
-          type: object
-      type: function
diff --git a/docs/source/customize/tool_creation.rst b/docs/source/customize/tool_creation.rst
new file mode 100644
index 0000000..ccb1751
--- /dev/null
+++ b/docs/source/customize/tool_creation.rst
@@ -0,0 +1,48 @@
+Tool Creation
+#######################
+Tool creation is necessary when you do not have your own agent but want to provide a custom tool that a ReAct agent or another existing agent can utilize to solve problems. This need arises when you have specialized tools or functionalities that can enhance an agent's capabilities without developing a full-fledged new agent. By integrating these tools, the ReAct agent can leverage them to perform specific tasks, extending its problem-solving abilities and making it more versatile and effective across a broader range of scenarios. Before introducing the tool configuration, create a YAML file and the corresponding tool's Python file:
+
+* Create and complete the corresponding tool's Python implementation in :code:`im_client/agents/tools`. For example, :code:`im_client/agents/tools/code_executor.py`
+
+* Create a file named :code:`tools_name.yaml` to serve as the configuration file for calling the tool by the tool agent. The format of all tools within the YAML file should adhere to the OpenAI function call format. For example, :code:`im_client/agents/react/tools_code_executor.yaml`.
+
+Here are examples of the YAML configuration:
+
+Tool with required parameters
+=====================================
+
+.. code-block:: yaml
+
+    - function:
+        description: your function description
+        name: function name
+        parameters:
+          properties:
+            parameters_1:
+              description: your parameters_1 description
+              type: string / number / boolean
+              enum: ["It's necessary if your parameter is set by Literal type OR specified parameter"]
+            parameters_2: (It's necessary if there are more than 1 parameter in your function)
+              description: your parameters_2 description
+              type: string / number / boolean
+          required:
+            - parameters_1
+            - parameters_2
+          type: object
+      type: function
+
+|
+
+Tool without required parameters
+=========================================
+
+.. code-block:: yaml
+
+    - function:
+        description: your function description
+        name: function name
+        parameters:
+          properties: {}
+          required: []
+          type: object
+      type: function
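+
+As a concrete illustration, a configuration for a hypothetical :code:`execute_python` tool (the name and parameters are invented for this example) could look like this:
+
+.. code-block:: yaml
+
+    - function:
+        description: Execute a snippet of Python code and return its standard output.
+        name: execute_python
+        parameters:
+          properties:
+            code:
+              description: The Python code to execute.
+              type: string
+            timeout:
+              description: Maximum execution time in seconds.
+              type: number
+          required:
+            - code
+          type: object
+      type: function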
diff --git a/docs/source/high_level_concepts/customize_tool_agent.rst b/docs/source/high_level_concepts/customize_tool_agent.rst
deleted file mode 100644
index 8e5c01f..0000000
--- a/docs/source/high_level_concepts/customize_tool_agent.rst
+++ /dev/null
@@ -1,78 +0,0 @@
-################################
-Integrate Third-party Agent
-################################
-
-|
-
-Here is a brief guide for integrating third-party Agents. If you want to integrate an agent from a third-party repository, there are mainly two things to consider:
-
-
-* Build a Docker container for the third-party agent and expose a **run** interface externally through FastAPI or another web framework.
-
-  * **run(task_desc)**: Executes the task_desc task from scratch and returns the result as a string.
-
-  |
-
-* Write an adapter to connect the third-party agent with AgentVerse. Essentially, this involves converting data formats, such as converting memory information in AgentVerse, which uses LLMResult from common/types/llm.py, into a format accepted by the third-party agent and invoking the interface provided by the Docker container created in the first step. The adapter serves as an intermediary layer in AgentVerse for interacting with the third-party agent.
-
-|
-
-=============================
-Openinterpreter Example
-=============================
-* **Building an HTTP service for Open Interpreter**: First, :code:`im_client/agents/open_interpreter` created in the im_client/agents/open_interpreter directory will be dockerized. This file includes some FastAPI post endpoints, which will be exposed as an HTTP service when started with uvicorn. When started with Docker, these endpoints can be requested externally.
-
-|
-
-* **Creating Docker for Open Interpreter**: Next, we need to create a Dockerfile located in :code:`dockerfiles/tool_agents` . This allows tool agents like Open Interpreter to be started with Docker, avoiding potential environment conflicts with AgentVerse.
-
-|
-
-* **Building Adapter for Open Interpreter**: :code:`im_client/agents/open_interpreter` created in im_client/agents/open_interpreter is an adapter for Open Interpreter. It builds the conversion between AgentVerse and Open Interpreter data formats and forwards the request to the Open Interpreter's Docker container. The adapter also provides a run method, which performs data format conversion and calls the corresponding endpoint of the Open Interpreter Docker container via a POST request.
-
-|
-
-Docker Startup
-==========================
-* In the :code:`docker-compose.yml` , set up the environment variable CUSTOM_CONFIG for a tool agent's configuration within an agent's configuration, and define tool agent-related parameters in the file specified by CUSTOM_CONFIG. :code:`CUSTOM_CONFIG`.
-
-|
-
-* Build the Dockerfile you wrote earlier in the terminal, for example, :code:`docker build -f dockerfiles/tool_agents/open_interpreter.Dockerfile -t open_interpreter:latest .`. Then run :code:`docker-compose up --build` to start the server and multiple communication agents.
-
-.. Logic
-.. ===============
-.. Currently, there are three phases:
-
-.. 1. **Team Up** phase: In this phase, the LLM in the communication layer receives the user's goal and decides based on the agent_contact:
-
-.. |
-
-.. * If there's a suitable agent, it sends a team-up request to the server, then waits for the server to return a group chat ID.
-
-.. |
-
-.. * If no suitable agent is found, it sends an agent search request to the server, specifying characteristics of agents that could collaborate, then waits for the server to return a list of relevant agents and finally sends a team-up request.
-
-.. |
-
-.. 2. **Coordination** Phase: In this phase, the communication agent can send messages to the newly created group chat, specifying the next speaker. There are two protocols (TBD):
-
-.. |
-
-.. * **discussion protocol** : For discussing task details, including objectives, details, and division of labor.
-
-.. |
-
-.. * **vote protocol** : When an agent proposes a plan for what to do next, this protocol is invoked. If invoked, it enters a voting phase where other agents vote on the proposal. If all agree, the process moves to the next phase; otherwise, it returns to discussion.
-
-.. |
-
-.. 3. **Execution** Phase: In this phase, the communication agent assigns tasks to its tool agent according to the plan discussed.
-   Meanwhile, it periodically checks the tool agent's memory to determine if there is sufficient information that needs to be synchronized with other agents and sends the message to the group chat.
diff --git a/docs/source/index.rst b/docs/source/index.rst
index dec221f..9328fdc 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -26,8 +26,6 @@
    high_level_concepts/mechanism
 
-   high_level_concepts/customize_tool_agent
-
 |
@@ -53,15 +51,17 @@
 .. toctree::
    :maxdepth: 2
-   :caption: Customize AGENT
+   :caption: Customize Agent
+
+   customize/integrate_thirdparty_agent
 
-   customize/agent
+   customize/client_configuration
 
-   customize/tool
+   customize/tool_creation
 
-   customize/goal
+   customize/docker-compose_setup
 
-   customize/docker-compose
+   customize/goal_submission
 
 |