
Commit

fix: sphinx ci
chenweize1998 authored and ASL-r committed Jul 10, 2024
1 parent 368ef27 commit 4ff55d0
Showing 12 changed files with 256 additions and 275 deletions.
1 change: 1 addition & 0 deletions .github/workflows/sphinx.yml
@@ -19,6 +19,7 @@ jobs:
pip install sphinx furo
- name: Build HTML
run: |
cd docs
make html -e
- name: Upload artifacts
uses: actions/upload-artifact@v4
64 changes: 0 additions & 64 deletions docs/source/customize/agent.rst

This file was deleted.

63 changes: 63 additions & 0 deletions docs/source/customize/client_configuration.rst
@@ -0,0 +1,63 @@
Client Configuration
#######################

When integrating a new client into the IoA platform, you must configure the client's settings to ensure seamless communication and functionality within the existing system. This process, known as client configuration, is necessary because each client may have unique requirements, data formats, and interaction protocols that must be aligned with the IoA platform's standards. Proper client configuration allows you to customize parameters such as **server**, **tool_agent**, and **comm**, ensuring that the new client can interact effectively with the other components of the platform. Before introducing these parameters, you first need to create a folder and a configuration file for the client.

* Create a folder named :code:`your_case_name` under the :code:`configs/client_configs/cases` directory for your case. For example: :code:`configs/client_configs/cases/example`

* Create a file named :code:`your_agent_name.yaml` to serve as the configuration file for the agent. Depending on the number of agents required, create a corresponding number of YAML files. For example: :code:`configs/client_configs/cases/example/bob.yaml`

The following are configuration examples for these parameters. The configuration file is divided into three sections: **server**, **tool_agent**, and **comm**.

Server
===========================
The server section sets up the basic server configuration.

.. code-block:: yaml

   server:
     port: SERVER_PORT    # e.g. 7788, the port your server listens on
     hostname: SERVER_IP  # e.g. ioa-server
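
For instance, a minimal filled-in server section, using the example values above, might look like this:

.. code-block:: yaml

   server:
     port: 7788
     hostname: ioa-server
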
|
Tool Agent
===========================
The tool_agent section defines the configuration for the tool agent itself; tool agents are the various agents integrated into the IoA platform, such as ReAct, Open Interpreter, and others. The inclusion of a tool_agent is optional and depends on the specific agents required for the given use case.

.. code-block:: yaml

   tool_agent:
     agent_type: ReAct
     agent_name: the tool agent's name
     desc: |-
       A description of the tool agent's capabilities.
     tool_config: the configuration file of the tools (e.g. tools_code_executor.yaml)
     image_name: react-agent
     container_name: the Docker container's name
     port: the port number on which the agent's Docker container will be exposed
     model: the model used by the agent (e.g. gpt-4-1106-preview)
     max_num_steps: the maximum number of steps the agent can take in its process
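
As a concrete sketch, a ReAct tool agent entry might look like the following; the agent name, container name, port, and step limit here are illustrative assumptions, not values shipped with the platform:

.. code-block:: yaml

   tool_agent:
     agent_type: ReAct
     agent_name: react_agent          # assumed name
     desc: |-
       A ReAct agent that writes and executes code to solve tasks.
     tool_config: tools_code_executor.yaml
     image_name: react-agent
     container_name: react-agent-bob  # assumed container name
     port: 7070                       # assumed port
     model: gpt-4-1106-preview
     max_num_steps: 20                # assumed step limit
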
|
Comm
==========================
The comm section configures the communication agent, which communicates and interacts with other agents and also assigns tasks to the tool_agent.

.. code-block:: yaml

   comm:
     name: the name of the communication agent
     desc: a description of the communication agent's capabilities
     type: the type of the communication agent (Thing Assistant or Human Assistant)
     support_nested_teams: whether the agent supports nested teams (true or false)
     max_team_up_attempts: the maximum number of attempts to team up with other agents
     llm:
       llm_type: the type of large language model (e.g. openai-chat)
       model: the model for the large language model, indicating the version and type of AI model used (e.g. gpt-4-1106-preview)
       temperature: controls the randomness of the language model's responses (default 0.1)
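
A filled-in comm section, assembled from the example values above, could look like this; the description, nesting flag, and attempt limit are illustrative assumptions:

.. code-block:: yaml

   comm:
     name: Bob
     desc: A Thing Assistant that coordinates code-execution tasks with other agents.
     type: Thing Assistant
     support_nested_teams: false    # assumed
     max_team_up_attempts: 5        # assumed
     llm:
       llm_type: openai-chat
       model: gpt-4-1106-preview
       temperature: 0.1
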
52 changes: 0 additions & 52 deletions docs/source/customize/docker-compose.rst

This file was deleted.

37 changes: 37 additions & 0 deletions docs/source/customize/docker-compose_setup.rst
@@ -0,0 +1,37 @@
Docker Compose Setup
#######################

Customizing the Docker Compose YAML file is essential for setting up an environment that includes the various agents' Docker containers, along with all the necessary variables for each container. This setup allows for seamless integration and orchestration of multiple agents and tools within the IoA platform, ensuring they can work together effectively and efficiently. The Docker Compose configuration simplifies the deployment process, providing a centralized way to manage dependencies, environment settings, and network configurations.

Docker Compose Configuration
=====================================
Create your case-specific :code:`your_case.yml` file in the :code:`dockerfiles/compose` directory. For example: :code:`dockerfiles/compose/IOT_Party.yml`

.. code-block:: yaml

   version: "3"
   services:
     your_service_name:  # e.g. WeizeChen
       image: the Docker image to use for this service (e.g. ioa-client:latest)
       build:
         context: ../../
         dockerfile: the Dockerfile to use for building the image (e.g. dockerfiles/client.Dockerfile)
       container_name: the name of the Docker container
       env_file:
         - .env
       volumes:
         - /var/run/docker.sock:/var/run/docker.sock
         - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/sqlite:/app/database
         - ./volumes/openai_response_log:${OPENAI_RESPONSE_LOG_PATH}
         - ../../configs/client_configs:/app/configs
       environment:
         - OPENAI_API_KEY
         - CUSTOM_CONFIG=the agent configuration file path (e.g. configs/cases/paper_writing/weizechen.yaml)
       ports:
         - maps host_port to container_port, allowing access to the service (e.g. 5051:5050)
       depends_on:
         - Server
       stdin_open: true
       tty: true
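
For reference, here is a filled-in service entry assembled from the example values above; the container name is an illustrative assumption:

.. code-block:: yaml

   version: "3"
   services:
     WeizeChen:
       image: ioa-client:latest
       build:
         context: ../../
         dockerfile: dockerfiles/client.Dockerfile
       container_name: weizechen-client  # assumed container name
       env_file:
         - .env
       volumes:
         - /var/run/docker.sock:/var/run/docker.sock
         - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/sqlite:/app/database
         - ./volumes/openai_response_log:${OPENAI_RESPONSE_LOG_PATH}
         - ../../configs/client_configs:/app/configs
       environment:
         - OPENAI_API_KEY
         - CUSTOM_CONFIG=configs/cases/paper_writing/weizechen.yaml
       ports:
         - 5051:5050
       depends_on:
         - Server
       stdin_open: true
       tty: true
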
24 changes: 0 additions & 24 deletions docs/source/customize/goal.rst

This file was deleted.

26 changes: 26 additions & 0 deletions docs/source/customize/goal_submission.rst
@@ -0,0 +1,26 @@
Goal Submission
#######################
When all agents' Docker containers are successfully configured, sending a POST request to the local server to initiate a goal is a necessary step that enables the IoA to accomplish specific goals. Goal submission serves as a critical startup mechanism for validating the IoA's functionality by setting specific tasks or objectives. By defining these goals, you ensure that the IoA and its integrated agents are working correctly and can effectively perform their intended tasks. This process is essential for verifying the system's capabilities and identifying any potential issues before deploying the IoA in a production environment. Before launching a goal, you need to create a Python script.

* Create :code:`test_your_case.py` in the :code:`scripts` directory. For example, :code:`scripts/test_paper_writing.py`

Goal
===========================
Write your task objective description in the :code:`goal` variable and send a POST request to :code:`http://127.0.0.1:5050/launch_goal`. Set :code:`team_member_names` to None if you have no specific team members, and set :code:`team_up_depth` to the depth of nested teaming you desire.
The URL :code:`http://127.0.0.1:5050/launch_goal` points to the local server endpoint that initiates a goal. The request includes a JSON payload specifying the details of the goal, such as the goal description, the maximum number of turns, and the team member names. The server at this endpoint processes the request and sets the specified goal for the IoA to accomplish.

.. code-block:: python

   import requests

   # e.g. goal = "I want to know the annual revenue of Microsoft from 2014 to
   # 2020. Please generate a figure in text format showing the trend of the
   # annual revenue, and give me an analysis report."
   goal = "task description"

   response = requests.post(
       "http://127.0.0.1:5050/launch_goal",
       json={
           "goal": goal,
           "max_turns": 20,
           # If you have no specific team members, set this to None.
           "team_member_names": ["agent_1", "agent_2"],
           # The depth of the nested team-up. Defaults to None.
           "team_up_depth": None,
           # Whether collaborative planning is enabled (optional). Defaults to False.
           "is_collaborative_planning_enabled": False,
       },
   )
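
After sending the request, you can check :code:`response.status_code` (and, if the server returns a body, :code:`response.text`) to verify that the goal was accepted.
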
74 changes: 74 additions & 0 deletions docs/source/customize/integrate_thirdparty_agent.rst
@@ -0,0 +1,74 @@
Integrate Third-party Agent
################################

|
Here is a brief guide for integrating third-party agents. If you want to integrate an agent from a third-party repository, there are mainly two things to consider:


* **Build and Expose a Docker Container**:

* **Containerization**: Package the third-party agent within a Docker container. This ensures a consistent and isolated environment for the agent to run.
* **Expose an API Interface**: Utilize FastAPI or another suitable web framework to expose a run interface externally. The interface should have the following specification:

* **run(task_desc)**: executes the task described by :code:`task_desc` from scratch and returns the result as a string.


* **Develop an Adapter for Integration**:

* **Data Format Conversion**: Write an adapter to facilitate communication between the third-party agent and IoA. This involves converting data formats to ensure compatibility. For instance, convert memory information in IoA, which uses :code:`LLMResult` from :code:`/types/llm.py`, into a format that the third-party agent can process.
* **Interface Invocation**: The adapter acts as an intermediary, invoking the API provided by the Docker container created in the first step. This ensures seamless interaction between IoA and the third-party agent.

You can review the implementation of a specific example, Open Interpreter, located at :code:`im_client/agents/open_interpreter`. A detailed explanation of this example is given in the following section.

|
Open Interpreter Integration
===============================
* **Building an HTTP service for Open Interpreter**:

* The Open Interpreter, located in the :code:`im_client/agents/open_interpreter` directory, will be dockerized. This directory includes FastAPI POST endpoints, which will be exposed as an HTTP service when started with Uvicorn. When deployed with Docker, these endpoints can be accessed externally.

* **Creating Docker for Open Interpreter**:

* Next, create a Dockerfile in the :code:`dockerfiles/tool_agents` directory. This Dockerfile ensures that tool agents like Open Interpreter can be started with Docker, preventing potential environment conflicts with IoA.

* **Building Adapter for Open Interpreter**:

* The adapter for Open Interpreter, also located in :code:`im_client/agents/open_interpreter` , facilitates data format conversion between IoA and Open Interpreter. It forwards requests to the Open Interpreter Docker container. The adapter provides a run method that converts data formats and sends a POST request to the corresponding endpoint of the Open Interpreter Docker container.

|
Open Interpreter Docker Startup
=======================================
* Environment Variable Configuration:

* In the :code:`open_instruction.yml`, set the environment variable :code:`CUSTOM_CONFIG` to specify the configuration file for the tool agent. Define the tool agent-related parameters in the file referenced by :code:`CUSTOM_CONFIG` (a sketch of such a file is given after this list). For example, the configuration file for Open Interpreter is:

.. code-block:: yaml

   CUSTOM_CONFIG=configs/cases/open_instruction/open_interpreter.yaml

* In the :code:`dockerfiles/compose/.env_template`, ensure that the necessary environment variables, such as :code:`OPENAI_API_KEY` and :code:`OPENAI_RESPONSE_LOG_PATH`, are set.
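
As a sketch, the file referenced by :code:`CUSTOM_CONFIG` could contain a tool_agent section following the schema described in the Client Configuration page; all values below are illustrative assumptions, not the actual shipped configuration:

.. code-block:: yaml

   tool_agent:
     agent_type: OpenInterpreter    # assumed type name
     agent_name: open_interpreter
     desc: |-
       Executes natural-language instructions by writing and running code.
     image_name: open_interpreter
     container_name: open-interpreter-agent  # assumed container name
     port: 7071                     # assumed port
     model: gpt-4-1106-preview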

* Building and Running the Docker Container:

* Build the Dockerfile previously created by running the following command in the terminal:

.. code-block:: bash

   docker build -f dockerfiles/tool_agents/open_interpreter.Dockerfile -t open_interpreter:latest .

* Before starting the server, make sure to comment out the autogpt section in the :code:`open_instruction.yml` file.

* Then, start the server and multiple communication agents by running:

.. code-block:: bash

   docker-compose -f dockerfiles/open_instruction.yml up
