
Agent Services


The LL-Mesh platform's agent services offer two architectural options: Star and Snowflake.

In the Star architecture, a central Reasoning Engine powered by an LLM orchestrates various tools. This setup enables a single chatbot agent to manage and execute tasks using a suite of individual tools, providing centralized control and streamlined management.

The Snowflake architecture expands on the Star model by introducing multiple LLM-equipped agents. These agents collaborate, sharing resources and tasks, which lets the platform decompose complex operations and execute them cooperatively across agents.

[Diagram: Star and Snowflake agent architectures]
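
Concretely, the two architectures map onto the services documented below: the Reasoning Engine realizes the Star model, while the Task Force realizes the Snowflake model. For orientation, here are the two configuration shapes side by side, abridged from the full examples later on this page (API keys and the '...' fields are placeholders):

# Star: a single engine orchestrates a shared tool repository around one LLM
star_config = {
    'type': 'LangChainForOpenAI',
    'system_prompt': 'You are a helpful assistant that uses tools to answer questions.',
    'model': {'type': 'LangChainChatOpenAI', 'api_key': 'your-api-key-here', 'model_name': 'gpt-4o'},
    'memory': {'type': 'LangChainBufferMemory', 'memory_key': 'chat_history'},
    'tools': {'type': 'LangChainStructured'}
}

# Snowflake: several role-specific agents cooperate on a planned sequence of tasks
snowflake_config = {
    'type': 'CrewAIMultiAgent',
    'plan_type': 'Sequential',
    'tasks': [
        {
            'description': '...',
            'expected_output': '...',
            'agent': {'role': '...', 'goal': '...', 'backstory': '...', 'tools': ['...']}
        }
    ],
    'llm': {'type': 'LangChainChatOpenAI', 'api_key': 'your-api-key-here', 'model_name': 'gpt-4o-mini'}
}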

Agent Services handle three capabilities, each described below: the Tool Repository, the Reasoning Engine, and the Task Force.

All these services are implemented following the Factory Design Pattern. Configuration settings and details of the general service can be found in the abstract base class, while instance-specific settings and results are documented within each specific implementation file.
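
To illustrate the pattern (a generic sketch, not LL-Mesh's actual source code; all class and method names here are hypothetical): a create factory method inspects the 'type' field of the configuration and dispatches to the matching concrete class, so callers never import implementations directly.

from abc import ABC, abstractmethod

class ExampleService(ABC):
    """Abstract base: the shared configuration contract lives here."""

    _registry: dict[str, type] = {}  # maps a config 'type' to a concrete class

    @classmethod
    def register(cls, type_name: str):
        def decorator(subclass):
            cls._registry[type_name] = subclass
            return subclass
        return decorator

    @classmethod
    def create(cls, config: dict) -> "ExampleService":
        # Dispatch on the 'type' key, mirroring how ToolRepository.create(...)
        # and ReasoningEngine.create(...) are used in the examples below
        try:
            subclass = cls._registry[config['type']]
        except KeyError as exc:
            raise ValueError(f"Unknown service type: {config.get('type')}") from exc
        return subclass(config)

    @abstractmethod
    def run(self, payload):
        ...

@ExampleService.register('Demo')
class DemoService(ExampleService):
    """Instance-specific settings and results are documented in the subclass."""
    def __init__(self, config: dict):
        self.config = config
    def run(self, payload):
        return f"handled {payload!r} with {self.config['type']}"

service = ExampleService.create({'type': 'Demo'})
print(service.run("hello"))  # -> handled 'hello' with Demo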

Tool Repository

The tool repository service in LL-Mesh allows for the storage and retrieval of tools along with their associated metadata. This service is crucial for managing a collection of tools that can be used by various agents within the platform. It supports adding tools with metadata and retrieving them based on specific criteria. The service is built using the Factory Design Pattern, ensuring flexibility and ease of use.

Example: Tool Repository with LangChainStructured

Here’s an example of how you can use the LangChainStructuredToolRepository class to add tools to the repository and retrieve them using metadata filtering.

from langchain.tools import tool
from athon.agents import ToolRepository

# Example configuration for the Tool Repository
REPO_CONFIG = {
    'type': 'LangChainStructured'
}

# Initialize the Tool Repository with the provided configuration
tool_repository = ToolRepository.create(REPO_CONFIG)

# Example tool and metadata to be added to the repository
@tool
def text_summarizer(text: str) -> str:
    """A simple text summarizer function"""
    return text[:50]  # Naive stub: return the first 50 characters

metadata = {
    'category': 'NLP',
    'version': '1.0',
    'author': 'John Doe'
}

# Add the tool to the repository
add_result = tool_repository.add_tool(text_summarizer, metadata)

if add_result.status == "success":
    print("Tool added successfully.")
else:
    print(f"ERROR:\n{add_result.error_message}")

# Retrieve tools with a metadata filter
metadata_filter = {'category': 'NLP'}
get_result = tool_repository.get_tools(metadata_filter)

if get_result.status == "success":
    print(f"RETRIEVED TOOLS:\n{get_result.tools}")
else:
    print(f"ERROR:\n{get_result.error_message}")

Reasoning Engine

The reasoning engine in LL-Mesh extends the chat capabilities by orchestrating tools and managing the interaction between the LLM and various tools. It is central to creating intelligent, context-aware responses by integrating LLMs with a dynamically loaded suite of tools. This service supports running, clearing, and configuring the reasoning engine's memory and tools.

[Diagram: Reasoning Engine orchestrating tools]

Example: Reasoning Engine with LangChainForOpenAI

Here’s an example of how you can use the LangChainForOpenAIEngine class to run a reasoning engine that orchestrates tools for advanced chatbot functionality.

from athon.agents import ReasoningEngine

# Example configuration for the Reasoning Engine
ENGINE_CONFIG = {
    'type': 'LangChainForOpenAI',
    'system_prompt': 'You are a helpful assistant that uses tools to answer questions.',
    'model': {
        'type': 'LangChainChatOpenAI',
        'api_key': 'your-api-key-here',
        'model_name': 'gpt-4o'
    },
    'memory': {
        'type': 'LangChainBufferMemory',
        'memory_key': 'chat_history'
    },
    'tools': {
        'type': 'LangChainStructured'
    },
    'verbose': True
}

# Initialize the Reasoning Engine with the provided configuration
reasoning_engine = ReasoningEngine.create(ENGINE_CONFIG)

# Add tools to the Tool Repository as in the previous example

# Run the engine with an input message
input_message = "What's the latest news on climate change?"
result = reasoning_engine.run(input_message)

# Handle the response
if result.status == "success":
    print(f"COMPLETION:\n{result.completion}")
else:
    print(f"ERROR:\n{result.error_message}")

# Clear the engine's memory
clear_result = reasoning_engine.clear_memory()

if clear_result.status == "success":
    print("Memory cleared successfully.")
else:
    print(f"ERROR:\n{clear_result.error_message}")

Task Force

The Task Force Multi-Agents service in LL-Mesh allows for the orchestration of complex tasks through a network of agents. You can define a planning methodology—such as using an LLM to plan a sequence of tasks—and specify what each task needs to accomplish through prompting. Each task can be assigned to an agent, which can leverage multiple tools, and the behavior of each agent can be further refined through prompts.

Example: Task Force with CrewAIMultiAgent

Here’s an example of how you can use the CrewAIMultiAgentTaskForce class to orchestrate a series of tasks using multiple agents.

from athon.agents import TaskForce
from fake_tools import DataFetcher, SalesSummarizer, PresentationBuilder  # Placeholder imports: replace with your real tools

# Example configuration for the Task Force Multi-Agents
TASK_FORCE_CONFIG = {
    'type': 'CrewAIMultiAgent',
    'plan_type': 'Sequential',
    'tasks': [
        {
            'description': 'Analyze the recent sales data.',
            'expected_output': 'A summary report of sales trends.',
            'agent': {
                'role': 'Data Analyst',
                'goal': 'Summarize sales data',
                'backstory': 'Experienced in sales data analysis',
                'tools': ['DataFetcher', 'SalesSummarizer']
            }
        },
        {
            'description': 'Prepare a presentation based on the report.',
            'expected_output': 'A presentation deck summarizing the sales report.',
            'agent': {
                'role': 'Presentation Specialist',
                'goal': 'Create a presentation',
                'backstory': 'Expert in creating engaging presentations',
                'tools': ['PresentationBuilder']
            }
        }
    ],
    'llm': {
        'type': 'LangChainChatOpenAI',
        'api_key': 'your-api-key-here',
        'model_name': 'gpt-4o-mini'
    },
    'verbose': True,
    'memory': False
}

# Initialize the Task Force with the provided configuration
task_force = TaskForce.create(TASK_FORCE_CONFIG)

# Run the task force with an input message
input_message = "Generate a sales analysis report and prepare a presentation."
result = task_force.run(input_message)

# Handle the response
if result.status == "success":
    print(f"COMPLETION:\n{result.completion}")
else:
    print(f"ERROR:\n{result.error_message}")