Factory Design Pattern
The Factory Design Pattern is a crucial part of the LL-Mesh platform's architecture, enabling a standardized and scalable approach to creating instances of various services. This pattern abstracts the complexity of object creation, making it easier to manage and extend the platform's capabilities.
The Factory Design Pattern is a creational design pattern that provides an interface for creating objects in a superclass but allows subclasses to alter the type of objects that will be created. This approach is particularly useful in a platform like LL-Mesh, where multiple types of services (e.g., chat models, agents, RAG services) need to be instantiated based on configuration parameters.
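To make the shape of the pattern concrete before diving into the platform code, here is a minimal, generic sketch in Python; the shape classes and registry are purely illustrative and are not part of LL-Mesh:

```python
# Generic factory sketch - illustrative names only, not platform code.
class Circle:
    def draw(self) -> str:
        return "circle"

class Square:
    def draw(self) -> str:
        return "square"

# A registry mapping a configuration string to a concrete class.
SHAPES = {'circle': Circle, 'square': Square}

def create_shape(kind: str):
    """Return a shape instance for the requested kind."""
    shape_class = SHAPES.get(kind)
    if shape_class is None:
        raise ValueError(f"Unsupported shape: {kind}")
    return shape_class()

print(create_shape('circle').draw())  # -> circle
```

The `ChatModel` factory below follows exactly this structure: a dictionary of registered classes and a single creation method that dispatches on a string key.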
The `ChatModel` class is a prime example of the Factory Design Pattern in action within Athon. It allows the creation of different chat model instances based on the configuration provided.
The project structure is organized as follows:
self_serve_platform/
└── chat/
    ├── model.py                               # The factory method that creates chat model instances
    └── models/
        ├── base.py                            # The abstract base class for chat models
        ├── langchain_chat_openai.py           # Implementation of the OpenAI chat model
        ├── langchain_azure_chat_openai.py     # Implementation of the Azure OpenAI chat model
        ├── langchain_chat_google_genai.py     # Implementation of the Google Gemini chat model
        ├── langchain_chat_anthropic.py        # Implementation of the Anthropic chat model
        ├── langchain_chat_mistralai.py        # Implementation of the Mistral chat model
        ├── langchain_chat_nvidia.py           # Implementation of the NVIDIA chat model
        └── llamaindex_openai.py               # Implementation of the OpenAI chat model with LlamaIndex
The `ChatModel` class acts as the factory, returning instances of specific chat models based on the `type` specified in the configuration.
from typing import Any, Dict, Type

# The concrete model classes (LangChainChatOpenAIModel, etc.) are imported
# from the modules under self_serve_platform/chat/models/.


class ChatModel:
    """
    A chat model class that uses a factory pattern to return
    the selected chat model.
    """

    _models: Dict[str, Type] = {
        'LangChainChatOpenAI': LangChainChatOpenAIModel,
        'LangChainAzureChatOpenAI': LangChainAzureChatOpenAIModel,
        'LangChainChatGoogleGenAI': LangChainChatGoogleGenAIModel,
        'LangChainChatAnthropic': LangChainChatAnthropicModel,
        'LangChainChatMistralAI': LangChainChatMistralAIModel,
        'LangChainChatNvidia': LangChainChatNvidiaModel,
        'LlamaIndexOpenAI': LlamaIndexOpenAIModel,
    }

    @staticmethod
    def create(config: dict) -> Any:
        """
        Return the appropriate Chat Model based on the provided configuration.

        :param config: Configuration dictionary containing the type of model.
        :return: An instance of the selected chat model.
        :raises ValueError: If 'type' is not in config or an unsupported type is provided.
        """
        model_type = config.get('type')
        if not model_type:
            raise ValueError("Configuration must include 'type'.")
        model_class = ChatModel._models.get(model_type)
        if not model_class:
            raise ValueError(f"Unsupported model type: {model_type}")
        return model_class(config)
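The factory's two failure modes can be seen directly from the code above: a missing `type` key and an unregistered `type` value each raise a `ValueError`. A quick sketch (the configuration values are illustrative):

```python
# Unknown type -> ValueError: Unsupported model type: NotARealModel
try:
    ChatModel.create({'type': 'NotARealModel', 'api_key': 'x'})
except ValueError as err:
    print(err)

# Missing type -> ValueError: Configuration must include 'type'.
try:
    ChatModel.create({'api_key': 'x'})
except ValueError as err:
    print(err)
```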
The `BaseChatModel` serves as the abstract base class that defines the common interface and shared configuration parameters for all chat models. Subclasses must implement the `get_model` and `invoke` methods.
import abc
from typing import Any, Optional

from pydantic import BaseModel, Field


class BaseChatModel(abc.ABC):
    """
    Abstract base class for chat models.
    """

    class Config(BaseModel):
        """
        Configuration for the Chat Model class.
        """
        type: str = Field(..., description="Type of the model deployment.")
        api_key: str = Field(..., description="API key for accessing the model.")
        model_name: Optional[str] = Field(None, description="Name of the model deployment.")
        temperature: Optional[float] = Field(None, description="Temperature setting for the model.")
        seed: Optional[int] = Field(None, description="Seed for model randomness.")

    class Result(BaseModel):
        """
        Result of the Chat Model invocation.
        """
        status: str = Field(default="success", description="Status of the operation, e.g., 'success' or 'failure'.")
        error_message: Optional[str] = Field(default=None, description="Detailed error message if the operation failed.")
        content: Optional[str] = Field(None, description="LLM completion content.")
        metadata: Optional[str] = Field(None, description="LLM response metadata.")
        model: Optional[Any] = Field(None, description="Instance of the Chat model.")

    @abc.abstractmethod
    def get_model(self) -> Any:
        """
        Return the LLM model instance.
        """

    @abc.abstractmethod
    def invoke(self, message) -> 'BaseChatModel.Result':
        """
        Invoke the LLM to create content.
        """
`LangChainAzureChatOpenAIModel` is a specific implementation of `BaseChatModel`. It includes additional configuration parameters required for Azure deployments.
from typing import Any, Dict

from pydantic import Field

from self_serve_platform.chat.models.base import BaseChatModel


class LangChainAzureChatOpenAIModel(BaseChatModel):
    """
    Class for LangChainAzureChatOpenAI Model.
    """

    class Config(BaseChatModel.Config):
        """
        Configuration for the Chat Model class.
        """
        azure_deployment: str = Field(..., description="Name of the deployment instance.")
        endpoint: str = Field(..., description="Endpoint for the model API.")
        api_version: str = Field(..., description="API version if applicable.")

    def __init__(self, config: Dict[str, Any]) -> None:
        """
        Initialize the LangChainAzureChatOpenAIModel with the given configuration.

        :param config: Configuration dictionary for the model.
        """
        self.config = LangChainAzureChatOpenAIModel.Config(**config)
        self.result = LangChainAzureChatOpenAIModel.Result()
        self.model = self._init_model()

    def _init_model(self):
        # Initialization logic for the model instance (elided in this excerpt)
        ...
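The `_init_model` body is elided in the excerpt above. A plausible sketch of what it might contain, assuming the `langchain-openai` integration package; the real implementation may pass different or additional parameters:

```python
from langchain_openai import AzureChatOpenAI

class LangChainAzureChatOpenAIModel(BaseChatModel):
    # ...as above...

    def _init_model(self):
        # Build the underlying LangChain client from the validated config.
        return AzureChatOpenAI(
            azure_deployment=self.config.azure_deployment,
            azure_endpoint=self.config.endpoint,
            api_version=self.config.api_version,
            api_key=self.config.api_key,
            model=self.config.model_name,
            temperature=self.config.temperature,
        )
```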
- The `ChatModel` class uses the factory pattern to determine which specific chat model to instantiate based on the `type` provided in the configuration.
- The `BaseChatModel` defines the structure and necessary methods that all chat models must implement, ensuring consistency across different implementations.
- Specific chat models like `LangChainAzureChatOpenAIModel` extend `BaseChatModel` to provide additional functionality and configuration tailored to specific use cases; a sketch of adding such a backend follows this list.
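As the bullets note, extending the platform means writing a new `BaseChatModel` subclass and registering it in the factory's `_models` dictionary, without touching `create()` itself. A toy, hypothetical sketch (`EchoChatModel` and the `'EchoChat'` key are illustrative and not part of the platform; a real integration would wrap an actual LLM client):

```python
class EchoChatModel(BaseChatModel):
    """Toy backend that echoes the prompt back - illustrative only."""

    def __init__(self, config):
        self.config = EchoChatModel.Config(**config)

    def get_model(self):
        return None  # no underlying LLM client in this toy example

    def invoke(self, message):
        return EchoChatModel.Result(content=str(message))

# Registering the class makes it reachable through the factory.
ChatModel._models['EchoChat'] = EchoChatModel
echo = ChatModel.create({'type': 'EchoChat', 'api_key': 'unused'})
print(echo.invoke("ahoy").content)  # -> ahoy
```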
Below is an example of how you might use the `ChatModel` factory to create and utilize a chat model instance.
from athon.chat import ChatModel
from langchain.schema import HumanMessage, SystemMessage

# Example configuration for the Chat Model
LLM_CONFIG = {
    'type': 'LangChainAzureChatOpenAI',
    'api_key': 'your-api-key-here',
    'azure_deployment': 'your-deployment-name',
    'endpoint': 'your-endpoint-url',
    'api_version': 'your-api-version',
    'model_name': 'your-model-name',
    'temperature': 0.7
}

# Initialize the Chat Model with the provided configuration
chat = ChatModel.create(LLM_CONFIG)

# Define the prompts
prompts = [
    SystemMessage(content="Convert the message to pirate language"),
    HumanMessage(content="Today is a sunny day and the sky is blue")
]

# Invoke the model with the prompts
result = chat.invoke(prompts)

# Handle the response
if result.status == "success":
    print(f"COMPLETION:\n{result.content}")
else:
    print(f"ERROR:\n{result.error_message}")
In this example:

- The `ChatModel.create()` method is used to create an instance of `LangChainAzureChatOpenAIModel` based on the provided configuration.
- The `invoke()` method processes the prompts, and the result is handled accordingly.
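Because every wrapper also implements `get_model()`, the underlying LangChain object remains reachable for features the `Result`-based interface does not surface, such as streaming. A brief sketch building on the `chat` instance created above (assuming the underlying model supports the standard LangChain streaming API):

```python
# Retrieve the raw LangChain chat model created by the factory.
llm = chat.get_model()

# Stream tokens directly from the underlying model.
for chunk in llm.stream(prompts):
    print(chunk.content, end="", flush=True)
```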