# LLexeM: Design and Execute Cognitive Processes for LLMs and Agentic Systems Using Natural Language and Markdown Syntax
LLexeM is a next-generation framework designed to revolutionize interaction with Large Language Models (LLMs) and agentic systems. It enables programming of complex, self-replicating systems and dynamic task processes using plain text and natural language.

With LLexeM, managing hierarchical knowledge structures as context for LLMs becomes effortless, making it easy to create intricate execution flows. The custom Markdown-based syntax streamlines the integration of LLMs into workflows, empowering these models to update and control their own context, knowledge, and execution in real time.

The result is smarter, self-replicating systems that are easily monitored by humans and that continuously adapt and evolve.
## Features

- Intuitive Control: Manage LLMs using natural language, minimizing the need for advanced programming skills.
- Markdown-Based Workflow: Leverage a custom markdown syntax to define knowledge contexts and execution steps with precision.
- Real-Time Context Adaptation: Dynamically modify and optimize the LLM's accessible context during runtime.
- Fine-Grained Knowledge Management: Exercise precise control over the LLM's visible context, allowing for targeted information access.
- Hierarchical Process Architecture: Construct complex, multi-tiered processes with unlimited nesting capabilities.
- Distributed Task Execution: Orchestrate parallel task distribution across multiple independent agents using LLexeM's syntax.
- Comprehensive Git-based Audit Trail: Maintain full transparency and observability with Git-based change tracking, enabling temporal audits and replays.
## Installation

Prerequisites:

- Python 3.x
- pip
- Clone the repository:

  ```bash
  git clone <repository_url>
  cd <repository_directory>
  ```
- Create a virtual environment:

  ```bash
  python3 -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Set up environment variables: ensure your OpenAI API key is set:

  ```bash
  export OPENAI_API_KEY='your-openai-api-key'
  ```
Note: Additional providers will be added in future releases, including provider and model selection for each call using our custom syntax.
## Usage

You can run the script from the command line with various arguments. Here is the general format:
```bash
python llexem.py input_file.md [--task_file task_file.md] [--api_key your_api_key] [--operation your_operation] [--param_input_user_request your_part_path]
```
- `input_file.md` (required): Path to the input markdown file.
- `--task_file task_file.md` (optional): Path to the task markdown file. If not provided, the script will only parse and print the structure.
- `--api_key your_api_key` (optional): OpenAI API key. If not provided, the `OPENAI_API_KEY` environment variable is used.
- `--operation your_operation` (optional): Default operation to perform. Defaults to `append`.
- `--param_input_user_request your_part_path` (optional): Part path for `ParamInput-UserRequest`. Defaults to `ParamInput-UserRequest`.
Let's run a basic test:

```bash
python llexem.py tests/Basic_Tests/main-test-0002-shell.md
```
Original `main-test-0002-shell.md` file:

```markdown
@run macos-bash-assistant.md("get current user location using curl to some geo ip service") => target-block

# Here is block that would replace by output {id=target-block}
Some text on level 1, 1
Some text on level 1, 2

@run macos-bash-assistant.md("what is date and time now?")
@run macos-bash-assistant.md("get current user weather using curl")
```
As you can see, we call `macos-bash-assistant.md` with different tasks for an agent that generates shell commands (using an LLM) to accomplish each task, runs them, summarizes the results, and returns them to the main context. After all calls are executed, you can find the created `.ctx` context files, which store the execution results of each context. Let's compare `main-test-0002-shell.md` and `main-test-0002-shell.ctx`:
By default, all results are inserted right after the corresponding `@run` operation. The exception is the first call, which uses the `=> target-block` syntax: this tells the interpreter to replace the block marked `{id=target-block}` with the result.
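For illustration, here is a hypothetical before/after of that replacement (the actual agent output will differ from run to run):

```markdown
// Before execution (in main-test-0002-shell.md):
# Here is block that would replace by output {id=target-block}
Some text on level 1, 1
Some text on level 1, 2

// After execution (in main-test-0002-shell.ctx), the whole block has been
// replaced by the returned result, e.g.:
# Location Lookup Summary
The geo IP service reports the current location as ...
```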
Now let's look at the last executed context for `macos-bash-assistant.md` (the file named `macos-bash-assistant.ctx`):
A few things happen here:

- Our task is appended as a markdown block to the agent's context.
- The task is referenced in the context so the LLM can generate the proper `@shell` call.
- After generation, the `@shell` call is executed by the interpreter.
- The results are summarized by a call to the LLM for convenience.
- The summary is returned to the main file's execution context via `@return`.
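To make those steps concrete, a minimal shell-assistant script might be structured roughly like this. This is a hypothetical sketch; the actual `macos-bash-assistant.md` shipped with the repo may differ:

```markdown
// At execution time, the caller's task has been prepended as the
// head block {id=InputParameters}.

# Role
You are a macOS shell assistant. Produce a single @shell call that
accomplishes the task given in the input parameters.

// The LLM sees the context above and generates the @shell call,
// which the interpreter then executes.
@llm ('Generate the @shell call for the task now', # Generated Command {id=Generated-Command})

// Summarize the captured command output for the caller.
@llm ('Summarize the command results above', # Summary {id=Summary})

@return Summary
```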
Previous execution states of `macos-bash-assistant.md` for the geo IP task can be accessed directly from the Git repo. (A more convenient, easier-to-access method will be implemented in future releases; the roadmap is TBD, so subscribe and follow to keep track of upcoming releases.)
More examples and tutorials will be added soon!
Run the script with an input file, task file, and API key:

```bash
python llexem.py input_file.md --task_file task_file.md --api_key your_api_key
```

Run the script with an input file, task file, API key, operation, and a specific part path for `ParamInput-UserRequest`:

```bash
python llexem.py input_file.md --task_file task_file.md --api_key your_api_key --operation your_operation --param_input_user_request your_part_path
```

You can set the `OPENAI_API_KEY` environment variable to avoid passing the API key every time:

```bash
export OPENAI_API_KEY="your_api_key"
```
## How It Works

LLexeM enables the definition and execution of complex workflows using a markdown-based syntax. It utilizes an Abstract Syntax Tree (AST) to parse and execute tasks, supporting operations such as importing other files, running shell commands, and interacting with Large Language Models (LLMs) like OpenAI's GPT (more to come soon!).
- Nodes and AST: The document is parsed into a hierarchical structure of nodes representing headings and operations (see the sketch after this list).
- Operations and Parsers: The various operations (`@import`, `@run`, `@llm`, `@shell`, `@goto`, `@return`) are parsed and executed.
- Flow Control and Execution: Operations and dynamic context handling direct the flow of execution.
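As a minimal (hypothetical) illustration, a document like the one below parses into a root heading node, a nested child heading node, and an operation node attached where it appears:

```markdown
# Research notes {id=notes}
Some context the LLM can see.

## Open questions {id=questions}
What still needs to be verified?

@llm ('Summarize the open questions', # Summary)
```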
### Operation Structure

Each operation follows a specific structure:

```markdown
@operation source_path (parameter) operand target_path
```
- `@operation`: Type of operation.
- `source_path`: Path to the source block or file.
- `parameter`: Optional parameters for the operation.
- `operand`: Action to perform (`=>` for replace, `+>` for append, `.>` for prepend).
- `target_path`: Path to the target block.
The operands are:

- `=>`: Replace the target block with the result.
- `+>`: Append the result to the target block.
- `.>`: Prepend the result to the target block.
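For example, assuming a hypothetical `notes.md` file containing a `summary` block, and a target block marked `{id=report}` in the current context, the three operands behave as follows:

```markdown
// Replace the {id=report} block with the imported content:
@import notes.md/summary => report

// Append the imported content to the {id=report} block:
@import notes.md/summary +> report

// Prepend the imported content to the {id=report} block:
@import notes.md/summary .> report
```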
### @import

The `@import` operation loads and parses content from an external markdown file or block, integrating it into the current context's Abstract Syntax Tree (AST). If the operand and target block are omitted, the default action is to append all content immediately after the operation in the current execution context's AST.
Example:

```markdown
@import /path/to/source/file.md
```

You can import a specific block by using the block ID after the filename:

```markdown
@import agents/tools.md/web-access-tools
```

To address specific nested blocks:

```markdown
@import agents/tools.md/Results-Validation/goals
```

To get all nested blocks:

```markdown
@import agents/tools.md/Results-Validation/*
```

Specify where and how the results should be applied to the current context:

```markdown
@import agents/tools.md/Results-Validation/* => Target-Block-To-Replace-Id/*
```
### @llm

The `@llm` operation calls the selected LLM provider's API to execute a prompt, attaching the part of the context you want the LLM to access.
Basic syntax:

```markdown
@llm ("Who is the author of Harry Potter?", # Harry Potter Info)
```
In the round brackets, specify two parameters: (block or prompt, heading of the output block). If the first parameter is a prompt (surrounded by single or double quotes), the call concatenates all preceding heading blocks (excluding operations) in the current Abstract Syntax Tree (AST), attaches your prompt, and passes everything to the LLM chat completion endpoint. The result is inserted, appended, or prepended at the target specified by the operand and target syntax (`=>`, `+>`, `.>`).
Example:

```markdown
@llm ('Provide your analysis now', # AGENT RESULTS {id=Results})

// You'll get:
# AGENT RESULTS {id=Results}
Answer of the LLM, depending on the previous markdown blocks
```
or

```markdown
@llm (test-block2/*, # Nested Blocks Analysis Results {id=AI-Results})
```

This sends the content of `test-block2` and all its child blocks to the LLM and creates a new block titled `# Nested Blocks Analysis Results {id=AI-Results}` containing the result.
### @run

The `@run` operation executes a markdown script in its own context, passing parameters and retrieving the returned result (via the `@return` operation).
Syntax:

```markdown
@run [script_path] (parameters) [operand] [target_path]
```
Example:

```markdown
@run macos-bash-assistant.md("get current user location using curl to some geo IP service") => target-block
```
Parameters are passed by appending a block (or a collection of nested blocks) to the Abstract Syntax Tree (AST) of the target file at the start of execution. By default, the head block is marked with `{id=InputParameters}` and can be accessed from the context of the executed file, either as part of the whole context or directly by addressing the block via `{id=InputParameters}`.
For more details, refer to the `@return` operation. By default, the returned block collection is placed in the current context right after the `@run` operation that received it. Alternatively, you can use the operand and target to specify any location in the context (including the `/*` selector to address a group of nested blocks).
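Putting this together, a hypothetical caller/callee pair (all file and block names here are illustrative) might look like this:

```markdown
// caller.md: pass a task to worker.md and replace a target block with the result
@run worker.md ("analyze the latest logs") => results-block

// worker.md, as seen at execution time: the interpreter has prepended
// the parameter as the head block {id=InputParameters}
# Input {id=InputParameters}
analyze the latest logs

@llm ('Complete the task described in the input block', # Analysis {id=Analysis})
@return Analysis
```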
### @shell

The `@shell` operation executes a shell command and captures the output from the command's stdout.
Syntax:

```markdown
@shell ("command") [operand] [target_path]
```
Example:

```markdown
@shell ("ls -lt --color=never")
```

This executes the `ls` command and captures its output.
List files and save the output:

```markdown
@shell ('ls -l') => file_list_block
```

Run the OS `find` command and append the result:

```markdown
@shell ('find . -name "*.md" | xargs wc -l') +> markdown_stats
```
Note: It's recommended to suppress any ANSI codes in the output if possible, e.g., using `--color=never` (for macOS).
### @return

The `@return` operation returns a block or set of blocks to the caller.
Syntax:

```markdown
@return InputParameters/*
```
When the interpreter encounters `@return`, execution of the current context stops and control is passed back to the parent script (if present).
### @goto

The `@goto` operation passes execution to any block with an identifier.
Syntax:

```markdown
@goto SendAlert
```

This moves the execution point to the block with id `SendAlert`.
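For example (with hypothetical block names), `@goto` can skip over blocks that should not execute in the current pass:

```markdown
@goto Cleanup

# Skipped on this run {id=Optional-Step}
@shell ("echo 'this block is jumped over'")

# Cleanup {id=Cleanup}
@shell ("echo 'cleaning up'")
```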
## Contributing

We welcome contributions! We are working on a Contributing Guide.
## License

LLexeM is released under the MIT License.