This project provides an example of a very basic agent connected to Semantic Workbench.
The agent doesn't do anything real, it simply echoes back messages sent by the user. The code here is only meant to show the basics, to familiarize with code structure and integration with Semantic Workbench.
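To make the scope concrete: the entire "intelligence" of this agent amounts to returning the user's message. The sketch below is framework-agnostic and does not use the real semantic-workbench-assistant API (the function name and echo prefix are made up for illustration); in the actual example, chat.py wires an equivalent handler into the workbench service.

```python
# Framework-agnostic sketch of the echo behavior; the real example
# registers a similar handler with Semantic Workbench via chat.py.

def handle_user_message(content: str) -> str:
    """Echo the incoming chat message back to the user."""
    return f"echo: {content}"

print(handle_user_message("hello"))  # prints "echo: hello"
```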
- Set up your dev environment:
  - SUGGESTED: Use GitHub Codespaces for a quick, easy, and consistent turn-key dev environment: /.devcontainer/README.md
  - ALTERNATIVE: Local setup following the main README; VS Code is recommended for development
- Set up and verify that the workbench app and service are running, using the `semantic-workbench.code-workspace`
- Use VS Code > Run and Debug (ctrl/cmd+shift+d) > `semantic-workbench` to start the app and service from this workspace
- Use VS Code > Run and Debug (ctrl/cmd+shift+d) > `launch assistant` to start the assistant
- If running in a devcontainer, follow the instructions in .devcontainer/POST_SETUP_README.md for any additional steps
- Return to the workbench app to interact with the assistant
- Add a new assistant from the main menu of the app, choosing the assistant name as defined by the `service_name` in chat.py
- Click the newly created assistant to configure and interact with it
If you're not using VS Code and/or Codespaces, you can also work from the command line, using `uv`:

```
cd <PATH TO THIS FOLDER>
uv run start-assistant
```
Copy the contents of this folder to your project.
- The paths are already set if you put the folder in the same repo, at the relative path `/<your_projects>/<your_assistant_name>`
- If placed in a different location, update the references in `pyproject.toml` to point to the appropriate locations for the `semantic-workbench-*` packages
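If the project resolves those packages as local `uv` path sources, the entries to adjust would look roughly like this (the paths below are illustrative assumptions, not the repo's actual layout -- check your checkout):

```toml
# Illustrative only: point the relative paths at wherever the
# semantic-workbench-* packages live relative to your assistant folder.
[tool.uv.sources]
semantic-workbench-assistant = { path = "../../libraries/python/semantic-workbench-assistant", editable = true }
```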
It's important to highlight that Semantic Workbench is a development tool: it is not designed to host agents in a production environment. The workbench helps with testing and debugging in an isolated development environment, usually your localhost.
The core of your assistant/AI application, e.g. how it reacts to users, how it invokes tools, how it stores data, can be developed with any framework, such as Semantic Kernel, LangChain, OpenAI assistants, etc. That is typically the code you will add to chat.py.
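One way to preserve that separation (a pattern suggestion, not something the example enforces; the class and method names here are hypothetical) is to keep your core logic in plain Python that knows nothing about the workbench, so chat.py remains a thin adapter you can later replace:

```python
# Hypothetical layering: the core knows nothing about Semantic Workbench.
# chat.py (the adapter) would translate workbench events into calls on it.
from dataclasses import dataclass

@dataclass
class AssistantCore:
    """Framework-agnostic brain of the assistant."""
    name: str

    def respond(self, user_message: str) -> str:
        # Swap this body for Semantic Kernel, LangChain, an OpenAI
        # client, etc. -- the adapter code never needs to change.
        return f"{self.name} heard: {user_message}"

core = AssistantCore(name="echo-bot")
print(core.respond("ping"))  # prints "echo-bot heard: ping"
```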
Semantic Workbench is not a framework. Dependencies on the `semantic-workbench-assistant` package are used only to test and debug your code in Semantic Workbench. When an assistant is fully developed and ready for production, configurable settings should be hard-coded, and dependencies on `semantic-workbench-assistant` and similar packages should be removed.