This repository contains a collection of ready-to-deploy GPU application templates for Spheron. These templates are designed to make it easier for users to leverage Spheron's GPU capabilities for various AI and Web3 projects.
Flock Validator
- Run a validator node for the Flock protocol
- Requires GPU resources
Kuzco Worker
- Deploy a worker node for the Kuzco protocol
- Utilizes Spheron's GPU infrastructure
Jupyter with PyTorch
- Pre-installed PyTorch environment
- Ready for data science and machine learning tasks
Ollama WebUI
- Test and interact with various LLMs supported by Ollama
- User-friendly web interface
Ollama with Pre-installed Model
- Ollama server pre-configured with LLaMA 3.2
- Easily customizable to use any model from the Ollama registry
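One way to customize the served model is Ollama's Modelfile format, which derives a new model from any base in the registry. The base model, parameters, and system prompt below are illustrative, not part of this template:

```
# Modelfile — build a custom variant of a registry model (values illustrative)
FROM llama3.2
PARAMETER temperature 0.7
SYSTEM You are a concise assistant for GPU infrastructure questions.
```

You can then build and run the variant with the standard Ollama CLI: `ollama create my-model -f Modelfile` followed by `ollama run my-model`.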
Remote Development Environment
- Remote development environment with GPU support
- Ideal for building AI applications using Ollama
- Can be adapted to host your own AI application
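An application built in this environment can talk to the local Ollama server over its HTTP API. The sketch below assumes the default Ollama port (11434) and an already-pulled model named `llama3.2`; adjust both for your deployment:

```python
import json
import urllib.request

# Default local Ollama endpoint; point this at your deployment's exposed URL.
OLLAMA_URL = "http://localhost:11434"


def build_generate_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}


def generate(model: str, prompt: str) -> str:
    """Send a one-shot completion request and return the generated text."""
    body = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running Ollama server with the model pulled):
# generate("llama3.2", "Why is the sky blue?")
```

Setting `stream` to true instead returns newline-delimited JSON chunks, which is usually what an interactive app wants; the one-shot form above keeps the sketch short.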
Unsloth Finetuning Notebook
- Pre-installed Unsloth Jupyter notebook
- Useful for experimenting with fine-tuning via the Unsloth library
To use these templates:
- Clone this repository
- Choose the template that fits your needs
- Use the template's Spheron YAML configuration file to deploy directly on the Spheron Console App, or follow Spheron's Deploy Your App documentation to deploy with the `sphnctl` CLI
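For orientation, a Spheron deployment YAML declares the container image, exposed ports, and requested GPU resources. The fragment below is an illustrative sketch only — the service name, GPU model, region, and pricing values are assumptions; treat the YAML shipped with each template as the source of truth:

```yaml
version: "1.0"

services:
  ollama:                          # service name (illustrative)
    image: ollama/ollama:latest
    expose:
      - port: 11434                # Ollama's default API port
        as: 11434
        to:
          - global: true           # reachable from the internet

profiles:
  name: ollama
  duration: 1h
  compute:
    ollama:
      resources:
        cpu:
          units: 4
        memory:
          size: 8Gi
        storage:
          size: 20Gi
        gpu:
          units: 1
          attributes:
            vendor:
              nvidia:
                - model: rtx4090   # requested GPU model (illustrative)
  placement:
    westcoast:                     # placement group name (illustrative)
      pricing:
        ollama:
          token: CST
          amount: 1

deployment:
  ollama:
    westcoast:
      profile: ollama
      count: 1
```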
We welcome contributions! If you have a template you'd like to add or improvements to existing ones, please submit a pull request.
For questions or issues, please open an issue in this repository or contact Spheron support on Discord.
This project is licensed under the Apache License, Version 2.0. See the LICENSE file for details.