AI-powered tool for crafting and sending personalized messages using OpenAI's API. Created at https://coslynx.com

OpenAI-Request-Wrapper-Backend

A Python backend for simplifying OpenAI API interactions

Developed with the software and tools below.

Framework: FastAPI · Language: Python · Database: SQLite · LLMs: OpenAI

πŸ“‘ Table of Contents

  • πŸ“ Overview
  • πŸ“¦ Features
  • πŸ“‚ Structure
  • πŸ’» Installation
  • πŸ—οΈ Usage
  • 🌐 Hosting
  • πŸ“„ License
  • πŸ‘ Authors

πŸ“ Overview

This repository contains a Python backend server designed to streamline interactions with OpenAI's powerful AI models. The "AI Wrapper OpenAI Request Responder" provides a user-friendly interface for developers and individuals to leverage OpenAI's technology for various applications. This MVP focuses on simplifying the process of sending requests to OpenAI and receiving responses, eliminating the need for complex manual API call management.

πŸ“¦ Features

  • βš™οΈ Architecture: The backend is built on FastAPI, a lightweight framework, for efficient routing and API management.
  • πŸ“„ Documentation: The repository includes a comprehensive README detailing the MVP's features, usage, and deployment instructions.
  • πŸ”— Dependencies: The project relies on FastAPI, Uvicorn, Pydantic, OpenAI, and Requests.
  • 🧩 Modularity: The code is structured into separate modules for request handling, OpenAI API interaction, and response processing.
  • πŸ§ͺ Testing: Unit tests verify the code's functionality and stability.
  • ⚑️ Performance: Communication with the OpenAI API uses efficient request handling and response processing for swift responses.
  • πŸ” Security: API keys and user data are handled securely to protect sensitive information.
  • πŸ”€ Version Control: Git is used with a branching model for development and maintenance.
  • πŸ”Œ Integrations: A REST API enables integration with other applications and platforms.
  • πŸ“Ά Scalability: The backend is designed to handle increasing request volumes efficiently.

πŸ“‚ Structure

Files referenced throughout this README include:

  • api/main.py: FastAPI application entry point (served as api.main:app)
  • utils/config.py: environment variable handling
  • requirements.txt: Python dependencies
  • .env.example: template for environment variables
  • startup.sh: startup script (default port 8000)

πŸ’» Installation

πŸ”§ Prerequisites

  • Python 3.10+
  • pip package manager
  • OpenAI API Key

πŸš€ Setup Instructions

  1. Clone the repository:
    git clone https://github.com/coslynx/OpenAI-Request-Wrapper-Backend.git
    cd OpenAI-Request-Wrapper-Backend
  2. Install dependencies:
    pip install -r requirements.txt
  3. Set up environment variables:
    cp .env.example .env
    • Open .env and replace YOUR_OPENAI_API_KEY_HERE with your actual OpenAI API key.
    • You can optionally set DATABASE_URL if you want to use a different database.
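After copying .env.example, the resulting .env file typically looks like the sketch below. The key names come from this README; the placeholder values and the SQLite default are illustrative assumptions.

```shell
# .env -- keep this file out of version control
OPENAI_API_KEY=YOUR_OPENAI_API_KEY_HERE
# Optional: override the database connection string (assumed SQLite default)
DATABASE_URL=sqlite:///./app.db
```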

πŸ—οΈ Usage

πŸƒβ€β™‚οΈ Running the Backend

uvicorn api.main:app --host 0.0.0.0 --port 8000

βš™οΈ Configuration

  • The utils/config.py file handles environment variables like OPENAI_API_KEY and DATABASE_URL. You can change them in the .env file.
  • The backend server listens on port 8000 by default. You can change this in startup.sh or by passing a different port to uvicorn when running the server.
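A minimal sketch of what the environment-variable handling in utils/config.py might look like. The class and attribute names here are hypothetical; only the OPENAI_API_KEY and DATABASE_URL variable names come from this README, and the SQLite fallback is an assumption based on the stack listed above.

```python
import os


class Config:
    """Hypothetical sketch of a config loader reading the documented env vars."""

    def __init__(self):
        # OPENAI_API_KEY is required for every request to OpenAI.
        self.openai_api_key = os.getenv("OPENAI_API_KEY")
        if not self.openai_api_key:
            raise RuntimeError("OPENAI_API_KEY is not set; add it to your .env file")
        # DATABASE_URL is optional; fall back to a local SQLite file (assumption).
        self.database_url = os.getenv("DATABASE_URL", "sqlite:///./app.db")
```

Loading values once at startup like this keeps the rest of the code free of direct os.environ lookups.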

πŸ“š Examples

Making a Text Generation Request

curl -X POST http://localhost:8000/generate_text \
    -H "Content-Type: application/json" \
    -d '{"model": "text-davinci-003", "prompt": "Write a short story about a cat", "temperature": 0.7}'

Response:

{
  "response": "Once upon a time, in a cozy little cottage nestled amidst rolling hills, there lived a mischievous tabby cat named Whiskers. Whiskers was known for his playful antics and his insatiable appetite for tuna."
}
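The same request can be issued from Python using only the standard library. This is a sketch, not part of the repository: the helper function name is hypothetical, and it assumes the backend is running locally on port 8000 as shown above.

```python
import json
import urllib.request


def build_generate_text_request(base_url, model, prompt, temperature):
    """Assemble a POST request for the /generate_text endpoint (hypothetical helper)."""
    body = json.dumps(
        {"model": model, "prompt": prompt, "temperature": temperature}
    ).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/generate_text",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_generate_text_request(
    "http://localhost:8000", "text-davinci-003", "Write a short story about a cat", 0.7
)
# Sending it requires the backend to be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```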

🌐 Hosting

πŸš€ Deployment Instructions

  1. Create a virtual environment:
    python -m venv .venv
    source .venv/bin/activate
  2. Install dependencies:
    pip install -r requirements.txt
  3. Set up environment variables:
    cp .env.example .env
  4. Run the application:
    uvicorn api.main:app --host 0.0.0.0 --port 8000
  5. Use a deployment platform like Heroku or AWS:
    • Follow the specific instructions for your chosen platform.
    • Make sure to set up the environment variables (API keys, database credentials, etc.) as required by your chosen platform.

πŸ”‘ Environment Variables

  • OPENAI_API_KEY: Your OpenAI API key.
  • DATABASE_URL: Your database connection string (if using a database).

πŸ“œ API Documentation

πŸ” Endpoints

  • POST /generate_text
    • Description: Generates text using OpenAI's models.
    • Request Body:
      {
        "model": "text-davinci-003", // OpenAI model to use
        "prompt": "Write a short story about a cat", // Text prompt
        "temperature": 0.7 // Controls randomness of the generated text
      }
    • Response:
      {
        "response": "Once upon a time, in a cozy little cottage..." // The generated text
      }
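The documented request body can be checked with a small validation function like the sketch below. The function name, the default temperature of 1.0, and the 0 to 2 range are assumptions for illustration; they are not taken from the repository's code.

```python
def validate_generate_text_payload(payload: dict) -> dict:
    """Validate a /generate_text request body against the documented schema (sketch)."""
    if not isinstance(payload.get("model"), str) or not payload["model"]:
        raise ValueError("'model' must be a non-empty string")
    if not isinstance(payload.get("prompt"), str) or not payload["prompt"]:
        raise ValueError("'prompt' must be a non-empty string")
    # Assumed default and bounds; adjust to match the actual backend.
    temperature = payload.get("temperature", 1.0)
    if not isinstance(temperature, (int, float)) or not 0.0 <= temperature <= 2.0:
        raise ValueError("'temperature' must be a number between 0 and 2")
    return {
        "model": payload["model"],
        "prompt": payload["prompt"],
        "temperature": float(temperature),
    }
```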

πŸ”’ Authentication

  • This MVP does not implement user authentication. It is designed to be used with a single OpenAI API key stored in the environment variable.

πŸ“ Examples

[See examples above]

πŸ“œ License & Attribution

πŸ“„ License

This Minimum Viable Product (MVP) is licensed under the GNU AGPLv3 license.

πŸ€– AI-Generated MVP

This MVP was entirely generated using artificial intelligence through CosLynx.com.

No human was directly involved in writing the code for the OpenAI-Request-Wrapper-Backend repository.

πŸ“ž Contact

For any questions or concerns regarding this AI-generated MVP, please contact CosLynx at:

🌐 CosLynx.com

Create Your Custom MVP in Minutes With CosLynxAI!