Commit

Adding files to deploy CodeTrans application on AMD GPU (#1138)
Signed-off-by: Chingis Yundunov <YundunovCN@sibedge.com>
chyundunovDatamonsters authored Nov 18, 2024
1 parent 152adf8 commit 7e62175
Showing 4 changed files with 447 additions and 0 deletions.
121 changes: 121 additions & 0 deletions CodeTrans/docker_compose/amd/gpu/rocm/README.md
# Build and deploy CodeTrans Application on AMD GPU (ROCm)

## Build images

### Build the LLM Docker Image

```bash
### Clone the GenAIComps repository
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps

### Build Docker image
docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/tgi/Dockerfile .
```

### Build the MegaService Docker Image

```bash
### Clone the GenAIExamples repository
git clone https://github.com/opea-project/GenAIExamples
cd GenAIExamples/CodeTrans

### Build Docker image
docker build -t opea/codetrans:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
```

### Build the UI Docker Image

```bash
cd GenAIExamples/CodeTrans/ui
### Build UI Docker image
docker build -t opea/codetrans-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
```
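After all three builds finish, you can sanity-check that the images are present locally. The sketch below assumes the default `opea/*:latest` tags used in the build commands above:

```shell
# Sanity check: list the locally built CodeTrans images.
# Assumes the default opea/*:latest tags from the build commands above.
if command -v docker >/dev/null 2>&1; then
  docker images --format '{{.Repository}}:{{.Tag}}' \
    | grep -E '^opea/(llm-tgi|codetrans|codetrans-ui):latest$' \
    || echo "some images are missing"
else
  echo "docker is not available on this host"
fi
```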

## Deploy CodeTrans Application

### Features of the Docker Compose file for AMD GPUs

1. The host's GPU devices are forwarded to the TGI service container with the following directives:

```yaml
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/:/dev/dri/
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
```
In this configuration, all GPUs on the host are passed through to the container. To pass through only a specific GPU, list its individual device nodes cardN and renderDN instead.
For example:
```yaml
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/card0:/dev/dri/card0
  - /dev/dri/renderD128:/dev/dri/renderD128
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
```
To determine which cardN and renderDN device nodes belong to the same physical GPU, use a GPU driver utility (for example, rocm-smi).
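As a quick alternative to a driver utility, the kernel exposes the pairing directly: the symlinks under `/dev/dri/by-path` are named by PCI address, so a cardN and a renderDN link sharing the same PCI path belong to the same GPU. A minimal sketch (prints nothing on a host without DRM devices):

```shell
# Print each /dev/dri/by-path symlink and its target; links that share a
# PCI path (e.g. pci-0000:c3:00.0-*) point at nodes of the same GPU.
for link in /dev/dri/by-path/*; do
  [ -e "$link" ] || continue   # skip the literal glob pattern on hosts without DRM devices
  printf '%s -> %s\n' "$link" "$(readlink -f "$link")"
done
```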

### Go to the directory with the Docker Compose file
```bash
cd GenAIExamples/CodeTrans/docker_compose/amd/gpu/rocm
```

### Set the environment variables

Set the required values in the file `GenAIExamples/CodeTrans/docker_compose/amd/gpu/rocm/set_env.sh`. The purpose of each variable is explained in the comment above the corresponding export command.

```bash
chmod +x set_env.sh
. set_env.sh
```

### Run services

```bash
docker compose up -d
```

## Validate the MicroServices and MegaService

### Validate TGI service

```bash
curl http://${HOST_IP}:${CODETRANS_TGI_SERVICE_PORT}/generate \
-X POST \
-d '{"inputs":" ### System: Please translate the following Golang codes into Python codes. ### Original codes: '\'''\'''\''Golang \npackage main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n '\'''\'''\'' ### Translated codes:","parameters":{"max_new_tokens":17, "do_sample": true}}' \
-H 'Content-Type: application/json'
```

### Validate LLM service

```bash
curl http://${HOST_IP}:${CODETRANS_LLM_SERVICE_PORT}/v1/chat/completions \
-X POST \
-d '{"query":" ### System: Please translate the following Golang codes into Python codes. ### Original codes: '\'''\'''\''Golang \npackage main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n '\'''\'''\'' ### Translated codes:"}' \
-H 'Content-Type: application/json'
```

### Validate MegaService

```bash
curl http://${HOST_IP}:${CODETRANS_BACKEND_SERVICE_PORT}/v1/codetrans \
-H "Content-Type: application/json" \
-d '{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n}"}'
```
97 changes: 97 additions & 0 deletions CodeTrans/docker_compose/amd/gpu/rocm/compose.yaml
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

services:
codetrans-tgi-service:
image: ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
container_name: codetrans-tgi-service
ports:
- "${CODETRANS_TGI_SERVICE_PORT:-8008}:80"
volumes:
- "/var/lib/GenAI/codetrans/data:/data"
shm_size: 1g
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
TGI_LLM_ENDPOINT: ${CODETRANS_TGI_LLM_ENDPOINT}
      HUGGING_FACE_HUB_TOKEN: ${CODETRANS_HUGGINGFACEHUB_API_TOKEN}
      HUGGINGFACEHUB_API_TOKEN: ${CODETRANS_HUGGINGFACEHUB_API_TOKEN}
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/:/dev/dri/
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
ipc: host
command: --model-id ${CODETRANS_LLM_MODEL_ID}
codetrans-llm-server:
image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
container_name: codetrans-llm-server
ports:
- "${CODETRANS_LLM_SERVICE_PORT:-9000}:9000"
ipc: host
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
TGI_LLM_ENDPOINT: "http://codetrans-tgi-service"
HUGGINGFACEHUB_API_TOKEN: ${CODETRANS_HUGGINGFACEHUB_API_TOKEN}
restart: unless-stopped
codetrans-backend-server:
image: ${REGISTRY:-opea}/codetrans:${TAG:-latest}
container_name: codetrans-backend-server
depends_on:
- codetrans-tgi-service
- codetrans-llm-server
ports:
- "${CODETRANS_BACKEND_SERVICE_PORT:-7777}:7777"
environment:
no_proxy: ${no_proxy}
https_proxy: ${https_proxy}
http_proxy: ${http_proxy}
MEGA_SERVICE_HOST_IP: ${HOST_IP}
LLM_SERVICE_HOST_IP: "codetrans-llm-server"
ipc: host
restart: always
codetrans-ui-server:
image: ${REGISTRY:-opea}/codetrans-ui:${TAG:-latest}
container_name: codetrans-ui-server
depends_on:
- codetrans-backend-server
ports:
- "${CODETRANS_FRONTEND_SERVICE_PORT:-5173}:5173"
environment:
no_proxy: ${no_proxy}
https_proxy: ${https_proxy}
http_proxy: ${http_proxy}
BASE_URL: ${CODETRANS_BACKEND_SERVICE_URL}
BASIC_URL: ${CODETRANS_BACKEND_SERVICE_URL}
ipc: host
restart: always
codetrans-nginx-server:
image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
container_name: codetrans-nginx-server
depends_on:
- codetrans-backend-server
- codetrans-ui-server
ports:
- "${CODETRANS_NGINX_PORT:-80}:80"
environment:
- no_proxy=${no_proxy}
- https_proxy=${https_proxy}
- http_proxy=${http_proxy}
- FRONTEND_SERVICE_IP=${CODETRANS_FRONTEND_SERVICE_IP}
- FRONTEND_SERVICE_PORT=${CODETRANS_FRONTEND_SERVICE_PORT}
- BACKEND_SERVICE_NAME=${CODETRANS_BACKEND_SERVICE_NAME}
- BACKEND_SERVICE_IP=${CODETRANS_BACKEND_SERVICE_IP}
- BACKEND_SERVICE_PORT=${CODETRANS_BACKEND_SERVICE_PORT}
ipc: host
restart: always

networks:
default:
driver: bridge
49 changes: 49 additions & 0 deletions CodeTrans/docker_compose/amd/gpu/rocm/set_env.sh
#!/usr/bin/env bash

# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

### The IP address or domain name of the server on which the application is running
export HOST_IP=direct-supercomputer1.powerml.co

### Model ID
export CODETRANS_LLM_MODEL_ID="Qwen/Qwen2.5-Coder-7B-Instruct"

### The port of the TGI service. On this port, the TGI service will accept connections
export CODETRANS_TGI_SERVICE_PORT=18156

### The endpoint of the TGI service to which requests to this service will be sent (formed from previously set variables)
export CODETRANS_TGI_LLM_ENDPOINT="http://${HOST_IP}:${CODETRANS_TGI_SERVICE_PORT}"

### A token for accessing repositories with models
export CODETRANS_HUGGINGFACEHUB_API_TOKEN=''

### The port of the LLM service. On this port, the LLM service will accept connections
export CODETRANS_LLM_SERVICE_PORT=18157

### The IP address or domain name of the server for CodeTrans MegaService
export CODETRANS_MEGA_SERVICE_HOST_IP=${HOST_IP}

### The IP address of the host on which the LLM service is running
export CODETRANS_LLM_SERVICE_HOST_IP=${HOST_IP}

### The IP address of the host running the frontend service container
export CODETRANS_FRONTEND_SERVICE_IP=192.165.1.21

### The port of the frontend service
export CODETRANS_FRONTEND_SERVICE_PORT=18155

### The name of the GenAI service used to route requests to the application
export CODETRANS_BACKEND_SERVICE_NAME=codetrans

### The IP address of the host running the backend service container
export CODETRANS_BACKEND_SERVICE_IP=192.165.1.21

### The port of the backend service
export CODETRANS_BACKEND_SERVICE_PORT=18154

### The port of the Nginx reverse proxy for the application
export CODETRANS_NGINX_PORT=18153

### Endpoint of the backend service
export CODETRANS_BACKEND_SERVICE_URL="http://${HOST_IP}:${CODETRANS_BACKEND_SERVICE_PORT}/v1/codetrans"
