AI Resources

✨ A curated repository of code recipes, demos, tutorials and resources for basic and advanced Redis use cases in the AI ecosystem. ✨

Demos | Recipes | Tutorials | Integrations | Content | Benchmarks | Docs


Demos

There's no faster way to get started than by diving in and playing around with a demo.

| Demo | Description |
| --- | --- |
| Redis RAG Workbench | Interactive demo to build a RAG-based chatbot over a user-uploaded PDF. Toggle different settings and configurations to improve chatbot performance and quality. Utilizes RedisVL, LangChain, RAGAS, and more. |
| ArxivChatGuru | Streamlit demo of RAG over arXiv documents with Redis & OpenAI |
| Redis VSS - Simple Streamlit Demo | Streamlit demo of Redis vector search |
| ArXiv Search | Full-stack implementation of Redis with a React front end |
| Product Search | Vector search with Redis Stack and Redis Enterprise |

Recipes

Need quickstarts to begin your Redis AI journey? Start here.

Getting started with Redis & Vector Search

| Recipe | Description |
| --- | --- |
| /redis-intro/00_redis_intro.ipynb | The place to start if you're brand new to Redis |
| /vector-search/00_redispy.ipynb | Vector search with the Redis Python client |
| /vector-search/01_redisvl.ipynb | Vector search with the Redis Vector Library (RedisVL) |
| /vector-search/02_hybrid_search.ipynb | Hybrid search techniques with Redis (BM25 + vector) |
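
As a taste of what these recipes cover, here is a minimal sketch of indexing and querying vectors with RedisVL against a local Redis instance. The index name, schema, and 4-dimensional placeholder vectors are purely illustrative, and exact RedisVL arguments may vary by version; in practice the embeddings come from a model and the dimensions match it.

```python
# Minimal RedisVL vector search sketch (illustrative; assumes Redis running
# locally with the search module, and redisvl + numpy installed).
import numpy as np
from redisvl.index import SearchIndex
from redisvl.query import VectorQuery

# Define a tiny index: one text field plus a 4-dim float32 vector field.
schema = {
    "index": {"name": "docs", "prefix": "doc"},
    "fields": [
        {"name": "content", "type": "text"},
        {
            "name": "embedding",
            "type": "vector",
            "attrs": {"dims": 4, "distance_metric": "cosine",
                      "algorithm": "flat", "datatype": "float32"},
        },
    ],
}

index = SearchIndex.from_dict(schema, redis_url="redis://localhost:6379")
index.create(overwrite=True)

# Load documents; vectors are stored as raw float32 bytes.
index.load([
    {"content": "Redis is an in-memory data store",
     "embedding": np.array([0.1, 0.2, 0.3, 0.4], dtype=np.float32).tobytes()},
    {"content": "Vector search finds semantically similar items",
     "embedding": np.array([0.2, 0.1, 0.4, 0.3], dtype=np.float32).tobytes()},
])

# KNN query: return the 2 nearest documents to the query vector.
query = VectorQuery(
    vector=[0.15, 0.15, 0.35, 0.35],
    vector_field_name="embedding",
    return_fields=["content"],
    num_results=2,
)
for doc in index.query(query):
    print(doc["content"])
```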

Retrieval Augmented Generation (RAG)

Retrieval-Augmented Generation (RAG) is a technique for enhancing an LLM's ability to respond to user queries. The retrieval part of RAG is backed by a vector database, which returns semantically relevant results for a user's query; these results serve as contextual information that augments the LLM's generative capabilities.

To get started with RAG, either from scratch or using a popular framework like LlamaIndex or LangChain, start with these recipes:

| Recipe | Description |
| --- | --- |
| /RAG/01_redisvl.ipynb | RAG from scratch with the Redis Vector Library |
| /RAG/02_langchain.ipynb | RAG using Redis and LangChain |
| /RAG/03_llamaindex.ipynb | RAG using Redis and LlamaIndex |
| /RAG/04_advanced_redisvl.ipynb | Advanced RAG techniques |
| /RAG/05_nvidia_ai_rag_redis.ipynb | RAG using Redis and NVIDIA NIMs |
| /RAG/06_ragas_evaluation.ipynb | Evaluate RAG performance with the RAGAS framework |
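
For orientation, the core retrieve-augment-generate loop that these recipes implement can be sketched in a few lines. This sketch is illustrative rather than an excerpt of any recipe: it assumes a populated RedisVL `index` like the one in the vector search sketch above, a hypothetical `embed()` helper matching the index's vector dimensions, and the OpenAI Python client with an `OPENAI_API_KEY` set.

```python
# A minimal RAG loop over a RedisVL index (illustrative sketch).
from openai import OpenAI
from redisvl.query import VectorQuery

client = OpenAI()  # requires OPENAI_API_KEY in the environment


def answer(question: str, index, embed, k: int = 3) -> str:
    # 1. Retrieve: find the k most semantically similar chunks in Redis.
    query = VectorQuery(
        vector=embed(question),          # hypothetical embedding helper
        vector_field_name="embedding",
        return_fields=["content"],
        num_results=k,
    )
    context = "\n\n".join(doc["content"] for doc in index.query(query))

    # 2. Augment + generate: ground the LLM's answer in the retrieved context.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```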

LLM Memory

LLMs are stateless. To maintain context within a conversation, chat sessions must be stored and resent to the LLM on each turn. Redis manages the storage and retrieval of chat sessions to maintain context and conversational relevance.

| Recipe | Description |
| --- | --- |
| /llm-session-manager/00_session_manager.ipynb | LLM session manager with semantic similarity |
| /llm-session-manager/01_multiple_sessions.ipynb | Handle multiple simultaneous chats with one instance |
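
The underlying idea is simple enough to show with plain redis-py, even though the recipes above use RedisVL's session manager with semantic retrieval: persist each chat turn in Redis and replay the history on the next request. The key names and TTL below are hypothetical.

```python
# Minimal chat-history sketch with plain redis-py (illustrative only).
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def add_message(session_id: str, role: str, content: str) -> None:
    # Append one chat turn to the session's history list.
    r.rpush(f"chat:{session_id}", json.dumps({"role": role, "content": content}))
    r.expire(f"chat:{session_id}", 3600)  # optional TTL so idle sessions expire


def get_history(session_id: str) -> list[dict]:
    # Reload the full conversation to resend to the LLM as context.
    return [json.loads(m) for m in r.lrange(f"chat:{session_id}", 0, -1)]


add_message("abc123", "user", "What is Redis?")
add_message("abc123", "assistant", "Redis is an in-memory data store.")
print(get_history("abc123"))
```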

Semantic Cache

An estimated 31% of LLM queries are potentially redundant (source). Redis enables semantic caching to help cut down on LLM costs quickly.

| Recipe | Description |
| --- | --- |
| /semantic-cache/doc2cache_llama3_1.ipynb | Build a semantic cache using the Doc2Cache framework and Llama 3.1 |
| /semantic-cache/semantic_caching_gemini.ipynb | Build a semantic cache with Redis and Google Gemini |
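
As a rough illustration of the pattern, here is a sketch using RedisVL's `SemanticCache` extension. The constructor arguments, distance threshold, and the `call_llm()` helper are placeholders; see the recipes above for complete, version-accurate examples.

```python
# Semantic caching sketch with RedisVL's SemanticCache (illustrative only).
from redisvl.extensions.llmcache import SemanticCache

cache = SemanticCache(
    name="llmcache",                  # Redis index name for cached entries
    redis_url="redis://localhost:6379",
    distance_threshold=0.1,           # how close a prompt must be to count as a hit
)


def cached_completion(prompt: str) -> str:
    # Check for a semantically similar prompt before calling the LLM.
    if hits := cache.check(prompt=prompt):
        return hits[0]["response"]    # cache hit: skip the LLM call entirely
    response = call_llm(prompt)       # hypothetical LLM call of your choice
    cache.store(prompt=prompt, response=response)  # cache for future similar prompts
    return response
```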

Agents

| Recipe | Description |
| --- | --- |
| /agents/00_langgraph_redis_agentic_rag.ipynb | Notebook to get started with LangGraph and agents |
| /agents/01_crewai_langgraph_redis.ipynb | Notebook combining CrewAI and LangGraph with Redis |

Recommendation systems

| Recipe | Description |
| --- | --- |
| /recommendation-systems/00_content_filtering.ipynb | Intro content filtering example with RedisVL |
| /recommendation-systems/01_collaborative_filtering.ipynb | Intro collaborative filtering example with RedisVL |

Tutorials

Need a deeper dive into different use cases and topics?

| Tutorial | Description |
| --- | --- |
| RAG on Vertex AI | A RAG tutorial featuring Redis with Google Cloud Vertex AI |
| Agentic RAG | A tutorial focused on agentic RAG with LlamaIndex and Cohere |
| Recommendation Systems w/ NVIDIA Merlin & Redis | Three examples, escalating in complexity, that walk through building a real-time recommendation system with NVIDIA Merlin and Redis |

Integrations

Redis integrates with many different players in the AI ecosystem. Here's a curated list:

| Integration | Description |
| --- | --- |
| RedisVL | A dedicated Python client library for Redis as a vector database |
| AWS Bedrock | Streamlines GenAI deployment by offering foundation models behind a unified API |
| LangChain Python | Popular Python client library for building LLM applications powered by Redis |
| LangChain JS | Popular JS client library for building LLM applications powered by Redis |
| LlamaIndex | LlamaIndex (formerly GPT Index) integration for Redis as a vector database |
| LiteLLM | Popular LLM proxy layer to help manage and streamline usage of multiple foundation models |
| Semantic Kernel | Popular library from Microsoft for integrating LLMs with plugins |
| RelevanceAI | Platform to tag, search, and analyze unstructured data faster, built on Redis |
| DocArray | DocArray integration of Redis as a vector database, by Jina AI |

Content

Benchmarks

Docs