
How to change order of input and prompt/instruction for prompt caching #1835

Open

anhnami opened this issue Nov 21, 2024 · 1 comment
anhnami commented Nov 21, 2024

Hi,

I would like to use dspy for extensive Q/A and information extraction over very long input texts. Since dspy builds the prompt from the Signature and appends the input fields, the resulting prompt changes with every question. Is there a way to instruct dspy to place the instruction after the input? That way, the long input becomes a stable prompt prefix, so we could reuse the KV cache on a self-hosted LLM or save money with OpenAI's prompt caching.

Thanks.

okhat (Collaborator) commented Nov 21, 2024

Hey @anhnami! Just create an input field after the context and pass any information you need to it.

dspy.Predict('context, task -> response')(context=LONG, task="Please do ....")
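Expanding on that one-liner, here is a minimal sketch of the pattern, assuming a recent dspy release with dspy.LM and dspy.configure; the model name, document text, and questions are placeholders, not from this thread:

```python
import dspy

# Hypothetical LM configuration; substitute your own model.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# The signature lists `context` before `task`, so the long document is
# rendered ahead of the per-question instruction.
qa = dspy.Predict("context, task -> response")

LONG_DOCUMENT = "..."  # the long input text, identical across calls

# Each call shares the same `context`; only `task` varies, so the shared
# prefix can be reused by a KV cache (e.g. vLLM) or OpenAI prompt caching.
for question in ["Who is the author?", "List all dates mentioned."]:
    result = qa(context=LONG_DOCUMENT, task=question)
    print(result.response)
```

The exact rendered prompt depends on the adapter in use, so it is worth checking dspy.inspect_history() to confirm the long context really lands in a stable prefix before relying on cache hits.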
