
[BUG]: RAG with KernelMemory - #996

Open
biapar opened this issue Nov 26, 2024 · 3 comments
biapar commented Nov 26, 2024

Description

I get this error:

llama_get_logits_ith: invalid logits id 214, reason: no logits
[1] 16721 segmentation fault

Reproduction Steps

I followed this example: https://scisharp.github.io/LLamaSharp/0.11.2/Examples/KernelMemorySaveAndLoad/
and used this model: Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf

Environment & Configuration

  • Operating system: macOS Sequoia
  • .NET runtime version: 8
  • LLamaSharp version: 0.19
  • CUDA version (if you are using cuda backend):
  • CPU & GPU device: Apple M2 MAX

Known Workarounds

No response

zsogitbe (Contributor) commented

Is this the Embeddings = true problem in the model parameters? If so, do you need to set it to false?
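The check suggested above could look like the following sketch. It assumes the `Embeddings` flag on LLamaSharp's `ModelParams` (the model path and context size here are placeholders taken from this issue, not a verified configuration):

```csharp
using LLama.Common;

// Hypothetical sketch: load the model for text generation only.
// Embeddings = false avoids requesting logits for embedding tokens,
// which is the suspected cause of "invalid logits id ... no logits".
var parameters = new ModelParams("Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf")
{
    ContextSize = 4096,  // example value
    Embeddings = false   // use a separate, embeddings-enabled context for vectorization
};
```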

freefer commented Nov 29, 2024

Do not use WithLLamaSharpDefaults: it enables Embeddings, which causes this exception. As an alternative, register the two components directly:

```csharp
.WithLLamaSharpTextEmbeddingGeneration(new LLamaSharpTextEmbeddingGenerator(lsConfig, embWeights))
.WithLLamaSharpTextGeneration(new LlamaSharpTextGenerator(textWeights, context, executor, lsConfig.DefaultInferenceParams))
```
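Expanding the two registrations above into a fuller sketch, it could look like this. The variable names (`lsConfig`, `embWeights`, `textWeights`, `context`, `executor`) come from the snippet in the comment; everything else — the `LLamaSharpConfig.ModelPath` property, the use of `StatelessExecutor`, and the idea of loading separate embedding and generation weights — is an assumption, not a verified example:

```csharp
using LLama;
using LLama.Common;
using LLamaSharp.KernelMemory;
using Microsoft.KernelMemory;

// Assumed configuration object; the model path is the one from this issue.
var lsConfig = new LLamaSharpConfig("Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf");

// Embeddings-enabled weights for vectorization.
var embParams = new ModelParams(lsConfig.ModelPath) { Embeddings = true };
using var embWeights = LLamaWeights.LoadFromFile(embParams);

// Generation-only weights, context, and executor (Embeddings = false).
var textParams = new ModelParams(lsConfig.ModelPath) { Embeddings = false };
using var textWeights = LLamaWeights.LoadFromFile(textParams);
using var context = textWeights.CreateContext(textParams);
var executor = new StatelessExecutor(textWeights, textParams);

// Register both components explicitly instead of WithLLamaSharpDefaults.
var memory = new KernelMemoryBuilder()
    .WithLLamaSharpTextEmbeddingGeneration(
        new LLamaSharpTextEmbeddingGenerator(lsConfig, embWeights))
    .WithLLamaSharpTextGeneration(
        new LlamaSharpTextGenerator(textWeights, context, executor,
                                    lsConfig.DefaultInferenceParams))
    .Build();
```

The point of the split is that the embedding path and the generation path get their own weights and contexts, so the generation context never has to produce embeddings.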


biapar commented Dec 1, 2024

Do you have a full example? Thanks.

3 participants