
[BUG]: KernelMemory.AskAsync() does not work - exception: object reference not set to an instance of an object #891

Closed
aropb opened this issue Aug 3, 2024 · 30 comments

Comments

@aropb commented Aug 3, 2024

Description

I use KernelMemory. The logits are empty.

The error occurs when calling memory.AskAsync():

[screenshot: "Object reference not set to an instance of an object" exception]

Debugging with cloned copies of the BaseSamplingPipeline and DefaultSamplingPipeline classes:

[screenshot: debugger view of the sampling pipeline]

Reproduction Steps

The error occurs when calling:

    MemoryAnswer answer = await memory.AskAsync(question: question, filters: filters);
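For reference, a minimal setup that reaches this call might look like the following sketch. The builder method WithLLamaSharpDefaults and the placeholder model path follow the LLama.Examples usage discussed later in this thread; treat them as assumptions rather than part of this report:

    // Sketch: a serverless KernelMemory instance backed by LLamaSharp.
    var config = new LLamaSharpConfig("path/to/model.gguf");
    var memory = new KernelMemoryBuilder()
        .WithLLamaSharpDefaults(config)
        .Build<MemoryServerless>();

    // The NullReferenceException surfaces inside this call:
    MemoryAnswer answer = await memory.AskAsync(question: question, filters: filters);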

Environment & Configuration

  • Operating system: Windows 10/11
  • .NET runtime version: 8.0.7
  • LLamaSharp version: 0.15.0
  • KernelMemory: 0.70.240803.1, 0.69.240727.1
  • CUDA version (if you are using cuda backend): -
  • CPU & GPU device: CPU Intel Core Ultra 9

Known Workarounds

@jwangga commented Aug 8, 2024

I encountered the same issue when running the sample code: "Kernel Memory: Document Q&A" or "Kernel Memory: Save and Load" from the LLama.Examples project.

@tusharmevl

@aropb @jwangga I am facing the same issue running the example 'Kernel Memory: Document Q&A'. Did you find a fix for this? I am trying to implement a RAG system using this. Is there any other way to implement it apart from Kernel Memory?

@jwangga commented Aug 13, 2024

@tusharmevl I have not found a fix for the Kernel Memory issue. It seems that the integration with Semantic Kernel Memory works. You may try using that as an alternative if your system only needs to support text.

@tusharmevl

@jwangga OK, thanks! Yes, I only need to support text for now; I'll try that.

@GalactixGod

Thanks @jwangga!

I'm seeing that you can use Semantic Kernel Memory (SKM) as well.

Unfortunately, it doesn't appear that you can "chat" with SKM to discuss results. Have you been able to figure out a way to "ask" questions of SKM?

@nicholusi2021

I'm also having the same issue with the Kernel Memory: Document Q&A example.

@aropb (Author) commented Aug 21, 2024

Please, I really need this error fixed.

So far, I can only use these versions:

  • Microsoft.KernelMemory.Core = 0.62.240605.1
  • LLamaSharp = 0.13.0

Any newer versions do not work.

I think the bug is here:

    // ISamplingPipelineExtensions.Sample()
    ...
    var span = CollectionsMarshal.AsSpan(lastTokens);
    return pipeline.Sample(ctx, logits, span);   // ---> crash here
    ...

@aropb changed the title from "[BUG]: Object reference not set to an instance of an object" to "[BUG]: KernelMemory.AskAsync() does not work - exception: object reference not set to an instance of an object" Aug 23, 2024
@aropb (Author) commented Sep 2, 2024

I found the place where the error occurs.

llama_get_logits_ith suddenly returns null (the native docs note that it "returns NULL for invalid ids"):

    // SafeLLamaContextHandle
    public Span<float> GetLogitsIth(int i)
    {
        var model = ThrowIfDisposed();

        unsafe
        {
            var logits = llama_get_logits_ith(this, i);
            // A NULL return is wrapped in a Span without any check.
            return new Span<float>(logits, model.VocabCount);
        }
    }

Stack:

    // StatelessExecutor.InferAsync()
    ...
    var id = pipeline.Sample(Context.NativeHandle, Context.NativeHandle.GetLogitsIth(_batch.TokenCount - 1), lastTokens);
    ...

But I don't understand what to do next or how to fix the error. Apparently, null shouldn't be there. Can anyone help with this error?
Because of this error, it is impossible to use Kernel Memory.

Thanks.

@martindevans (Member) commented Sep 2, 2024

That's probably indicative of two bugs in LLamaSharp.

Wrapper Error

The docs for llama_get_logits_ith (see here) say:

    // Logits for the ith token. For positive indices, equivalent to:
    // llama_get_logits(ctx) + ctx->output_ids[i]*n_vocab
    // Negative indices can be used to access logits in reverse order, -1 is the last logit.
    // returns NULL for invalid ids.
    LLAMA_API float * llama_get_logits_ith(struct llama_context * ctx, int32_t i);

So it is valid for llama_get_logits_ith to return null! That means SafeLLamaContextHandle.GetLogitsIth is incorrectly written: it should check for null and raise some kind of error in that case (most likely throw an exception). It is never valid to pass a null pointer into a Span constructor!

This is why you get a hard crash instead of an exception.
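A minimal sketch of the kind of guard being described, written against the GetLogitsIth snippet quoted above; the exception type and message are illustrative, not the library's actual fix:

    public Span<float> GetLogitsIth(int i)
    {
        var model = ThrowIfDisposed();

        unsafe
        {
            var logits = llama_get_logits_ith(this, i);

            // llama_get_logits_ith is documented to return NULL for invalid ids;
            // surface that as a managed exception instead of wrapping a null
            // pointer in a Span and crashing later.
            if (logits == null)
                throw new InvalidOperationException($"llama_get_logits_ith({i}) returned null; the index is invalid for this context.");

            return new Span<float>(logits, model.VocabCount);
        }
    }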

Higher Level Error

llama_get_logits_ith returns null if an invalid value for i is passed in. There must be a bug somewhere at the higher level that is causing an incorrect value to be passed in. Since this error only seems to affect Kernel Memory, it must be something specific to the KM wrapper.

@aropb (Author) commented Sep 2, 2024

@martindevans I have found a solution.

  1. Set Embeddings = false:

        ...
        public static IKernelMemoryBuilder WithLLamaSharp(this IKernelMemoryBuilder builder, LLamaSharpConfig config)
        {
            ModelParams parameters = new(config.ModelPath)
            {
                Embeddings = false,
                ...

  2. Set the values UBatchSize and BatchSize:

        ...
        public LLamaSharpTextEmbeddingGenerator(LLamaSharpConfig config, LLamaWeights weights)
        {
            ModelParams @params = new(config.ModelPath)
            {
                Embeddings = true,
                ...
                UBatchSize = 2000,
                BatchSize = 2000
            };

@aropb (Author) commented Sep 2, 2024

While testing, I noticed that it became about 2× slower after 0.13.0. I wonder why that is?

@martindevans (Member)

> Embeddings = false

Aha, I think you've cracked it! A while ago the behaviour of the embeddings flag was changed, so logits can no longer be extracted if embeddings=true.

@aropb (Author) commented Sep 2, 2024

And in LLamaSharpTextEmbeddingGenerator you must specify the values UBatchSize and BatchSize!

@martindevans (Member)

I'm not sure about that - there should be sensible defaults for those values. In LLamaSharp they're set to default values here. It's possible KernelMemory is overriding those defaults with something incorrect though (I don't really know the KM stuff, so I can't be certain).

@aropb (Author) commented Sep 2, 2024

Without these values, there will be an error "Input contains more tokens than configured batch size". That is, the value must be greater than 512. And currently you can only set them by rewriting the LLamaSharpTextEmbeddingGenerator class.

@aropb (Author) commented Sep 2, 2024

Apparently it is necessary to add UBatchSize and BatchSize to LLamaSharpConfig.

It seems that Embeddings = false should always be set.
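A hypothetical sketch of what that LLamaSharpConfig change could look like; the property names and types here are assumptions, not the library's actual API:

    // Hypothetical additions to LLamaSharpConfig; illustrative only.
    public class LLamaSharpConfig
    {
        public string ModelPath { get; set; }

        // Proposed: let callers override the batch sizes used by
        // LLamaSharpTextEmbeddingGenerator instead of hard-coding them.
        public uint? BatchSize { get; set; }
        public uint? UBatchSize { get; set; }
    }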

@martindevans (Member)

I'm super busy this month, but I will try to make time to fix the issues you found that I summarised here when I get a chance (soon, hopefully; definitely before the next release).

@martindevans (Member)

#920 fixes the lowest-level wrapper error, so at least it now throws an exception. Hopefully that will help debug the higher-level issue.

@aropb (Author) commented Sep 4, 2024

The problem has been found. You need to force Embeddings = false.

@martindevans (Member)

I wasn't sure if there's more going on, since you also mentioned a need to change the batch size. Is that just because of the size of your request (you need a larger batch to fit it all in), or is there more going on there?

@aropb (Author) commented Sep 4, 2024

Yes, the block size is larger than BatchSize, but currently that value cannot be changed without rewriting the LLamaSharpTextEmbeddingGenerator class.

@eocron commented Sep 13, 2024

Any update on this?

@aropb (Author) commented Sep 13, 2024

> Any update on this?

There is a solution above: Embeddings = false!

@jwangga commented Sep 13, 2024

@aropb Where should "Embeddings = false" be added? There does not seem to be a WithLLamaSharp method in the LLamaSharp.KernelMemory project. Thanks.

@aropb (Author) commented Sep 14, 2024

It's indicated above where it is: see the WithLLamaSharp snippet in my earlier comment.

@sangyuxiaowu (Contributor)

It seems that for models that support both ChatCompletion and Embeddings, the new version must be configured with Embeddings = false in order to use ChatCompletion properly.
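A sketch of how both capabilities can coexist under that constraint, based on the snippets earlier in this thread; the modelPath variable and batch values are placeholders:

    // Chat/AskAsync path: logits are required, so Embeddings must be false.
    var chatParams = new ModelParams(modelPath) { Embeddings = false };
    using var weights = LLamaWeights.LoadFromFile(chatParams);

    // Embedding-generation path: a separate context with Embeddings = true,
    // and batch sizes large enough for the documents being embedded.
    var embedParams = new ModelParams(modelPath)
    {
        Embeddings = true,
        BatchSize = 2000,
        UBatchSize = 2000
    };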

@emulk commented Sep 27, 2024

Not working in my case; I have another error, in SafeLLamaContextHandle.cs:

    System.AccessViolationException: 'Attempted to read or write protected memory. This is often an indication that other memory is corrupt.'

@jwangga commented Oct 1, 2024

It does not work for me, either. I made the suggested changes in these places:

In BuilderExtensions.cs:

[screenshot: Embeddings = false change in BuilderExtensions.cs]

In LLamaSharpTextEmbeddingGenerator:

[screenshot: UBatchSize/BatchSize change in LLamaSharpTextEmbeddingGenerator]

Am I missing something?

@aropb (Author) commented Oct 14, 2024

You need to always set Embeddings = false by default. The error occurs when calling AskAsync. The embedding generator does not need to be changed (if nbatch == ubatch).

lubotorok pushed a commit to lubotorok/LLamaSharp that referenced this issue Nov 6, 2024
@lubotorok

I did what is mentioned here, see:

lubotorok@d38091d

but I had to lower the context size too. Currently I set it to 4000; I had 131,000 before and was getting an access violation with the llama-3.1-8b-4k model even with this modification.

I am using the same model as a chat assistant with a context size of 131,000 and it works.
I am just learning both LLamaSharp and KM. I hope this observation helps.
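For what it's worth, the change described above amounts to something like this sketch; the values come from this comment, and ContextSize is assumed to be the relevant ModelParams property:

    var parameters = new ModelParams(modelPath)
    {
        Embeddings = false,
        ContextSize = 4000   // 131,000 caused an AccessViolationException in this setup
    };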

@aropb closed this as completed Dec 3, 2024