merge upstream #2

Open: wants to merge 253 commits into main
Conversation

francislabountyjr (Member)
No description provided.

psych0v0yager and others added 30 commits March 13, 2024 14:23
…pported by exllama (#729)

Refactored the exl2 function in exllamav2.py.

The new version offers the following benefits:
1. Auto split support. You no longer need to split a large model over 2
GPUs manually; exllama will do it for you.
2. 8-bit cache support, which squeezes more context into the same GPU memory.
3. Additional exllamav2 improvements: supports low_mem and fasttensors.
4. num_experts is now optional; you no longer need to pass it in.
5. Future support for a 4-bit cache. Whenever turbo updates the pip
package, uncomment the 4-bit lines for 4-bit support.
6. Refactored the function parameters, replacing the model_kwargs
dictionary with individual parameters (see the sketch below). Combined
with documentation, this makes it easier for new users to understand
what options they can select.
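For a sense of the new surface, a minimal sketch of a call (the parameter
names follow the list above but are assumptions; check exllamav2.py for
the exact signature):

```python
import outlines

# Minimal sketch of the refactored exl2 entry point. Individual keyword
# arguments replace the old model_kwargs dictionary; names here are
# assumptions based on the description above.
model = outlines.models.exl2(
    model_path="turboderp/Llama2-7B-exl2",  # hypothetical exl2 repo id
    max_seq_len=4096,
    cache_8bit=True,     # 8-bit cache: fit more context on the same GPU
    # cache_q4=True,     # 4-bit cache: uncomment once the pip package ships it
)
```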
lapp0 and others added 30 commits October 7, 2024 16:31
- Will's structured generation workflow cookbook example was not in the
mkdocs index, so it was not being displayed.
- Same with the LM Studio serving docs.
- The brand color was also slightly off:


![image](https://github.com/user-attachments/assets/fd10fa4f-d140-4936-befa-4dcca09c0e51)

It has been fixed to this:


![image](https://github.com/user-attachments/assets/b6c2d71b-6a7f-4b86-935a-bf5072f1d945)
A request came in on Discord to add an example for the new transformers
vision capability.


# Vision-Language Models with Outlines
This guide demonstrates how to use Outlines with vision-language models,
leveraging the new transformers_vision module. Vision-language models
can process both text and images, allowing for tasks like image
captioning, visual question answering, and more.

We use Mistral's Pixtral-12B model to take advantage of its visual
reasoning capabilities, building a workflow that generates a multistage
atomic caption.
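A condensed sketch of the pattern (the model class, prompt format, and
image helper are illustrative assumptions, not the guide verbatim):

```python
import outlines
import requests
from PIL import Image
from transformers import LlavaForConditionalGeneration

# Load Pixtral through the transformers_vision module.
model = outlines.models.transformers_vision(
    "mistral-community/pixtral-12b",
    model_class=LlavaForConditionalGeneration,
)

def image_from_url(url: str) -> Image.Image:
    # Fetch an image and normalize it to RGB for the processor.
    return Image.open(requests.get(url, stream=True).raw).convert("RGB")

captioner = outlines.generate.text(model)
caption = captioner(
    "<s>[INST]Describe this image in one atomic caption.\n[IMG][/INST]",
    [image_from_url("https://example.com/photo.jpg")],  # placeholder URL
)
```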

---------

Signed-off-by: jphillips <josh.phillips@fearnworks.com>
accross -> across
This is a condensed version of the demo for [extracting earnings
reports](https://github.com/dottxt-ai/demos/tree/main/earnings-reports)
to CSV.

Overview:

- Shows how to use Outlines to structure CSV output
- Provides simple tools for converting a table specification to regular
expressions (sketched below)
- Includes a tuned extraction prompt that performs reasonably well on
income statements
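A rough sketch of the table-spec-to-regex idea (the column names and
regex shapes are illustrative, not the demo's actual helpers):

```python
import outlines

# Map each column to a regex for its cell values (illustrative spec).
columns = {
    "year": r"\d{4}",
    "revenue": r"-?\d+(\.\d+)?",
    "net_income": r"-?\d+(\.\d+)?",
}

header = ",".join(columns) + r"\n"        # literal header row
row = ",".join(columns.values()) + r"\n"  # one data row
csv_regex = header + f"({row}){{1,12}}"   # header plus 1 to 12 data rows

# Given a model loaded as in the other examples:
# generator = outlines.generate.regex(model, csv_regex)
# csv_text = generator(extraction_prompt)  # prompt tuned for income statements
```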
Adds a cookbook on extracting structured output from PDFs.

I included some extra bells and whistles here by showing how to do JSON,
regex, and `choice`, which should help provide inspiration to people
working with PDFs.
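For instance, the `choice` flavor can classify a page before any heavier
extraction; a minimal sketch (the labels and page text are placeholders):

```python
import outlines

model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")  # any model works

# Classify a PDF page into a fixed set of labels before deeper extraction.
classifier = outlines.generate.choice(
    model, ["income-statement", "balance-sheet", "other"]  # hypothetical labels
)
page_text = "..."  # text extracted from one PDF page
doc_type = classifier("What kind of financial document is this?\n" + page_text)
```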
Forgot to add the earnings report cookbook to the cookbook index
(#1235); this fixes it.
I added a receipt processing cookbook. 

- Uses Qwen or Pixtral
- General-purpose message templating, with no messy model-specific token
handling
- An easy function for compressing images to lower processing/memory
requirements (see the sketch below)

Should help illustrate a simple use case for vision models.
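The compression helper might look roughly like this (an illustrative
sketch, not the cookbook's exact function):

```python
from PIL import Image

def compress_image(image: Image.Image, max_side: int = 1024) -> Image.Image:
    """Downscale so the longest side is at most max_side pixels."""
    scale = max_side / max(image.size)
    if scale >= 1.0:
        return image  # already small enough
    new_size = (int(image.width * scale), int(image.height * scale))
    return image.resize(new_size, Image.Resampling.LANCZOS)
```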
Fixes the error `NameError: name 'rng' is not defined`.
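Presumably the example now constructs the generator before use, along
these lines:

```python
import torch

# Define the random generator before passing it to the sampler.
rng = torch.Generator(device="cuda")  # or "cpu"
rng.manual_seed(789001)

# answer = generator(prompt, rng=rng)
```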
`[Outlines model](../models)` does not render the link correctly.

Switched to `[Outlines model](../models/models.md)`.
This PR adds a JAX-compatible API; see issue #1027.
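The new surface is not shown here; as a loose sketch of what JAX interop
could look like (every name below is hypothetical; see issue #1027 for
the actual design):

```python
import jax.numpy as jnp

# Hypothetical: a logits processor that accepts and returns JAX arrays.
input_ids = jnp.array([[1, 2, 3]])
logits = jnp.zeros((1, 32000))
# constrained = processor(input_ids, logits)  # placeholder processor call
```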