
Adding quantized models #3

Open
@ProfFan

Description

Hi,

Thank you for this amazing model! I made a small tray utility with your model to convert LaTeX: https://github.com/ProfFan/Snap2LaTeX

However, running it locally is not fast. It would be great if we could make quantized versions suitable for on-device inference :)
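For context, here is a toy sketch of what symmetric 8-bit weight quantization does to a tensor (illustration only; an actual quantized release of this model would presumably use established tooling such as PyTorch's quantization utilities or a GGUF-style converter, not hand-rolled code like this):

```python
# Toy symmetric per-tensor int8 quantization: map each float weight to an
# integer in [-127, 127] using a single scale, then recover approximations.

def quantize_int8(weights):
    """Return (int8 values, scale) for a list of float weights."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.031, 0.0]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
```

Each weight then costs 1 byte instead of 4 (plus one shared scale), which is the kind of size/speed win that makes on-device inference practical, at the cost of a small rounding error per weight.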

Labels

help wanted (Extra attention is needed)
