This is not TabbyAPI's support page. That being said, that field can be left blank (I don't use it). Delete your config.yml and run ./start.sh again; it will regenerate.
You also have to actually download the model files and place them in the correct folder...
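For reference, here's roughly what the relevant section of a regenerated config.yml looks like (field names taken from the config_sample.yml in my checkout; yours may differ between versions, and the model name below is just an example):

```yaml
# Sketch of the model section in a regenerated config.yml (field names may
# vary between TabbyAPI versions; check your own config_sample.yml).
model:
  # Directory that downloaded model folders live in
  model_dir: models
  # Folder name under model_dir, e.g. the one ./start.sh download created
  model_name: Qwen2.5-VL-7B-Instruct-exl2
  # Left blank: as I understand it, TabbyAPI then falls back to the chat
  # template shipped in the model's own tokenizer_config.json
  prompt_template:
```

With prompt_template left empty, the template comes from the model files themselves, which is why actually downloading the full model folder matters.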
---
When I try to talk to an exllamav2-driven model in Open WebUI, through TabbyAPI, I get:

```
2025-03-20 18:00:24.781 ERROR: if model.container.prompt_template is None:
2025-03-20 18:00:24.781 ERROR:    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-03-20 18:00:24.781 ERROR: AttributeError: 'NoneType' object has no attribute 'prompt_template'
```
I've configured my IP address in TabbyAPI's config.yml and confirmed that works. I don't know what to put in for `prompt_template:`. I've tried `chat_template` (which comes from the tokenizer_config.json in Qwen2.5-VL-7B-Instruct-exl2), and I've tried the names of some of the files that come with `git clone https://github.com/theroyallab/llm-prompt-templates templates`. I also tried Llama-3.2-1B-exl2. Clearly I'm stuck on step "4. Run TabbyAPI (and set your config to use a new template)"!!
Background:

- `NVIDIA-SMI 470.256.02  Driver Version: 570.124.06  CUDA Version: 12.8`, and it detects all (nine) GPUs.
- I installed PyTorch with `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118`. This is CUDA 11.8, not 12.8. Does this matter?
- `pip install .` initially failed, and I fixed that by using `export TORCH_CUDA_ARCH_LIST="7.5"`, as found at "How to solve the `__hfma2` problem docker error" oobabooga/text-generation-webui#2128. (Should I try 5.0?)
- Open WebUI gives me `500: Internal Server Error`, and a stack trace from TabbyAPI's start.sh which ends with the quote above.
- I downloaded the models with `./start.sh download turboderp/Llama-3.2-1B-exl2-instruct` and `./start.sh download turboderp/Qwen2.5-VL-7B-Instruct-exl2 --revision 4.0bpw`.
I'm a relative newbie with LLMs and all of this, and I appreciate my setup is not conventional. Forgive me if I've missed the FAQ, but I haven't been able to find a solution to this (seemingly) tiny issue!
Thanks for any help,
-D.