Which loader for AllInOne GGUF (ComfyUI)

#2
by rootey - opened

Hello - The original safetensor of the Phr00t base model loads via the 'Load Checkpoint' ComfyUI node, and the CLIP and VAE have outputs on that node.
How do we load your GGUF version (and still use the embedded CLIP & VAE)? Which node are you using to load the GGUF? Or do you load the CLIP and VAE from separate files? Thanks.

I read people asking the same question on a Phr00t Hugging Face discussion, and they were directed to some Civitai workflows. However, in the UK we are blocked from accessing the Civitai website, so I thought it best to ask you directly how to use your GGUF in a Comfy workflow.

rootey changed discussion title from Which loader for AllInOne GGUF to Which loader for AllInOne GGUF (ComfyUI)

I don't think this works as an all-in-one. I'm not aware of a node that can load GGUFs as checkpoints, so I've been loading it as a UNet and then adding the CLIP and VAE separately, like with any diffusion model.

Exactly how you should do it.
UNet GGUF (this file from here) + CLIP (text encoder - umt5_xxl_fp8_e4m3fn_scaled) + VAE (Wan 2.1 - wan_2.1_vae).
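The three separate loaders that replace 'Load Checkpoint' can be sketched in ComfyUI's API-format workflow JSON. This is a minimal sketch, assuming the `UnetLoaderGGUF` node from the ComfyUI-GGUF custom node pack plus core `CLIPLoader`/`VAELoader` nodes; the exact GGUF filename and the `"wan"` CLIP type are assumptions you should match to your own files:

```python
# Hedged sketch of the API-format workflow fragment that stands in for the
# MODEL / CLIP / VAE outputs of 'Load Checkpoint'. Node class names come from
# ComfyUI-GGUF ("UnetLoaderGGUF") and core ComfyUI; the GGUF filename below
# is a placeholder assumption -- use whatever quant you downloaded.
prompt = {
    # GGUF UNet replaces the MODEL output of Load Checkpoint
    "1": {"class_type": "UnetLoaderGGUF",
          "inputs": {"unet_name": "AllInOne-Q5_K_M.gguf"}},
    # Text encoder replaces the CLIP output
    "2": {"class_type": "CLIPLoader",
          "inputs": {"clip_name": "umt5_xxl_fp8_e4m3fn_scaled.safetensors",
                     "type": "wan"}},
    # VAE replaces the VAE output
    "3": {"class_type": "VAELoader",
          "inputs": {"vae_name": "wan_2.1_vae.safetensors"}},
    # Downstream nodes reference an upstream output as [node_id, output_index],
    # exactly as they would reference the checkpoint loader's sockets.
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a test prompt", "clip": ["2", 0]}},
}
```

In the graphical editor this corresponds to wiring the three loader nodes into the same sockets (MODEL, CLIP, VAE) that 'Load Checkpoint' used to feed.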

I've uploaded a modified version of Phr00t's example workflow here (changed to use ComfyUI-GGUF to load the main model and text encoder); hope that helps.

You can get a GGUF version of the text encoder here: city96's umt5-xxl-encoder-gguf, or use umt5_xxl_fp8_e4m3fn_scaled.safetensors as @tech77 suggested.
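For the workflow to find the files, they need to sit in ComfyUI's model folders. A quick sanity-check sketch, assuming the default folder layout (GGUF UNets in `models/unet`, which ComfyUI-GGUF scans, text encoders in `models/clip`, VAE in `models/vae`) and placeholder filenames you should swap for your own:

```python
from pathlib import Path

# Assumed default ComfyUI layout; the GGUF filename is a placeholder.
# ComfyUI-GGUF also picks up UNets from models/diffusion_models.
COMFY = Path("ComfyUI")
expected = {
    "unet": COMFY / "models" / "unet" / "AllInOne-Q5_K_M.gguf",
    "clip": COMFY / "models" / "clip" / "umt5_xxl_fp8_e4m3fn_scaled.safetensors",
    "vae":  COMFY / "models" / "vae" / "wan_2.1_vae.safetensors",
}
for role, path in expected.items():
    status = "found" if path.is_file() else "missing"
    print(f"{role:4s} {status}: {path}")
```

If a loader node shows an empty dropdown, a file in the wrong folder (or a ComfyUI restart being needed) is the usual cause.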

Thanks all - @befox @tech77 @Seeker36087 - All is working well now. I appreciate all the help!
I also appreciate you making the GGUF.
