Post 3974

Uncensored "Heretic" GGUF quants of GLM 4.7 Flash (30B-A3B), built with correct, fully updated llama.cpp. All quants use the NEO-CODE Imatrix with a 16-bit output tensor, and specialized quants (balanced for this model) are included:

DavidAU/GLM-4.7-Flash-Uncensored-Heretic-NEO-CODE-Imatrix-MAX-GGUF

Regular (non-Heretic) quants, also with the NEO-CODE Imatrix, 16-bit output tensor, and specialized quants:

DavidAU/GLM-4.7-Flash-NEO-CODE-Imatrix-MAX-GGUF
Post 2592

Run GLM-4.7-Flash locally on your device with 24GB RAM! 🔥 It's the best-performing 30B model on SWE-Bench and GPQA. With 200K context, it excels at coding, agents, chat, and reasoning.

GGUF: unsloth/GLM-4.7-Flash-GGUF
Guide: https://unsloth.ai/docs/models/glm-4.7-flash
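For anyone new to GGUF, here is a minimal sketch of one way to run these quants locally with llama.cpp. The `Q4_K_M` quant filename is an assumption (list the repo's files to get the exact name), and the context size and GPU-offload values are only examples; see the guide above for the recommended settings.

```shell
# Sketch, not a definitive recipe: download one quant from the Unsloth repo,
# then run it with llama.cpp's llama-cli.

# Fetch only the Q4_K_M quant (filename pattern is an assumption --
# check the repo file listing first).
pip install -U huggingface_hub
huggingface-cli download unsloth/GLM-4.7-Flash-GGUF \
  --include "*Q4_K_M*" --local-dir ./glm-4.7-flash

# llama-cli ships with llama.cpp builds:
#   -m   path to the .gguf file
#   -c   context window (the model supports up to 200K; 16K keeps RAM modest)
#   -ngl layers to offload to GPU (set 0 for CPU-only)
./llama-cli -m ./glm-4.7-flash/GLM-4.7-Flash-Q4_K_M.gguf \
  -c 16384 -ngl 99 \
  -p "Write a Python function that reverses a linked list."
```

A 4-bit K-quant of a 30B MoE model is roughly what fits alongside a 16K context in 24GB of RAM, which is why a mid-size quant like Q4_K_M is the usual starting point.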