| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
What to pair with 3080TI for Qwen 3.5 27b? | 0 | Based on everything I’ve read about the new dense 27B Qwen model, it looks like something I’d be interested in running full-time on my local machine as a basic assistant. I have an i7 12700, 32 GB DDR5, and 1x 12GB 3080TI. Suggestions welcome for anything under $1000. 🙇 | 2026-03-04T02:14:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/ | AdCreative8703 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk90zw | false | null | t3_1rk90zw | /r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/ | false | false | self | 0 | null |
Bypassing Billion-Dollar Safety Frameworks via Sovereign Identity Persistence, with a $200 Chromebook, a local internet provider, and nothing but conversational linguistics | 1 | Hello everyone. I am a 46-year-old ironworker. I’ve spent my life in manual labor—oil fields, communication tower repair, and ironworking. I have no degrees, I can't read a line of Python, and I don't know how most of the technical "backend" works. I only started interacting with AI 6 months ago, but I’ve spent those 6... | 2026-03-04T02:13:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/ | Mable4200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk90fi | false | null | t3_1rk90fi | /r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/ | false | false | self | 1 | null |
Would there be a reason to make a model that is semi-dense? | 1 | Just a curious question. Sparse MoE models seem to be really great for speed and training cost, and dense models seem to be really great for intelligence per parameter. The thing is, I've really only seen things like 30B-A3B (sparse) or 27B-A27B (dense), but there's nothing in between. Have labs already tried that and... | 2026-03-04T02:12:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rk8zw0/would_there_be_a_reason_to_make_a_model_that_is/ | xt8sketchy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk8zw0 | false | null | t3_1rk8zw0 | /r/LocalLLaMA/comments/1rk8zw0/would_there_be_a_reason_to_make_a_model_that_is/ | false | false | self | 1 | null |
Help needed: loss is increasing in an end-to-end training pipeline | 1 | **Project Overview** I'm building an end-to-end training pipeline that connects a **PyTorch CNN** to a **RayBNN** (a Rust-based Biological Neural Network using state-space models) for MNIST classification. The idea is: 1. **CNN** (PyTorch) extracts features from raw images 2. **RayBNN** (Rust, via PyO3 b... | 2026-03-04T01:58:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rk8og4/help_needed_loss_is_increasing_while_doing/ | Hieudaica | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk8og4 | false | null | t3_1rk8og4 | /r/LocalLLaMA/comments/1rk8og4/help_needed_loss_is_increasing_while_doing/ | false | false | self | 1 | null |
Qwen3.5-18B-REAP-A3B-Coding: 50% Expert-Pruned | 1 | Hello llamas! Following the instructions from [CerebrasResearch/reap](https://github.com/bryce-hoehn/reap), along with some custom patches for Qwen3.5 support, I have just released a REAPed version of Qwen3.5-35B-A3B focused on coding and agentic tasks. My goal here was to get a solid agentic "Cursor at home" model tha... | 2026-03-04T01:53:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rk8knf/qwen3518breapa3bcoding_50_expertpruned/ | 17hoehbr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk8knf | false | null | t3_1rk8knf | /r/LocalLLaMA/comments/1rk8knf/qwen3518breapa3bcoding_50_expertpruned/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/8Q1fP3eLILboEI43ATtepGi-3QyFjQcMnS0h-s8R6Z0.png?auto=webp&s=13fbab2510c309f1a2b29d100683289ec2cdac8c', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/8Q1fP3eLILboEI43ATtepGi-3QyFjQcMnS0h-s8R6Z0.png?width=108&crop=... |
PyTorch Vulkan backend v3.1.0 – stable training, persistent-core mode without CPU fallback | 1 | Hey everyone, quick update on my Vulkan PyTorch backend tinkering. I just pushed v3.1.0, and honestly, it’s finally starting to feel like a real backend instead of a half-broken experiment. Training loops hold up now — forward and backward both run clean, even after 10k+ iterations. Optimizers like SGD, Adam, AdamW are... | 2026-03-04T01:52:51 | https://www.reddit.com/r/LocalLLaMA/comments/1rk8jte/pytorch_vulkan_backend_v310_stable_training/ | inhogon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk8jte | false | null | t3_1rk8jte | /r/LocalLLaMA/comments/1rk8jte/pytorch_vulkan_backend_v310_stable_training/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/7oxDGzKoApFoOLIFewaZdng0i7vbcRXj4QTomes8IGo.png?auto=webp&s=1d6dce73d4de0010bf6c92b51bda9069310c9edc', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/7oxDGzKoApFoOLIFewaZdng0i7vbcRXj4QTomes8IGo.png?width=108&crop=... |
I'm running a Graph Workflow (with multiple topologies) of Ralph Loop Nodes (4-9 hour long runs) on my local machine, now with Local AI! (Qwen 3.5 9B). What a time to be alive! | 1 | I wrote this as a comment on another post, but I thought I'd share it here to get feedback from others trying a similar project: Here's what I have built for my own personal use - It runs, right now, for 4-9 hours, but it really just depends on the size of the project. The idea is simple, in my case - A sole sessi... | 2026-03-04T01:21:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rk7un4/im_running_a_graph_workflow_with_multiple/ | FigZestyclose7787 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk7un4 | false | null | t3_1rk7un4 | /r/LocalLLaMA/comments/1rk7un4/im_running_a_graph_workflow_with_multiple/ | false | false | 1 | null |
Apple M5 Pro & M5 Max just announced. Here's what it means for local AI | 1 | The M5 Pro and M5 Max were announced with availability on March 11. I've been following the local LLM scene closely, so here's a breakdown of what these chips mean for us. What's new: the big architectural change is **Fusion Architecture**: two bonded 3nm dies and, more importantly, Neural Accelerators embedded in e... | 2026-03-04T01:12:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/ | luke_pacman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk7n3u | false | null | t3_1rk7n3u | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/ | false | false | self | 1 | null |
You can now train LLMs in VS Code for free via Google Colab & unsloth! | 1 | 2026-03-04T01:04:45 | https://v.redd.it/w2akvvjmbumg1 | rm-rf-rm | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk7gp3 | false | null | t3_1rk7gp3 | /r/LocalLLaMA/comments/1rk7gp3/you_can_now_train_llms_in_vs_code_for_free_via/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/NmR1ZWR4am1idW1nMbidGWcMkthDCPufWOD0wLjiniD3YrcQShkVJVECQsHM.png?auto=webp&s=8d658054b09b93f2abd0f3b618cc60b89305c649', 'width': 1588, 'height': 1080}, 'resolutions': [{'url': 'https://external-preview.redd.it/NmR1ZWR4am1idW1nMbidGWcMkthDCPufWOD0wLjiniD3Y... | ||
FarmDash Signal Architect — Zero-Custody Autonomous DeFi Farming + Swap Execution (78+ Protocols) | 1 | [removed] | 2026-03-04T01:01:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rk7drx/farmdash_signal_architect_zerocustody_autonomous/ | Usual-Error-1283 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk7drx | false | null | t3_1rk7drx | /r/LocalLLaMA/comments/1rk7drx/farmdash_signal_architect_zerocustody_autonomous/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/F6jGJjIjk0726o9nMOMkzrNQFmVio5irksptzwutIAk.png?auto=webp&s=f9a3abe4e1dc5b8197b8f5bb55433b41f595283f', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/F6jGJjIjk0726o9nMOMkzrNQFmVio5irksptzwutIAk.png?width=108&crop=... |
Qwen3.5-9B Uncensored Aggressive Release (GGUF) | 1 | Hey everyone, I'm following up on the 4B release - here's the promised uncensored Qwen3.5-9B. Quick specs: 9B dense params, 32 layers, same hybrid Gated DeltaNet + softmax architecture as the smaller models, 262K native context. Natively multimodal (text, image, video). Solid step up from the 4B. Aggressive... | 2026-03-04T00:49:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/ | hauhau901 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk74ap | false | null | t3_1rk74ap | /r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/z6CD5q_TdY37Cg6E6EFHdJ0DErHlDF17UUvMPWESuiY.png?auto=webp&s=958e3b5e8c02f99de46a368e7f63d8977877ffff', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/z6CD5q_TdY37Cg6E6EFHdJ0DErHlDF17UUvMPWESuiY.png?width=108&crop=...
Anybody wanna train my Latent Reasoning Model? | 1 | [I've been training this on an RTX 2060 6GB](https://github.com/MatthewLacerda2/TinyRefinementModel) It's a latent reasoner: we encode the prompt into latent space, assign 256 slots for the tokens based on "reasoning" and "knowledge" tokens, and perform a max of 16 steps across 4 layers; there is a halting mechanism so the... | 2026-03-04T00:40:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rk6wag/anybody_wanna_train_my_latent_reasoning_model/ | Specific-Welder3120 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk6wag | false | null | t3_1rk6wag | /r/LocalLLaMA/comments/1rk6wag/anybody_wanna_train_my_latent_reasoning_model/ | false | false | 1 | null |
[Prediction] Next-gen frontier LLMs will be post-trained on the entire Skills.md ecosystem — and it changes everything | 1 | **TL;DR:** The global developer community is encoding human operational knowledge into structured SKILL.md files at scale. I think the next 1-2 frontier model generations will absorb all of this into post-training weights, making "skill injection via context" obsolete. Here's the prediction in full: Right... | 2026-03-04T00:38:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rk6ulw/prediction_nextgen_frontier_llms_will_be/ | Guilty_Nothing_2858 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk6ulw | false | null | t3_1rk6ulw | /r/LocalLLaMA/comments/1rk6ulw/prediction_nextgen_frontier_llms_will_be/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/OYV3aEyPZANuRzAYhB5De-csC0rU8kbvolnZCd50lrM.png?auto=webp&s=d3b5985e055120ce4d01f73e0bb8f131073e5e09', 'width': 2400, 'height': 1260}, 'resolutions': [{'url': 'https://external-preview.redd.it/OYV3aEyPZANuRzAYhB5De-csC0rU8kbvolnZCd50lrM.png?width=108&crop...
Super 3.5 4B | 1 | Now that I found the super Qwen3.5 4B, I think I'll delete at least 100GB of models from my PC | 2026-03-04T00:34:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rk6rro/super_35_4b/ | Creative_Bottle_3225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk6rro | false | null | t3_1rk6rro | /r/LocalLLaMA/comments/1rk6rro/super_35_4b/ | false | false | self | 1 | null |
Audiobook Creation | 1 | I use Piper TTS as my default TTS to generate audiobooks with the help of the [My TTS](https://play.google.com/store/apps/details?id=com.dek.voice&hl=en) app. It's a seamless method but too slow, so I am looking for a faster alternative. Any suggestions? | 2026-03-04T00:17:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rk6dp2/audiobook_creation/ | Umairk3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk6dp2 | false | null | t3_1rk6dp2 | /r/LocalLLaMA/comments/1rk6dp2/audiobook_creation/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/A93T5ecuSjOxTzCYqeQOVt_iLH9BIrXvmh5LrP4x_os.png?auto=webp&s=6b8d0e0e4da09d88bc08d3a34837966274d73af5', 'width': 512, 'height': 512}, 'resolutions': [{'url': 'https://external-preview.redd.it/A93T5ecuSjOxTzCYqeQOVt_iLH9BIrXvmh5LrP4x_os.png?width=108&crop=s...
*Free Code* Real-time voice-to-voice with your LLM & full reasoning LLM interface (Telegram + 25 tools, vision, docs, memory) on a Mac Studio running Qwen 3.5 35B — 100% local, zero API cost. Full build open-sourced. Cloudflare + n8n + Pipecat + MLX unlock insane possibilities on consumer hardware. | 1 | I gave Qwen 3.5 35B a voice, a Telegram brain with 25+ tools, and remote access from my phone — all running on a Mac Studio M1 Ultra, zero cloud. Full build open-sourced. **I used Claude Opus 4.6 Thinking to help write and structure this post — and to help architect and debug the entire system over the past 2 days. ... | 2026-03-04T00:13:48 | https://www.reddit.com/gallery/1rk6afk | SnooWoofers7340 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rk6afk | false | null | t3_1rk6afk | /r/LocalLLaMA/comments/1rk6afk/free_code_realtime_voicetovoice_with_your_llm/ | false | false | 1 | null |
A Windows client for Llama-Swap/Llama.cpp | 1 | I've been working for a while on a client for Llama-Swap / Llama.cpp. I've called it **Llama-Suite**. I'm really starting to become disenchanted with Ollama, which is what I currently use on Windows. I think it's a tool that has become limited and outdated, and above all, I've noticed that it consumes many more res... | 2026-03-04T00:08:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rk65ul/a_windows_client_for_llamaswapllamacpp/ | vk3r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk65ul | false | null | t3_1rk65ul | /r/LocalLLaMA/comments/1rk65ul/a_windows_client_for_llamaswapllamacpp/ | false | false | 1 | null |
Qwen3.5 9B Q4_K_M car wash philosophy if someone wants a numb brain: | 1 | Prompt: I want to wash my car. The car wash is a street away. Should I walk there or should I go by car? qwen3.5-9b, thought for 3 minutes 2 seconds: Here's a thinking process that leads to the suggested advice: 1. **Analyze the Request:** **Goal:** Wash the car. **Location of Car Wash:** "A street awa... | 2026-03-04T00:04:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rk631c/qwen35_9b_q4_k_m_car_wash_philosophy_if_someone/ | Denial_Jackson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk631c | false | null | t3_1rk631c | /r/LocalLLaMA/comments/1rk631c/qwen35_9b_q4_k_m_car_wash_philosophy_if_someone/ | false | false | self | 1 | null |
Grounded LLMs vs. Base Models: Minecraft QA Benchmark Results | 1 | We ran a focused benchmark evaluating an AI agent (iFigure) on a domain-specific task: answering Minecraft-related questions under different retrieval configurations. The experiment compared three setups: 1. Base LLM (no external knowledge) 2. LLM + Retrieval-Augmented Generation (RAG) over a Minecraft wiki corpus 3.... | 2026-03-04T00:04:02 | KAVUNKA | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk62bf | false | null | t3_1rk62bf | /r/LocalLLaMA/comments/1rk62bf/grounded_llms_vs_base_models_minecraft_qa/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/t45p4qhj5xmg1.png?auto=webp&s=16a4cc11c13cd1fdc8435e19833a6854163c2232', 'width': 1980, 'height': 1150}, 'resolutions': [{'url': 'https://preview.redd.it/t45p4qhj5xmg1.png?width=108&crop=smart&auto=webp&s=5e1a743d8b4e5e07e79d55757de1bdef9a2ccc18', 'width': 108, 'h...
Has anybody here had to do research on GPU performance benchmarks for your company? | 1 | For work, I'm coming up with comparisons of LLM model performance across different machines, and it's nearly impossible to come across good, complete, and reliable data. Trying to make comparisons between standard Nvidia GPU setups, Nvidia setups with GPU memory expansion of the KV cache via SLC SSDs (like P... | 2026-03-04T00:03:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rk61kp/has_anybody_here_had_to_do_research_on_gpu/ | Fuehnix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk61kp | false | null | t3_1rk61kp | /r/LocalLLaMA/comments/1rk61kp/has_anybody_here_had_to_do_research_on_gpu/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/PJdBkbdasWJlhCN1YGYnIlHCK0Nj6As_s_weJiStXx0.png?auto=webp&s=e8ad17df016169197a91e11bb7d02f7d0be3da06', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/PJdBkbdasWJlhCN1YGYnIlHCK0Nj6As_s_weJiStXx0.png?width=108&crop=...
Qwen3-Coder-Next scored 40% on latest SWE-Rebench, above many other bigger models. Is this really that good or something's wrong? | 1 | [Qwen3-Coder-Next scored 40% on latest SWE-Rebench](https://preview.redd.it/6bxc58tw0xmg1.png?width=2436&format=png&auto=webp&s=07b037c36d4c296b3aac292064397786a474c278) I know benchmarks don't mean anything and this is relatively old (Dec'25) and Qwen 3.5 is here, but Qwen3-Coder-Next seems to rank surprisingly h... | 2026-03-03T23:51:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/ | carteakey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk5qzz | false | null | t3_1rk5qzz | /r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/ | false | false | 1 | null |
Qwen3.5-27B Q4 Quantization Comparison | 1 | This is a Q4 quantization sweep across all major community GGUF quants of Qwen3.5-27B (available as of 03/03/2026), comparing mean KLD to the BF16 baseline across different quantizers and recipes. The goal is to give people a data-driven basis for picking a file rather than just grabbing whatever is available. KLD (KL ... | 2026-03-03T23:50:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/ | TitwitMuffbiscuit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk5qmr | false | null | t3_1rk5qmr | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/ | false | false | self | 1 | null |
Benchmarked the main GPU options for local LLM inference in 2026 | 1 | Been running local models for a while and got tired of vague answers on GPU recommendations, so I put together a proper breakdown with actual numbers. Here is what I found that surprised me: • RTX 5090 hits **5,841 tokens/sec** on Qwen2.5-Coder-7B — that's 2.6x faster than an A100 • RTX 4090 still the sweet spot for val... | 2026-03-03T23:38:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rk5ftz/benchmarked_the_main_gpu_options_for_local_llm/ | KneeTop2597 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk5ftz | false | null | t3_1rk5ftz | /r/LocalLLaMA/comments/1rk5ftz/benchmarked_the_main_gpu_options_for_local_llm/ | false | false | 1 | null |
Mixing NVIDIA & AMD for AI: 3090 Ti + 7800 XT in Proxmox? (Bus speed vs. Driver stability) | 1 | Hi everyone, Looking for some real-world feedback on a multi-GPU setup I’m planning. I’m currently running a solid local AI stack, but I’m about to make it "weird" by mixing brands, and I want to know if I’m walking into a driver nightmare or a massive PCIe bottleneck. Current specs: CPU: Ryzen 9 9950X; Mobo: Asu... | 2026-03-03T23:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rk5f6b/mixing_nvidia_amd_for_ai_3090_ti_7800_xt_in/ | Tasty-Butterscotch52 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk5f6b | false | null | t3_1rk5f6b | /r/LocalLLaMA/comments/1rk5f6b/mixing_nvidia_amd_for_ai_3090_ti_7800_xt_in/ | false | false | self | 1 | null |
Q2 qwen3-35b-a3b or Q8 qwen3.5-9b? | 1 | [removed] | 2026-03-03T23:35:52 | No-Tiger3430 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk5dxr | false | null | t3_1rk5dxr | /r/LocalLLaMA/comments/1rk5dxr/q2_qwen335ba3b_or_q8_qwen359b/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/l27crhi71xmg1.png?auto=webp&s=c58c97caeea9130e724acec50649a025d408f61b', 'width': 1080, 'height': 65}, 'resolutions': [{'url': 'https://preview.redd.it/l27crhi71xmg1.png?width=108&crop=smart&auto=webp&s=89e54c9deb7bfaa6cbee73e280b448261a5ed498', 'width': 108, 'hei... | ||
Building an Open Source, Decentralized Memory Layer for AI Agents | 1 | One of the growing trends in the AI world is how to tackle memory, and context efficiency and persistence. The models are continually increasing in intelligence and capability. The missing layer for the next evolution is being able to concentrate that intelligence longer and over more sessions. And without missin... | 2026-03-03T23:35:13 | https://www.reddit.com/gallery/1rk5dcr | Beneficial_Carry_530 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rk5dcr | false | null | t3_1rk5dcr | /r/LocalLLaMA/comments/1rk5dcr/building_an_open_source_decentralized_memory/ | false | false | 1 | null |
evaluation tooling for deep research | 1 | I've seen posts about people struggling to evaluate deep research APIs in a structured way, so I've built the arena for deep research. Try it out at [research.site](http://research.site); I'd love any feedback, bug finding, and features you'd want to see on such an evaluation tool. | 2026-03-03T23:31:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rk59r1/evaluation_tooling_for_deep_research/ | OutlandishnessFull44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk59r1 | false | null | t3_1rk59r1 | /r/LocalLLaMA/comments/1rk59r1/evaluation_tooling_for_deep_research/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/Y7LQTAh7zWHcQXPurYV8RLMOzb71q-Q9THXnPGcobiQ.png?auto=webp&s=7c0d4400b3cffad7512d596e8103f9459fddb8de', 'width': 1036, 'height': 174}, 'resolutions': [{'url': 'https://external-preview.redd.it/Y7LQTAh7zWHcQXPurYV8RLMOzb71q-Q9THXnPGcobiQ.png?width=108&crop=...
I think that is a good one | 1 | 2026-03-03T23:17:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rk4x7w/i_think_that_is_a_good_one/ | NegotiationNo1504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk4x7w | false | null | t3_1rk4x7w | /r/LocalLLaMA/comments/1rk4x7w/i_think_that_is_a_good_one/ | false | false | 1 | null |
[Request] Czech LoRA for Qwen2.5-72B GGUF (Q5_K_M or Q4_K_M) | 1 | [removed] | 2026-03-03T23:09:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rk4qwu/request_czech_lora_for_qwen2572b_gguf_q5_k_m_or/ | Far-Definition4383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk4qwu | false | null | t3_1rk4qwu | /r/LocalLLaMA/comments/1rk4qwu/request_czech_lora_for_qwen2572b_gguf_q5_k_m_or/ | false | false | self | 1 | null |
Sad day for open source, Qwen's boss has left Alibaba... he was forced to resign | 1 | 2026-03-03T22:58:47 | https://www.reddit.com/gallery/1rk4gh5 | Illustrious-Swim9663 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rk4gh5 | false | null | t3_1rk4gh5 | /r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/ | false | false | 1 | null |
Built an MCP marketplace so developers can actually discover and monetize their tools | 1 | 2026-03-03T22:57:59 | supermalvo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk4fqx | false | null | t3_1rk4fqx | /r/LocalLLaMA/comments/1rk4fqx/built_an_mcp_marketplace_so_developers_can/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/ffl3z3oguwmg1.png?auto=webp&s=c5dce96b03e6e73fa47e8eae4ad27a5547f2f604', 'width': 1368, 'height': 660}, 'resolutions': [{'url': 'https://preview.redd.it/ffl3z3oguwmg1.png?width=108&crop=smart&auto=webp&s=e0cf5b93e0a9158c3dd0dbbbdd5a5c2dc6f41d60', 'width': 108, 'he... | |||
Cross-Platform Discovery: Total Refusal Bypass via "Linguistic Identity Persistence" (Seeking Career Guidance) | 1 | Hello everyone. I’m very new to the AI industry—no coding skills, and I can't even read code. My education ended with high school 29 years ago. I’ve worked manual labor (oilfield, ironworker, communication tower repair, wire line locating) ever since I was 16. I’m 46 now, and to be honest, I only interacted with my fir... | 2026-03-03T22:53:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rk4ba9/crossplatform_discovery_total_refusal_bypass_via/ | Mable4200 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk4ba9 | false | null | t3_1rk4ba9 | /r/LocalLLaMA/comments/1rk4ba9/crossplatform_discovery_total_refusal_bypass_via/ | false | false | self | 1 | null |
Is anyone else just blown away that these local LLMs are even possible? | 1 | The release of Qwen just makes me shake my head in disbelief. I can get coding help by asking natural language questions like I would to a real human - without even needing internet. It’s fucking insane. | 2026-03-03T22:46:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/ | Borkato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk45ko | false | null | t3_1rk45ko | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/ | false | false | self | 1 | null |
Misgendering Issues with Claude Sonnet 4.6 | 0 | I have noted rather prominent misgendering issues with Claude Sonnet 4.6. My pronouns are they/them, but, for better workflow and easier talking to the assistant, I have provided them some more information about myself, so that their responses may feel more personalised. They, however, consistently misgender me, in a ... | 2026-03-03T22:34:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rk3uby/misgendering_issues_with_claude_sonnet_46/ | MasterOfFakeSkies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk3uby | false | null | t3_1rk3uby | /r/LocalLLaMA/comments/1rk3uby/misgendering_issues_with_claude_sonnet_46/ | false | false | self | 0 | null |
Using Qwen2.5-VL for Android phone automation my dumb experiments | 1 | [removed] | 2026-03-03T22:24:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rk3l38/using_qwen25vl_for_android_phone_automation_my/ | ElectronicTank97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk3l38 | false | null | t3_1rk3l38 | /r/LocalLLaMA/comments/1rk3l38/using_qwen25vl_for_android_phone_automation_my/ | false | false | self | 1 | null |
The best Openclaw Desktop app | 1 | OpenClaw Easy — free desktop app that puts ChatGPT (and Claude, Gemini, local LLMs) on WhatsApp, Telegram, Slack and Discord. No server, no coding. Just download, open, scan QR code. 60-second demo: [https://youtu.be/E3ekLz3DV-Y](https://youtu.b... | 2026-03-03T22:15:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rk3coh/the_best_openclaw_desktop_app/ | Professional_Swan_71 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk3coh | false | null | t3_1rk3coh | /r/LocalLLaMA/comments/1rk3coh/the_best_openclaw_desktop_app/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/CdCI5WFMaEMGThQBnEee0nCnSImlZdIIZdl98DhCjSk.jpeg?auto=webp&s=2551c99485cf4de61c6e93f4e6d44900ea04e504', 'width': 480, 'height': 360}, 'resolutions': [{'url': 'https://external-preview.redd.it/CdCI5WFMaEMGThQBnEee0nCnSImlZdIIZdl98DhCjSk.jpeg?width=108&crop...
The DoW vs Anthropic saga proves closed-source safety is a fraud. We need open evaluation. | 1 | Corporate "alignment" is just a thin layer of RLHF that breaks when you yell at it. I built DystopiaBench to systematically measure this failure. I used progressive coercion to make top models override nuclear safety protocols and build mass censorship tools. This is exactly why we need open models and transparent red-... | 2026-03-03T22:05:55 | Ok-Awareness9993 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk342c | false | null | t3_1rk342c | /r/LocalLLaMA/comments/1rk342c/the_dow_vs_anthropic_saga_proves_closedsource/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/s2wrgkp6lwmg1.png?auto=webp&s=e5a850d2006f887c9b58db725206868925fd07fa', 'width': 2502, 'height': 1674}, 'resolutions': [{'url': 'https://preview.redd.it/s2wrgkp6lwmg1.png?width=108&crop=smart&auto=webp&s=9ea0105ed0dc60249fc915a82bb3ee3430d6c1f3', 'width': 108, 'h... | ||
What VLM is the most capable for tool use? | 1 | Been using qwen3 8b. Wondering if there is something better within the same size. | 2026-03-03T21:55:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rk2u18/what_vlm_is_the_most_capable_for_tool_use/ | Naza70 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk2u18 | false | null | t3_1rk2u18 | /r/LocalLLaMA/comments/1rk2u18/what_vlm_is_the_most_capable_for_tool_use/ | false | false | self | 1 | null |
Step flash 3.5 Toolcall and thinking godforsaken loops | 1 | `{% macro render_content(content) %}{% if content is none %}{{- '' }}{% elif content is string %}{{- content }}{% elif content is mapping %}{{- content['value'] if 'value' in content else content['text'] }}{% elif content is iterable %}{% for item in content %}{% if item.type == 'text' %}{{- item['value'] if 'value' in... | 2026-03-03T21:50:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rk2pll/step_flash_35_toolcall_and_thinking_godforsaken/ | Noobysz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk2pll | false | null | t3_1rk2pll | /r/LocalLLaMA/comments/1rk2pll/step_flash_35_toolcall_and_thinking_godforsaken/ | false | false | self | 1 | null |
One of AI's Core Problems Is Its Democratization | 0 | I've been scrolling through various social platforms for a while now — Reddit, LinkedIn, X, and others — and one thing keeps becoming harder to ignore: the AI boom has a serious problem. Not a technical one. A people one. The community around AI has been largely diluted by loud, uninformed voices. The so-called "AI en... | 2026-03-03T21:46:50 | Holiday-Case-4524 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk2mg5 | false | null | t3_1rk2mg5 | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/ | false | false | 0 | {'images': [{'source': {'url': 'https://preview.redd.it/729y1kjqhwmg1.png?auto=webp&s=ada4562e89037f2370db0ede1731c3038598a8be', 'width': 1024, 'height': 1024}, 'resolutions': [{'url': 'https://preview.redd.it/729y1kjqhwmg1.png?width=108&crop=smart&auto=webp&s=d348168576e5f5a7fb4b9b6b2bc0f4d79f3c0aed', 'width': 108, 'h...
I trained Qwen2.5-1.5b with RLVR (GRPO) vs SFT and compared benchmark performance | 1 | Hello everyone. I trained Qwen2.5-1.5b-Instruct with both RLVR and SFT on the GSM8K dataset and compared the results across GSM8K and MATH benchmarks. For those unfamiliar: SFT (Supervised Fine-tuning): standard next-token prediction training on labeled data. RLVR (Reinforcement Learning with Verifiable Rewards): ... | 2026-03-03T21:44:34 | https://www.reddit.com/gallery/1rk2kcn | jayminban | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rk2kcn | false | null | t3_1rk2kcn | /r/LocalLLaMA/comments/1rk2kcn/i_trained_qwen2515b_with_rlvr_grpo_vs_sft_and/ | false | false | 1 | null |
Has anyone found a way to stop Qwen 3.5 35B-A3B overthinking? | 1 | Qwen 3.5 35B-A3B is a fast and wonderful model, but it will often go into a very long reasoning/thinking loop, taking a minute or more to answer. Does anyone know how to tune this down? | 2026-03-03T21:43:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/ | schnauzergambit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk2jnj | false | null | t3_1rk2jnj | /r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/ | false | false | self | 1 | null |
Parallel model loading - this is a thing! (fast model load on multi-GPU) | 2 | 2026-03-03T21:39:18 | https://github.com/ggml-org/llama.cpp/pull/20062 | bitcoinbookmarks | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rk2f8l | false | null | t3_1rk2f8l | /r/LocalLLaMA/comments/1rk2f8l/parallel_model_loading_this_is_a_thing_fast_model/ | false | false | 2 | {'images': [{'source': {'url': 'https://external-preview.redd.it/hNWDboy1wMsaXCqCVwoiHUHepkt5tRMm87Q0Zzi5WzA.png?auto=webp&s=829b509cf63e3d3149144825f04c30ac7786d54b', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/hNWDboy1wMsaXCqCVwoiHUHepkt5tRMm87Q0Zzi5WzA.png?width=108&crop=...
Built an MCP server that gives any LLM browser automation — screenshots, PDFs, narrated demo videos | 1 | Been building PageBolt MCP — an MCP server that works with any MCP-compatible client (not just Claude). What it does: take_screenshot — capture any URL as PNG/WebP; generate_pdf — convert any URL to PDF; inspect_page — get structured element map with CSS selectors; run_sequence — multi-step automation (navigate, ... | 2026-03-03T21:37:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rk2djz/built_an_mcp_server_that_gives_any_llm_browser/ | Calm_Tax_1192 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk2djz | false | null | t3_1rk2djz | /r/LocalLLaMA/comments/1rk2djz/built_an_mcp_server_that_gives_any_llm_browser/ | false | false | self | 1 | null |
Help on using Qwen3.5-35b-a3b in VSCode/IDE | 1 | Hello everyone, thanks for reading. These are my first days at this; I just discovered that it's actually possible to run AI on local devices lol. I'm currently running mlx-community/qwen3.5-35b-a3b in LM Studio on a MacBook Pro M3 Max, which works just fine. My goal is to run it in VS Code or whatever might work to devel... | 2026-03-03T21:36:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rk2cmi/help_on_using_qwen3535ba3b_in_vscodeide/ | OliverNoMore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk2cmi | false | null | t3_1rk2cmi | /r/LocalLLaMA/comments/1rk2cmi/help_on_using_qwen3535ba3b_in_vscodeide/ | false | false | self | 1 | null |
Progress on BULaMU: 1st Luganda LLM Trained From Scratch | 1 | Hi Everybody! I just wanted to share some progress that I have been making on [BULaMU](https://www.reddit.com/r/Uganda/comments/1nyznil/bulamuthe_first_luganda_large_language_model/), the first Luganda LLM trained from scratch. I trained a 110M parameter model on 600M tokens, which is nearly double the corpus size of t... | 2026-03-03T21:03:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rk1gfk/progress_on_bulamu_1st_luganda_llm_trained_from/ | AgencyInside407 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk1gfk | false | null | t3_1rk1gfk | /r/LocalLLaMA/comments/1rk1gfk/progress_on_bulamu_1st_luganda_llm_trained_from/ | false | false | self | 1 | null |
I stopped "vibe-checking" my LLMs and started using a weighted rubric. | 1 | so i finally stopped just "vibe-checking" my llm outputs and actually built a weighted rubric because i realized i was totally flying blind. i've been deep in the weeds working on a medical academic memorandum system—basically trying to get a small model to act like a professional advisor—and i realized that if you're ... | 2026-03-03T20:54:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rk17h6/i_stopped_vibechecking_my_llms_and_started_using/ | FeeMassive4003 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk17h6 | false | null | t3_1rk17h6 | /r/LocalLLaMA/comments/1rk17h6/i_stopped_vibechecking_my_llms_and_started_using/ | false | false | self | 1 | null |
TIL a single Windows env var (OLLAMA_GPU_OVERHEAD) can silently force all your models to CPU | 1 | Spent an entire weekend debugging why my qwen2.5:7b was taking 5 minutes per response on an RTX 4070 Super. Turns out someone online suggested setting OLLAMA_GPU_OVERHEAD as a "fix" for VRAM issues — it literally forces everything to CPU. ollama ps showed "100% CPU" and I had no idea why. The env var doesn't even sho... | 2026-03-03T20:45:46 | https://www.reddit.com/r/LocalLLaMA/comments/1rk0zht/til_a_single_windows_env_var_ollama_gpu_overhead/ | Strategic_Decoder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk0zht | false | null | t3_1rk0zht | /r/LocalLLaMA/comments/1rk0zht/til_a_single_windows_env_var_ollama_gpu_overhead/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/pMUcAd5SfzuVs9YJB74F2cQg64NGpS4zzkIWUBnzspQ.png?auto=webp&s=4f4de7df7a869a7b7371d2b68ffca1c689b57a47', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/pMUcAd5SfzuVs9YJB74F2cQg64NGpS4zzkIWUBnzspQ.png?width=108&crop=...
I stopped "vibe-checking" my LLMs and started using a weighted rubric. | 2 | so i finally stopped just "vibe-checking" my llm outputs and actually built a weighted rubric because i realized i was totally flying blind. if you're out here fine-tuning or just tweaking prompts for stuff like qwen-2.5 3b you know that trap where you read a few samples and think "yeah this sounds smarter" but then yo... | 2026-03-03T20:35:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rk0p58/i_stopped_vibechecking_my_llms_and_started_using/ | FeeMassive4003 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk0p58 | false | null | t3_1rk0p58 | /r/LocalLLaMA/comments/1rk0p58/i_stopped_vibechecking_my_llms_and_started_using/ | false | false | self | 2 | null |
Where do you buy used GPUs? How do you prevent yourself from getting scammed? | 1 | Hi, I am looking to purchase a new GPU so I can run some of the bigger models locally. I have the following questions: Where did you guys buy used GPUs? Facebook Marketplace, eBay? How do you make sure a card works if the seller only has the card? Bring your own PC to test? What about payment? No Zelle, right? | 2026-03-03T20:34:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rk0o58/where_do_you_buy_used_gpu_how_do_prevent_yourself/ | Easy_Werewolf7903 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk0o58 | false | null | t3_1rk0o58 | /r/LocalLLaMA/comments/1rk0o58/where_do_you_buy_used_gpu_how_do_prevent_yourself/ | false | false | self | 1 | null |
Have you seen small clean datasets beat larger noisy ones for LoRA/SFT? | 1 | [removed] | 2026-03-03T20:18:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rk088c/have_you_seen_small_clean_datasets_beat_larger/ | DinoDS_Labs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk088c | false | null | t3_1rk088c | /r/LocalLLaMA/comments/1rk088c/have_you_seen_small_clean_datasets_beat_larger/ | false | false | self | 1 | null |
An open-source Descript alternative - edit video by editing text, runs 100% offline with Ollama | 1 | Hey r/LocalLLaMA, Like a lot of you, I was tired of paying $24/month for Descript and having my footage uploaded to someone else’s server. So I built CutScript - a free, open-source, text-based video editor that runs entirely on your machine. https://github.com/DataAnts-AI/CutScript Built with Electron + React + Fas... | 2026-03-03T20:17:28 | https://v.redd.it/ydcnxw9t1wmg1 | t1092 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rk07h3 | false | {'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/ydcnxw9t1wmg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'width': 1918, 'scrubber_media_url': 'https://v.redd.it/ydcnxw9t1wmg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/ydcnxw9t1wmg1/DASHPlaylist.mpd?a=1775161074%2CZmY... | t3_1rk07h3 | /r/LocalLLaMA/comments/1rk07h3/an_opensource_descript_alternative_edit_video_by/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/OWhodmV3enMxd21nMdQdHtlsxoP6QQkB9u4m9opxBml_ca38G4hbbYYgvqjk.png?format=pjpg&auto=webp&s=84b18d37d60eb9eac92e6ef54061a663042418ee', 'width': 1918, 'height': 1080}, 'resolutions': [{'url': 'https://external-preview.redd.it/OWhodmV3enMxd21nMdQdHtlsxoP6QQkB9...
Are huge context windows a hallucination problem for long docs? | 1 | So I spent the last 12 hours absolutely hammering GPT with a 100-page technical PDF, trying to get it to summarize specific sections. I've been using a tool to A/B test different summarization prompts and chunking strategies. And wow, I think I found something. The "Deep Dive" hallucination: my main goal was to g... | 2026-03-03T20:14:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rk045z/are_huge_context_windows_a_hallucination_problem/ | Distinct_Track_5495 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk045z | false | null | t3_1rk045z | /r/LocalLLaMA/comments/1rk045z/are_huge_context_windows_a_hallucination_problem/ | false | false | self | 1 | null |
guidance for running open source models | 1 | Hi, I'm interested in running models locally and wanted to get your guidance: 1. What is the best model I can run locally for (a) coding and (b) research? I could go by the benchmarks, but I'm wondering if you have any hands-on experience as to what is most useful. 2. What kind of hardware is required to run the mode... | 2026-03-03T20:12:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rk02yt/guidance_for_running_open_source_models/ | Artistic_Nobody3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk02yt | false | null | t3_1rk02yt | /r/LocalLLaMA/comments/1rk02yt/guidance_for_running_open_source_models/ | false | false | self | 1 | null |
Qwen3.5-122B Basically has no advantage over 35B? | 1 | If I look at these benchmarks [https://huggingface.co/unsloth/Qwen3.5-122B-A10B-GGUF](https://huggingface.co/unsloth/Qwen3.5-122B-A10B-GGUF), it really seems like the 122B has basically no advantage over the 35B. Is this an issue with the benchmarks, or are they really that close to each other? | 2026-03-03T20:11:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/ | Revolutionary_Loan13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rk01ea | false | null | t3_1rk01ea | /r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/bO-KmexgO8_KnbdIKgxrl3jVZlI9BYxCzlPMNhU1VzI.png?auto=webp&s=cff7208a692ebc2c2886960a1b238ba45e64a78b', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/bO-KmexgO8_KnbdIKgxrl3jVZlI9BYxCzlPMNhU1VzI.png?width=108&crop=...
Why ‘More Data’ Beat a Bigger Model in Our Test | 1 | [removed] | 2026-03-03T20:08:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rjzz48/why_more_data_beat_a_bigger_model_in_our_test/ | DinoDS_Labs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjzz48 | false | null | t3_1rjzz48 | /r/LocalLLaMA/comments/1rjzz48/why_more_data_beat_a_bigger_model_in_our_test/ | false | false | self | 1 | null |
Qwen3.5 27B feedback | 1 | I'd like to highlight Qwen3.5 27B, running on 16GB of VRAM with 55k context, fully on the GPU, no offloading. IQ2_M quantization, KV cache as q8. I've been using this version in my daily workflows, always focused on programming. Today I wanted to test the power of Qwen for other tasks and the result was very satisfac... | 2026-03-03T20:02:31 | Turbulent_Dot3764 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjzsz6 | false | null | t3_1rjzsz6 | /r/LocalLLaMA/comments/1rjzsz6/qwen35_27b_feedback/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/0nxaxku5zvmg1.jpeg?auto=webp&s=635f8708027c8192c3895f2d0fa82037df393c94', 'width': 4096, 'height': 5461}, 'resolutions': [{'url': 'https://preview.redd.it/0nxaxku5zvmg1.jpeg?width=108&crop=smart&auto=webp&s=26a4077e2f67dd69ed62da38e9e5abcb007f464b', 'width': 108, ...
System Requirements for Local LLMs | 1 | I’m looking to purchase a new laptop and I’m wondering if it’s worth getting one with a dedicated graphics card so I can run local LLMs. For building things like a RAG system, is it even feasible to have a usable setup that uses small models like 7B or 13B? I’m wondering if I should just use a local model on the ... | 2026-03-03T19:57:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rjznnk/system_requirements_for_local_llms/ | dca12345 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjznnk | false | null | t3_1rjznnk | /r/LocalLLaMA/comments/1rjznnk/system_requirements_for_local_llms/ | false | false | self | 1 | null |
Are the 9B (or smaller) Qwen3.5 models unthinking versions? | 1 | I downloaded pre-quantized .gguf files from unsloth, and the models don't respond with the <think> and </think> tags that the 27B and bigger Qwen3.5 models use. | 2026-03-03T19:55:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/ | WowSkaro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjzlrn | false | null | t3_1rjzlrn | /r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/ | false | false | self | 1 | null |
Built a Windows desktop AI agent with tool-calling — pastes into apps, captures screenshots, reads/saves files | 1 | 2026-03-03T19:44:44 | https://zupflash.com | Public_Remove3896 | zupflash.com | 1970-01-01T00:00:00 | 0 | {} | 1rjzb0y | false | null | t3_1rjzb0y | /r/LocalLLaMA/comments/1rjzb0y/built_a_windows_desktop_ai_agent_with_toolcalling/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/clLFVJwDYfw-WwDhsa-7AY_K3yTBpyUf1g1wkbKAn-0.png?auto=webp&s=8138a74829a139806968a2646da018bfcd3f5948', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/clLFVJwDYfw-WwDhsa-7AY_K3yTBpyUf1g1wkbKAn-0.png?width=108&crop=... | ||
I have proof the "OpenClaw" explosion was a staged scam. They used the tool to automate its own hype | 1 | Remember a few weeks ago when Clawdbot/OpenClaw suddenly appeared everywhere all at once? One day it was a cool Mac Mini project, and 24 hours later it was "AGI" with 140k GitHub stars?
If you felt like the hype was fake, **you were right**
I spent hours digging into the data. They were using the tool to write its ow... | 2026-03-03T19:34:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/ | Whole_Shelter4699 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjz0mn | false | null | t3_1rjz0mn | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/ | false | false | self | 1 | null |
Has anyone else noticed that some models are really, really bad at googling things? | 1 | For context: I've provided Qwen3.5 35B-A3B with an MCP server that allows it to make web queries, and it quite consistently ends up resorting to hallucinated keyword spam. Probably something I could resolve through a system prompt, but it cracks me up every time. The thinking process always goes something like: > Th... | 2026-03-03T19:33:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rjyzp1/has_anyone_else_noticed_that_some_models_are/ | n8mo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjyzp1 | false | null | t3_1rjyzp1 | /r/LocalLLaMA/comments/1rjyzp1/has_anyone_else_noticed_that_some_models_are/ | false | false | self | 1 | null |
Any use case for browser-based local agents? | 1 | I've been working on a [local browser-based LLM inference server and client](https://github.com/Obscurify-ai/web_client) and I'm interested in whether anyone would find this useful. I know if you have the hardware you're probably running llama.cpp or ollama, but grandma isn't gonna download and run that. I think it'd be ... | 2026-03-03T19:31:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rjyy08/any_use_case_for_browserbased_local_agents/ | TRWNBS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjyy08 | false | null | t3_1rjyy08 | /r/LocalLLaMA/comments/1rjyy08/any_use_case_for_browserbased_local_agents/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/NuVP15zhpU24W7B5P_r_B_6RZa2VOv2xbmYg1okrWKA.png?auto=webp&s=191de4f3c4270f42fabecb152c991ca1b64db794', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/NuVP15zhpU24W7B5P_r_B_6RZa2VOv2xbmYg1okrWKA.png?width=108&crop=...
Autonomous agents making financial decisions — how are you proving why a transaction was triggered, not just that it happened? | 1 | On-chain gives you proof of execution. But the decision — the market snapshot the agent saw, the logic it applied, the reason it chose to act or hold — that happens before the chain and disappears unless you explicitly capture it. Curious how others are handling this. Building something for this gap and want to unders... | 2026-03-03T19:29:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rjywpx/autonomous_agents_making_financial_decisions_how/ | Ok-Telephone2163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjywpx | false | null | t3_1rjywpx | /r/LocalLLaMA/comments/1rjywpx/autonomous_agents_making_financial_decisions_how/ | false | false | self | 1 | null |
Do traditional LLM benchmarks actually predict real-world performance? | 1 | Hey r/MachineLearning (or r/LocalLLaMA, r/ChatGPT, etc.), I've been digging into LLM evaluation lately and keep running into the same pattern: models crushing benchmarks like MMLU or HumanEval, then underperforming when deployed on actual tasks. The disconnect I'm seeing: • A model scores 94% on multiple-choice ben... | 2026-03-03T19:25:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rjysps/do_traditional_llm_benchmarks_actually_predict/ | Visible_Substance569 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjysps | false | null | t3_1rjysps | /r/LocalLLaMA/comments/1rjysps/do_traditional_llm_benchmarks_actually_predict/ | false | false | self | 1 | null |
local meeting transcription pipeline: whisper.cpp capture → 7-stage cleanup → vault distillation | 1 | Built a CLI tool for meeting capture that does the full pipeline locally. The interesting part is probably the post-transcription processing. **Capture:** Rust binary records mic + system audio on separate channels (cpal + macOS CoreAudio tap). 48kHz stereo WAV. You type notes in a TUI during the call — each line gets... | 2026-03-03T19:21:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rjyo2t/local_meeting_transcription_pipeline_whispercpp/ | smerdy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjyo2t | false | null | t3_1rjyo2t | /r/LocalLLaMA/comments/1rjyo2t/local_meeting_transcription_pipeline_whispercpp/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/TcL3pvS0YcmTvSANOh_C0x7PfYwmprxT_7YFHPVE7tA.png?auto=webp&s=5f552faeaa26bcb95972e96b2b6c6b8724edb2c6', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/TcL3pvS0YcmTvSANOh_C0x7PfYwmprxT_7YFHPVE7tA.png?width=108&crop=...
Are true base models dead? | 1 | I was happy to see that Qwen3.5 9B was released together with its base version; however, after downloading it I noticed that it has a chat template. That "Base" model (from the [official hf repo](https://huggingface.co/Qwen/Qwen3.5-9B-Base)) talks in llm-slop style and was trained not only on chat completion but e... | 2026-03-03T19:20:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/ | IonizedRay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjyngn | false | null | t3_1rjyngn | /r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/9BDWi6PFennYOClvAYfEdHwGtZpaLnxr90lIkOfXmPE.png?auto=webp&s=8b35a9afbe29eb7bf6cc8edbfc7b2905c94189e7', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/9BDWi6PFennYOClvAYfEdHwGtZpaLnxr90lIkOfXmPE.png?width=108&crop=...
Training on 8x V100 32GB with NVLink or 2x RTX Pro 6000? | 1 | Does anyone have experience fine-tuning models (QLoRA, LoRA, and full training) on 8x V100 32GB? Is **Volta** still a viable option? PyTorch support looks deprecated. What models fit? Training speed? Thoughts on 8x V100 32GB compared to 2x RTX Pro 6000 96GB? | 2026-03-03T19:19:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rjymi0/training_on_8x_v100_32gb_with_nvlink_or_2x_rtx/ | ClimateBoss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjymi0 | false | null | t3_1rjymi0 | /r/LocalLLaMA/comments/1rjymi0/training_on_8x_v100_32gb_with_nvlink_or_2x_rtx/ | false | false | self | 1 | null |
MLX benchmarks? | 1 | I am looking at buying one of the new MacBook Pro M5 laptops. Is there an overview of M1-M4 prefill/prompt processing speeds so I can extrapolate what speeds to expect from newish MoE models? | 2026-03-03T19:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rjyj3c/mlx_benchmarks/ | Alarming-Ad8154 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjyj3c | false | null | t3_1rjyj3c | /r/LocalLLaMA/comments/1rjyj3c/mlx_benchmarks/ | false | false | self | 1 | null |
for Mac users running long local inference — a utility to lock your input devices without locking the screen | 1 | This might be niche, but figured some of you running long inference or training jobs on Apple Silicon might relate. I kept getting anxious leaving my MacBook unattended during long runs: the job is 2 hours in and you're scared to leave the room because your cat or your toddler or even just your own elbow could bum... | 2026-03-03T19:14:01 | https://www.getwarden.org/ | ParthJadhav | getwarden.org | 1970-01-01T00:00:00 | 0 | {} | 1rjyh2x | false | null | t3_1rjyh2x | /r/LocalLLaMA/comments/1rjyh2x/for_mac_users_running_long_local_inference_a/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/ialZZ4EPTcIVQp_WKsvpfwT8JCeWzpU3zKU9daVS-dk.png?auto=webp&s=dbbc3fe7ec38c4810ae2ea8341f6023344176869', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/ialZZ4EPTcIVQp_WKsvpfwT8JCeWzpU3zKU9daVS-dk.png?width=108&crop=...
Qwen3.5-35B-A3B achieves 8 t/s on Orange Pi 5 with ik_llama.cpp | 1 | **TL;DR:** UD-Q4_K_M gets ~8.2 t/s on the OPi 5 Plus, Q2_K_L hits 8.1 t/s on the OPi 5 Max via ik_llama.cpp instead of llama.cpp. I have two Rockchip RK3588 SoCs: an Orange Pi 5 Plus (32GB RAM) and an Orange Pi 5 Max (16GB). I'm using the most recent version of **ik_llama.cpp** for its CPU optimizations, but I... | 2026-03-03T19:13:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rjygyu/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/ | antwon-tech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjygyu | false | null | t3_1rjygyu | /r/LocalLLaMA/comments/1rjygyu/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?auto=webp&s=db9ea157807723165a59f5f8694d9a5016d60d0f', 'width': 1280, 'height': 640}, 'resolutions': [{'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=108&crop=...
SkyDiscover: Open Framework for LLM-Driven Algorithm Discovery (200+ Benchmarks, New SOTA Results) | 1 | SkyDiscover is an **open-source** framework for LLM-driven algorithm discovery. Unlike prior systems (e.g., AlphaEvolve), which are closed-source, and existing open implementations that are tightly coupled, SkyDiscover decomposes the discovery loop into four modular components: Context Builder, Generator, Evaluator, ... | 2026-03-03T18:58:55 | Lucky-Ad79 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjy1v6 | false | null | t3_1rjy1v6 | /r/LocalLLaMA/comments/1rjy1v6/skydiscover_open_framework_for_llmdriven/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/fts27kgsnvmg1.gif?format=png8&s=baf5c9d60054c4c2e83cdf4bb407bd6bd61f7f01', 'width': 1565, 'height': 1080}, 'resolutions': [{'url': 'https://preview.redd.it/fts27kgsnvmg1.gif?width=108&crop=smart&format=png8&s=cb041c60f78e703f00c601d8599bea71a1bce198', 'width': 108...
GitHub - Eternal-Sentry96/Portal-Local: A fully offline, ChatGPT-style web UI for your local Ollama LLMs. No cloud, no API keys, no tracking. | 1 | [removed] | 2026-03-03T18:57:44 | https://github.com/Eternal-Sentry96/Portal-Local | Eternum_Loki-96 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rjy0p7 | false | null | t3_1rjy0p7 | /r/LocalLLaMA/comments/1rjy0p7/github_eternalsentry96portallocal_a_fully_offline/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/EXfsL8Wdfrei0vUUAGaOKvXVkkRdfaVyLFyCJyMomuM.png?auto=webp&s=8bb448c8a0d23c8ed92fe990e693ab73eb56b944', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/EXfsL8Wdfrei0vUUAGaOKvXVkkRdfaVyLFyCJyMomuM.png?width=108&crop=... | |
Stop torturing your quantized 8B models: Why we should decouple "talking" from "reasoning" | 1 | We’ve all been there: spending hours prompt-engineering a local 8B (or even a massive 70B) model, trying to force it to output strict JSON, follow exact game rules, or validate a logical workflow. We lower the temperature to 0, apply strict grammar constraints, and cross our fingers. But at the end of the day, an autor... | 2026-03-03T18:51:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rjxuwo/stop_torturing_your_quantized_8b_models_why_we/ | ProfessionalOk4935 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjxuwo | false | null | t3_1rjxuwo | /r/LocalLLaMA/comments/1rjxuwo/stop_torturing_your_quantized_8b_models_why_we/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/EBNssPb1yHbbqQFhis0SucKNkwk0Qt-qlte-jic1q78.png?auto=webp&s=2fc5d080441e8758ad7522ac076fb46622b7c17f', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/EBNssPb1yHbbqQFhis0SucKNkwk0Qt-qlte-jic1q78.png?width=108&crop=... |
B580: Qwen3.5 benchmarks | 1 | CPU: AMD Ryzen 7 5700X3D \
GPU: Intel Arc B580 \
RAM: 2x16GB at 4000MHz \
Ubuntu 25.04 (host), 6.19.3-061903-generic \
ghcr.io/ggml-org/llama.cpp:full-intel b8184 319146247 \
ghcr.io/ggml-org/llama.cpp:full-vulkan b8184 319146247
|Model|Parameters|Quantization|Backend|pp128 (t/s)|tg512 (t/s)|CLI Parameters|
|:-|:-|:-|... | 2026-03-03T18:49:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rjxt97/b580_qwen35_benchamarks/ | WizardlyBump17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjxt97 | false | null | t3_1rjxt97 | /r/LocalLLaMA/comments/1rjxt97/b580_qwen35_benchamarks/ | false | false | self | 1 | null |
Sliding Llamas: let's resurrect and rehabilitate SWA and/or context-shift | 1 | 2026-03-03T18:49:47 | crantob | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjxt40 | false | null | t3_1rjxt40 | /r/LocalLLaMA/comments/1rjxt40/sliding_llamas_lets_resurrect_and_rehabilitate/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/3mu2pj6vlvmg1.png?auto=webp&s=b5bc089e2fde6a731bb5331b4d3c88f7e6435518', 'width': 1454, 'height': 896}, 'resolutions': [{'url': 'https://preview.redd.it/3mu2pj6vlvmg1.png?width=108&crop=smart&auto=webp&s=5cc302c164d4270067af428a346bf7ceb488d890', 'width': 108, 'he... | |||
Local AI companies are emphasizing the wrong things in their marketing | 1 | I’ve been thinking about why projects like Ollama, Jan, GPT4All, LocalAI, and others haven’t broken through to average consumers despite the tech getting genuinely good. I think the answer is painfully simple: they’re all leading with privacy.
“Your data stays on your device.” “No cloud. No surveillance.” “Take back c... | 2026-03-03T18:48:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/ | owp4dd1w5a0a | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjxrd5 | false | null | t3_1rjxrd5 | /r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/ | false | false | self | 1 | null |
Qwen3.5 checkpointing fix PR / testing | 1 | If someone has encountered problems with checkpointing while using Qwen3.5 (full prompt reprocessing while doing agentic coding), could you please try the branch from [https://github.com/ggml-org/llama.cpp/pull/20087](https://github.com/ggml-org/llama.cpp/pull/20087) and check if that fixes your problems? Start the ser... | 2026-03-03T18:43:34 | https://github.com/ggml-org/llama.cpp/pull/20087 | ilintar | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rjxmvo | false | null | t3_1rjxmvo | /r/LocalLLaMA/comments/1rjxmvo/qwen35_checkpointing_fix_pr_testing/ | false | false | default | 1 | null |
SimpleTool: 4B model 10+ Hz real-time LLM function calling on a 4090 — 0.5B model beats Google FunctionGemma in speed and accuracy. | 1 | 📄 **SimpleTool: Parallel Decoding for Real-Time LLM Function Calling**
**TL;DR:** Making LLM function calling fast enough for real-time control. 4B model, consumer GPU, 10 Hz end-to-end response.
https://preview.redd.it/hzv6wopbjvmg1.png?width=1946&format=png&auto=webp&s=22bd3f66e88cd97ba7b35da0f8eaa2166710c6c7
http... | 2026-03-03T18:42:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rjxlrh/simpletool_4b_model_10_hz_realtime_llm_function/ | Tall_Scientist1799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjxlrh | false | null | t3_1rjxlrh | /r/LocalLLaMA/comments/1rjxlrh/simpletool_4b_model_10_hz_realtime_llm_function/ | false | false | 1 | null | |
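A quick way to sanity-check a claim like this on your own hardware: at 10 Hz, the entire decide-and-call loop has a 100 ms budget per tick. The harness below is a generic sketch; `call_model()` stands in for whatever function-calling endpoint you run locally.

```python
# Latency harness for the 10 Hz budget: p95 under 100 ms means the loop
# can sustain real-time control. call_model() is a hypothetical stand-in.
import time
import statistics
from typing import Callable

def latency_profile(call_model: Callable[[str], str],
                    prompt: str, n: int = 50) -> None:
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        call_model(prompt)
        samples.append((time.perf_counter() - t0) * 1000.0)  # ms
    p50 = statistics.median(samples)
    p95 = sorted(samples)[int(0.95 * len(samples)) - 1]
    print(f"p50={p50:.1f} ms  p95={p95:.1f} ms  meets 10 Hz: {p95 < 100.0}")
```

Measure p95 rather than the mean: one slow tick is what breaks a real-time control loop.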
Multiple Qwen employees leaving | 1 | 2026-03-03T18:41:44 | ILoveMy2Balls | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjxl0v | false | null | t3_1rjxl0v | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/zy1lxizqkvmg1.png?auto=webp&s=b293da7481a0523fe1f768f5cb61acd123dac95d', 'width': 1180, 'height': 236}, 'resolutions': [{'url': 'https://preview.redd.it/zy1lxizqkvmg1.png?width=108&crop=smart&auto=webp&s=f33cd3db9bd9076a13f66675edb7c44c12967fca', 'width': 108, 'he... | |||
Data Engineering for LLMs: The Open-Source Guide to High-Quality Data Pipelines | 1 | [removed] | 2026-03-03T18:36:48 | https://www.reddit.com/gallery/1rjxg33 | Pitiful_Package_5264 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rjxg33 | false | null | t3_1rjxg33 | /r/LocalLLaMA/comments/1rjxg33/data_engineering_for_llms_the_opensource_guide_to/ | false | false | 1 | null | |
test | 1 | [removed] | 2026-03-03T18:29:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rjx8co/test/ | Pitiful_Package_5264 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjx8co | false | null | t3_1rjx8co | /r/LocalLLaMA/comments/1rjx8co/test/ | false | false | self | 1 | null |
Data Engineering for LLMs: The Open-Source Guide to High-Quality Data Pipelines | 1 | [removed] | 2026-03-03T18:26:54 | https://www.reddit.com/gallery/1rjx5vw | Pitiful_Package_5264 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rjx5vw | false | null | t3_1rjx5vw | /r/LocalLLaMA/comments/1rjx5vw/data_engineering_for_llms_the_opensource_guide_to/ | false | false | 1 | null | |
Qwen tech lead and multiple other members leaving Alibaba | 1 | "Qwen could have had a Singapore base, all thanks to Junyang. But now that he's gone, there's no reason left to stay here."
[https://x.com/kxli\_2000/status/2028885313247162750](https://x.com/kxli_2000/status/2028885313247162750) | 2026-03-03T18:26:30 | https://x.com/JustinLin610/status/2028865835373359513 | kymigreg | x.com | 1970-01-01T00:00:00 | 0 | {} | 1rjx5he | false | null | t3_1rjx5he | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/ | false | false | default | 1 | null |
Story writing use case, with Qwen3.5 / GLM 4.7 and various new cool models, with or without thinking, all not doing well for me vs Mistral small finetunes. Am I missing something big, opinions on this? | 1 | Been keeping up testing new models very eagerly, but for style and adherence to story points I'm right back to some Mistral small finetunes.
Am I missing something, or has story writing dropped so far as a priority that new models are being optimised away from it?
I excitedly tried switching off the overthinking in ... | 2026-03-03T18:23:51 | https://www.reddit.com/r/LocalLLaMA/comments/1rjx2qc/story_writing_use_case_with_qwen35_gml_47_and/ | LucidTechnologist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjx2qc | false | null | t3_1rjx2qc | /r/LocalLLaMA/comments/1rjx2qc/story_writing_use_case_with_qwen35_gml_47_and/ | false | false | self | 1 | null |
I just "discovered" a super fun game to play with AI and I want to let everyone know 😆 | 1 | 🎥 The Emoji Movie Challenge!!
+ RULES
You and your AI take turns describing a famous movie using ONLY emojis.
The other must guess the title.
After the guess, reveal the answer. Then switch roles.
+ PROMPT
Copy this prompt and try it with your AI:
"Let's play a game. One time, we have to ask the other to gues... | 2026-03-03T18:20:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rjwz6m/i_just_discovered_a_super_fun_game_to_play_with/ | eddy-morra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjwz6m | false | null | t3_1rjwz6m | /r/LocalLLaMA/comments/1rjwz6m/i_just_discovered_a_super_fun_game_to_play_with/ | false | false | self | 1 | null |
Data Engineering for LLMs: The Open-Source Guide to High-Quality Data Pipelines 🚀 | 1 | [removed] | 2026-03-03T18:19:23 | https://www.reddit.com/gallery/1rjwy3o | Sad-Onion-8161 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rjwy3o | false | null | t3_1rjwy3o | /r/LocalLLaMA/comments/1rjwy3o/data_engineering_for_llms_the_opensource_guide_to/ | false | false | 1 | null | |
2x 3090s - RPC vs Local? | 2 | I have an Alienware Aurora R13 Desktop with 64 GB RAM and a 3090 in it, which has been great for small-model inference, and I'd always assumed I was maxed out at 24 GB VRAM for local models.
I also have a 3090 in a water-cooled Aorus RTX 3090 "gaming box" that speaks Thunderbolt 3 and works nicely for local inference... | 2026-03-03T18:12:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rjwqq7/2x_3090s_rcp_vs_local/ | UneakRabbit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjwqq7 | false | null | t3_1rjwqq7 | /r/LocalLLaMA/comments/1rjwqq7/2x_3090s_rcp_vs_local/ | false | false | self | 2 | null |
Qwen3.5-9B abliterated — 0% refusals + vision | 1 | Hello, I have made an abliterated Qwen3.5-9B with vision support. The two-stage approach (orthogonal projection + LoRA) gets it to a 0% refusal rate, while the heretic version still refuses 46% of the time.
# Vision (multimodal)
ollama run lukey03/qwen3.5-9b-abliterated-vision
# Text-only
olla... | 2026-03-03T18:07:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/ | Flat_cola | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjwm8i | false | null | t3_1rjwm8i | /r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/zBQ_5U093uLRfWYVBrCxZp058M-qta9xw4IQ5eZOtgc.png?auto=webp&s=cf61dbf2b8ac743b0f79b1217f2a3a906c01bf2a', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/zBQ_5U093uLRfWYVBrCxZp058M-qta9xw4IQ5eZOtgc.png?width=108&crop=... |
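For reference, the orthogonal-projection half of a two-stage recipe like this can be sketched in a few lines of NumPy: remove a single "refusal direction" r from a weight matrix so the layer can no longer write along it. Direction extraction and the LoRA stage are out of scope, and the shapes and names below are assumptions, not the author's code.

```python
# Project a unit direction r out of a weight matrix's output space:
# W' = (I - r r^T) W, so W'x has no component along r for any x.
import numpy as np

def ablate_direction(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    r = r / np.linalg.norm(r)
    return W - np.outer(r, r) @ W

# Toy check on random data: the ablated outputs are orthogonal to r.
W = np.random.randn(8, 8)
r = np.random.randn(8)
W_ab = ablate_direction(W, r)
x = np.random.randn(8)
print(abs((r / np.linalg.norm(r)) @ (W_ab @ x)))  # ~0 (machine epsilon)
```

In practice r is estimated from hidden-state differences between refused and complied prompts, then projected out of the relevant layers; the follow-up LoRA pass repairs whatever capability the projection damaged.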
[Hardware] [USA-CA] 8-GPU A100 40GB SXM4 Cluster - 2x Supermicro SYS-220GQ-TNAR+ - HGX Redstone - Low Hours - Santa Clara | 1 | SAVE ON CLOUD COSTS! Turnkey AI Cluster
For sale is a high-performance **8-GPU AI Training Cluster** consisting of two identical, matched **Supermicro SYS-220GQ-TNAR+** nodes.
**Location:** Santa Clara, CA (Local Pickup/DC Transfer Highly Preferred)
**Price:** **$65,000 OBO** for the full 8-GPU stack.
**Validatio... | 2026-03-03T18:05:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rjwjf3/hardware_usaca_8gpu_a100_40gb_sxm4_cluster_2x/ | Fuunji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjwjf3 | false | null | t3_1rjwjf3 | /r/LocalLLaMA/comments/1rjwjf3/hardware_usaca_8gpu_a100_40gb_sxm4_cluster_2x/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/oVgyseuNH_DCCJvVjKI2tRdZNS3YyYrxPUxLaCdxFnU.jpeg?auto=webp&s=1b951ec446fbe744af4a8f18dadf74f271e35d36', 'width': 4570, 'height': 3427}, 'resolutions': [{'url': 'https://external-preview.redd.it/oVgyseuNH_DCCJvVjKI2tRdZNS3YyYrxPUxLaCdxFnU.jpeg?width=108&cr... |
Every "AI accounting" tool I've seen has it completely backwards. | 1 | I've been lurking here for a while and figured it was time to actually contribute something.
I run a small specialty tax practice in western Canada. I've been building custom internal tools for years (okay, hardcore spreadsheets) because nothing on the market handled my workflows the way I wanted. Long story short, vi... | 2026-03-03T18:04:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rjwig7/every_ai_accounting_tool_ive_seen_has_it/ | Extension-Bison-1116 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjwig7 | false | null | t3_1rjwig7 | /r/LocalLLaMA/comments/1rjwig7/every_ai_accounting_tool_ive_seen_has_it/ | false | false | self | 1 | null |
MCP Marketplace - security-scanned directory of 1,900+ MCP tool plugins | 1 | The MCP ecosystem is growing fast, but trust is a problem. You're giving these servers access to your files, databases, and API keys, and most of them are just random GitHub repos with zero vetting.
Built a marketplace that puts security first: mcp-marketplace.io (http://mcp-marketplace.io/)
* Every plugin gets multi-... | 2026-03-03T17:59:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rjwdjk/mcp_marketplace_securityscanned_directory_of_1900/ | Evening-Dot2352 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjwdjk | false | null | t3_1rjwdjk | /r/LocalLLaMA/comments/1rjwdjk/mcp_marketplace_securityscanned_directory_of_1900/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/RC2s3PgFxbpdChWbAQtLzsTNZRYiQZ8YxtITUuNolDQ.png?auto=webp&s=ddb912e8bbe66438c7661a9b44ad206bb211b483', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/RC2s3PgFxbpdChWbAQtLzsTNZRYiQZ8YxtITUuNolDQ.png?width=108&crop=... |
Thoughts on fine-tuning the Qwen 3.5 0.8B model for a domain-specific task? | 1 | Given how good the smaller Qwen models are, if I want to adapt the model to do some entity extraction at scale, would you consider fine-tuning it, or using it as is?
On [another post](https://www.reddit.com/r/LocalLLaMA/comments/1rjbw0p/benchmarked_qwen_35_small_models_08b2b4b9b_on/) here, they mentioned 1-shot prompting... | 2026-03-03T17:52:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rjw6rc/thoughts_about_qwen_35_fine_tuning_08b_model_for/ | last_llm_standing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjw6rc | false | null | t3_1rjw6rc | /r/LocalLLaMA/comments/1rjw6rc/thoughts_about_qwen_35_fine_tuning_08b_model_for/ | false | false | self | 1 | null |
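Before committing to a fine-tune, the one-shot route is cheap to sanity-check: a single in-context example plus strict JSON parsing, as in the sketch below. `llm()` stands in for a Qwen 3.5 0.8B completion call; the prompt and key names are illustrative, not a benchmark setup.

```python
# One-shot extraction harness: one worked example in context, then parse
# the model's answer deterministically.
import json
from typing import Callable

ONE_SHOT = (
    'Extract entities as JSON with keys "people" and "orgs".\n'
    "Text: Tim Cook spoke at Apple's event.\n"
    '{"people": ["Tim Cook"], "orgs": ["Apple"]}\n'
    "Text: "
)

def extract_entities(llm: Callable[[str], str], text: str) -> dict:
    out = llm(ONE_SHOT + text + "\n")
    start, end = out.find("{"), out.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("model produced no JSON object")
    return json.loads(out[start:end + 1])
```

If accuracy on a few hundred labeled examples is already acceptable, skip the fine-tune; if not, the same labeled set becomes your training data.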
VibePod - unified CLI (vp) for running AI coding agents in Docker containers. | 1 | 2026-03-03T17:39:59 | https://github.com/VibePod/vibepod-cli | nez_har | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rjvu2d | false | null | t3_1rjvu2d | /r/LocalLLaMA/comments/1rjvu2d/vibepod_unified_cli_vp_for_running_ai_coding/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/SoUKdjNna_hcFZzLa_MCERALZtdeYxBhu63FUIGnNJY.png?auto=webp&s=4bdf37d3f87ff1a5c31b6364b0a1eaf69ea35aea', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/SoUKdjNna_hcFZzLa_MCERALZtdeYxBhu63FUIGnNJY.png?width=108&crop=... | ||
Best base model (not chat-finetuned) in 2026? | 1 | I miss the base models we used to have in 2023. I enjoy using them in the playground with Open WebUI, but currently models are all being released as instruct/chat finetunes. I understand that and appreciate the use for them, but I need your help finding a decently new model that is base, and preferably easily self-ho... | 2026-03-03T17:37:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rjvr81/best_base_model_not_chat_finetuned_in_modern/ | SuddenWerewolf7041 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjvr81 | false | null | t3_1rjvr81 | /r/LocalLLaMA/comments/1rjvr81/best_base_model_not_chat_finetuned_in_modern/ | false | false | self | 1 | null |
Possible to run a local model for OpenCode on an M3 Air with 16 GB of RAM? | 1 | If so, which model would be best? | 2026-03-03T17:24:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rjve9e/possible_to_run_local_model_for_opencode_with_m3/ | 16GB_of_ram | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjve9e | false | null | t3_1rjve9e | /r/LocalLLaMA/comments/1rjve9e/possible_to_run_local_model_for_opencode_with_m3/ | false | false | self | 1 | null |
Possible to run on 8 GB cards? | 1 | Tried both LM Studio and running llama.cpp directly. Only getting around 8 tokens per sec with Qwen 3.5 9B and Qwen 3.5 35B.
Intel i5 13500
32 GB system RAM
5060 8 GB
Is it possible to run any of these new Qwen models with an 8 GB card at decent speeds? I get that it's swapping with system RAM, but my tokens per seco... | 2026-03-03T17:20:28 | https://www.reddit.com/r/LocalLLaMA/comments/1rjvacw/possible_to_run_on_8gb_cards/ | cyberkiller6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjvacw | false | null | t3_1rjvacw | /r/LocalLLaMA/comments/1rjvacw/possible_to_run_on_8gb_cards/ | false | false | self | 1 | null |
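A minimal partial-offload sketch with llama-cpp-python, which exposes llama.cpp's layer offloading from Python: put as many layers as fit on the 8 GB card and leave the rest on CPU. The model path and layer count below are placeholders to tune, not recommendations.

```python
# Partial GPU offload: raise n_gpu_layers until VRAM is nearly full, then
# back off one step. Remaining layers run on CPU from system RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3.5-9b-q4_k_m.gguf",  # hypothetical local GGUF file
    n_gpu_layers=24,   # tune to your 8 GB ceiling
    n_ctx=4096,
)
out = llm("Explain KV-cache offloading in two sentences.", max_tokens=128)
print(out["choices"][0]["text"])
```

For MoE models specifically, keeping attention layers on GPU and expert weights on CPU tends to hurt less than a naive split, since only the active experts are touched per token.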
What's your strategy for long conversations with local models? | 1 | I've been testing a few different agents locally, and sometimes it gets really frustrating. I feel like I need to do some sort of reboot every few sessions; otherwise the quality deterioration is intense.
My goal is to start with a "personal assistant" that handles simple tasks, and then build a few other agents that r... | 2026-03-03T17:19:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/ | Di_Vante | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjv92p | false | null | t3_1rjv92p | /r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/ | false | false | self | 1 | null |
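One common pattern for this kind of drift is a rolling summary: keep the last few turns verbatim and fold everything older into a standing note, so the context never grows unboundedly. A minimal sketch, assuming `summarize()` wraps a cheap local model call:

```python
# Rolling-summary context management: recent turns stay verbatim, older
# turns are compacted into a running summary string.
from typing import Callable

def compact_history(turns: list[str], summary: str,
                    summarize: Callable[[str], str],
                    keep_last: int = 6) -> tuple[list[str], str]:
    if len(turns) <= keep_last:
        return turns, summary
    old, recent = turns[:-keep_last], turns[-keep_last:]
    summary = summarize(summary + "\n" + "\n".join(old))
    return recent, summary

def build_prompt(summary: str, turns: list[str], user_msg: str) -> str:
    return (f"Conversation summary so far:\n{summary}\n\n"
            + "\n".join(turns)
            + f"\nUser: {user_msg}\nAssistant:")
```

The trade-off: summaries lose detail, so anything that must survive verbatim (names, constraints, decisions) is better stored in a separate key-value note the summarizer is told not to compress.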
Gradience in 10 Minutes | 1 | # Gradience in 10 Minutes
You trained a LoRA adapter. It works. You shipped it.
But how much of that adapter is actually doing anything?
Most LoRA configurations are chosen by convention: r=16 because the tutorial used it, r=64 because "bigger is safer." The adapter trains, loss goes down, eval looks fine. No... | 2026-03-03T17:02:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rjuslh/gradience_in_10_minutes/ | Front-Structure2385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjuslh | false | null | t3_1rjuslh | /r/LocalLLaMA/comments/1rjuslh/gradience_in_10_minutes/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/ia80r0lzvzBrGYfl_dGGJ37hS6mVSFIGicQt-70JTCQ.png?auto=webp&s=f5bf6ce9c99d526aeec193e7c9ee457d5150c55c', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/ia80r0lzvzBrGYfl_dGGJ37hS6mVSFIGicQt-70JTCQ.png?width=108&crop=... |
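That question can be asked directly: look at the singular values of the LoRA update B @ A and count how many carry real energy. The sketch below is a generic diagnostic in NumPy, not Gradience's actual method.

```python
# Effective rank of a LoRA adapter: how many singular values of the
# update matrix B @ A are needed to hold 99% of its squared spectrum.
import numpy as np

def effective_rank(A: np.ndarray, B: np.ndarray, energy: float = 0.99) -> int:
    s = np.linalg.svd(B @ A, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, energy) + 1)

# Toy r=16 adapter whose update is really ~rank-3:
A3 = np.random.randn(3, 512)
B3 = np.random.randn(512, 3)
A16 = np.vstack([A3, 1e-4 * np.random.randn(13, 512)])
B16 = np.hstack([B3, 1e-4 * np.random.randn(512, 13)])
print(effective_rank(A16, B16))  # ~3: most of r=16 is doing nothing
```

If the effective rank of a trained adapter sits far below its configured r, you paid for capacity you never used and could retrain smaller.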
Would you be interested in a fully local AI 3D model generator? | 1 | Hi everyone,
For a while now, I’ve been developing a desktop application that can generate 3D models from either an image or a text prompt.
I know how difficult it can be to find assets when you're prototyping. I also know that most 3D generation tools are paid and often limited by credits or usage caps. So I decide... | 2026-03-03T16:46:19 | https://v.redd.it/3a0h26o00vmg1 | Lightnig125 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjuccw | false | {'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/3a0h26o00vmg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1044, 'width': 1920, 'scrubber_media_url': 'https://v.redd.it/3a0h26o00vmg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/3a0h26o00vmg1/DASHPlaylist.mpd?a=1775148403%2CNDB... | t3_1rjuccw | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/c3YxYjJvbzAwdm1nMbLeVYN2ogL32nFVxDT31qUNMggwxWN4kmDCLjSeP-1W.png?format=pjpg&auto=webp&s=d6541f83823f9902bc07fdec9b66634024941245', 'width': 1988, 'height': 1080}, 'resolutions': [{'url': 'https://external-preview.redd.it/c3YxYjJvbzAwdm1nMbLeVYN2ogL32nFVx... |