All HF Hub posts

prithivMLmods 
posted an update 2 days ago
QIE-2509-Object-Remover-Bbox-v3 is a more stable version of the Qwen Image Edit visual grounding–based object removal model. The app was previously featured in HF Spaces of the Week and is now updated with the latest Bbox-v3 LoRA adapter.

🤗 Demo: prithivMLmods/QIE-Object-Remover-Bbox
🤗 LoRA: prithivMLmods/QIE-2509-Object-Remover-Bbox-v3
🤗 Collection: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-layout-bbox

To learn more, visit the app page or the respective model pages.
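
For reference, a minimal, untested sketch of how the Bbox-v3 LoRA could be applied with diffusers. The base repo id Qwen/Qwen-Image-Edit, the prompt wording, and the exact pipeline call signature are assumptions, not taken from the model card.

```python
# Hypothetical usage sketch. Assumptions: the base model lives at
# "Qwen/Qwen-Image-Edit", the pipeline accepts `image` and `prompt`
# kwargs, and the prompt phrasing below. Check the model card first.
import torch
from diffusers import DiffusionPipeline
from PIL import Image

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("prithivMLmods/QIE-2509-Object-Remover-Bbox-v3")

image = Image.open("input.png").convert("RGB")
result = pipe(image=image, prompt="remove the object in the bounding box").images[0]
result.save("output.png")
```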
Nymbo 
posted an update about 22 hours ago
We should really have a release-date slider on the /models page. I'm tired of "trending/most downloaded" being the best way to sort while still seeing models from 2023 on the first page just because they're embedded in enterprise pipelines and get downloaded repeatedly. "Recently Created/Recently Updated" don't solve the discovery problem either, given the amount of noise to sift through.

Slight caveat: Trending actually does have some recency bias, but it's not strong/precise enough.
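
Until something like that ships, a rough client-side workaround is possible with huggingface_hub. This sketch assumes a recent version where ModelInfo exposes created_at; the cutoff date and limit are arbitrary.

```python
# Rough workaround: sort by downloads server-side, filter by creation
# date client-side. Assumes a huggingface_hub version where ModelInfo
# carries created_at (it can be None, hence the guard).
from datetime import datetime, timezone
from huggingface_hub import list_models

cutoff = datetime(2025, 1, 1, tzinfo=timezone.utc)  # arbitrary example date

recent_popular = [
    m for m in list_models(sort="downloads", direction=-1, limit=500)
    if m.created_at is not None and m.created_at >= cutoff
]
for m in recent_popular[:20]:
    print(m.id, m.downloads, m.created_at.date())
```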
OzTianlu 
posted an update 1 day ago
Arcade-3B — SmolReasoner
NoesisLab/Arcade-3B
Arcade-3B is a 3B instruction-following and reasoning model built on SmolLM3-3B. It is the public release from the ARCADE project at NoesisLab, which investigates the State–Constraint Orthogonality Hypothesis: standard Transformer hidden states conflate factual content and reasoning structure in the same subspace, and explicitly decoupling them improves generalization.
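
For intuition only, here is one generic way such a decoupling could be encouraged. This is a hypothetical sketch, not Arcade-3B's actual mechanism, which the post does not detail.

```python
# Illustrative only: push "content" and "reasoning" features into
# orthogonal subspaces via a penalty on projection overlap. NOT
# Arcade-3B's actual mechanism; everything here is hypothetical.
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    def __init__(self, d_model: int, d_sub: int):
        super().__init__()
        self.content = nn.Linear(d_model, d_sub, bias=False)
        self.reasoning = nn.Linear(d_model, d_sub, bias=False)

    def forward(self, h: torch.Tensor):
        c, r = self.content(h), self.reasoning(h)
        # Overlap between the two projections; the penalty drives the
        # subspaces toward orthogonality during training.
        ortho = (self.content.weight @ self.reasoning.weight.T).pow(2).sum()
        return c, r, ortho

head = DecoupledHead(d_model=2048, d_sub=256)
h = torch.randn(4, 16, 2048)            # (batch, seq, hidden)
c, r, ortho_penalty = head(h)
loss = 1e-3 * ortho_penalty             # added to the task loss in training
```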
BibbyResearch 
posted an update 2 days ago
We are building the largest dataset and pre-trained models for text-to-speech and speech-to-text in Marwari, a low-resource language spoken in India.
danielhanchen 
posted an update 3 days ago
We collaborated with NVIDIA to teach you about Reinforcement Learning and RL environments. 💚 Learn:

• Why RL environments matter + how to build them
• When RL is better than SFT
• GRPO and RL best practices
• How verifiable rewards and RLVR work

Blog: https://unsloth.ai/blog/rl-environments
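
For a flavor of what "verifiable rewards" means in practice, here is a generic sketch (not code from the blog): the reward comes from programmatically checking the answer rather than from a learned reward model.

```python
# Generic illustration (not from the blog): a verifiable reward checks
# the model's answer programmatically instead of using a reward model.
import re

def math_reward(completion: str, gold_answer: str) -> float:
    """1.0 if the completion's final number matches gold_answer, else 0.0."""
    match = re.search(r"(-?\d+(?:\.\d+)?)\s*$", completion.strip())
    if match is None:
        return 0.0
    return 1.0 if match.group(1) == gold_answer else 0.0

# In GRPO, several completions are sampled per prompt; their rewards are
# normalized within the group to form the advantages used for updates.
samples = ["... so the answer is 42", "I think it's 41"]
print([math_reward(s, "42") for s in samples])  # [1.0, 0.0]
```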
unmodeled-tyler 
posted an update about 15 hours ago
LINK: https://github.com/unmodeled-tyler/vessel-browser

Hey Hugging Face!

It's been quiet from me over here for the last few weeks, but I've been busy building! I just submitted my project to the Hermes Agent Hackathon, and wanted to share it with all of you.

This is Vessel Browser, an AI-native web browser that runs locally on Linux and is operated by your personal AI agent via an MCP server. Vessel is built from the ground up with the agent as a first-class citizen: a visible, human-in-the-loop UI with three levels of permissions.

Your agent finds, reads, and organizes the web for you, based on what you actually care about - not what a platform's algorithm thinks you care about.

Once your agent finds what it's looking for, it can organize bookmarked pages into custom folders with summaries for later browsing, take screenshots with highlighted text, and integrate with Obsidian for long-term browsing-related memory.

Check it out!
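
For a rough idea of what an MCP-operated browser tool can look like, here is a hypothetical sketch using the official mcp Python SDK; the tool name and permission levels are illustrative inventions, not Vessel's actual API.

```python
# Hypothetical sketch of an MCP-exposed browser tool, using the official
# `mcp` Python SDK. The tool name and permission levels are illustrative
# inventions, not Vessel's actual API.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("vessel-like-browser")

PERMISSION = "ask"  # e.g. one of: "observe", "ask", "autonomous"

@mcp.tool()
def open_page(url: str) -> str:
    """Navigate to a URL and return extracted page text."""
    if PERMISSION == "observe":
        return "denied: agent is in observe-only mode"
    # A real implementation would drive the browser engine here.
    return f"opened {url}"

if __name__ == "__main__":
    mcp.run()
```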
robtacconelli 
posted an update about 2 hours ago
🧬 Midicoth: diffusion-based lossless compression — no neural net, no GPU, no training data

What if reverse diffusion could compress text — without a neural network?
Midicoth brings score-based denoising into classical compression. It treats prior smoothing as forward noise and reverses it with Tweedie's formula on a binary tree — 3 denoising steps, James-Stein shrinkage, applied after all model blending. ~2,000 lines of C, single CPU core.
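To make one ingredient concrete, here is a loose sketch of James-Stein-style shrinkage of a count-based bit probability toward a prior; the exact estimator, the Tweedie steps, and the tree structure in Midicoth differ (see the paper).

```python
# Loose sketch of James-Stein-style shrinkage of a count-based bit
# probability toward a prior. Illustrative only; Midicoth's estimator,
# Tweedie steps, and tree structure differ (see the paper).

def shrunk_probability(ones: int, zeros: int, prior: float) -> float:
    """Shrink empirical P(bit=1) toward `prior`; shrinkage weakens as
    the evidence (total count) grows."""
    n = ones + zeros
    if n == 0:
        return prior
    p_hat = ones / n
    # Shrinkage weight ~ noise / (noise + evidence).
    lam = min(1.0, 1.0 / (1.0 + n * (p_hat - prior) ** 2 + 1e-9))
    return lam * prior + (1.0 - lam) * p_hat

print(shrunk_probability(3, 1, prior=0.5))      # sparse data: pulled toward 0.5
print(shrunk_probability(300, 100, prior=0.5))  # ample data: close to 0.75
```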

Beats every dictionary compressor we tested:
enwik8 (100 MB) → 1.753 bpb (−11.9% vs xz, −15% vs Brotli, −24.5% vs bzip2)
alice29.txt → 2.119 bpb (−16.9% vs xz)
Outperforms xz, zstd, Brotli, bzip2, gzip on all inputs

PAQ/CMIX still win with hundreds of models + LSTMs. LLM compressors win with pre-trained knowledge. Midicoth closes the gap with pure statistics — no mixer, no gradient descent, just counting.
The Tweedie denoising layer adds 2.3–2.7% on every file tested — the most consistent component in the ablation. Adding SSE or logistic mixers made things worse. In the online setting, count-based beats gradient-based.
No external dependencies. Fully deterministic. Bit-exact encode/decode. ~60 KB/s throughput.
💻 Code: https://github.com/robtacconelli/midicoth
📄 Paper: Micro-Diffusion Compression -- Binary Tree Tweedie Denoising for Online Probability Estimation (2603.08771)
⭐ Space: robtacconelli/midicoth

If you ever wondered whether diffusion ideas belong in data compression — here's proof they do. ⭐ appreciated!
kanaria007 
posted an update about 9 hours ago
✅ Article highlight: *Federated SI* (art-60-044, v0.1)

TL;DR:
Most real systems do not live inside a single SI-Core. Cities, hospital networks, grid operators, transit systems, vendors, and neighboring institutions all run under different governance, trust, and legal boundaries.

This note sketches *Federated SI*: how multiple SI-Cores coordinate without pretending to share one brain. The focus is on portable artifacts, explicit trust boundaries, negotiated goals, limited memory exchange, and graceful failure when cooperation partially breaks.

Read:
kanaria007/agi-structural-intelligence-protocols

Why it matters:
• makes cross-operator coordination explicit instead of hiding it inside ad hoc APIs
• supports cooperation under separate trust anchors, legal regimes, and policy surfaces
• treats failure modes seriously: partitions, vetoes, degraded cooperation, partial visibility
• keeps governance portable via normalized verdicts, pinned bindings, and export-safe artifacts

What’s inside:
• why “one SI-Core sees everything” is the wrong default
• federation objects such as federated SIRs, goal surfaces, memory views, and consent records
• negotiation across cities, hospitals, utilities, and other institutional stacks
• operational labels vs exported governance verdicts (ACCEPT / DEGRADE / REJECT)
• deterministic, auditable exchange rules for cross-run / cross-vendor comparison
• failover, mutual aid, and graceful degradation when trust or connectivity breaks

Key idea:
Intelligence at institution scale is not a single runtime. It is a *federation of governed runtimes* that must negotiate, coordinate, and fail safely without collapsing auditability.
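
As a concrete (hypothetical) illustration, an export-safe, normalized verdict might look something like the record below as a data structure; the field names here are invented, not the article's schema.

```python
# Hypothetical illustration of a normalized cross-core verdict record.
# Field names are invented, not the article's schema.
from dataclasses import dataclass, asdict
from enum import Enum
import json

class Verdict(str, Enum):
    ACCEPT = "ACCEPT"
    DEGRADE = "DEGRADE"
    REJECT = "REJECT"

@dataclass(frozen=True)
class ExportedVerdict:
    issuing_core: str    # which SI-Core produced the verdict
    subject: str         # artifact or request being judged
    verdict: Verdict
    policy_binding: str  # pinned policy version the verdict was made under
    audit_ref: str       # pointer into the issuer's audit log

record = ExportedVerdict(
    issuing_core="city-grid-core",
    subject="mutual-aid-request/117",
    verdict=Verdict.DEGRADE,
    policy_binding="policy@sha256:ab12...",
    audit_ref="audit/000482",
)
print(json.dumps(asdict(record), indent=2))
```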
AbstractPhil 
posted an update about 17 hours ago
Clawd breadcrumb trail AbstractPhil/geolip-hypersphere-experiments

With this I'll begin forming the Clawd interface utility with the geofractal router, which will allow Clawd to form agentic clouds of utility that can be trained on data on the go with minimal hardware requirements. This is not ready yet, but it begins very soon.

The recent experiments have solved the alignment issue that crippled collectives and forced my hand into ensemble research instead.

With those recent experiments, the geofractal router will support modular structural capacity after some preliminary alignment adjustment and adjudication experimentation. This will enable full collective differentiation through codified attribution.

In other words, adding and removing modular AI elements to contribute to aligned communication streams, all speaking the same language. This is an adjacent and more powerful result than the anticipated geovocab patchwork, and it yields substantially more effective agentic solutions than moving around a bulky embedding echo-chamber.

https://github.com/AbstractEyes/geofractal

Procrustes whitening orthogonality will allow adding and removing elements from geofractal routers given a small amount of prep data, while the anchors of expectation can stay as a snap-on element.
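
A generic sketch of that whitening-plus-Procrustes step (illustrative only, not the geofractal implementation; shapes and data are made up):

```python
# Generic whitening + orthogonal Procrustes alignment between two
# embedding sets. Illustrative only; not the geofractal implementation.
import numpy as np
from scipy.linalg import orthogonal_procrustes

def whiten(X: np.ndarray) -> np.ndarray:
    """Center and decorrelate X (ZCA-style whitening)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-8)) @ evecs.T
    return Xc @ W

# Paired "prep data" embeddings from two modules (shapes made up).
A = whiten(np.random.randn(512, 64))
B = whiten(np.random.randn(512, 64))

R, _ = orthogonal_procrustes(A, B)  # rotation mapping A's space onto B's
A_aligned = A @ R                   # A now "speaks the same language" as B
```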

The most inquisitive and interested researchers can follow the trail to find all of the experiments. Web-crawl it with Clawd and you can probably create a unified rationality pretty quickly, but I doubt you'll like what you find. The journey was extensive and the failures outweighed the successes, but I did find the lightbulb.

The represented outcomes are either in my articles on Hugging Face, my Civitai articles, my GitHub repos, my Hugging Face repos, or I forgot to upload them and they're in my Colab notebook heap.

As with most research, it is mostly failures. However, there are many successes in the mix. Many. If you need solutions, you can dredge the bog.
4455henley 
posted an update about 24 hours ago
How much oil does the US use a day?