vilarin (PRO)
AI & ML interests: Pantheon
Recent Activity
updated a model about 2 months ago: vilarin/Llama-Qwen3-4B-RPG-gguf
published a model 2 months ago: vilarin/Llama-Qwen3-4B-RPG-gguf
reacted to nroggendorff's post about 1 year ago
reacted to merve's post over 1 year ago
Post
Small yet mighty!
We are releasing SmolVLM: a new 2B small vision language model made for on-device use, fine-tunable on a consumer GPU, and immensely memory efficient.
We release three checkpoints under Apache 2.0: SmolVLM-Instruct, SmolVLM-Synthetic and SmolVLM-Base HuggingFaceTB/smolvlm-6740bd584b2dcbf51ecb1f39
Learn more from our blog here: huggingface.co/blog/smolvlm
This release comes with a demo, fine-tuning code, MLX integration and TRL integration for DPO.
Try the demo: HuggingFaceTB/SmolVLM
Fine-tuning Recipe: https://github.com/huggingface/smollm/blob/main/finetuning/Smol_VLM_FT.ipynb
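For reference, a minimal sketch of running the SmolVLM-Instruct checkpoint with transformers (assuming a recent transformers with vision-to-sequence support; the model weights are only downloaded when `describe_image()` is actually called on a PIL image):

```python
def build_messages(question: str) -> list:
    """Chat-template input: one user turn with an image slot plus text."""
    return [{
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": question},
        ],
    }]

def describe_image(image, question: str = "Describe this image.") -> str:
    """Run SmolVLM-Instruct on a PIL image and return the generated text."""
    from transformers import AutoProcessor, AutoModelForVision2Seq  # lazy import

    model_id = "HuggingFaceTB/SmolVLM-Instruct"
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForVision2Seq.from_pretrained(model_id)
    prompt = processor.apply_chat_template(
        build_messages(question), add_generation_prompt=True
    )
    inputs = processor(text=prompt, images=[image], return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=128)
    return processor.batch_decode(out, skip_special_tokens=True)[0]
```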
reacted to davanstrien's post over 1 year ago
Post
First dataset for the new Hugging Face Bluesky community organisation: https://huggingface.co/datasets/bluesky-community/one-million-bluesky-posts
- 1M public posts from Bluesky's firehose API
- Includes text, metadata, and language predictions
- Perfect to experiment with using ML for Bluesky
Excited to see people build more open tools for a more open social media platform!
posted an update over 1 year ago
Post
A few days ago, Black Forest Labs released FLUX.1 Tools, which surprised everyone with its quality and effects. Now that diffusers supports these features, you can easily deploy and build your own Tools.
Combined with the power of Gradio and ZeroGPU, you can try the Tools immediately, which is truly wonderful.
I was impressed by FLUX.1 Fill dev, so I've built a demo for it that makes inpainting and outpainting images easy.
Model: black-forest-labs/FLUX.1-Fill-dev
Demo: vilarin/Flux.1-Fill-dev
diffusers: https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/flux
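A rough sketch of what such a demo does under the hood with diffusers' FluxFillPipeline (the high guidance scale follows the Fill-dev model card; a CUDA GPU with substantial VRAM is assumed, and `outpaint_box` is a hypothetical helper for computing the padded canvas used when outpainting):

```python
def outpaint_box(width: int, height: int, pad: int):
    """New canvas size and paste offset when padding all sides by `pad`;
    the padded border becomes the masked region for the model to fill."""
    return (width + 2 * pad, height + 2 * pad, pad, pad)

def inpaint(image, mask, prompt: str):
    """Repaint the white region of `mask` in `image` according to `prompt`.

    `image` and `mask` are PIL images of the same size."""
    import torch
    from diffusers import FluxFillPipeline  # lazy import: heavy dependency

    pipe = FluxFillPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
    ).to("cuda")
    result = pipe(
        prompt=prompt,
        image=image,
        mask_image=mask,
        guidance_scale=30.0,  # Fill-dev is tuned for high guidance
        num_inference_steps=50,
    )
    return result.images[0]
```

Outpainting is then just inpainting on a padded canvas: paste the original image at the offset `outpaint_box` returns and mask everything outside it.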
posted an update over 1 year ago
Post
While browsing new models, I stumbled upon Lumiere from aixonlab. After testing it, I feel it has considerable potential. Keep up the good work!
Lumiere Alpha is a model focused on improving realism without compromising prompt coherency or completely changing the composition relative to the original Flux.1-Dev model.
Model: aixonlab/flux.1-lumiere-alpha
Demo: vilarin/lumiere
reacted to merve's post over 1 year ago
Post
Tencent released a new depth model that generates temporally consistent depth maps over videos.
Model: tencent/DepthCrafter
Demo: tencent/DepthCrafter
Paper: DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos (2409.02095)
You don't need to input anything other than the video itself: no need for optical flow or camera poses!
reacted to merve's post over 1 year ago
Post
I have put together a notebook on Multimodal RAG, where we do not process the documents with hefty pipelines but natively use:
- vidore/colpali for retrieval: it doesn't need indexing with image-text pairs, just images!
- Qwen/Qwen2-VL-2B-Instruct for generation: directly feed images as-is to a vision language model, with no processing to text!
I used the ColPali implementation from the new Byaldi library by @bclavie
https://github.com/answerdotai/byaldi
Link to notebook: https://github.com/merveenoyan/smol-vision/blob/main/ColPali_%2B_Qwen2_VL.ipynb
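The retrieval half of this setup can be sketched with Byaldi's ColPali wrapper (the `docs/` path and index name here are placeholders; indexing renders each PDF page to an image, so no text-extraction pipeline is needed):

```python
def retrieve_pages(query: str, k: int = 3):
    """Index the PDFs under docs/ with ColPali and return the top-k pages."""
    from byaldi import RAGMultiModalModel  # lazy import: heavy dependency

    rag = RAGMultiModalModel.from_pretrained("vidore/colpali")
    rag.index(input_path="docs/", index_name="demo_index", overwrite=True)
    # Each hit carries a doc_id and page_num; the matching page image can
    # then be fed directly to Qwen2-VL for answer generation.
    return rag.search(query, k=k)
```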
reacted to clem's post over 1 year ago
Post
"LLM inference at scale with TGI". Cool blogpost: https://www.adyen.com/knowledge-hub/llm-inference-at-scale-with-tgi
Well done
@martinigoyanes @rafa-hernandez @Vidusharma @frisokingma @hannahwright @jeanmarcs @antonioramos & the whole
adyen team. Could be useful to cross-post here: https://huggingface.co/blog/community
posted an update over 1 year ago
Post
Ai2 is releasing OLMoE!
OLMoE-1B-7B-Instruct is a Mixture-of-Experts LLM with 1B active and 7B total parameters, and OLMoE is 100% open source: model, codebase, and datasets!
Paper: https://arxiv.org/abs/2409.02060
Model: allenai/OLMoE-1B-7B-0924-Instruct
Datasets: allenai/OLMoE-mix-0924
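A minimal sketch of chatting with the instruct checkpoint via transformers (assuming a recent transformers with OLMoE support; note that all 7B parameters must fit in memory even though only 1B are active per token):

```python
def chat(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a reply from OLMoE-1B-7B-Instruct for a single user turn."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # lazy import

    model_id = "allenai/OLMoE-1B-7B-0924-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens, keep only the newly generated reply.
    return tokenizer.decode(out[0][inputs.shape[1]:], skip_special_tokens=True)
```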
posted an update over 1 year ago
Post
Amazing day. AWPortrait-FL is finally here!
AWPortrait-FL is fine-tuned on FLUX.1-dev using the training set of AWPortrait-XL plus nearly 2,000 fashion photographs of extremely high aesthetic quality.
Model: Shakker-Labs/AWPortrait-FL
Demo: vilarin/flux-labs
posted an update over 1 year ago
Post
Shakker-Labs brings an amazing LoRA trained on FLUX.1-dev for blended realistic illustration by Muertu: the foreground character is in illustration style, while the background is realistic.
Model: https://huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-blended-realistic-illustration
My space for the demo: vilarin/flux-lab-light
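Applying the LoRA on top of FLUX.1-dev can be sketched with diffusers' LoRA-loading API (`lora_scale` is a knob you would tune to balance the illustration effect against realism; a CUDA GPU is assumed):

```python
def make_pipeline(lora_scale: float = 1.0):
    """Build a FLUX.1-dev pipeline with the blended-illustration LoRA fused in."""
    import torch
    from diffusers import FluxPipeline  # lazy import: heavy dependency

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    pipe.load_lora_weights(
        "Shakker-Labs/FLUX.1-dev-LoRA-blended-realistic-illustration"
    )
    pipe.fuse_lora(lora_scale=lora_scale)  # bake the LoRA into the base weights
    return pipe.to("cuda")
```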
posted an update over 1 year ago
Post
Black Forest Labs, BASED!
FLUX.1 is delightful, with good instruction following.
FLUX.1 dev (black-forest-labs/FLUX.1-dev) is a 12B-parameter distilled model, second only to Black Forest Labs' state-of-the-art model FLUX.1 pro.
Update. Official demo: black-forest-labs/FLUX.1-dev
Thank you :) I updated the demo to support files.
reacted to merve's post almost 2 years ago
Post
THUDM has released GLM-4V-9B and it's... chatty!
I asked it to describe my favorite Howl's Moving Castle scene and here's how it went.
Joke aside, it seems to outperform the previous VLMs; however, the license isn't open source.
Model repo: https://huggingface.co/THUDM/glm-4v-9b
A community member has built a demo: vilarin/VL-Chatbox