Post 96: HELLO 3D WORLD!
What if you could control a 3D model just by talking to it?
Not clicking. Not dragging sliders. Not writing animation code.
Just… describing what you want.
"Rotate slowly on the Y axis."
"Move forward, don't stop."
"Scale up, then reset."
That's the core idea behind Hello 3D World, a space I've been building
as an open experiment.
─────────────────────────────
Here's how it works:
You load a 3D model. You describe it to the LLM
("this is a robot", "this is a hot air balloon").
Then you type a natural language command.
The LLM (Qwen 72B, Llama 3, or Mistral) reads your intent
and outputs a JSON action: rotate, move, scale, loop, or reset.
The 3D scene executes it instantly.
One model. One prompt. One action.
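To make the loop concrete, here's a minimal Python sketch of the JSON-action step. The schema (field names like "action", "axis", "degrees") is my own illustration, not the space's actual format:

```python
import json

def apply_action(state, action):
    """Apply one parsed LLM action to a minimal scene state.

    The action kinds mirror the post (rotate, move, scale, reset);
    all field names and defaults here are hypothetical.
    """
    kind = action.get("action")
    if kind == "rotate":
        axis = action.get("axis", "y")
        state["rotation"][axis] += action.get("degrees", 15)
    elif kind == "move":
        axis = action.get("axis", "z")
        state["position"][axis] += action.get("distance", 1.0)
    elif kind == "scale":
        state["scale"] *= action.get("factor", 1.5)
    elif kind == "reset":
        # Snap everything back to the starting pose.
        state["position"] = {"x": 0, "y": 0, "z": 0}
        state["rotation"] = {"x": 0, "y": 0, "z": 0}
        state["scale"] = 1.0
    return state

# Suppose the LLM answers "rotate slowly on the Y axis" with this JSON:
llm_output = '{"action": "rotate", "axis": "y", "degrees": 15}'

state = {"position": {"x": 0, "y": 0, "z": 0},
         "rotation": {"x": 0, "y": 0, "z": 0},
         "scale": 1.0}
state = apply_action(state, json.loads(llm_output))
print(state["rotation"]["y"])  # 15
```

The scene layer only ever sees a small, validated action object, which is what keeps the execution instant and safe regardless of which LLM produced it.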
─────────────────────────────
Why build this?
I'm genuinely curious where the limit is.
Today it's simple geometric commands. But what happens when
the model understands context? When it knows the object has
legs, or wings, or a cockpit? When it can choreograph a sequence
from a single sentence?
Maybe this becomes a prototyping tool for robotics.
Maybe a no-code animation layer for game dev.
Maybe something I haven't imagined yet.
That's why I'm keeping it open — I want to see what
other people make it do.
─────────────────────────────
The space includes:
→ DR8V Robot + Red Balloon (more models coming)
→ 5 lighting modes: TRON, Studio, Neon, Cel, Cartoon
→ Import your own GLB / OBJ / FBX
→ Built-in screen recorder
→ Powered by open LLMs — bring your own HF token
Record your best sequences and share them in the comments.
I want to see what this thing can do in other hands.
🔗 ArtelTaleb/hello-3d-world