Feature deep dive

Live thinking: watch IDO-1 do the work.

The glowing lines you saw on the homepage aren’t gimmicks. They track how IDOLL moves data from your sensors into IDO-1, how the model reasons, and how the reply routes back into motion and speech.

Voice → Understanding

A real-time speech pipeline captures your tone and words, then hands them to IDO-1 with sentiment tags and priority markers.
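If you're curious what that handoff could look like, here is a minimal Python sketch. Every name in it (SpeechTurn, handoff, the tag values) is illustrative, not IDOLL's published API:

```python
# A minimal sketch of the kind of payload the speech pipeline might hand
# to IDO-1. All names here are hypothetical, not IDOLL's actual interface.
from dataclasses import dataclass, field

@dataclass
class SpeechTurn:
    text: str        # transcribed words
    sentiment: str   # e.g. "anxious", "excited" (assumed tag set)
    priority: int    # 0 = chit-chat, 2 = act-on-this-now (assumed scale)
    prosody: dict = field(default_factory=dict)  # pitch/energy features capturing tone

def handoff(turn: SpeechTurn) -> dict:
    """Package a turn for the model, keeping tone alongside the words."""
    return {
        "input": turn.text,
        "tags": {"sentiment": turn.sentiment, "priority": turn.priority},
        "prosody": turn.prosody,
    }

payload = handoff(SpeechTurn(
    text="Can you remind me about Friday's deadline?",
    sentiment="anxious",
    priority=2,
    prosody={"pitch_hz": 210.0, "energy": 0.7},
))
```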

Memory graph → Reasoning

IDO-1 looks up the people and things you've mentioned before, then connects the dots: deadlines, promises, motivations.
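One way to picture that lookup is a one-hop walk over a graph of remembered facts. The structure below is a toy assumption for illustration, not IDOLL's real memory store:

```python
# Toy memory graph: entities point to facts (deadlines, promises) that
# recall() pulls back in when the user mentions them again.
from collections import defaultdict

class MemoryGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # entity -> [(relation, fact)]

    def remember(self, entity, relation, fact):
        self.edges[entity].append((relation, fact))

    def recall(self, mentions):
        """Return every fact one hop away from the mentioned entities."""
        return {e: self.edges[e] for e in mentions if e in self.edges}

graph = MemoryGraph()
graph.remember("Friday report", "deadline", "due Friday 5pm")
graph.remember("Friday report", "promise", "you told Sam you'd review it")

context = graph.recall(["Friday report"])
# context would feed into IDO-1's prompt so the reply connects the dots
```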

Motion & light → Response

As IDO-1 speaks, IDOLL mirrors the thought process with beams, lights, subtle head tilts, and the animated graph you saw earlier.
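To make the routing concrete, here is a rough sketch of how a streamed reply could drive those cues. The event names and actuator calls are invented for illustration; IDOLL's actual control loop is not public:

```python
# Hypothetical dispatcher: as reply events stream in, mirror them as
# light pulses, graph highlights, and head tilts.
def route_response(events):
    for event in events:
        if event["type"] == "token":
            pulse_light(intensity=0.6)           # glow in sync with speech
        elif event["type"] == "memory_hit":
            highlight_graph_node(event["node"])  # animate the live graph
        elif event["type"] == "emphasis":
            tilt_head(degrees=5)                 # subtle nod on key points

# Stub actuators so the sketch runs standalone.
def pulse_light(intensity): print(f"light pulse @{intensity}")
def highlight_graph_node(node): print(f"graph highlight: {node}")
def tilt_head(degrees): print(f"head tilt: {degrees} deg")

route_response([
    {"type": "token"},
    {"type": "memory_hit", "node": "Friday report"},
    {"type": "emphasis"},
])
```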

[Live graph]

Model promises

Why build IDO-1 ourselves?

  • Fast enough to feel alive: IDO-1 targets sub-second turnarounds for short replies so the physical robot never feels laggy (see the timing sketch after this list).
  • Long-context memory: the model can reference weeks of conversation without losing the thread.
  • Mix of logic + empathy: benchmarks show IDO-1 holds its own on reasoning tasks while staying tuned for emotional cues.
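
One way to hold a reply to a sub-second target is to time the model call and flag overruns. The budget value and function names below are assumptions for illustration, not measured IDO-1 behavior:

```python
# Hypothetical latency-budget wrapper around a reply generator.
import time

BUDGET_S = 0.8  # assumed target for short replies

def reply_within_budget(generate, prompt):
    start = time.monotonic()
    text = generate(prompt)
    elapsed = time.monotonic() - start
    if elapsed > BUDGET_S:
        # A real robot might start a short filler phrase while the
        # full answer streams in; here we just log the overrun.
        print(f"overran budget by {elapsed - BUDGET_S:.2f}s")
    return text

print(reply_within_budget(lambda p: f"echo: {p}", "hello"))
```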