Generalist LLMs are incredibly powerful... but they remain "jacks of all trades." Today, specialized models exist: some excel at code, others at deep reasoning, others at long texts, web search, or multimodal tasks.
The editor was born precisely for this: instead of asking a single model to do everything, it puts an orchestra of LLMs in your hands, each called at the right moment to do what it does best.
A generalist model is fine "for everything," but rarely the best at anything. By combining multiple LLMs you can use the top model for analysis and reasoning, the strongest for creative writing, and a specialist for coding and debugging, reaching a final result that a single model would struggle to match.
Research on LLM ensembles and "LLM councils" shows that having multiple models work together, comparing answers and voting, reduces hallucinations and errors compared to relying on a single voice.
In your workflow you can use fast, cheap models for classifying, summarizing, and cleaning data, and reserve premium models for the truly critical steps. It's the cost-speed-quality "triangle": orchestrate multiple models to get the best result with the least waste.
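Here is a minimal sketch of that kind of cost-aware routing in Python. The model names, task types, and the `call_model` helper are illustrative placeholders, not part of any specific editor or vendor API:

```python
# Cost-aware routing: fast, cheap models for routine steps, a premium
# model only where quality really matters. Model names and call_model()
# are illustrative placeholders, not a real API.

ROUTES = {
    "classify":  "small-fast-model",
    "summarize": "small-fast-model",
    "clean":     "small-fast-model",
    "analyze":   "premium-model",
    "write":     "premium-model",
}

def call_model(model: str, prompt: str) -> str:
    """Stub for whatever LLM client you actually use."""
    raise NotImplementedError

def run_step(task_type: str, prompt: str) -> str:
    # Unknown task types fall back to the cheap tier.
    model = ROUTES.get(task_type, "small-fast-model")
    return call_model(model, prompt)
```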
When a new super-specialized model comes out (longer context, stronger reasoning, better coding...), you don't have to redo everything: you add it to your workflow and slot it in where it performs best. Your work isn't tied to a single vendor.
Andrej Karpathy published llm-council, a small open-source web app that does something brilliant: instead of asking a single model for the answer, it sends your question to multiple LLMs, lets them critique and evaluate each other's answers, and then has a "Chairman" synthesize the best parts into a single response.
A growing body of academic work extends this approach: councils of heterogeneous models that verify intermediate steps, vote, and correct course along the way, for more robust and consistent results.
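The pattern itself is easy to sketch. The snippet below is not Karpathy's llm-council code, only an illustration of the idea under simple assumptions: fan the question out, collect mutual critiques, and let a designated chairman model synthesize. `call_model` is the same hypothetical stub as in the previous sketch.

```python
def call_model(model: str, prompt: str) -> str:
    """Same placeholder stub as above: plug in your LLM client here."""
    raise NotImplementedError

def council_answer(question: str, members: list[str], chairman: str) -> str:
    # Stage 1: every member model answers the question independently.
    drafts = {m: call_model(m, question) for m in members}

    # Stage 2: every member critiques the other members' drafts.
    reviews = {}
    for m in members:
        others = "\n\n".join(d for name, d in drafts.items() if name != m)
        reviews[m] = call_model(
            m, f"Question: {question}\n\nOther answers:\n{others}\n\nCritique them."
        )

    # Stage 3: a chairman model sees all drafts and critiques and
    # synthesizes a single final response.
    briefing = "\n\n".join(
        f"[{m}]\nAnswer: {drafts[m]}\nCritique: {reviews[m]}" for m in members
    )
    return call_model(
        chairman,
        f"Question: {question}\n\n{briefing}\n\nWrite the best possible final answer.",
    )
```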
When you build a workflow in the editor:
You break down the problem into steps (analysis, research, writing, verification...).
At each step you choose (or let the system choose) the most suitable model: creative, analytical, technical, reviewer.
The output of one step becomes structured context for the next, so the entire "chain" reasons toward the same goal.
Where needed, you can have multiple models work in parallel, compare their responses, and ask for a final synthesis "Karpathy-style"; a sketch of this chaining pattern follows below.
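A rough sketch of how such a chain could be wired, again in Python with the same hypothetical `call_model` stub and made-up model names; the editor's actual internals may differ:

```python
def call_model(model: str, prompt: str) -> str:
    """Placeholder stub: plug in your LLM client here."""
    raise NotImplementedError

# Each step names a model and a prompt template; {context} carries the
# previous step's output forward. Model names are illustrative only.
PIPELINE = [
    ("analysis-model", "Analyze this brief:\n{context}"),
    ("research-model", "List the open questions this analysis raises:\n{context}"),
    ("writing-model",  "Draft the piece using this material:\n{context}"),
    ("review-model",   "Review the draft and fix its weak points:\n{context}"),
]

def run_pipeline(brief: str) -> str:
    context = brief
    for model, template in PIPELINE:
        # The output of one step becomes the structured context for the next.
        context = call_model(model, template.format(context=context))
    return context
```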
Result: you're no longer "talking to a chatbot," you're directing a team of artificial intelligences.
The editor gives you the baton; you decide the symphony.