A fully European project, born and developed in Italy, built step by step by listening to our users. Here's where we come from and where we're headed.
Canonity was born from a simple idea: AI used one prompt at a time cannot deliver the reliable, repeatable results that businesses need. Not every prompter is a programmer, and powerful AI needs to be democratized. Every milestone in this journey has been built on these ideas and driven by direct user feedback.
Market analysis and architecture design phase completed. The project takes shape.
Development kicks off with the goal of creating a professional tool for orchestrating AI models.
The first version of the platform is released under the name u-prompt. Early adopters start using it and providing invaluable feedback.
After listening to early adopters, the development path changes direction. The graphic editor and execution engine are born, supporting three LLM providers: OpenAI, Anthropic, and Google.
The project is renamed to Canonity. Grok and DeepSeek are added to the multi-LLM engine. The team is assembled and structured.
The first version of the graphic editor goes online. Users can create and run multi-model workflows directly from the browser.
The model catalog expands with the addition of Kimi, Qwen, and Mistral. The platform now supports 8 LLM providers.
The startup is officially incorporated and the system enters patent-pending status. MIIA, LLaMA, and Gemma are introduced on an Italian instance, fully compliant with ISO 27001 and ISO 42001. Subscription sales begin.
Marketplace launch: users will be able to publish, share, and sell the execution results of their workflows (never the source prompts).
A third element will be added to the platform. We can't talk about it yet, but it will change the game.
The entire Canonity journey is self-funded. No investment rounds, no compromise on the vision. Every decision is driven by the product and its users.