I’ve just registered on this forum, so please don’t judge too harshly if I’m presenting the idea in a slightly unconventional or imperfect format.
⸻
In my work, I ran into a fundamental problem with most existing AI-based 3D generators: they typically produce geometry with broken topology (open edges, non-manifold regions, self-intersections), which then requires significant manual cleanup. Most of these approaches follow the diffusion paradigm, which optimizes for denoising quality rather than for engineering-grade geometric correctness.
I decided to approach this problem from a different angle: by introducing a formalized structural format optimized for interaction with LLMs. The format lets individual nodes of the model be refined recursively, and the finished structure is then converted into a CAD representation by a conventional converter. In practice, this approach has produced stable and reproducible results.
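To make the idea concrete, here is a minimal sketch in Python of what such a node tree could look like. All names here (Node, meta, params, to_cad) are my own placeholders rather than the actual format, and the CAD conversion step is stubbed out instead of being tied to any particular kernel.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """One structural element of the model (placeholder names, not the real format)."""
    node_id: str   # stable identifier, so a node can be refined in isolation
    meta: str      # textual meta-description the LLM reads and rewrites
    params: dict   # dimensions / constraints, e.g. {"length_mm": 120, "wall_mm": 2.5}
    children: list["Node"] = field(default_factory=list)


def to_cad(node: Node) -> str:
    """Stub for the 'conventional converter' step: walk the tree and emit
    a CAD-level representation (here just a textual placeholder)."""
    lines = [f"{node.node_id}: {node.params}"]
    for child in node.children:
        lines.extend("  " + line for line in to_cad(child).splitlines())
    return "\n".join(lines)


# A coarse base shape with two sub-parts that can be refined independently.
bracket = Node(
    node_id="bracket",
    meta="L-shaped mounting bracket, overall envelope 120 x 80 x 40 mm",
    params={"length_mm": 120, "width_mm": 80, "height_mm": 40},
    children=[
        Node("base_plate", "flat plate with 4 mounting holes", {"thickness_mm": 5}),
        Node("upright", "vertical wall joined to the base plate", {"thickness_mm": 5}),
    ],
)
print(to_cad(bracket))
```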
The process starts with generating an abstract base shape, guided by technical prompts that define overall dimensions and key constraints. After that, the model is refined step by step: each part is elaborated separately in a conversational manner, moving from coarse structure to fine details.
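A rough sketch of that coarse-to-fine loop, assuming the Node structure from the previous snippet; call_llm stands in for whichever chat model is actually used and is not a real API.

```python
import json


def call_llm(prompt: str) -> str:
    """Placeholder for the actual chat-model call (assumed, not a real API)."""
    raise NotImplementedError


def refine(node: Node, constraints: str) -> Node:
    """Elaborate one node conversationally, then recurse into its children."""
    prompt = (
        "Refine this part of the model without changing its interfaces.\n"
        f"Global constraints: {constraints}\n"
        f"Current node: {json.dumps({'id': node.node_id, 'meta': node.meta, 'params': node.params})}"
    )
    reply = json.loads(call_llm(prompt))  # expected to return updated meta/params as JSON
    node.meta = reply.get("meta", node.meta)
    node.params.update(reply.get("params", {}))
    node.children = [refine(child, constraints) for child in node.children]
    return node
```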
The format itself embeds textual meta-descriptions for its fields, enabling the AI model to operate on small, localized portions of the structure rather than the entire file at once. This makes it possible to recursively refine specific areas without compromising the integrity of the overall geometry.
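And a sketch of the "localized" part: only the selected node and its meta-description go into the prompt, and the reply is merged back by id, so the rest of the tree (and the geometry derived from it) is never rewritten. Again, these helper names are my own illustration, not part of the format itself.

```python
def extract_local(node: Node, target_id: str) -> dict | None:
    """Serialize only the target node for the prompt, not the whole file."""
    if node.node_id == target_id:
        return {"id": node.node_id, "meta": node.meta, "params": node.params}
    for child in node.children:
        found = extract_local(child, target_id)
        if found is not None:
            return found
    return None


def merge_back(node: Node, update: dict) -> None:
    """Apply the LLM's answer to the matching node only; siblings stay untouched."""
    if node.node_id == update["id"]:
        node.meta = update.get("meta", node.meta)
        node.params.update(update.get("params", {}))
        return
    for child in node.children:
        merge_back(child, update)
```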