The main value of using SDF shapes in a 3D modeling workflow is that you don't have to worry about topology (the vertex/edge/face graph structure that has to be maintained over the surface of every polygonal 3D model), which makes a lot of modifiers, like boolean combinations of intersecting objects, vastly less tedious (Womp calls this feature "goop").
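To see why booleans stop being tedious with SDFs: combining two shapes is just a min/max over their distance functions, and the goop-style blend is a smooth minimum. A minimal sketch using the common polynomial smooth-min formulation (this is a generic illustration, not Womp's actual code):

```python
import math

def sphere(px, py, pz, cx, cy, cz, r):
    """Signed distance from point p to a sphere: negative inside, positive outside."""
    return math.sqrt((px - cx)**2 + (py - cy)**2 + (pz - cz)**2) - r

def union(d1, d2):
    """Hard boolean union: just the closer of the two surfaces."""
    return min(d1, d2)

def smooth_union(d1, d2, k):
    """Polynomial smooth minimum: blends surfaces within distance k of each other."""
    h = max(k - abs(d1 - d2), 0.0) / k
    return min(d1, d2) - h * h * k * 0.25

# Two unit spheres centered at x = -1.2 and x = +1.2, evaluated at the origin.
d1 = sphere(0, 0, 0, -1.2, 0, 0, 1.0)   # ~0.2: just outside the left sphere
d2 = sphere(0, 0, 0,  1.2, 0, 0, 1.0)   # ~0.2: just outside the right sphere
print(union(d1, d2))              # ~0.2 -> hard union leaves a crease between them
print(smooth_union(d1, d2, 0.5))  # smaller -> the blend pulls the surface outward, merging them
```

No vertices, edges, or faces are touched at any point; the "boolean" is a one-line function over the two fields, which is exactly why there's no topology to clean up afterward.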
Right now Blender work still involves a lot of tedium, most of it related to topology. A lot of upcoming 3D ML applications also work considerably better with SDF representations than with meshes. I wouldn't be surprised to see this form of 3D modeling take off to a significant degree because of those two factors.
There were the original metaballs. But more recently there have also been SDF add-ons built on geometry nodes [1] that mimic the same workflow; my guess is that they use voxels to generate the final polygon mesh that Blender needs, since Blender isn't a fully SDF-based editor. While googling this, I also found someone who managed to do it with pure shaders [2], which is pretty cool.
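For what it's worth, the voxel route I'm guessing at is easy to sketch: sample the SDF on a regular grid, find the cells where the sign flips, and hand those to a polygonizer like marching cubes (omitted here). A toy version with a sphere as a stand-in SDF:

```python
import math

def sdf(x, y, z):
    # Stand-in SDF: a unit sphere at the origin (a real add-on would
    # evaluate the user's whole shape tree here).
    return math.sqrt(x*x + y*y + z*z) - 1.0

n, half = 24, 1.5            # 24^3 samples over the cube [-1.5, 1.5]^3
h = 2 * half / n             # grid spacing
# Sample the field once per grid point.
field = [[[sdf(-half + i * h, -half + j * h, -half + k * h)
           for k in range(n)] for j in range(n)] for i in range(n)]

# A cell lies on the surface if the sign flips along any grid edge leaving it;
# these are exactly the cells a marching-cubes pass would triangulate.
surface_cells = [
    (i, j, k)
    for i in range(n - 1) for j in range(n - 1) for k in range(n - 1)
    if (field[i][j][k] < 0) != (field[i + 1][j][k] < 0)
    or (field[i][j][k] < 0) != (field[i][j + 1][k] < 0)
    or (field[i][j][k] < 0) != (field[i][j][k + 1] < 0)
]
print(len(surface_cells))    # nonzero: the thin shell of the sphere
```

The grid resolution `n` is the knob such add-ons expose as voxel size: the polygon count and the fidelity of the output mesh both scale with it.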
Also, thanks for actually explaining that. I've seen a few examples of this kind of "clay-like" sculpting approach that tries to make it easier for artists. Adobe's Modeler uses SDFs, for example.
Blender already has metaballs. They're just not user-friendly or multiplayer.
Interestingly, most folks think of 3D modeling as quad modeling/subdivision surfaces, but Toy Story 1 was done entirely with NURBS (also supported by Blender).
You can throw a voxel remesh modifier onto your model in Blender to get the same functionality: it converts your model from polygons to an SDF and then back to polygons.
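The catch with that round trip is the voxel size: any feature thinner than a voxel can simply vanish, because no sample ever lands inside it. A toy 1D illustration of the idea (not Blender's actual remesher, which also depends on where features land relative to the grid):

```python
def slab(x, t):
    """1D SDF of a slab of half-thickness t centered at the origin."""
    return abs(x) - t

h = 0.5                                            # voxel size
samples = [(i + 0.5) * h for i in range(-8, 8)]    # grid points at +/-0.25, +/-0.75, ...

thick = [x for x in samples if slab(x, 0.3) < 0]   # slab thicker than a voxel
thin  = [x for x in samples if slab(x, 0.1) < 0]   # slab thinner than a voxel
print(len(thick), len(thin))  # the thin slab leaves no interior samples at all
```

With no negative samples, the reconstruction step has nothing to rebuild, so the thin feature disappears from the output mesh; shrinking the voxel size recovers it at the cost of far more polygons.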
I would imagine that's a fairly lossy process with some downsides?
Ideally, the end result of an SDF pipeline is pixels; going back to polygons throws away much of the advantage of SDFs. Raymarching is costly and rarely used in realtime engines, but Blender isn't realtime, so rendering SDFs directly would probably be viable.
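For reference, the raymarching in question is usually sphere tracing: step along the ray by the current SDF value, which is the largest step guaranteed not to overshoot the surface. A minimal sketch with an assumed one-sphere scene:

```python
import math

def scene_sdf(x, y, z):
    # Assumed scene: a unit sphere centered at (0, 0, 3).
    return math.sqrt(x*x + y*y + (z - 3.0)**2) - 1.0

def sphere_trace(ox, oy, oz, dx, dy, dz, max_steps=128, eps=1e-4, t_max=100.0):
    """March from origin o along unit direction d until the SDF reports a hit."""
    t = 0.0
    for _ in range(max_steps):
        d = scene_sdf(ox + dx * t, oy + dy * t, oz + dz * t)
        if d < eps:
            return t          # hit: distance to the surface along the ray
        t += d                # safe step: nothing in the scene is closer than d
        if t > t_max:
            break
    return None               # miss

print(sphere_trace(0, 0, 0, 0, 0, 1))  # ~2.0: the front of the sphere
```

The cost is the per-pixel step loop (potentially hundreds of SDF evaluations per ray), which is why realtime engines avoid it; an offline renderer like Blender's can afford it.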
It's a lot easier to handle realtime in an editor than in a fully featured game. There's less other stuff going on, you can take extra shortcuts and usually rely on higher-end hardware.
> It's a lot easier to handle realtime in an editor than in a fully featured game. There's less other stuff going on
I don't think that's true. Most games are optimized to limit the number of draw calls and textures. The viewport of a 3D software package has no such limits. As a result, my viewport rarely runs as well as the games I play.
In any case, that's beside the original point, which was that it wouldn't need to run in realtime; my argument was that it would.
It's not a completely impossible goal. Look into what some artists are making with Dreams on the PS4. It uses raymarched SDFs for modeling.