AI Creates 3D Models From Images | Two Minute Papers #186


The paper “Hierarchical Surface Prediction for 3D Object Reconstruction” is available here: We would like to thank our …


39 thoughts on “AI Creates 3D Models From Images | Two Minute Papers #186”

  1. Wow! Who knows what this sort of technology could turn into. I doubt that many people saw the early room-sized calculators shrinking down into iPhones and things like computer modeling and video games emerging. Imagine the things that are just around the corner that we didn't even realize were possible.

  2. Ooh, I wonder if the free and occupied space boundary data could be improved with 3-Sweep. That app builds models from photos by symmetrizing detected edges into sensible 3D forms. It could help the AI determine what is a surface boundary versus free/occupied space, as well as better predict what is on the other side of objects. And if it could patch in pre-existing mesh detail from the 3-Sweep data, you could theoretically get more accurate and detailed models with each iteration.

  3. That sure is getting somewhere!
    Though I wonder if you could train these things to actually be efficient about their geometry: don't approximate with cubes, but try to figure out a proper, usable quad-only mesh with nice edge and loop flow. Perhaps at multiple resolutions.
    Bonus points for also adding a full rig to make it readily animatable, or a UV-unwrapped version along with textures that separate the fine details from the larger-scale structure!
    If you ask 3D modelling artists, their art is kind of a science involving pretty specific rules about somewhat high-level ideas. I think these could be learned.
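A couple of the "specific rules" mentioned above are actually machine-checkable: every face should be a quad, and interior vertices should mostly have valence 4, which is what produces clean edge loops. A minimal sketch of such a check, using a hypothetical face-list mesh format (just tuples of vertex indices, not any real modeling API):

```python
from collections import Counter

def mesh_report(faces):
    """faces: list of tuples of vertex indices.
    Returns (quad_only, irregular_vertices)."""
    quad_only = all(len(f) == 4 for f in faces)
    # Valence here = number of faces touching each vertex.
    valence = Counter(v for f in faces for v in f)
    # Vertices whose valence differs from 4 are "poles" that disrupt loop flow.
    # On an open patch, boundary vertices naturally show up here too.
    irregular = sorted(v for v, n in valence.items() if n != 4)
    return quad_only, irregular

# A 2x2 quad patch of 9 vertices: the centre vertex (index 4) touches all
# four quads (valence 4); corners and edge midpoints touch fewer.
patch = [(0, 1, 4, 3), (1, 2, 5, 4), (3, 4, 7, 6), (4, 5, 8, 7)]
quad_only, irregular = mesh_report(patch)
print(quad_only, irregular)  # True, and every vertex except the centre flagged
```

A learned system would need far richer criteria than this, but rules of this shape could plausibly serve as training objectives.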

  4. I don't think 3D artists are going to be out of a job any time soon. These models would produce far more work for worse results than doing it by hand. I realise it's early days, but I don't want to think about what I'd have to do to make these usable for everyday purposes. Essentially, unless they're more accurate and better optimised than what humans can otherwise produce, they're not very useful.

    I think the most obvious use for AI in modeling is producing good, accurate and optimised low-poly models that can either be used as-is or built upon by 3D artists. They could very conceivably build models that are far better optimised and cleaner than humans can.

    High-poly models would have to be incredibly accurate and detailed to be desirable, and these are so far neither. Humans can produce very high-poly, good-looking models relatively quickly. It's the nth level of detail that takes the most time, and it seems like this is nowhere near approaching that nth level.

  5. I seriously can't wait until these algorithms are at human level. I would love to see what games would come out if companies only had to invest in writers and not modelers or animators.

  6. We don't think about 3D objects as a 3D grid of space that is either empty or occupied by matter. All these models still look like the same voxel approach, just smoothed/melted. Optimizing the way some fluid dynamics solvers do (basically subdividing the voxel grid at the boundaries) only gets you a little bit further. It's like comparing vector graphics to raster images: at low resolution you will get decent results, but you'll never reach the same understanding of 3D space that humans have. Sorry to say this, but this is a dead end.
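The boundary-only subdivision this comment alludes to can be sketched in a few lines. This is a toy geometric version, not the paper's learned predictor: the occupancy test is a hard-coded sphere, and the cell classifier just samples a coarse grid of points per cell.

```python
def occupied(x, y, z):
    """Toy ground-truth occupancy: a sphere of radius 0.4 centred in the unit cube."""
    return (x - 0.5) ** 2 + (y - 0.5) ** 2 + (z - 0.5) ** 2 <= 0.4 ** 2

def classify(x0, y0, z0, size, samples=3):
    """Label a cubic cell 'full', 'empty', or 'boundary' by point sampling."""
    step = size / (samples - 1)
    vals = [occupied(x0 + i * step, y0 + j * step, z0 + k * step)
            for i in range(samples) for j in range(samples) for k in range(samples)]
    if all(vals):
        return "full"
    if not any(vals):
        return "empty"
    return "boundary"

def refine(cell, depth, max_depth):
    """Recursively split only boundary cells; full/empty cells stop early."""
    x0, y0, z0, size = cell
    label = classify(x0, y0, z0, size)
    if label != "boundary" or depth == max_depth:
        return [(cell, label)]
    half = size / 2
    leaves = []
    for dx in (0, half):
        for dy in (0, half):
            for dz in (0, half):
                leaves += refine((x0 + dx, y0 + dy, z0 + dz, half),
                                 depth + 1, max_depth)
    return leaves

cells = refine((0.0, 0.0, 0.0, 1.0), 0, max_depth=3)
print(len(cells))  # fewer cells than the dense 8^3 = 512 grid at the same finest resolution
```

All the detail budget goes to cells straddling the surface, which is exactly the efficiency argument; whether that escapes the "smoothed voxels" criticism is a separate question.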

  7. What about trying to do low-poly output, ideally with symmetry guessing and maybe some aid from previously reconstructed models (i.e., modeling the 10,000th car could draw on info from past reconstructions)?
