The 3D model we generate is an improved NeRF that produces a 3D volume with density, color, and surface normals.
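For the curious, here's a minimal JAX sketch of what querying such a volume looks like. The toy closed-form `density_fn`/`color_fn` below are stand-ins for the real MLP (our model is mip-NeRF 360-based); the one accurate detail is that normals come from the negative normalized gradient of density:

```python
import jax
import jax.numpy as jnp

# Toy fields standing in for the NeRF MLP (assumption for illustration):
def density_fn(x):
    # A soft sphere of radius 0.5 centered at the origin.
    return jax.nn.softplus(10.0 * (0.5 - jnp.linalg.norm(x)))

def color_fn(x):
    # Position-dependent color (albedo) in [0, 1].
    return jax.nn.sigmoid(x)

def query_field(x):
    """Return (density, color, normal) at a 3D point x."""
    density = density_fn(x)
    color = color_fn(x)
    # Surface normal = negative, normalized gradient of density.
    grad = jax.grad(density_fn)(x)
    normal = -grad / (jnp.linalg.norm(grad) + 1e-8)
    return density, color, normal

density, color, normal = query_field(jnp.array([0.3, 0.1, -0.2]))
```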
DreamFusion represents appearance as a material color (albedo), which can be combined with the surface normals to render the object under different lighting conditions.
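Concretely, this is diffuse (Lambertian) shading; here's a sketch of the idea (exact light parameterization in the paper may differ in details):

```python
import jax.numpy as jnp

def shade_diffuse(albedo, normal, light_dir, light_color, ambient):
    """Lambertian shading: material color (albedo) x illumination
    from a directional light plus an ambient term."""
    light_dir = light_dir / jnp.linalg.norm(light_dir)
    diffuse = jnp.maximum(0.0, jnp.dot(normal, light_dir))
    return albedo * (ambient + light_color * diffuse)

# Moving light_dir or changing light_color re-renders the same
# geometry under new illumination (relighting).
rgb = shade_diffuse(
    albedo=jnp.array([0.8, 0.4, 0.2]),
    normal=jnp.array([0.0, 0.0, 1.0]),
    light_dir=jnp.array([0.5, 0.5, 1.0]),
    light_color=jnp.array([1.0, 1.0, 1.0]),
    ambient=jnp.array([0.1, 0.1, 0.1]),
)
```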
We can even take several 3D models generated by DreamFusion and compose them into new scenes.
Check out the paper for more details, including a distillation-based loss function that could enable many new applications of pretrained diffusion models: arxiv.org/abs/2209.14988
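The core of that loss (the paper calls it Score Distillation Sampling) fits in a few lines: noise a rendering, ask the frozen diffusion model to predict the noise, and push the residual w(t)(ε̂ − ε) back through the renderer only. In the JAX sketch below, `render_fn`, `denoiser`, and the schedule values are all stand-ins, not the real Imagen model or NeRF renderer:

```python
import jax
import jax.numpy as jnp

def sds_grad(render_fn, params, denoiser, key, t, alpha_bar_t, w_t):
    """One-sample estimate of the SDS gradient:
    w(t) * (eps_hat - eps) * dx/dtheta."""
    x = render_fn(params)
    eps = jax.random.normal(key, x.shape)
    # Forward diffusion: noise the rendering at timestep t.
    x_t = jnp.sqrt(alpha_bar_t) * x + jnp.sqrt(1.0 - alpha_bar_t) * eps
    # No gradient flows through the diffusion model; it acts
    # as a frozen critic of the rendering.
    residual = jax.lax.stop_gradient(w_t * (denoiser(x_t, t) - eps))
    # Chain rule through the renderer only.
    _, vjp_fn = jax.vjp(render_fn, params)
    return vjp_fn(residual)[0]

# Toy usage with stand-in renderer and denoiser (assumptions):
params = jnp.ones((4,))
render = lambda p: jnp.outer(p, p)   # "image" from parameters
denoise = lambda x_t, t: 0.1 * x_t   # fake noise predictor
g = sds_grad(render, params, denoise, jax.random.PRNGKey(0),
             t=0.5, alpha_bar_t=0.7, w_t=1.0)
```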
This was an incredibly fun team effort w/ NeRF wizards @BenMildenhall & @jon_barron, and NeRF + diffusion expert @ajayj_ (graduating this year!).
We're excited to combine our methods with open-source models and enable a new future for 3D generation! 🚀 #dreamfusion