Modeling With Photogrammetry

Photogrammetry uses a set of photographs to solve for the positions of the cameras that took the photographs, as well as the geometry of the objects they depict. From this information, 3D models are created and textured by projecting the photographs from the solved camera positions. This enables very rapid construction of accurate, photorealistic virtual 3D scenes.
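
As a point of reference, the sketch below shows how a solved camera maps a 3D point back into its photograph under the pinhole convention documented for Bundler (rotation R, translation t, focal length f, and radial distortion terms k1 and k2). The numeric values are placeholders, not output from a real solve.

    import numpy as np

    def project_point(X, R, t, f, k1=0.0, k2=0.0):
        """Project world point X into one solved camera (Bundler's convention)."""
        P = R @ X + t                # world coordinates -> camera coordinates
        p = -P[:2] / P[2]            # perspective divide; the camera looks down -z
        r2 = p @ p                   # squared radius for the radial distortion term
        scale = f * (1.0 + k1 * r2 + k2 * r2 * r2)
        return scale * p             # pixel offset from the image center

    # Placeholder camera: identity rotation, pulled back along z, 800 px focal length.
    R = np.eye(3)
    t = np.array([0.0, 0.0, -5.0])
    print(project_point(np.array([0.5, 0.2, 0.0]), R, t, f=800.0))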

To model both quickly and accurately, this process focuses on generating models from textures. In other words, because the camera positions are accurately solved in Bundler, point cloud data is generated, and the textures are projected into 3D space, refining a mesh based on texture detail is both reliable and desirable.
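
For readers who want to inspect a solve directly, here is a minimal sketch of reading the solved cameras out of a Bundler bundle.out file, assuming the documented "Bundle file v0.3" text layout. The file name and the choice to ignore the point list are illustrative only.

    def read_bundle_cameras(path="bundle.out"):
        """Return the solved cameras from a Bundler v0.3 output file."""
        with open(path) as fh:
            lines = [ln.split() for ln in fh if ln.strip() and not ln.startswith("#")]
        num_cameras = int(lines[0][0])
        cameras, cursor = [], 1
        for _ in range(num_cameras):
            f, k1, k2 = map(float, lines[cursor])                         # focal length, distortion
            R = [list(map(float, lines[cursor + i])) for i in (1, 2, 3)]  # 3x3 rotation matrix
            t = list(map(float, lines[cursor + 4]))                       # translation vector
            cameras.append({"f": f, "k1": k1, "k2": k2, "R": R, "t": t})
            cursor += 5
        return cameras

    cams = read_bundle_cameras()
    print(len(cams), "solved cameras")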

The steps to modeling with Photogrammetry are:

  1. Start with a Cube
  2. Add and Adjust Shader (steps 1 and 2 are sketched below)
  3. Refine Geometry Based on Texture Detail
  4. Add New Geometry to Existing Shader (if desired)
  5. Refine Texture
  6. Ready Mesh for Export
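
To make the first two steps concrete, here is an illustrative sketch written against Blender's Python API. The guide does not name a modeling package, so treating Blender as the environment is an assumption, and the image path, material name, and node choices are placeholders rather than prescribed settings.

    import bpy

    # Step 1: start with a cube.
    bpy.ops.mesh.primitive_cube_add()
    cube = bpy.context.active_object

    # Step 2: add and adjust a shader.  Here the shader is a node-based material
    # with an image texture (one of the source photographs) feeding the default
    # shader node's color input.  The image path is a placeholder.
    mat = bpy.data.materials.new(name="ProjectedPhoto")
    mat.use_nodes = True
    tex = mat.node_tree.nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load("/path/to/source_photo.jpg")  # placeholder path
    shader = mat.node_tree.nodes.get("Principled BSDF")            # default shader node
    if shader is not None:
        mat.node_tree.links.new(tex.outputs["Color"], shader.inputs["Base Color"])
    cube.data.materials.append(mat)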

Last Modified: April 16, 2013