Nearly every object that you’ll see in our world started off as a physical piece of art that we digitized using a process called photogrammetry. Many of the articles in our blog discuss in detail how and why we’ve used this process to create our art, and you can find them all under the Digital Capture subcategory. It’s a lot of information to digest though, so we thought it might be helpful to summarize it all in FAQ form.
What is photogrammetry?
It’s the process of creating a geometric representation of an object from photographs. In the context of video game development, you can think of it as creating 3D digital art from a physical object. Photogrammetry is a bit of a mouthful, so we often call it “3D Capture”.
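At its core, the process works by triangulation: if the same feature shows up in photos taken from two known viewpoints, the rays from those viewpoints cross at the feature's position. Here's a toy 2D version of that idea in Python (the camera positions, bearings, and feature point are all made up for illustration; real tools solve for thousands of features and the camera poses at once):

```python
import math

def triangulate(cam1, cam2, bearing1, bearing2):
    """Intersect two bearing rays (angles in radians from the +x axis)
    shot from two known camera positions; returns the (x, y) crossing."""
    x1, y1 = cam1
    x2, y2 = cam2
    # Each ray: position = camera + t * (cos(bearing), sin(bearing))
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # Solve cam1 + t1*d1 = cam2 + t2*d2 for t1 using 2D cross products.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t1 = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])

# Two "cameras" on the ground both sighting a feature at (2, 3):
point = (2.0, 3.0)
cam_a, cam_b = (0.0, 0.0), (5.0, 0.0)
bearing_a = math.atan2(point[1] - cam_a[1], point[0] - cam_a[0])
bearing_b = math.atan2(point[1] - cam_b[1], point[0] - cam_b[0])
recovered = triangulate(cam_a, cam_b, bearing_a, bearing_b)
print(recovered)  # approximately (2.0, 3.0)
```

Scale that up to every matched pixel across dozens of photos and you get a dense 3D point cloud, which the software then turns into a textured mesh.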
Why would you want to use it for video games? Why not just digitally create art?
Photogrammetry has a few advantages over the traditional way of making video game art, especially for a small game studio. First and foremost, digital authoring tools are expensive: a single piece of 3D modeling software can cost multiple thousands of dollars, which is not good for an indie studio on a budget. Secondly, digital artistry has a high learning curve, and it can take years of schooling or on-the-job experience before an artist can produce high-quality digital art. Photogrammetry, on the other hand, lets you fall back on classical skills, working in a medium where realism is a natural byproduct rather than a conscious effort. Similarly, you can take advantage of found art, which takes no effort to create; all of the rocks and plants in our game were found in our neighborhood. And last but certainly not least, it’s really fun to make a video game by hand.
OMG is this going to change how we make video games forever??!?!?
No. There are still many advantages to digitally authored art, and photogrammetry techniques have a lot of room for improvement. Think of it more as another tool in the indie developer’s toolbox.
What made you choose this art style?
It was a good fit for our team’s strengths. Jeff is a veteran graphics engineer and wanted to create a visually striking game that let him flex his graphical muscles. Rae has been a classical artist all of her life, and wanted to work with her hands. And Dave is also a sculptor who loves tinkering with new technology. Using physical art seemed like the natural solution.
Is this a new idea?
Not at all. Film studios have been using these techniques for a very long time. However, these tools have just started to become more accessible and affordable. To our knowledge, no one has ever used them to produce all of their game art before, so that part at least is a new idea. If you know of any other developers that are doing this, please tell us! We’d love to get in contact with them.
What program do you use to digitize your art?
We use two programs. We started using Autodesk’s 123D Catch, but now we’re primarily using Agisoft PhotoScan.
Which program is better?
They both have their advantages and disadvantages. Ultimately, what made us switch to using PhotoScan is that it is capable of producing much higher-fidelity textures than the currently available version of 123D Catch. This is critical for us because we want our game to be as beautiful as we can possibly make it. PhotoScan also has a higher success rate when generating models from our source images, which means we spend less time having to tweak and adjust our setup. However, PhotoScan is a little bit more expensive than 123D Catch. Also, 123D Catch does all of its processing on remote servers, which means that you get your results fast, regardless of how powerful your computer is. This was really important to us early on in development.
What are the limitations of modern photogrammetry tools?
They like a stable lighting environment. This means that they don’t like glossy or reflective surfaces, or anything else that would cause lighting to change significantly based on the camera angle (like flashes). Translucent surfaces are also a problem. However, gloss, reflectivity, and translucency can be added back after the object is digitized, and we’ve come up with ways of dealing with difficult objects. See our blog for more details.
What kind of equipment do you need to use these tools?
For a start, just a camera. Even a simple camera phone can produce pretty decent results, depending on what you’re trying to get out of it. If you’re trying to create a textured model to be used in a 3D game world, though, there are a couple of other considerations that you need to take into account. You want neutral lighting on your target object, and you want good photo coverage around the object. A hundred bucks at a hardware store got us enough wood and metal pipe fittings to build a rotating camera mount that can pivot around a central table. We use this to get a consistent set of photos. Another hundred bucks at a photography supply store got us a set of fluorescent daylights and shoot-through umbrellas to create a nice neutral lighting environment.
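To give a rough sense of what "good photo coverage" means, here's a tiny Python sketch of the kind of shoot plan a rotating mount makes easy: several rings of evenly spaced shots at different heights, all aimed at the object. The radii, heights, and shot counts below are hypothetical, not our actual settings:

```python
import math

def ring_positions(radius, height, num_shots):
    """Camera positions for one ring of photos: evenly spaced around a
    circle of the given radius, all at the same height above the table."""
    step = 2 * math.pi / num_shots
    return [(radius * math.cos(i * step),
             radius * math.sin(i * step),
             height) for i in range(num_shots)]

# Hypothetical shoot plan: three rings at different heights, 20 shots
# each, for 60 photos with roughly 18 degrees between neighbors.
plan = [pos for h in (0.3, 0.6, 0.9) for pos in ring_positions(1.0, h, 20)]
print(len(plan))  # 60
```

The exact numbers matter far less than the consistency: neighboring photos need enough overlap that the software can match features between them.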
Why is neutral lighting so important?
This topic can get complicated, but the short answer is that you want to make sure that you aren’t lighting the object twice. If you capture an object with a strong light to the left of it, then it will always appear as if it’s lit from the left, even if you place it in your game world and try to light it from the right. A neutral lighting environment during your photo shoot gives you the greatest amount of flexibility later on.
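The double-lighting problem can be shown with a one-texel toy model in Python. This lumps all of the shoot lighting into a single per-channel intensity, which is a big simplification of real shading, but it shows why a baked-in shoot light never goes away:

```python
def shade(albedo, light):
    """Simple Lambert-style shading: surface color times light intensity."""
    return tuple(a * l for a, l in zip(albedo, light))

true_albedo = (0.8, 0.4, 0.2)       # the object's actual surface color
harsh_shoot_light = (1.0, 1.0, 0.3)  # strong, tinted shoot lighting
neutral_light = (1.0, 1.0, 1.0)      # diffuse, even shoot lighting

# The capture records shaded pixels, not the true surface color:
baked_bad = shade(true_albedo, harsh_shoot_light)
baked_good = shade(true_albedo, neutral_light)

# In-game, the engine lights the captured texture a second time:
game_light = (0.5, 0.5, 0.9)  # say, cool evening light
print(shade(baked_bad, game_light))   # doubly lit: the shoot tint persists
print(shade(baked_good, game_light))  # same as lighting the true albedo
```

With neutral shoot lighting, the captured texture is effectively the true surface color, so the engine's lighting behaves exactly as it would on hand-authored art.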
Can you use your art as soon as you’ve digitized it?
Technically, yes. 3D Capture tools will export a textured mesh to a variety of common formats which can be imported directly into many commercial game engines. Depending on how you’re using this art, that may be all that you have to do. However, if you’re relying entirely on this type of art like we are, then there are a few other steps that you’ll want to take. You’ll want to create lower-resolution versions of the mesh and build a normal map for it, and you’ll want to rebuild the color texture to improve quality and performance for a 3D engine. You’ll probably also want to generate other textures like gloss maps and emissive maps for high-quality lighting.
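To give a flavor of what "build a normal map" involves, here's a minimal Python sketch that derives per-texel normals from a small height grid using central differences. Real pipelines usually bake normals from the high-resolution capture mesh onto the low-resolution mesh instead, so treat this purely as an illustration of the kind of data a normal map holds:

```python
def normal_map(height, scale=1.0):
    """Derive tangent-space unit normals (x, y, z) from a height grid
    using central differences, clamping at the borders."""
    rows, cols = len(height), len(height[0])
    out = []
    for y in range(rows):
        row = []
        for x in range(cols):
            # Height change across the neighboring texels in each axis.
            dx = (height[y][min(x + 1, cols - 1)]
                  - height[y][max(x - 1, 0)]) * scale
            dy = (height[min(y + 1, rows - 1)][x]
                  - height[max(y - 1, 0)][x]) * scale
            length = (dx * dx + dy * dy + 1.0) ** 0.5
            row.append((-dx / length, -dy / length, 1.0 / length))
        out.append(row)
    return out

# A tiny 3x3 bump: the center texel is raised.
bump = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
normals = normal_map(bump)
print(normals[1][1])  # flat top of the bump: points straight up
```

Texels on the slopes of the bump get normals that tilt away from the peak, which is what lets the engine shade a low-polygon mesh as if the fine surface detail were still there.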
That sounds complicated.
It can be, but there are some great tools out there to help. An open source program called Meshlab is capable of automatically doing most of these post-processing steps. Also, we plan on sharing a lot of information about our process as we go so that other indie developers can follow in our footsteps.
I want to start capturing objects for my game. What can I do?
If you have a camera, you can just download 123D Catch and start fiddling around with it. If you get stuck, their forums are a good source of information, and the admins there are pretty responsive. Agisoft PhotoScan also offers a demo version and a free trial which can help you decide if it’s a better tool for your needs.
What tips do you have for someone who wants to start using these techniques?
Be patient – no 3D capture software works perfectly the first time. If you don’t get the results that you’re looking for, try something a little bit different during your photo shoot and keep track of what works and what doesn’t. Being consistent is really important in the long run and lets you refine your process over time. Use consistent lighting, camera angles, and camera settings. And if you’re going to invest in one thing, invest in good lighting! It’s not too expensive to get a handful of fluorescent daylights, and your textures will look far better than if you spent your money on an expensive camera instead.