Interview With Kyle McDonald

Q: What inspired you to write the three phase decoder? What did you want yourself or others to use it for?

A: Last year I heard about a choreographer who was using the DAVID line laser scanner with some Lego motors for creating 3D stop motion animations. Every scan took about one minute, so it was a painstaking process. The idea of 3D stop motion had me interested, but one minute per frame seemed way too long!

I wrote the three phase decoder because I wanted to make something that would allow more people to experiment with 3D stop motion using resources already available to them.

Q: How did you write it, i.e. how and where did you learn everything about structured light 3D scanning? How long did it take?

A: When I started I didn't have a laser, so I thought I'd use a projector instead. This helped me realize that the fastest scanning technique would record information about every pixel simultaneously rather than one line per frame. I created a scanner based on binary subdivision, which takes around 8-10 frames, one or two orders of magnitude faster than laser scanning.
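The frame count of a subdivision scanner follows directly from the binary encoding: each projected pattern halves the ambiguity about which projector column lit a given camera pixel. A minimal sketch of the idea (the 1024-pixel projector width and the helper names are illustrative, not Kyle's actual code):

```python
import math

def frames_needed(projector_width):
    """Number of binary patterns needed to give every projector column a unique code."""
    return math.ceil(math.log2(projector_width))

def decode_column(bits):
    """Recover a projector column index from the per-frame on/off bits
    observed at one camera pixel (most significant bit first)."""
    column = 0
    for b in bits:
        column = (column << 1) | b
    return column

# For a 1024-pixel-wide projector, 10 frames suffice -- versus one frame
# per scanline for a sweeping laser, hence the orders-of-magnitude speedup.
print(frames_needed(1024))                    # 10
print(decode_column([1, 1, 0, 0, 1, 0, 1, 1, 0, 1]))  # 813
```

In practice each pattern is also projected inverted so the decoder can threshold each pixel reliably, which is why real scans land in the 8–10 frame range rather than exactly log2 of the width.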

I thought that was the best you could do, but then I discovered "structured light": an umbrella term for the kind of projector-camera 3D scanning system I was working with. I learned that people had been doing this for decades, and they even had some techniques for using fewer frames to 3D scan motion in real time (like three phase scanning).
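Three phase scanning gets the frame count down to three by projecting sinusoidal patterns shifted by 120 degrees and recovering a wrapped phase at every pixel. A sketch of the standard three-step phase-shifting formula from the literature (following Zhang's papers, not Kyle's decoder specifically):

```python
import math

def wrapped_phase(i1, i2, i3):
    """Wrapped phase in (-pi, pi] from three intensities observed at one
    camera pixel under patterns phase-shifted by -120, 0, and +120 degrees."""
    return math.atan2(math.sqrt(3) * (i1 - i3), 2 * i2 - i1 - i3)

# Synthesize the three intensities for a known phase and verify round-trip.
true_phase = 0.7
ambient, amplitude = 0.5, 0.4  # arbitrary offset and modulation
i1 = ambient + amplitude * math.cos(true_phase - 2 * math.pi / 3)
i2 = ambient + amplitude * math.cos(true_phase)
i3 = ambient + amplitude * math.cos(true_phase + 2 * math.pi / 3)
print(round(wrapped_phase(i1, i2, i3), 6))  # 0.7
```

The decoder's remaining work, phase unwrapping, is turning these per-pixel wrapped values into a continuous surface, which is where most of the difficulty (and the "naive decoding" Kyle mentions later) lives.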

While the initial subdivision scanner took a few days from idea to demo, the three phase scanner took a few weeks. It wouldn't have happened without some code by Alex Evans from Media Molecule (ported to Processing by Florian Jennett), and some great research papers from Song Zhang at Iowa State University (who worked on the technology for Radiohead's "House of Cards" music video).

After that initial development, it's been over a year of brainstorming with people, reading papers, and completely rewriting the code multiple times. And there's still a lot of work to do.

Q: Why did you make it open source, completely free to anyone interested?

A: I think we need as many people as possible doing what they love. Open source is one way to get people the tools they need when they wouldn't otherwise have access.

I'd also like to help overcome the novelty associated with new technologies. More people using 3D scanning means more diverse perspectives on what can be done with the technology.

Q: Any further/deeper applications for ThreePhase?

(obviously perfect for anyone with a 3D printer)

A: One advantage of a 3D printer is that you can resize while you replicate. I'd love to see some very large things scanned and made very small, or vice versa.

There is also a malleability to 3D scanned data that isn't available in the physical world. It'd be nice to have some objects that are combinations or averages of multiple items. Maybe a Katamari ball made from real household objects?

Q: What is the future of structured light 3D scanning? What do you wish to see happen next with it?

A: While the three phase technique comes primarily from academic papers and is relatively unencumbered by patents, I have an idea for a completely open source scanning technique that would allow a more flexible trade-off between accuracy and speed. It could be adapted to high resolution still scans or lower resolution motion scans.

Q: Can ThreePhase be improved? why and how?

A: The Processing three phase decoder is meant more as a demo, and lacks a lot of features for automated decoding. There is a more robust version built with openFrameworks, where the majority of my work is focused.

But for both apps I'd really like to get some people involved who have a stronger mathematics and computer vision background. The decoding process is currently very naive and doesn't account for the various parameters inherent to the projector and camera.

Q: Did you ever imagine this would be a project worked on at Makerbot or another 3D printing company?

A: I'm regularly surprised by the ways this work is used. So, in a way, this makes perfect sense.

Q: What is your next big project?

A: I've been looking into projection mapping, or using a projector to augment a scene. This is another technique that is currently thriving on its novelty, so I'm working on a toolkit that makes it easier to scan and projection map arbitrary scenes and objects. There's also a specific interactive environment I'd like to create with this technique that plays with our understanding of light sources.

For updates, see my website http://kylemcdonald.net/ or follow me @kcimc

Unless otherwise stated, the content of this page is licensed under GNU Free Documentation License.