3D Scanning - XBox Kinect

Tasks

Individual assignment:

Context

This week we learned about a couple of different ways to scan and digitize real-world objects: Photogrammetry and 3D Scanning.

The first one involves compiling lots of overlapping pictures and using lots of math and computing power to guess what object is being photographed, and where the pictures were taken from.
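The core of that "lots of math" is triangulation: if the same point shows up in two pictures taken from known positions, you can intersect the two viewing rays to find where it sits in 3D. Here is a minimal sketch of that step, using made-up camera positions and an identity camera matrix rather than any real calibration:

```python
# Minimal sketch of the core photogrammetry step: the same point is seen
# from two cameras with known poses, and we triangulate its 3D position.
# All numbers are illustrative, not real capture data.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two 3x4 projection matrices."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # null-space of A = homogeneous 3D point
    return X[:3] / X[3]     # homogeneous -> Euclidean

# Two cameras: one at the origin, one shifted 1 unit along x (the "baseline").
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A point at (0.5, 0, 2) projects to these image coordinates in each camera:
X_true = np.array([0.5, 0.0, 2.0])
x1 = X_true[:2] / X_true[2]
x2 = (X_true - [1.0, 0.0, 0.0])[:2] / X_true[2]

print(triangulate(P1, P2, x1, x2))  # recovers ~[0.5, 0, 2]
```

The hard part that photogrammetry software actually spends its computing power on is everything before this function: matching points between pictures and estimating the camera poses in the first place.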

The second one involves using special equipment that can capture distances to objects and/or pictures of them, and will produce an almost-realtime output of what's being captured.
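What a depth camera hands you is essentially a grid of distances, one per pixel, and the pinhole camera model turns that grid into a 3D point cloud. A small sketch of that back-projection, with placeholder intrinsics (fx, fy, cx, cy) rather than the Kinect's real calibration values:

```python
# Sketch: back-project a depth image (in meters) into an N x 3 point cloud
# using the pinhole model. Intrinsics below are placeholders, not a real
# Kinect calibration.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Convert an (h, w) depth image into an (h*w, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # pixel column -> lateral offset
    y = (v - cy) * z / fy   # pixel row -> vertical offset
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A tiny fake 2x2 depth frame, everything exactly 1 meter away:
depth = np.ones((2, 2))
cloud = depth_to_points(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(cloud.shape)  # (4, 3)
```

This is why the output feels "almost realtime": each frame is just arithmetic per pixel, with none of the heavy matching and optimization that photogrammetry needs.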

This week we used Kinect (the XBOX hardware equipment) and Skanect (a specific piece of software for this purpose) to scan an object.

Scanning an object

So, this week we were asked to try out 3D scanning, and through pure coincidence, or maybe serendipity, I met Angel from the MDEF course, and we ended up pairing up on the 3D Scanning assignment.

Original

Target

Camera tracking

Kinect is able to automatically capture the motion, position, and angle of the camera, which provides very valuable information when it tries to recreate the scene.
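That tracked pose is what makes the scene reconstruction possible: each depth frame arrives in the camera's own coordinates, and the pose (a rotation plus a translation) maps those points into one shared world frame so software like Skanect can merge consecutive frames. A toy sketch with an invented pose, not real tracking data:

```python
# Sketch: use a tracked camera pose (rotation R, translation t) to map
# points from the camera's frame into a common world frame, so frames
# captured from different positions can be merged. The pose here is
# invented for illustration.
import numpy as np

def to_world(points, R, t):
    """Map (N, 3) camera-frame points to the world frame: X_w = R @ X_c + t."""
    return points @ R.T + t

# Camera rotated 90 degrees around the vertical (y) axis, moved 2 m along x:
theta = np.pi / 2
R = np.array([
    [np.cos(theta), 0.0, np.sin(theta)],
    [0.0,           1.0, 0.0],
    [-np.sin(theta), 0.0, np.cos(theta)],
])
t = np.array([2.0, 0.0, 0.0])

# A point 1 m straight ahead of the camera (along its +z axis):
p = np.array([[0.0, 0.0, 1.0]])
print(to_world(p, R, t))  # lands at ~[3, 0, 0] in the world frame
```

If the pose estimate drifts, merged frames stop lining up, which is why good camera tracking matters so much for scan quality.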

Thoughts on 3D scanning

As you can see above, the results are a bit of a mixed bag.

It seems that 3D scanning is a process that consumes both Time and Equipment, and outputs Quality (this statement is so obvious and broad that if you were to roll your eyes any harder, you could get injured!)

This triangle has 3 dimensions: Time, Equipment, and Quality.

Just like the famous "Project Management" triangle, if you want fantastic quality, you'd better have amazing equipment or gigantic amounts of time, either to scan the objects or to do post-processing and heavy maths to interpolate the results into a decent output.

This rule also seems to apply to 3D scanning using Photogrammetry.

In essence, Photogrammetry uses lots of pictures and lots of post-processing time to get a high-quality result, and the cost of equipment seems to scale linearly with the size of the subject (small = just take pictures with your phone; large = get an army of drones to dance and spin along predefined trajectories).

On the other hand, 3D scanning delivers much faster but rougher results. These are good enough if, for example, you want to capture human movement around a room: the latest devices can even calculate the positions of your body and bones/arms/hands/legs in near-real time, at the cost of being only a "good enough" approximation. That is, the system will instantly know whether you are sitting or jumping, but will be unable to determine specific measurements or facial features to the resolution that other solutions can provide.

Thoughts beyond tech

It seems that the market and tools for these technologies have found their own niches and are not really competing with each other; each is even able to cover the gaps the other might have with its own strengths.

Undoubtedly, this fantastic technology will be used for evil, if it isn't happening already, right now. The bigger question is: how can the broader impact of this tech be explained and showcased to everyone, so that they can make informed decisions on whether they consent to being subjected to it? Because the fact that we can use this tech does not matter; what matters is what we do with the information it provides.