Week 05. 3D scanning and printing

Goal

We have two goals this week. The first one is to 3D scan an object. There is extra credit if we make the scanner. I'm not quite sure what they mean by make. It might mean building it from scratch (which I take to include raw laser scanning and milk scanning) or using a DIY 3D scanner (Kinect, structured light, etc.). I think that this week the extra credit should go to Jonathan Minchin and John Rees. Just remembering all the work they did this week makes me feel tired. Minchin patiently drowned a cow (or was it a salt shaker?) in milk and took 50 pictures of the process. He later outlined every picture by hand and assembled a mesh in Rhino. The resulting mesh was not very accurate; actually it looked like the initial cow inside a bag. But the work is remarkable, check it out. John Rees experimented with all sorts of scanners: laser scanner, milk scanner, structured light... I think he tried them all. I can't wait to learn from their results.

The second goal is to design and 3D print an object. There is also extra credit if you edit and print an object that you scanned.

Last week's MIT lecture review

For the first part of the afternoon (morning there) we reviewed the FabISP assignment. Not a very interesting review, since everybody was making the same boards (except for some people who re-designed them from scratch). The script did not choose me this week either; maybe I'm not on the list. I counted 28 people chosen by the script, so my assumption that about 30 students are picked every week holds. According to the calculations I made last week, the probability of being picked by the script for this week's assignment has just risen eleven points, to 68.4%. This is consistent with what I observed, because I saw a lot of people being chosen for the second or even third time.
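For the curious, the arithmetic can be sketched in a few lines. The 28 picks per week is my actual count; the class size below is a made-up placeholder, since I don't know the exact enrolment, so the printed numbers won't match my figures exactly:

```python
# Chance of being picked at least once after k weekly reviews,
# assuming the script picks students uniformly at random each week.
def cumulative_pick_probability(picked_per_week, class_size, weeks):
    p = picked_per_week / class_size       # chance in any single week
    return 1.0 - (1.0 - p) ** weeks        # at least one pick in `weeks` tries

# 28 picks per week (my count); a class size of 135 is a hypothetical figure.
for k in range(1, 6):
    print(k, round(cumulative_pick_probability(28, 135, k), 3))
```

The probability keeps rising week after week, which is why seeing people picked a second or third time is expected rather than surprising.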

After the break Neil introduced us to the most famous process of digital fabrication: 3D printing. When I tell someone I am learning digital fabrication, they don't know what I am talking about; it is only when they hear 3D printing that they start listening. So now we have a bunch of technologies and materials, but none of them is perfect: they all have their pros and cons. It is still far from sci-fi movies, but it is promising and evolving very, very fast. Future possibilities are endless.

Welcome back, Linux (off-topic)

This section is somewhat off-topic, so you can skip it if you are not interested. This week I am making what I think is a key decision in my Fab Academy experience. I have installed Ubuntu 12.04 LTS in a virtualized environment on my Mac. I have decided that during the Academy I am not going to use anything but free and/or open source software in this Linux environment. I might obtain poorer results than with the software I am used to on Mac OS X or Windows, but I feel this is a necessary step to progress in the Academy. I don't know when or where, but in the future I would like to run a Fab Lab (near the low-end side of the powers-of-ten Fab Labs). That will require free and open source software and hardware, so it is better to face any problems with the software side now. So far I am very happy with it. Installation was pretty straightforward. I am editing this page with Bluefish on Linux and it feels wonderful, much better than the open source BlueGriffon or any commercial software on Mac OS. I also installed the fab modules in no time (it was a pain on Mac OS).

KERNEL PANIC? I could not achieve my goal of using only Ubuntu for the Fab Academy this week. After resizing the partition in Parallels Desktop -because I installed so many programs that the virtual HD was running low on space- the Ubuntu virtual machine didn't like it and stopped working. I only got a black screen with a blinking cursor. I googled the problem and it looks like I now need to boot from an Ubuntu Live DVD and perform an fsck. Unfortunately, I ran out of time this week, so I will start repairing Ubuntu on Wednesday afternoon.

SOLVED: I finally solved the problem two days later, without a Live CD. If you just press and hold SHIFT while booting, a text menu appears with all sorts of repair tools.

3D Scanning

123D Catch & Cubify Capture

Let's make it short: I am very disappointed with Cubify Capture and 123D Catch. It is so easy in their promo videos: you just upload a bunch of pictures and you obtain a very detailed and accurate textured 3D model! Why would anyone bother with Kinect, structured light, laser scanners or milk scanning? I'll tell you why. Because these magical apps are not reliable at all. I haven't been able to scan anything properly with 123D Catch or Cubify Capture. Moreover, these web apps give terrible user feedback: you wait in front of your computer wondering whether the long wait is normal, the internet connection dropped, your computer froze or something is wrong with the app. You will never know, because all you see is a spinning bar in every case. It all comes down to who loses patience first. In the iPad version of 123D Catch you cannot even choose pictures from the camera roll; instead, you have to take the pictures fresh every time you want to scan an object. Pretty advanced tech, huh? And when you have been waiting for an hour and you hit reload because you know this is too much time to upload some pictures, and you realize you lost all the work done... it is a sensation that is hard to explain. The best thing I can say about 123D Catch and Cubify is that I hate them so much.

Object images
Images of the object that I wanted to scan. This is the Drac of Sitges.

On Monday I breathed deeply like ten times in a row and gave it a final try. Someone inside me said: you are stupid. He was right. This time I installed the 123D Catch Windows desktop version, which I didn't like because I wanted to use Ubuntu, but anyway. Before this you need to activate and download .NET Framework 3.5 if you have Windows 8. I took a new set of pictures of myself and tried to upload them. This time the upload process went fine, and in the Windows version I even got feedback on the percentage done. But later an on-screen message said that the model had an error. No more details. In Cubify, after (successfully?) uploading the images there was no NEXT STEP button, and when I reloaded the page there wasn't any project in my account. Finally, Juan Ranera, one of my colleagues, made a 3D model of his head in just 15 minutes of workflow using 123D Catch for iPhone. I wonder if this is a hidden camera joke or what.

MY ADVICE: Use 123D Catch and Cubify either as your first resource (maybe you are lucky) or as your last resort (when you have nothing left to lose), and maybe for experimental or comparison purposes. But I wouldn't waste a lot of time on them, because you are not going to learn anything by spending more time on them.

Structured Light 3D Scanner

These last four days, including the weekend, I have been ill at home, so I could not drop by the lab. On Monday I saw that my buddies were working with a laser line scanner, a milk scanner and David's scanner.

Then I found an open source project on the web for a camera-projector scanner, and Jose Pablo and I decided to give it a try. It consists of a Processing sketch that processes and decodes three images of the object you want to scan, each with a different B/W band pattern projected over it (phase1.jpg, phase2.jpg and phase3.jpg). It uses a triangulation principle to work out the depth of every pixel in the image. If you do not modify the sketch, it expects 640x480 images in portrait mode with horizontal bands on them. Based on my experience, you get better results using a dark background; with a light background, the script has problems interpreting the shadows. Scanning a dark object is not recommended either, since the pattern is not well recognized on dark surfaces.
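To get an intuition of what the sketch is doing, here is my own toy illustration (in Python, not the actual Processing code) of the standard three-step phase-shifting math: each pixel's phase is recovered from its three intensity samples with an arctangent. The light values below are simulated, not taken from my scans:

```python
import math

def decode_phase(i1, i2, i3):
    """Recover the projected phase at one pixel from three intensity
    samples taken with patterns shifted by -120, 0 and +120 degrees."""
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Simulate one pixel: ambient light A, pattern contrast B, true phase 1.0 rad.
A, B, true_phase = 0.5, 0.4, 1.0
shifts = (-2.0 * math.pi / 3.0, 0.0, 2.0 * math.pi / 3.0)
i1, i2, i3 = (A + B * math.cos(true_phase + s) for s in shifts)

recovered = decode_phase(i1, i2, i3)
print(round(recovered, 6))  # → 1.0
```

Note that this only gives the wrapped phase; the real scanner still has to unwrap it and convert it to depth using the camera-projector geometry, which is the triangulation part.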

I scanned myself using this technique, you can see the phase pictures below:

Projected pattern over me

And the final result:

Structured light scan

CONCLUSION: Though it looks like a promising technology and requires just 3 pictures, I need to experiment more with it before making a final decision. So far it is nice but requires a lot of post-processing. I don't think it is possible to scan 360° of an object without heavy post-processing work.

Kinect

I could not experiment with it this week, but I want to do it later and document the results here, so I am reserving this space.

Even though we have a Kinect in the lab, on week 07 I found an offer in a shopping mall: Microsoft Kinect for Xbox 360 at 99 EUR (normal price 150 EUR). Since I had a 25 EUR gift card from that mall, I bought it. I installed a free version of ReconstructMe and did some tests. I have to say that this is the most reliable scanning solution I have tried so far for objects bigger than a football. I obtained fast results with little or no post-processing, so I think this is a great peripheral for a Fab Lab. It is also said (I still have to try it) that if you put +2.5 reading glasses in front of your Kinect you can greatly increase the scanning resolution. It looks like the Microsoft Kinect is shortsighted.

Microsoft Kinect

3D Printing

Makerbot Replicator 2

I didn't use this machine, but I saw the results from a couple of my mates. I like the ease of use of the software and of the machine's own menu. It makes 3D printing sort of easy and it is relatively fast. It uses PLA. But there is one thing I don't like very much: it is not open source anymore. Luciano says that open source hardware and reliability don't always go hand in hand. It was like this at the beginning of open source software as well, but now mature OSS packages by far surpass proprietary software in features. And I see no problem if they release the source and some enthusiasts tweak the machine a bit. I think they made the wrong decision; but again, IMO it is the best desktop 3D printing machine money can buy right now.

Makerbot Replicator

Zcorp 3D printer

I am using this machine to make my assignment this week. This is an old-technology machine. They use it a lot here at IAAC because it makes very precise models, and they are white. And everybody knows that architects love white. It uses a white powder that spreads over the whole room and makes you look like a trafficker: there is white powder all over the tables, and you end up covered in it, hands, face and clothes. It has a drawback though: it is extremely slow. So slow that it didn't finish printing our assignments in time to document them for the review. The machine gives you an accurate estimate of how long it is going to take, in our case more than 10 hours. Then you have to wait one more hour for the part to cure, and remove all the dust with a vacuum and brushes. And it needs some coating to protect the surface.

Zcorp Printer

For my model I was inspired by Star Wars: The Empire Strikes Back: Han Solo in Carbonite. Actually, I wanted to 3D scan myself and make my own version of Franc Solo in Carbonite, but since I couldn't scan myself properly, I designed a Pixelbot in SketchUp and put it inside the Carbonite.

Pixelbot in Carbonite

Processing the mesh was very time consuming: I exported the design from SketchUp to Collada format (the Pro version of SketchUp can export to .stl directly), imported the Collada file into MeshLab, exported it to .stl, and finally imported the .stl into Rhino. Then came a lot of refining, simplifying, boolean operations on the parts, closing holes, etc., both in MeshLab and Rhino, until I obtained a single closed object.
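The single closed object requirement was the part that caused the most rework: a mesh only prints cleanly when it is watertight, meaning every triangle edge is shared by exactly two faces. Here is a minimal, self-contained illustration of that check in Python, on a toy tetrahedron of my own rather than my actual model:

```python
from collections import Counter

def is_watertight(triangles):
    """A triangle mesh is closed (watertight) when every edge is
    shared by exactly two triangles. Triangles are vertex-index tuples."""
    edges = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[frozenset((u, v))] += 1   # ignore edge direction
    return all(count == 2 for count in edges.values())

# A tetrahedron: four faces, closed...
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tetra))        # → True
# ...and the same solid with one face deleted: a hole the printer rejects.
print(is_watertight(tetra[:3]))    # → False
```

MeshLab and Rhino run much more sophisticated versions of this kind of test when they report holes and non-manifold edges, which is what all that closing-holes work was fixing.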

The 3D printed object needed post-processing. Using a piece of acrylic I found around the lab, I refined the sharp angles. The small voids were cleared with the leg of an LED, though any needle would do. Finally, the small gaps between the letters were refined with a piece of paper. So this is the final result.

Refining the bot

The last step was to apply a coating of methyl cyanoacrylate (aka Superglue 3) to strengthen the surface layer. Lastly, a picture of the Pixelbot in Carbonite along with some Playmobils, ready to go!

Pixelbot Playmobil

The future and beyond

We have now experimented with scanning and designing objects and bringing them into the real world, from bits to atoms, as they call it. The manufacturing industry may not feel threatened yet, since the resolution of even the best of these machines is mediocre at most. But at this rate of evolution, I wonder what will happen when I can print a Playmobil and you cannot tell which one was printed and which one was mass manufactured. What will the Playmobil company do? If they ignore this (like the music industry ignored digital mp3 music at the beginning) they will more than probably go bankrupt. Will they stop or reduce selling atoms (the physical object) and start selling bits (the model of the object)? Will it be an open format that allows people to modify their objects or add accessories to them? Where will this shared intellectual property lie, and how will people sell their creations? This is an interesting topic of discussion.

In my field (building construction) the change is going to be epic, fading out 3000 years of an already obsolete industry. Digital construction will destroy unqualified jobs while creating new qualified ones. As Negroponte predicted, education is going to be key for the digital era.

What I learned

Digital fabrication by 3D printing is still an immature technology, but we are not here to review early-stage machines; we are here to learn and explore the implications of this technology. Do not underestimate this temporary weakness. Digital will replace analog in every field. It has already happened in TV, video, photography, music, press, computing and communications, and it will happen again with fabrication and construction. It is a mathematical certainty. We'd better consider this short delay a gift of time to get ready and educated for what's coming.

Download files

You can download all the files related to this week here.