
Abstract

Principle

Difficulties

Bash draft program

Image extraction

Processing loop

Vectorisation

STL generation

Quick and dirty multithreading

Gcode generation

First generation test

Graphical interface




Abstract

The topic of the week is the FabAcademy Application Programming assignment: we must make an application that interfaces with an input and/or output device.

I chose to try to make a piece of software that transforms a movie into a physical object via 3D printing:
the input is a movie, and the output is a RepRap, so some G-code.


Principle

The movie is split into frames, and each frame is individually vectorised, extruded into a one-layer STL and sliced;
afterwards a global G-code is generated.
The result is a G-code file with one layer for each frame.
Optionally, the G-code can be sent directly to the RepRap.
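
As an illustration of this principle only (the file names, parameters and the process_frame helper below are placeholders, not the exact commands of the draft described next), the whole per-frame pipeline could be driven from Python roughly like this:

  import glob
  import os
  import subprocess

  # Hypothetical sketch: one layer per frame, produced by the same external tools.
  SCAD_TEMPLATE = 'linear_extrude(height = 0.15) import(file = "%s");\n'

  def process_frame(bmp):
      base = bmp[:-4]                                   # strip ".bmp"
      subprocess.run(["potrace", "-e", "-o", base + ".eps", bmp], check=True)
      subprocess.run(["pstoedit", "-dt", "-f", "dxf:-polyaslines",
                      base + ".eps", base + ".dxf"], check=True)
      with open(base + ".scad", "w") as f:              # wrap the DXF for OpenSCAD
          f.write(SCAD_TEMPLATE % (base + ".dxf"))
      subprocess.run(["openscad", "-o", base + ".stl", base + ".scad"], check=True)
      subprocess.run(["slic3r", "-o", base + ".gcode", base + ".stl"], check=True)

  os.makedirs("images", exist_ok=True)
  subprocess.run(["ffmpeg", "-i", "movie.flv", "images/image-%03d.bmp"], check=True)
  for frame in sorted(glob.glob("images/image-*.bmp")):
      process_frame(frame)
  # a last step concatenates the per-frame G-codes, shifting Z by one layer each time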

Difficulties

The biggest difficulty is the conversion from an image sequence to a 3D mesh.
I tried the fab modules' gi_stl tool, but this tool has a little bug: if images have different numbers of "plain" pixels, holes appear in the mesh.
This drawback is not so important for a small number of images, because it is possible to fix the mesh,
but with a very large number of images (up to 1000), manual corrections are impossible.


So I examined the possibility of using OpenSCAD to build a stack of extruded images.
But I quickly found that OpenSCAD uses a lot of resources (particularly memory) when extruding a large number of complex DXFs:
I failed to generate a large stack with reasonable time and memory usage.

That is finally why I decided to generate one STL for each frame, then one G-code per frame, and to stack the G-codes.
In the end, the result is conceptually interesting, because I am able to build a physical object without ever getting its mesh:
I only produce the G-code for the printing.

Bash draft program


I begin by writing a draft program to validate the toolchain.
I choose bash because I need to chain several different pieces of software.



Image extraction


First, it is necessary to extract each frame of the movie.
For this I use ffmpeg:

  ffmpeg -i RaceHorseMuybridge.flv images/image-%3d.bmp


Processing loop

Then I build a loop to process each image individually.


  function demon(){
      # Process every image whose name starts with image-$1 :
      # vectorise it, wrap it in an OpenSCAD file, and render the STL.
      echo "processing all images image-$1*.bmp";
      for i in image-$1*.bmp;
      do
          if [ -f ${i} ]
          then
              echo "_________________________________________________________${i}";
              potrace -o ${i%bmp}eps -k 0.6 -t 50 -B-0.25 -e ${i};
              #potrace -o ${i%bmp}eps -k 0.43 --tight -M -0.41 -e ${i};
              pstoedit -dt -f dxf:-polyaslines ${i%bmp}eps ${i%bmp}dxf;
              rm ${i%bmp}eps
              # write the OpenSCAD file
              echo "intersection(){
  translate([-4,0,-5])cube([96,63,10]);scale([15,15,1])linear_extrude(height = 0.15, center = false, convexity = 10) import (file = \"${i%bmp}dxf\");}translate([-4,0,0])cube(0.05);
  translate([91.95,62.95,0])cube([0.05,0.05,0.14]);" > ${i%bmp}scad;
              echo "computing ${i%bmp}STL";
              /home/cedric/soft/openscad/openscad -o ../stl/${i%bmp}stl ${i%bmp}scad;
              rm ${i%bmp}scad;
              rm ${i%bmp}dxf;
              echo "_______________________________________${i%bmp}STL ok";
          else
              echo "${i} does not exist";
          fi
      done
  }

Vectorisation

To define the filled and empty zones from the colour (or black and white) content of a picture, I vectorise it with potrace:

  potrace -o ${i%bmp}eps -k 0.6 -t 50 -B-0.25 -e ${i};


Potrace is able to output DXF directly, but this variant is not readable by OpenSCAD;
that is why I use pstoedit to convert the EPS to a DXF variant that is well formatted for OpenSCAD.

  pstoedit -dt -f dxf:-polyaslines ${i%bmp}eps ${i%bmp}dxf;



STL generation

For each image, I build an OpenSCAD file containing:

  // clip the extruded, scaled frame to a fixed 96 x 63 region
  intersection(){
      translate([-4,0,-5]) cube([96,63,10]);
      scale([15,15,1])
          linear_extrude(height = 0.15, center = false, convexity = 10)
              import(file = "${i%bmp}dxf");
  }
  // small cubes in two opposite corners, keeping every frame's bounding box identical
  translate([-4,0,0]) cube(0.05);
  translate([91.95,62.95,0]) cube([0.05,0.05,0.14]);

Then I produce the STL file by running OpenSCAD:

  openscad -o ../stl/${i%bmp}stl ${i%bmp}scad;


Quick and dirty multithreading

OpenSCAD is not optimised for multiprocessor use, and it takes a long time to process each image:
for the Muybridge horse example, about 20 seconds per image.

That is why I decided to force multithreading, to use the full computing power of my computer (which is a quad core) and reduce the processing time.

For the draft program, I simply use the shell's "&" background operator:

  for z in {0..4}
  do
      echo ${z}1;
      ## launch several "demon" workers on different parts of the image set
      demon ${z}0 &
      demon ${z}1 &
      demon ${z}2 &
      demon ${z}3;
      echo "first batch finished====================================================="
      demon ${z}4 &
      demon ${z}5 &
      demon ${z}6;
      echo "second batch finished====================================================="
      demon ${z}7 &
      demon ${z}8 &
      demon ${z}9;
      echo "third batch finished====================================================="
  done

This strategy is not very clean because the end of each background process is not correctly detected, but it is enough for a draft.
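
A cleaner alternative, and only a sketch rather than what the draft actually does, would be to launch the OpenSCAD runs from Python with a process pool, which waits for every worker by itself (the ../stl/ layout and the four workers are assumptions):

  import glob
  import subprocess
  from multiprocessing import Pool

  def frame_to_stl(scad_path):
      """Render one frame's .scad file to STL; the pool limits concurrency."""
      stl_path = "../stl/" + scad_path[:-4] + "stl"     # image-001.scad -> ../stl/image-001.stl
      subprocess.run(["openscad", "-o", stl_path, scad_path], check=True)
      return stl_path

  if __name__ == "__main__":
      scad_files = sorted(glob.glob("image-*.scad"))
      with Pool(processes=4) as pool:                   # one worker per core of the quad core
          for done in pool.imap_unordered(frame_to_stl, scad_files):
              print(done, "ok")
      print("all frames converted")                     # reached only after every job has ended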



Gcode generation

The G-codes are generated with Slic3r.
Because this program is already optimised for multithreading, I process it in a separate loop from the STL generation:

  for i in image-*.stl;
  do
      # slice each per-frame STL into its own G-code
      slic3r --load ../config.25.plein.ini -o ../gcodes/${i%stl}gcode ${i}
      echo ${i};
  done
  python mixGcode.py

The last command, mixGcode.py, is a Python script that concatenates the G-codes, adding a Z offset for each layer.
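
The real script is in the archive linked below; as a rough idea of what such a concatenation can look like, here is a minimal sketch, assuming a 0.15 mm layer height, absolute Z moves, and per-frame G-codes already stripped of their start/end blocks:

  import glob
  import re

  LAYER_HEIGHT = 0.15                      # assumed: must match the extrusion height in the .scad files
  z_move = re.compile(r"Z(-?\d+\.?\d*)")

  def shift_z(line, offset):
      """Offset the Z coordinate of G0/G1 moves; leave every other line untouched."""
      if not line.startswith(("G0 ", "G1 ")):
          return line
      return z_move.sub(lambda m: "Z%.3f" % (float(m.group(1)) + offset), line)

  with open("movie.gcode", "w") as out:
      for index, path in enumerate(sorted(glob.glob("../gcodes/image-*.gcode"))):
          with open(path) as layer:
              for line in layer:
                  out.write(shift_z(line, index * LAYER_HEIGHT))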

The draft program is here: [file:Movie2gode.draft.zip]


First generation test

As my program is a modest contribution to the history of cinema, I try it with the Muybridge race horse, which is one of the first movies:
Muybridge race horse animated.gif Horse.gcode.jpg Horse.pers.jpg
With this test, I validate that it is possible to get a physical object from a movie, with time mapped to the Z axis.

The result is quite good, but some details can be improved:
because of the generation principle (and with some particular image sequences), Slic3r is unable to add support material.
So it is necessary to anticipate this by using images with only small differences between consecutive frames (in other words, slow motion):
in this case, the previous image's layer can support the next one.
For this particular example, I used the slowmoVideo software to generate extra in-between frames.

At this step, fine tuning of the vectorisation, size and slicing is not possible without going into the code: I need to program a graphical interface to make the program usable with different movies.


Graphical interface

I start programming the graphical interface to be able to tune the vectorisation, size and slicing parameters without touching the code.

Optionally, it would be good to have direct communication with the RepRap, via a serial interface, to stream the G-code directly.

In fact, I have to rewrite the program entirely, to make it more coherent and easier to evolve.

Because I still use some Python scripts inside, I decide to use this language for the main program.
The interface will use the wxPython library.
Movie2Gcode.png
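
Purely as an illustration of the wxPython direction (the class and widget names here are hypothetical, not the actual program), a minimal skeleton for such an interface could look like this:

  import wx

  class Movie2GcodeFrame(wx.Frame):
      """Minimal window: pick a movie file and start the conversion."""
      def __init__(self):
          super().__init__(None, title="Movie2Gcode")
          panel = wx.Panel(self)
          sizer = wx.BoxSizer(wx.VERTICAL)
          self.picker = wx.FilePickerCtrl(panel, message="Choose a movie")
          run_button = wx.Button(panel, label="Generate G-code")
          run_button.Bind(wx.EVT_BUTTON, self.on_run)
          sizer.Add(self.picker, 0, wx.EXPAND | wx.ALL, 8)
          sizer.Add(run_button, 0, wx.ALL, 8)
          panel.SetSizer(sizer)
          self.Fit()

      def on_run(self, event):
          # placeholder: here the frame extraction / slicing pipeline would be launched
          print("would process:", self.picker.GetPath())

  if __name__ == "__main__":
      app = wx.App()
      Movie2GcodeFrame().Show()
      app.MainLoop()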
At this time, the work isn't finished, but I can say that I've learned:

I plan to finish this program and publish it soon.

[File:movie2gcode.zip]