Week 02: Computer-Aided Design
Model a possible final project
Autonomous pen plotter

I modeled an autonomous pen plotter, with its camera eyes and some imported electronic components.
Assignments
- model (raster, vector, 2D, 3D, render, animate, simulate, …) a possible final project
- compress your images and videos, and post a description with your design files on your class page
1. 3D modeling, animation and rendering with Fusion
Step 1-1: Research & Design
Before modeling a possible final project, I researched existing drawing robots, the electronic components they use, and cameras that could serve as my robot's eyes.
Existing projects: I found that the Open Source Turtle Robot (OSTR) and the Otto Drawing Turtle Robot both seemed like good starting points.
Components selected: I decided to use two 28BYJ-48 stepper motors, as in the two projects above, and skipped the servo and microcontroller for the time being. For the robot's camera eyes I picked the OV2640 module, as it pairs well with ESP32 microcontrollers.
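The 28BYJ-48 is usually driven through exactly this kind of ULN2003 board with an eight-state half-step sequence, which gives a sense of the resolution available for the plotter. A minimal Python sketch (the constants are the commonly quoted nominal values, not measured ones, and the final drive code will depend on the microcontroller chosen later):

```python
# Half-step sequence commonly used to drive the 28BYJ-48 through a
# ULN2003 board. Each row is the on/off state of driver inputs IN1..IN4.
HALF_STEP = [
    (1, 0, 0, 0),
    (1, 1, 0, 0),
    (0, 1, 0, 0),
    (0, 1, 1, 0),
    (0, 0, 1, 0),
    (0, 0, 1, 1),
    (0, 0, 0, 1),
    (1, 0, 0, 1),
]

# Nominal figure: the internal gearbox gives about 4096 half-steps per
# output-shaft revolution (the real gear ratio is slightly lower).
STEPS_PER_REV = 4096

def steps_for_angle(degrees: float) -> int:
    """How many half-steps rotate the output shaft by `degrees`."""
    return round(degrees / 360 * STEPS_PER_REV)
```

At roughly 4096 half-steps per revolution, the motor moves in steps of under 0.1 degrees, which seems fine for a pen plotter.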
Step 1-2: 3D Modeling with Fusion
I started with 3D modeling in Fusion, with which I have some prior experience.
Starting from a basic shape: I began with a very basic shape, as I learned in the Student Bootcamp and the Week 2 lecture; my choice was a cylinder.
Import electronic components: I imported the components and adjusted the size of my model where necessary.
Applied appearances: After some trial and error, I chose a combination of Glass - Light Color (blue), pine wood, and brass appearances. I also adjusted some fillets further.

Step 1-3: Animation with Fusion
I made an animation of the model as below.
Step 1-4: Rendering with Fusion
Render: Then, finally, I rendered an image of the robot. I was surprised that the glass is not blue as it was in the design phase but almost clear; still, it looks very realistic, and I was happy.

2. 2D vector image modeling with Inkscape
One of the use cases of my potential final project robot is to draw a very large coloring page based on a line drawing in SVG format. I decided to generate such an SVG with Inkscape.
Step 2-1: Generate an image using AI
I first generated an image in the style of a traditional Japanese painting with Midjourney, using the following prompt.
A contemporary interpretation of Rinpa-style Nihonga, depicting camellia flowers with a strong emphasis on negative space and silence. Minimalist composition with vast blank areas, asymmetrical placement of a few refined camellia blossoms and leaves, reduced forms, restrained lines, and intentional simplicity. Traditional Japanese aesthetics reimagined through a modern lens: flat decorative surfaces, subtle mineral-pigment-like textures, vivid flowers against muted background, calm and sophisticated palette. Poetic emptiness, quiet tension, and balance between form and void. No realism, no perspective depth, no dramatic lighting, no Western painting style. Timeless, serene, museum-quality contemporary Japanese art. Aspect ratio 4:3

Step 2-2: Trace Bitmap with Inkscape
I opened Inkscape and went to the menu bar "Path" > "Trace Bitmap."

Step 2-3: Edge detection
I selected "Edge Detection" as the "Detection mode."

Step 2-4: Remove image
When edge detection is applied, the resulting black edge path lies on top of the original image. I had to delete the image and leave only the path.

Step 2-5: Save path as SVG
Then I obtained the path in black and white and saved it as Plain SVG. The lines in the SVG file are not yet as clean as I had expected, but I will solve that later, perhaps with a Python program.
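As one idea for that later cleanup, the jagged traced polylines could be thinned with line simplification. A minimal Python sketch of the Ramer-Douglas-Peucker algorithm (the tolerance is an assumption to tune against the real file, and parsing the SVG paths into point lists is not shown):

```python
import math

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def simplify(points, tolerance=1.0):
    """Ramer-Douglas-Peucker: recursively drop points that lie within
    `tolerance` of the chord between the segment endpoints."""
    if len(points) < 3:
        return list(points)
    # Find the interior point farthest from the chord.
    dists = [_point_line_dist(p, points[0], points[-1]) for p in points[1:-1]]
    k = max(range(len(dists)), key=dists.__getitem__)
    if dists[k] > tolerance:
        i = k + 1  # index of the split point in `points`
        left = simplify(points[:i + 1], tolerance)
        right = simplify(points[i:], tolerance)
        return left[:-1] + right  # avoid duplicating the split point
    return [points[0], points[-1]]
```

For example, near-collinear jitter along a straight stroke collapses to the two endpoints, while genuine corners are kept.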

3. 2D raster image modeling with GIMP
I chose GIMP to model a 2D raster image. I found it feature-rich but not easy for a beginner. I simply added some text to the image rendered with Fusion.

4. Simulation with Blender Physics
My drawing robot will somehow need to know the four corners of the surface it draws on. I think this could be achieved with four blocks at the corners, so I modeled them in Blender.
Blender looked intimidating at first, but after Rico's lecture in the Asian Regional Session and with Tamiya-san's instructions on the FabLab Kannai page, I managed to create the following simulation. One of the blocks changes direction before it falls down; I need to dig deeper later.
I also imported the robot from Fusion in OBJ format and the coloring page image into Blender.
5. Image compression with AppleScript/FFmpeg
Last week I simply used Mac's Preview app for image compression. But this week, at the local session at FabLab Kannai, I learned a cool way to create my own AppleScript application and turn compression into a simple drag-and-drop operation.
Step 5-1: Install FFmpeg
First of all, I had to install FFmpeg by typing the following command in the terminal.
brew install ffmpeg
Step 5-2: Create a new script
I opened the Script Editor app on my Mac, and created a new document.
Step 5-3: AppleScript code
I pasted the following AppleScript code into the Script Editor.
on open droppedFiles
    set targetWidth to 1080 -- target width for resizing (in pixels)
    repeat with eachFile in droppedFiles
        -- Get the POSIX file path
        set filePath to POSIX path of eachFile
        -- Process only image files
        if filePath ends with ".jpg" or filePath ends with ".jpeg" or filePath ends with ".png" then
            -- Build the output path (append _w1080 to the original filename)
            if filePath ends with ".jpeg" then
                set baseName to text 1 through -6 of filePath
            else
                set baseName to text 1 through -5 of filePath
            end if
            set outputFilePath to baseName & "_w" & targetWidth & ".jpg"
            -- Check whether the output file already exists
            set fileExists to false
            try
                set fileExists to (do shell script "test -e " & quoted form of outputFilePath & " && echo true || echo false") is "true"
            end try
            -- If it exists, let the user skip or overwrite this file
            set shouldProcess to true
            if fileExists then
                set userChoice to display dialog "File already exists. Overwrite?" buttons {"Skip", "Overwrite"} default button "Skip"
                if button returned of userChoice is "Skip" then set shouldProcess to false
            end if
            if shouldProcess then
                -- Resize with ffmpeg: width targetWidth, height keeps the aspect ratio
                do shell script "/opt/homebrew/bin/ffmpeg -y -i " & quoted form of filePath & " -vf \"scale=" & targetWidth & ":-1\" -q:v 2 " & quoted form of outputFilePath
            end if
        else
            display dialog "Only image files (JPG, JPEG, PNG) are allowed." buttons {"OK"} default button "OK"
        end if
    end repeat
    -- Optional completion message
    -- display dialog "Images resized successfully!"
end open
Step 5-4: Save as an application
From the Script Editor menu, I chose File > Save As. When saving, I changed File Format to “Application”. Location can be where I want the app to appear. I chose my Desktop.
Step 5-5: Drag and drop the image file
After saving, this AppleScript works as an application. I dragged and dropped an image file, and the compressed version is ready!
You can see below that while the original image was a 2MB PNG, the script above compressed it to a 282KB JPG in a single drag and drop.
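For reference, the same resize step can also be scripted without AppleScript. A minimal Python sketch under the same assumptions as the script above (ffmpeg on the PATH, `_w1080` output suffix); the helper names here are mine, not part of any library:

```python
import subprocess
from pathlib import Path

TARGET_WIDTH = 1080  # pixels; mirrors the AppleScript above

def output_path(src: Path) -> Path:
    """Derive the output name, e.g. photo.png -> photo_w1080.jpg."""
    return src.with_name(f"{src.stem}_w{TARGET_WIDTH}.jpg")

def resize(src: Path) -> Path:
    """Scale the image to TARGET_WIDTH wide (height follows the aspect
    ratio) and re-encode as JPEG via ffmpeg."""
    dst = output_path(src)
    if dst.exists():
        raise FileExistsError(dst)
    subprocess.run(
        ["ffmpeg", "-i", str(src),
         "-vf", f"scale={TARGET_WIDTH}:-1", "-q:v", "2", str(dst)],
        check=True,
    )
    return dst
```

This could later run in a batch over a whole folder of documentation images.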

6. Video compression with FFmpeg
Step 6-1: Edit the video
After creating the video with Fusion, I first edited it with iMovie.
Tips
If the source video is small, it cannot be enlarged to full width later, because iMovie fixes the project resolution from the first clip. Include a white 1080p image in the project, and then you can export at 1080p.
Step 6-2: Run the FFmpeg command
Then I ran the following command.
ffmpeg -i input_video.mp4 -vcodec libx264 -crf 25 -preset medium -vf scale=-2:1080 -acodec libmp3lame -q:a 4 -ar 48000 -ac 2 output_video.mp4
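In the `scale=-2:1080` filter, `1080` fixes the height and `-2` asks FFmpeg to pick an aspect-preserving width rounded to an even number, which libx264 requires. A rough Python sketch of that calculation (FFmpeg's exact rounding may differ by a pixel):

```python
def scaled_width(src_w: int, src_h: int, target_h: int = 1080) -> int:
    """Approximate width that scale=-2:target_h picks:
    aspect-preserving, rounded up to an even number if needed."""
    w = round(src_w * target_h / src_h)
    return w if w % 2 == 0 else w + 1
```

So a 1920x1080 source keeps its 1920 width, and a 640x480 source comes out 1440 pixels wide.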
Checklist
- [X] Modelled experimental objects/part of a possible final project in 2D and/or 3D software
- [X] Shown how you did it with words/images/screenshots
- [X] Documented how you compressed your image and video files
- [X] Included your original design files
Digital Files
References
- Open Source Turtle Robot (OSTR)
- Otto Drawing Turtle Robot
- OV2640 3D model from the GrabCAD library
- Midjourney
- FFmpeg
- FabLab Kannai AppleScript Tutorial
- FabLab Kannai Fusion Animation Tutorial
- FabLab Kannai MkDocs Video Tutorial
- FabLab Kannai Blender Physics Tutorial
Copyright
Copyright 2026 Fumiko Toyoda - Creative Commons Attribution-NonCommercial. Source code hosted at gitlab.fabcloud.org.