2. Computer-Aided Design#
Hero shot#
Summary#
This time, the main focus was on modeling the design in Fusion 360. I also set up an environment where drawing tools such as draw.io can be used inside the IDE, and created diagrams illustrating the relationships between the requirements, specifications, functions, parts, and various parameters of my final project.
0. Sketching with the iPad drawing app “Procreate” (2D raster design software)#
I have a paid license for the iPad drawing app “Procreate”.
Procreate is a powerful and intuitive digital painting app developed for the iPad. It supports a wide range of creative work on the iPad, such as sketching, painting, and illustration.
Procreate Main features:#
- Variety of brushes: More than 200 types of brushes are built-in, including pencils, inks, charcoals, and paintbrushes, and can also be customized in the brush studio.
- High-performance graphics engine: The Valkyrie engine, which utilizes Apple’s Metal architecture, provides a smooth drawing experience.
- Intuitive gesture operation: Efficient work is possible with gestures optimized for touch operation, such as undoing with a two-finger tap and redoing with a three-finger tap.
- Animation function: You can create simple frame-by-frame animations using animation assist.
Above all, I like that it is a one-time purchase app rather than a subscription.
I had already sketched my idea in the previous assignment.
This is a screenshot of the sketch screen in Procreate.
Here is the sketch.
More information about my final project can be found here.
It seems that I need to improve my sketching skills themselves… My teammate, Hayashi-san, told me that the following YouTuber is helpful for learning sketching techniques. Thank you, Hayashi-san!
Rome wasn’t built in a day, so I’ll try to do as many sketches as possible. Also, for those who will be taking the course in the future, I will introduce some books that I purchased to learn product design sketching.
- How to Draw: Drawing and Sketching Objects and Environments from Your Imagination
- How to Render: the Fundamentals of Light, Shadow and Reflectivity
- Sketching: Drawing Techniques for Product Designers
- Vision: Color and Composition for Film
- Sketchbook: Composition Studies for Film
2D vector design software#
This assignment also required me to try 2D vector design software, so I tried Graphite.
Graphite#
Graphite is a free, open-source vector editor with the following features:

- Non-destructive editing workflow combining node-based generative design and layer-based composition
- Desktop app (Windows, Mac, Linux) release scheduled for 2025
- The only graphic design package that supports procedural editing
- Parametric possibilities can be explored

Being able to adjust designs parametrically like this is valuable beyond 3D. Since it is a web-based tool, anyone viewing this document can easily experience parametric editing of this dress at this URL: https://editor.graphite.rs/#demo/red-dress
Clicking on an object will display parameters on the right side.
Let’s change Random Scale Min from 2 to 15.
The size has changed.
1. First, I tried using Fusion 360 to understand what I didn’t know.#
Since I was granted access to Fusion 360, I decided to use it.
I repeatedly found myself stopping shortly after starting to design, and I realized I was facing two issues.
- Issue 1: I couldn’t properly define or understand the parts of the final product I was trying to create.
- Issue 2: My skill set in Fusion 360 was underdeveloped, so I couldn’t imagine the process of creating the ideal state or deliverables.
Issue 1: I couldn’t properly define or understand the parts of the final product I was trying to create.#
I had completed the sketch, but I didn’t understand how the interrelated parts would operate. The operation of the parts depends on their functions, and the functions should reflect the requirements. Furthermore, the requirements must adequately meet the demands. To maintain the consistency of these relationships while moving towards modeling, I decided to visualize my thoughts for the time being.
Using draw.io to graphically express my thought process#
I often use a tool called draw.io when I think about and organize conceptual relationships. draw.io is a free drawing tool that works the same way in both browser and local environments, making it very easy to use. Being able to manage files on Google Drive and GitHub is another of its attractive features.
While researching, I found an extension that allows draw.io to be displayed in VSCode.
Extension: Draw.io Integration
I installed the extension and decided to document my thought process in draw.io.
- Red: Questions or issues regarding my thoughts
- Purple: Document links
- Yellow: Supplementary information
- Green: Specific challenges
- Blue: Parameters
Thus, my thoughts were organized for the time being. This revealed several challenges.
- It would be better to separate the tank that draws up water from the tank that stores the discharged water.
- It is necessary to use a motor to perform multiple different operations in sequence, and I need to think about that mechanism.
- If the goal is merely to meet the requirements, a brush is not strictly necessary. For example, spraying a jet of water at the wheel would also remove dirt.
- It became clear that I had not sufficiently thought about the function of blowing air onto the wheel.
- It became clear that I had not sufficiently thought about the pump mechanism for drawing up water.
- When I consulted with instructor Tamiya, he introduced me to the following projects and YouTube videos. What I need is an instructor like Tamiya.
- YouTube: How to make a Peristaltic Pump
- Peristaltic pump - additive design & fabrication
From the perspective of bounded rationality, these relationships are likely still incomplete. However, I believe there is value in visualizing my current thinking and committing to it.
For now, I decided to tackle the next issue to do my best at this point.
Issue 2: My skill set in Fusion 360 was underdeveloped, so I couldn’t imagine the process of creating the ideal state or deliverables.#
I think this issue is very common yet often overlooked.
I spent about two hours experimenting with my current knowledge. However, I kept creating and breaking things, resulting in zero progress.
At this point, I realized that what I lacked was not experience but knowledge.
Therefore, I decided to refer to the tutorials published by Autodesk Japan.
Let’s input knowledge through YouTube#
The videos are in Japanese, but I will also include English-translated titles for context.
- 【Autodesk Fusion Beginner’s Guide 2】Modeling - Part 1 - YouTube
- 【Autodesk Fusion Beginner’s Guide 3】Modeling - Part 2 - YouTube
- 【Autodesk Fusion Beginner’s Guide 4】Assembly - YouTube
- 【Autodesk Fusion Beginner’s Guide 5】Drawings - YouTube
- Autodesk Fusion Quick Tips: How to Modify Sketch Dimensions - YouTube
- 【With Japanese Subtitles】Automated Snap Fit: Saving Design Time with Automation - Introduction to Plastic Design Tool Functions - Fusion Extension - YouTube
What I found particularly interesting was the automated snap fit feature. Fusion 360 has released plastic design functions as extensions, allowing for simplification of designs for snap fits, bosses, ribs/webs, automatic assignment of physical properties of plastics, and easy setting of geometric patterns. This was extremely convenient.
At this point, I realized the importance of understanding the latest features of Fusion 360 and began searching for videos on the latest technologies.
The 2022 video explained by a Fusion 360 evangelist provided a lot of learning, covering basics, tips for actual use, and the utilization of the latest technologies.
Overview of Knowledge Topics Learned from the Above Video#
Handling History#
Fusion 360 is characterized by its ability to manage modeling history, but there are several points to consider when handling history.
- How to toggle history on and off
- Fusion 360 records history by default, but it can be changed to not record history.
- If there is history, sketches and operations can be edited by going back.
- Setting to not capture history:
- Right-click on “Design History” in the “Browser” → “Do not capture design history”
- It can also be set from the gear icon in the bottom right of the UI.
- Differences between having history and not having it
- When there is history:
- Direct edits are recorded in history and can be modified later.
- When there is no history:
- Direct edits are not recorded, and changes cannot be undone.
- Deleting history
- Deleting history will result in the loss of all previous edit history, making it impossible to revert to past edits, so caution is needed.
Difference between Removal and Deletion#
In Fusion 360, there are methods to “remove” or “delete” elements within a model, and there are clear differences between these two actions.
- Deletion
- Deletes the selected body or sketch element along with its history.
- Other operations that depended on that element may become invalid, potentially corrupting the history.
- Removal
- Hides the selected body from the scene but retains it in history.
- Dependencies remain intact, allowing for later restoration.
- For example, it is useful when you want to display an element that was “removed” during later design changes.
Creating Solids Using Boundary Fill and Executing Split#
Fusion 360 has modeling techniques that utilize surfaces, one of which is creating solids using “Boundary Fill.”
- What is Boundary Fill
- A method that utilizes a closed space of surfaces to generate a solid within it.
- If the surface faces are closed, a solid can be filled within.
- It can be used in form modeling and freeform surface design.
- Utilizing Split
- It is possible to split (Split) models using surfaces or solids.
- Using split body, it can also be used to remove unnecessary parts from mesh data or scanned data.
- For example, it can be utilized when cutting and shaping unnecessary parts of scanned data.
Fusion 360 Can Handle and Convert Any State of Files: Solid, Surface, Form, Mesh#
Unlike typical 3D CAD software, Fusion 360’s significant feature is its ability to freely navigate between different modeling formats.
- Integrated Modeling Environment
- Solid: General parametric modeling.
- Surface: Modeling primarily based on curved surfaces.
- Form: T-Spline-based freeform modeling.
- Mesh: Scan data or polygon models like STL or OBJ.
- Conversion Between Different Data
- Mesh → Solid:
- Convert mesh data by generating face groups → “Convert Mesh” to solidify.
- Surface → Solid:
- Use “Boundary Fill” or “Stitch.”
- Form → Solid:
- After shaping with form modeling, add thickness to convert to solid.
- Solid → Mesh:
- Export as STL for 3D printing.
- Strengths of Fusion 360
- Since it can handle these formats integratively on a single platform, it allows for designs that combine different modeling techniques.
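To make the Mesh format concrete: an STL file is essentially just a list of triangles. As an illustrative sketch (not part of Fusion 360; the function name `write_ascii_stl` and the zero normals are my own simplification, since most importers recompute normals), here is a minimal ASCII STL writer:

```python
def write_ascii_stl(path, triangles, name="part"):
    """Write triangles [((x,y,z), (x,y,z), (x,y,z)), ...] as an ASCII STL.
    Normals are written as 0 0 0; most importers recompute them."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in triangles:
            f.write("  facet normal 0 0 0\n")
            f.write("    outer loop\n")
            for x, y, z in tri:
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n")
            f.write("  endfacet\n")
        f.write(f"endsolid {name}\n")

# a single right triangle in the XY plane
write_ascii_stl("demo_tri.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

Opening the resulting file in a text editor makes it obvious why mesh data carries no feature history: there is nothing but raw triangles to work with, which is exactly why the conversion commands below exist.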
Command-Based Conversion of STL Data to Solid Data#
Fusion 360 has a feature that directly converts STL data (mesh) to solid data.
- Handling STL Data
- STL or OBJ data imported from external sources is handled by default without history.
- To edit mesh, you can choose to either “not capture history” or “capture history.”
- Steps to Convert STL to Solid
- Generate Face Groups
- First, group the faces of the STL data by angle.
- Convert Mesh
- There are three conversion options:
- Faceted: Convert while keeping the triangles (maintaining original data).
- Prismatic: Convert while maintaining angular shapes (keeping the edges of the STL).
- Organic: Convert to smooth surfaces.
- Utilization After Conversion
- The converted solid data can be edited with standard Fusion 360 modeling tools.
- For example, you can apply fillets or chamfers, or create holes.
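As a rough intuition for the "generate face groups by angle" step, the sketch below greedily clusters unit face normals by angular distance. This is my own illustrative approximation (the name `group_faces_by_angle` is made up); Fusion's actual algorithm also considers face adjacency, not just normal direction.

```python
import math

def group_faces_by_angle(normals, threshold_deg=15.0):
    """Greedy clustering of unit face normals: each face joins the first
    group whose seed normal is within threshold_deg, otherwise it starts
    a new group. Illustrative only; real CAD kernels also use adjacency."""
    groups = []  # list of (seed_normal, [face indices])
    for i, n in enumerate(normals):
        for seed, members in groups:
            dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(seed, n))))
            if math.degrees(math.acos(dot)) <= threshold_deg:
                members.append(i)
                break
        else:
            groups.append((n, [i]))
    return [members for _, members in groups]

# two coplanar faces group together; the perpendicular one stands alone
print(group_faces_by_angle([(0, 0, 1), (0, 0, 1), (1, 0, 0)]))
```

A tighter threshold produces more, smaller face groups, which is why the Prismatic conversion option preserves sharp STL edges while Organic smooths them away.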
Design Rules and Advanced Features for Plastic Products#
Fusion 360 has added features specifically for designing plastic products, making it easier to design resin parts, especially when utilizing the “Product Design Extension.”
- Boss
- Automatically generates boss shapes for screw holes commonly found in plastic products.
- Automatically creates bosses when selecting sketch points.
- The length, screw size, diameter, etc., can be set parametrically.
- Snap Fit
- Automatically generates “claws” used for assembling plastic parts.
- There are three types (standard claws, claws that fit into holes, rectangular claws).
- The strength and shape of the snap fit can be easily adjusted.
- Thickness Management and Rule-Based Design
- In resin parts, uneven wall thickness can cause issues during molding.
- It features a function that allows for bulk management of “wall thickness.”
- For example, you can set an appropriate thickness (e.g., 2mm) for ABS resin as a parameter and change it in bulk.
- Geometric Patterns (Lattice)
- A function to create holes in plastic parts or apply patterns for weight reduction.
- It can easily apply complex shapes like hexagonal grids or radial patterns.
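Because snap-fit and wall-thickness values are parametric, a quick sanity check can also be scripted outside Fusion. The sketch below uses the standard design-guide formula for the permissible tip deflection of a straight, constant-cross-section cantilever snap hook, y = εL²/(1.5t); the ~2% allowable strain for ABS is a typical handbook value, not something taken from Fusion, and the function name is my own.

```python
def snapfit_max_deflection(strain_limit, length_mm, thickness_mm):
    """Permissible tip deflection (mm) of a straight cantilever snap hook
    with constant rectangular cross-section: y = e * L^2 / (1.5 * t)."""
    return strain_limit * length_mm ** 2 / (1.5 * thickness_mm)

# e.g. ABS (~2% allowable strain), 15 mm hook, 2 mm wall
print(snapfit_max_deflection(0.02, 15.0, 2.0))  # 1.5 mm of safe deflection
```

Doubling the wall thickness halves the safe deflection, which matches the intuition that stiffer claws need shallower engagement.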
2. Operating Fusion 360 again and creating animations#
- Policy
- Set parameters later
- Focus on modeling the shape first
- Steps taken
- Create a box with form
- Split with form
- Hollow out the inside
- Create a joining surface with lip
- Create a box with sketch and carve the frame from above
- Create components from body
- The created 3D model is set to be embeddable in an iframe
- How to set it up
- Open in web
- Select the target and share
- Embed
Here is the output 3D model.#
- Set up the animation
- Requirements
- Understand that it can be separated
- Understand that the inside is hollowed out
- Procedure
- Use the disassembly function
- Rotate
- Export
Here is the video created in this way.#
3. Trying product design with generative AI#
Next, separately from the model I created, I used generative AI to produce design images, evaluate each one, and assess the relative strengths and weaknesses of my own design.
Concept of this initiative#
I want to make a more accurate judgment about whether my design is good or bad. At this point, it is just a design idea for sample 1. From the perspective of speeding up the PDCA cycle, I will generate images using generative AI, compare that design with my own, and consider better designs.
There is a mathematical concept called the secretary problem or marriage problem.
This is a type of optimal stopping problem that mathematically considers how many candidates to pass over in order to hire (or marry) the most outstanding (or suitable) person.
What I want to say: I believe that by building relative evaluation points within a certain number, it becomes possible to have a more valuable evaluation.
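The 1/e stopping rule from the secretary problem (observe the first n/e candidates without choosing, then accept the first one that beats them all) can be checked with a short Monte Carlo sketch; the function name and parameters here are my own illustration, not part of any library.

```python
import random

def secretary_simulation(n=30, trials=20000, seed=0):
    """Estimate the success rate of the 1/e stopping rule:
    skip the first round(n/e) candidates, then accept the first
    candidate better than everyone seen so far."""
    rng = random.Random(seed)
    cutoff = round(n / 2.718281828459045)
    wins = 0
    for _ in range(trials):
        candidates = list(range(n))  # rank n-1 is the best candidate
        rng.shuffle(candidates)
        best_seen = max(candidates[:cutoff])
        for c in candidates[cutoff:]:
            if c > best_seen:
                if c == n - 1:
                    wins += 1
                break
    return wins / trials

print(secretary_simulation())  # close to the theoretical 1/e ≈ 0.37
```

The simulation lands near 37%, which is the intuition behind generating a fixed batch of AI images first: the early ones calibrate the evaluation baseline before any design is "accepted".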
The evaluation items will be subjectively evaluated based on the following items.
- Aesthetic elements
- Shape (form): Is the design’s shape harmonious or original?
- Color (color palette): Is the color choice appropriate and consistent with the overall impression?
- Texture: Is the surface texture visually and tactilely pleasant?
- Proportion: Is there a balance among the elements?
- Functional impression
- Intuitiveness: Does the design intuitively convey its purpose?
- Ease of use: Does the appearance suggest the actual usability?
- Durability impression: Can reliability and durability be sensed from the appearance?
- Explicit functionality
- Functional beauty: Does the appearance beautifully express functionality?
- Visualization of operability: Can the operation method and usage be understood from the appearance?
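To keep the subjective scores comparable across generated images, the rubric above can be captured as a tiny scoring helper. The structure and names below are my own sketch, not a standard tool; it simply averages 1-5 ratings per category and overall so designs can be ranked relatively.

```python
CRITERIA = {
    "aesthetic": ["shape", "color", "texture", "proportion"],
    "functional_impression": ["intuitiveness", "ease_of_use", "durability"],
    "explicit_functionality": ["functional_beauty", "operability"],
}

def score_design(ratings):
    """ratings maps each criterion name to a 1-5 score; returns
    per-category averages plus an overall mean for relative ranking."""
    category_scores = {
        category: sum(ratings[item] for item in items) / len(items)
        for category, items in CRITERIA.items()
    }
    overall = sum(category_scores.values()) / len(category_scores)
    return category_scores, overall
```

Averaging per category first keeps the three-item categories from outweighing the four-item aesthetic category in the overall score.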
Assumptions#
- I believe I already fully utilize image-generating AI and possess the necessary knowledge as a premise.
- Therefore, I will not particularly describe background knowledge in the main text.
- I decided to compile information about the prerequisite knowledge regarding image-generating AI and prompt engineering in an appendix. I hope you will read it if you are interested.
- However, the progress of AI is very rapid, not limited to image generation. Please be aware that knowledge as of February 2025 may quickly become unusable.
Image-generating AI to be used#
The image generation to be used this time is as follows.
Prompt engineering in image-generating AI#
First, I will create a prompt to create a prompt.
The model used is ChatGPT-o3-mini-high.
Background of the idea:
{
What I want to create:
A machine that cleans the wheels of carry-on cases
Reason for the idea:
Do you use a carry-on case? My family often uses carry-on cases for business trips and vacations.
It is fine when we use it, but every time we put it away, I have to clean it.
I found that very annoying.
I wanted a special item that would solve this easily. Apparently, a quick Amazon search shows that this item does not exist.
Thus, I decided to create this item. Perhaps it could be a business?
Requirements for the machine to be created
・Carry wheels can be cleaned.
・Carry wheels can be disinfected.
・Wipe or dry carry wheels with water.
・Put a cap on the carriage wheel.
}
Instructions:
{
Please create a prompt in English to generate product design images based on {{Background of the idea}}. However, please follow the structure of {{Reference prompt structure}}.
Also, when creating the prompt, please consider {{Evaluation items}} sufficiently.
}
Evaluation items:
{
Aesthetic elements
Shape (form): Is the design's shape harmonious or original?
Color (color palette): Is the color choice appropriate and consistent with the overall impression?
Texture: Is the surface texture visually and tactilely pleasant?
Proportion: Is there a balance among the elements?
Functional impression
Intuitiveness: Does the design intuitively convey its purpose?
Ease of use: Does the appearance suggest the actual usability?
Durability impression: Can reliability and durability be sensed from the appearance?
Explicit functionality
Functional beauty: Does the appearance beautifully express functionality?
Visualization of operability: Can the operation method and usage be understood from the appearance?
}
Reference prompt structure:
{
[art medium], [main objects], [attribute], [expression], [key light], [detailing], [camera shot], [media], [engine], [artist]
}
Model’s thought process
Reasoned about product design requirements for 13 seconds
Creating a solution
OK, let's think this through: the user needs a machine to clean carry-on case wheels, addressing the frequent cleaning inconvenience, as existing products lack such functionality.
Crafting the vision
I’m working through the design for a carry-on case wheel cleaning machine, focusing on aesthetics, functionality, and user experience. Considering elements like form, color, texture, and durability.
Crafting a detailed vision
I’m devising a prompt to generate a product design image, integrating aesthetic aspects, functional impressions, and explicit functionalities, all while adhering to a structured format.
Crafting the vision
I'm shaping a 3D rendering for a carry-on case wheel cleaning machine, blending aesthetic and functional design elements to highlight innovation, user-friendliness, and sophisticated aesthetics.
Crafting the design
OK, let me see. I’m shaping a 3D rendering for a cleaning machine, merging design elements with industrial design principles, aiming for a blend of aesthetics and functionality.
Refining the prompt
I’m working through a 3D rendering prompt for a carry-on case wheel cleaning machine, detailing attributes like design style, light, textural details, and media, with a focus on aesthetic and functional elements.
Laying out the details
I’m refining a structured prompt for a 3D rendering of a carry-on case wheel cleaning machine, ensuring it highlights key design aspects like aesthetics, functionality, and operational clarity.
Refining the prompt
I’m sharpening a 3D rendering prompt for a carry-on case wheel cleaning machine, emphasizing design elements such as aesthetics, functionality, and operational clarity.
Crafting the prompt
I'm working through a structured 3D rendering prompt, emphasizing aesthetics, functionality, and operational clarity. This approach aligns perfectly with the provided animation principles, ensuring a comprehensive and effective result.
Generated image generation prompt
3D rendering, a sleek and innovative carry-on case wheel cleaning machine featuring integrated cleaning, disinfection, water-wiping, and a retractable wheel cap, minimalist and futuristic design with harmonious proportions, intuitive interfaces, and a balanced metallic-matte color palette, soft diffused key lighting that highlights tactile textures and ergonomic details, intricate detailing with visible operational buttons and smooth, durable surfaces, close-up macro shot that emphasizes the product's functional beauty and clarity of operation, photorealistic render, Octane Render, inspired by the industrial design ethos of Dieter Rams.
Next, I will generate images based on this prompt.
- genspark image_studio
- Number of creations
- 10
- Settings
- Model: Mixture-of-Agents
- Auto prompt ON (ChatGPT, Gemini, Claude)
- Image style: Realistic
- Image aspect: 1:1
- Leonardo.Ai
- Number of creations
- 30
- Settings
To conclude, this did not go well…
Therefore, I manually modified the prompt and tried generating images again.
Prompt
3D rendering, special machine to clean only wheels of carry-on cases, mobile size
Before evaluation, it is immediately clear that this is a non-functional design.
However, this may be due to the use of prompts and general models.
In the future, I want to explore ways to express appropriate designs through image generation with the aim of improving evaluation criteria for creating better designs.
4. Using Python, I created a script that converts all images under a folder to WebP and compresses them#
The script is shown below. Images stored in the “98_input_image” folder are converted and output to “99_output_image”. Finally, I move the converted images together into a folder of my choice using the GUI of Cursor (IDE).
```python
import subprocess
import os

# please set your input folder & output folder path
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
INPUT_FOLDER = os.path.join(BASE_DIR, "../docs/images/98_input_image/")
OUTPUT_FOLDER = os.path.join(BASE_DIR, "../docs/images/99_output_image/")
PREFIX = "converted_"  # prefix for converted images
TARGET_SIZE_KB = 300   # target file size


def convert_to_webp(input_path, output_path, quality=90):
    """
    Convert an image to WebP, scaling its long edge down to 1024 px while
    maintaining the aspect ratio, then lower the quality in steps of 5
    until the file is under TARGET_SIZE_KB (or quality reaches 10).
    """
    temp_output = output_path + ".temp.webp"
    scale_filter = "scale='if(gt(iw,ih),min(1024,iw),-1)':'if(gt(iw,ih),-1,min(1024,ih))'"

    def run_ffmpeg(q):
        cmd = [
            "ffmpeg", "-y", "-i", input_path,
            "-vf", scale_filter,
            "-c:v", "libwebp", "-quality", str(q), "-compression_level", "6",
            "-preset", "photo", temp_output,
        ]
        subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    # first conversion
    run_ffmpeg(quality)
    file_size_kb = os.path.getsize(temp_output) / 1024

    # reduce quality until the target size is reached
    while file_size_kb > TARGET_SIZE_KB and quality > 10:
        quality -= 5
        run_ffmpeg(quality)
        file_size_kb = os.path.getsize(temp_output) / 1024

    # final output
    os.rename(temp_output, output_path)
    print(f"conversion complete: {output_path} ({file_size_kb:.2f} KB, quality {quality})")


def process_images():
    """
    Convert all images in INPUT_FOLDER to WebP and save them to OUTPUT_FOLDER.
    """
    if not os.path.exists(INPUT_FOLDER):
        print(f"input folder {INPUT_FOLDER} not found. Please create it.")
        return
    os.makedirs(OUTPUT_FOLDER, exist_ok=True)

    for filename in os.listdir(INPUT_FOLDER):
        input_path = os.path.join(INPUT_FOLDER, filename)

        # skip images that have already been converted
        if filename.startswith(PREFIX):
            print(f"skip: {filename} (already converted)")
            continue

        # image format check (jpg, jpeg, png, webp only)
        if not filename.lower().endswith((".jpg", ".jpeg", ".png", ".webp")):
            print(f"skip: {filename} (not an image file)")
            continue

        output_filename = PREFIX + os.path.splitext(filename)[0] + ".webp"
        output_path = os.path.join(OUTPUT_FOLDER, output_filename)

        # execute conversion
        convert_to_webp(input_path, output_path)

        # prefix the input file with `converted_` after conversion
        new_input_path = os.path.join(INPUT_FOLDER, PREFIX + filename)
        os.rename(input_path, new_input_path)
        print(f"rename input file: {input_path} > {new_input_path}")


if __name__ == "__main__":
    process_images()
```
5. Compressing video using terminal commands#
To compress videos from the terminal, I created two commands using ffmpeg and registered them in a snippet tool (I use Alfred) for easy compression.

```shell
ffmpeg -i filename -vf "scale=-2:720,fps=30" -c:v libx264 \
  -preset slow -crf 28 -an -movflags +faststart \
  -fs 10M converted_filename.mp4
```

```shell
ffmpeg -i filename -vf "scale=-2:720,fps=30" -c:v libx264 \
  -preset slow -crf 28 -c:a aac -b:a 128k -movflags +faststart \
  converted_filename.mp4
```

The difference between the two commands is that the first specifies `-an` (drop the audio stream) and `-fs 10M` (stop writing at roughly 10 MB), while the second keeps AAC audio at 128 kbps with no size cap.
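As an optional sketch (my own wrapper, not part of the snippet tool), the two variants could also be generated from Python so that only one flag changes between them. The function below only builds the argument list and does not run ffmpeg:

```python
import os

def build_compress_cmd(input_path, strip_audio=True, max_size="10M"):
    """Build the ffmpeg argument list for 720p/30fps H.264 compression.
    strip_audio=True adds -an and an -fs size cap; otherwise AAC audio
    is kept. This only constructs the command; run it with subprocess."""
    stem = os.path.splitext(os.path.basename(input_path))[0]
    cmd = [
        "ffmpeg", "-i", input_path,
        "-vf", "scale=-2:720,fps=30",
        "-c:v", "libx264", "-preset", "slow", "-crf", "28",
        "-movflags", "+faststart",
    ]
    if strip_audio:
        cmd += ["-an", "-fs", max_size]
    else:
        cmd += ["-c:a", "aac", "-b:a", "128k"]
    return cmd + ["converted_" + stem + ".mp4"]
```

Passing the command as a list (rather than a shell string) to `subprocess.run` avoids quoting problems with the scale filter.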