
20. Final Project Requirements


Weekly Assignment:

Document a final project masterpiece that integrates the range of units covered, answering:

What does it do? Who’s done what beforehand? What did you design? What sources did you use? What materials and components were used? Where did they come from? How much did they cost? What parts and systems were made? What processes were used? What questions were answered? What worked? What didn’t? How was it evaluated? What are the implications?

Prepare a summary slide and a one minute video showing its conception, construction, and operation. Your project should incorporate 2D and 3D design, additive and subtractive fabrication processes, electronics design and production, embedded microcontroller interfacing and programming, system integration and packaging.

Where possible, you should make rather than buy the parts of your project. Projects can be separate or joint, but need to show individual mastery of the skills, and be independently operable. Present your final project.

I have my documentation on the Final Project page of my site. The rest of the questions are answered below.


Question Answers

What does it do?

It is a 3-axis voice-activated pen plotter that executes curated, prewritten G-code drawings based on spoken prompts.

Who’s done what beforehand?

My project is inspired by Jack Hollingsworth's Ouija Board, which uses ChatGPT to generate responses and drives stepper motors to physically "move" a planchette. I learned about it through the Ouija board group project from last year's Fab Academy cycle. Both projects demonstrate how artificial intelligence can drive real-world motion control by using G-code to command stepper motors. I was especially drawn to the voice-activated aspect, which made the interaction feel more natural and autonomous, and I wanted to explore that further in a visual way.

What did you design?

I designed the 3D components, such as the pen lift mechanism, the carriage that holds the pen, and the motor mounts and attachments for the linear rail. I also designed the cable management system, the housing for the Raspberry Pi and touchscreen, and the base that stabilizes the machine. Additionally, I designed a custom PCB to handle motor control and driving tasks.

What sources did you use?

I used some tutorials found online, like this one for the ReSpeaker microphone and this one for the Raspberry Pi, as well as How To Mechatronics' pen plotter for structural inspiration. I also consulted ChatGPT for troubleshooting and advice at certain stages.

What materials and components were used? Where did they come from? How much did they cost?

My bill of materials lists all of the components I used and answers the questions above.

What parts and systems were made?

The 3D-printed parts were fabricated in-house, and the custom PCB was designed and milled using lab equipment. The ShopBot was also used to cut the wooden base of my machine, giving it more stability.

What processes were used?

  • CAD for designing the 3D parts and the custom PCB.

  • 3D printing for fabricating custom mechanical components like the pen lift mechanism, motor mounts, and structural brackets.

  • CNC milling to manufacture the custom PCB and the wooden base.

  • Soldering for assembling the custom PCB and making reliable electrical connections between components.

  • Laser cutting (and possibly vinyl cutting) for aesthetic and structural base parts and enclosures.

  • Embedded programming to configure the GRBL firmware on the Arduino and interface with the custom motor driver PCB.

  • Python scripting to build the touchscreen interface, handle audio transcription (Whisper), interact with the ChatGPT API, and send G-code to GRBL. A minimal sketch of this pipeline follows the list.
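To make that pipeline concrete, here is a minimal sketch of the listen → transcribe → plot loop. It assumes the openai-whisper and pyserial packages are installed; the serial port path, the model size, and the simple ok-per-line handshake are placeholders for my actual setup.

```python
import serial
import whisper

model = whisper.load_model("base")   # small Whisper model; slow but runs on a Pi
grbl = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)  # GRBL's Arduino

def transcribe(wav_path: str) -> str:
    """Turn a recorded voice clip into lowercase text."""
    result = model.transcribe(wav_path)
    return result["text"].lower()

def stream_gcode(path: str) -> None:
    """Send a prewritten G-code file to GRBL one line at a time,
    waiting for GRBL's response before sending the next line."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(";"):
                continue                      # skip blanks and comments
            grbl.write((line + "\n").encode())
            grbl.readline()                   # block until GRBL replies "ok"
```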

What questions were answered?

  • Where does the custom PCB best fit in the system? Should it act as a signal breakout board, or handle logic beyond what the CNC shield provides?

I initially considered using a custom PCB as a simple signal breakout board, but I ultimately decided to design a PCB that functioned more like a motor driver.

  • How reliable is Whisper’s transcription in real-world (noisy) environments and will it be consistent in general?

[N/A]

  • How robust is ChatGPT’s G-code generation? What can I do so that it doesn’t output invalid toolpaths?

I learned that it is not robust enough for my project; Neil pointed this out when he reviewed it.

  • Should Inkscape be used as an intermediary step between AI and G-code output? Would it be better to have a secondary option where ChatGPT generates an SVG that’s then processed into G-code using Inkscape?

Building on the previous answer: yes, Inkscape would work well as an intermediary. That approach would be a level above prewritten G-code, since it is more automated and AI-driven.
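As a rough illustration of that route, the hypothetical sketch below asks ChatGPT for an SVG and saves it to disk; the file would then be converted to G-code in Inkscape (for example with the Gcodetools extension). The model name and prompt are my assumptions, and it requires the openai package with an OPENAI_API_KEY set in the environment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def prompt_to_svg(subject: str, out_path: str = "drawing.svg") -> str:
    """Ask ChatGPT for a plotter-friendly SVG and save it for Inkscape."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{
            "role": "user",
            "content": (
                f"Return only a valid SVG of a simple line drawing of {subject}, "
                "using stroked <path> elements and no fill."
            ),
        }],
    )
    svg = response.choices[0].message.content
    with open(out_path, "w") as f:
        f.write(svg)
    return out_path
```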

  • What is the best way to display and preview G-code on the Raspberry Pi touchscreen? How would I visualize it?

[N/A]

  • What fail-safes or feedback systems are needed (limit switches or emergency stop)?

As of now, I have added limit switches. If given the time, I plan to connect an emergency stop alongside them.
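For reference, here is a sketch of how the limit switches and a software emergency stop could tie into GRBL over the same serial link used for plotting. The $21/$22 settings and the ! / Ctrl-X real-time commands are standard GRBL; the wiring is assumed to use GRBL's default limit pins.

```python
import serial

grbl = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)  # same GRBL link as above

def enable_limits() -> None:
    grbl.write(b"$21=1\n")   # hard limits: halt immediately if a switch triggers
    grbl.write(b"$22=1\n")   # enable the homing cycle
    grbl.write(b"$H\n")      # home the axes against the limit switches

def emergency_stop() -> None:
    grbl.write(b"!")         # real-time feed hold: pause motion now
    grbl.write(b"\x18")      # Ctrl-X soft reset: clear GRBL's planner buffer
```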

What worked? What didn’t?

My original idea of having AI generate G-code directly was not really plausible. Mr. Nelson suggested that I could instead have AI generate an image and then run that image through G-code-generating software. After the meeting, Mr. Dubick also suggested a route where I pre-code certain images that the machine draws reliably; then I just voice the command "Draw me a ___" for one of the pre-developed images. This made the system much more reliable, and the matching logic is sketched below.
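Here is a minimal sketch of that keyword-matching logic, with placeholder file names standing in for my actual library of tested drawings:

```python
# Map spoken keywords to prewritten, machine-tested G-code files.
# These file names are placeholders, not my real library.
DRAWINGS = {
    "star": "gcode/star.gcode",
    "house": "gcode/house.gcode",
    "cat": "gcode/cat.gcode",
}

def pick_drawing(transcript: str) -> str | None:
    """Return the G-code file whose keyword appears in the spoken text,
    e.g. 'draw me a star' -> 'gcode/star.gcode'."""
    for keyword, path in DRAWINGS.items():
        if keyword in transcript.lower():
            return path
    return None
```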

How was it evaluated?

This project was evaluated based on voice-recognition accuracy, mechanical performance, user-interface usability, system integration and stability, build quality, safety, and the overall user experience.

What are the implications?

I think this project bridges a gap between human intention and machine action. It opens up possibilities for more intuitive human-machine collaboration, especially as AI technology advances. Driving digital fabrication with only voice commands could make it far more accessible, although my project is just a nascent version of what voice-to-output machines could become.

Additionally, the modular structure of the system (voice input, Raspberry Pi processing, motor control) makes it extensible beyond drawing. With modifications, it could evolve into a general-purpose voice-driven fabrication platform for laser cutting, CNC milling, or even robotic interaction.


Last update: June 3, 2025