19. Invention, Intellectual Property and Income

Weekly Assignment:

Develop a plan for dissemination of your final project.

Prepare drafts of your summary slide (presentation.png, 1920x1080) and video clip (presentation.mp4, 1080p HTML5, < ~1 minute, < ~25 MB) and put them in your website’s root directory

Dissemination

This project, VoxPlotter, is a voice-controlled pen plotter that combines CNC hardware with AI-based voice recognition and GRBL. The primary goal of this project is to demonstrate the creative and technical potential of integrating AI-based voice recognition with CNC motion control. As such, dissemination is focused more on inspiring others than on monetization. Currently, it is more of a personal exploration that bridges two of my core interests: fine arts and engineering.

Much of the funding came from the Charlotte Latin Fab Lab, where I was able to access most of the necessary components and fabrication tools. Any materials not already on hand were purchased with lab funds. At this stage, VoxPlotter is not intended for commercialization, so there is no formal business plan. However, that may evolve if future opportunities or interest arise. If I were to scale up or iterate on the project beyond its current form, I would most likely fund that development personally or potentially seek grant opportunities aligned with open-source art/tech initiatives.

The target audience would include:

  • Makers and engineers interested in CNC, GRBL, or Raspberry Pi applications
  • Artists looking to experiment with automation and generative tools
  • Students and educators in STEAM fields
  • Members of the Fab Lab and open-source hardware/software communities

To support accessibility and reuse, I plan to share:

  • Demonstration videos showing how VoxPlotter works in practice
  • Source code (Python, G-code examples, GRBL configuration)
  • CAD design files for the mechanical components and custom PCBs
  • Photos and diagrams to visually explain system architecture
  • Licensing and attribution information for transparency and clarity on reuse

I will share the project through the following channels:

  • My Fab Academy project website
  • GitLab or GitHub
  • YouTube or Vimeo

This project is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0), although that might change as the project develops; if so, the change will be clearly communicated. I chose to allow changes and remixes in part because of feedback from Neil, who noted that the AI portion of the project is still an area of ongoing research. That opened up the exciting possibility of others exploring new directions or improvements using this work as a foundation, should they be interested.

Future Considerations

Based on Neil’s comments and my own reflections, I’ve decided to shift my focus from full AI-driven G-code generation to a more reliable and practical hybrid approach. Instead of having the AI generate raw G-code directly (which is still an active area of research and technically complex), the system will initially use a curated set of pre-written G-code files that I know will work well with the plotter. These can be triggered by voice or touchscreen input to produce consistent, successful results.
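To give a sense of how simple that dispatch layer could be, here is a minimal Python sketch of mapping a recognized phrase to a curated G-code file. The command phrases and file paths are placeholders I made up for illustration, not the actual implementation:

```python
# Minimal sketch: map recognized voice/touch commands to pre-tested
# G-code files. Phrases and paths below are hypothetical examples.

COMMANDS = {
    "draw a spiral": "gcode/spiral.gcode",
    "draw a star": "gcode/star.gcode",
    "sign your name": "gcode/signature.gcode",
}

def lookup_gcode(phrase: str) -> str | None:
    """Return the curated G-code file for a recognized phrase, if any."""
    return COMMANDS.get(phrase.strip().lower())

if __name__ == "__main__":
    print(lookup_gcode("Draw a Spiral"))  # -> gcode/spiral.gcode
```

Because every file in the table has already been tested on the plotter, any command that resolves successfully is guaranteed to produce a known-good result, which is the whole point of the hybrid approach.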

In the next phase, I may integrate a workflow where AI generates or helps select an image, prompted by voice or touchscreen input, which is then passed into Inkscape or another G-code preparation tool. I have looked into Inkscape G-code extensions (such as the J Tech Photonics Laser Tool or Gcodetools) to convert vector artwork into G-code compatible with my GRBL-controlled plotter.

The image would first be converted to SVG format if it isn’t already, then opened in Inkscape, where I’d configure toolpath settings like travel speed, pen-up/pen-down commands, and coordinate offsets based on my machine’s specifications. After previewing the paths and verifying that the generated G-code aligns with my machine's limits and behavior, the output would be exported as a .gcode file and sent to the plotter via the Raspberry Pi’s control interface.
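For the final step of that pipeline, a common approach for GRBL is the simple send-and-wait protocol: send one line, then block until GRBL replies "ok" before sending the next. Below is a minimal Python sketch of that pattern using pyserial; the port name and baud rate are assumptions for a typical GRBL setup, not my final configuration:

```python
import serial
import time

def stream_gcode(path: str, port: str = "/dev/ttyUSB0", baud: int = 115200):
    """Stream a G-code file to GRBL one line at a time, waiting for 'ok'."""
    with serial.Serial(port, baud, timeout=5) as grbl:
        grbl.write(b"\r\n\r\n")       # wake up GRBL
        time.sleep(2)                 # give it time to initialize
        grbl.reset_input_buffer()     # discard the startup banner

        with open(path) as f:
            for line in f:
                line = line.split(";")[0].strip()  # strip comments/blank lines
                if not line:
                    continue
                grbl.write(line.encode() + b"\n")
                # Block until GRBL acknowledges or rejects the line.
                while True:
                    response = grbl.readline().decode().strip()
                    if response == "ok":
                        break
                    if response.startswith("error"):
                        raise RuntimeError(f"GRBL rejected {line!r}: {response}")

stream_gcode("gcode/spiral.gcode")  # hypothetical file from the curated set
```

This send-and-wait style is slower than character-counting streamers but much easier to reason about, which suits a plotter where reliability matters more than throughput.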

If AI-driven G-code generation matures, I might revisit my original concept of having the AI produce G-code autonomously.

Presentation Drafts

Slide

This is a sample slide of what my final one might look like. For the final version I'll definitely make it less cluttered, and I'll have a finished hero shot of my machine in the middle instead of my CAD design.

Video Sample

This is a short video of some of the progress I have made so far. It covers CAD, the microphone, the touchscreen, and one of the axes. I also noticed that my mouse cursor is visible hovering over some apps while I was recording with OBS, along with a red line at the top caused by the recording. I'll make sure those artifacts are gone in the final video.


Last update: June 8, 2025