This week, I was able to dive deeper into my project by focusing on many of the "how" questions. I worked on different aspects of my final project, including the basic reference circuit, the key components needed, and a sketch of the final design. It wasn't much, but I'm happy with the progress I made and I learned a lot from all of these activities.
We made a basic practice circuit in Canva using the key components we listed. I was a complete beginner when it came to circuits and how the connections worked, so I had a hard time understanding everything, but my instructor helped a lot. This is the reference circuit we made in Canva:
Here are some of the things I learned along the way:
I was also able to make a draft sketch for my final project. Although it isn't very detailed, it represents what I have in mind for now, even though it will likely change a lot as the project develops. I then refined the image using Gemini AI, so credit for the polished version goes to Gemini AI.
That's all for this week.
This week, we went to the lab to do cardboard prototyping. We created detailed cardboard versions of our final projects to better understand how all the components would work together, visualize the final design, and identify gaps that needed improvement.
This is the prototype of my final project, a desk bot that trains your focus. For more details about my project, you can visit the Project Proposal page on my website.
Through this experience, I was able to identify several gaps in my project idea and realized that I needed to work out many details instead of keeping everything abstract in my head.
Gaps such as:
Reviewing these gaps, I can categorize them into three core challenges for Phase 2:
1. User Interaction & Interface (UI): Input method and menu structure.
2. Mechanical Design: Camera placement, casing, and internal layout.
3. Data and System Architecture: Core logic and efficient logging.
For next week, I will focus first on resolving the UI and mechanical design gaps, since those decisions will directly affect data logging and internal connections.
Note to self: Make a detailed sketch of the final project from all views (top, side, back, front) and label as many components as possible once the camera placement and input button decisions are finalized.
This week focused on developing the menu architecture and overall interface layout. My initial approach was intentionally minimal, without defining detailed interaction logic. However, I recognized that neglecting interface design could weaken the usability of the final project.
To improve the design, I shifted my perspective from creator to end user, prioritizing clarity, simplicity, and intuitive navigation.
Hello!
→ Start Focus Timer
→ Show Blueprint
→ Set Custom Timer
While functional as a concept, this structure lacked essential interaction details, particularly:
To improve the structure, I asked Claude AI for feedback on the menu organization. Based on its suggestions, I redesigned the interface by dividing features across separate screens and organizing them more clearly. This was the prompt I used; I also pasted my project proposal and initial documentation into it to give the AI a clearer understanding of my project.
Screen 1 – Home
A → Start Focus
B → Create Timer
C → Focus Blueprint
Screen 1-A – Start Focus
I → Pomodoro Mode
II → Custom Timer
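To make this hierarchy concrete, here is a rough sketch of how the screens could map to a simple state machine in Arduino code. Every name here is a placeholder I made up for this draft, not final code:

```cpp
// Draft only: the menu screens as a simple state machine.
// All names are placeholders and will likely change with the design.

enum MenuScreen {
  SCREEN_HOME,          // A: Start Focus, B: Create Timer, C: Focus Blueprint
  SCREEN_START_FOCUS,   // I: Pomodoro Mode, II: Custom Timer
  SCREEN_CREATE_TIMER,
  SCREEN_BLUEPRINT
};

MenuScreen currentScreen = SCREEN_HOME;

// Move between screens based on the option the user picks.
void selectOption(char option) {
  if (currentScreen == SCREEN_HOME) {
    if (option == 'A') currentScreen = SCREEN_START_FOCUS;
    else if (option == 'B') currentScreen = SCREEN_CREATE_TIMER;
    else if (option == 'C') currentScreen = SCREEN_BLUEPRINT;
  }
  // ...sub-screens (Pomodoro / Custom Timer) would be handled the same way.
}

void setup() {}
void loop() {}  // navigation will be driven by whichever input method I settle on
```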
I made a draft on paper to help me get a clearer idea:
I made a flowchart to make it easier to understand as well.
Interface Logic & Navigation
Input Method Exploration
I am currently exploring flex sensors as the main input method.
Questions I'm still figuring out:
Single press / bend → Scroll
Double press / bend → Select
Long press / sustained bend → Return Home
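To sanity-check whether this mapping is even feasible, I drafted a rough Arduino sketch of the gesture detection. It assumes a flex sensor in a voltage divider on an analog pin, and the pin, threshold, and timing values are all guesses I would have to tune on real hardware:

```cpp
// Rough idea only: classify flex-sensor bends into scroll / select / return home.
// FLEX_PIN, BEND_THRESHOLD, and the timing windows are assumptions, not tested values.

const int FLEX_PIN = A0;                    // flex sensor in a voltage divider
const int BEND_THRESHOLD = 600;             // analogRead value that counts as "bent"
const unsigned long LONG_BEND_MS = 1500;    // sustained bend -> Return Home
const unsigned long DOUBLE_GAP_MS = 400;    // max gap between bends for a double

unsigned long bendStart = 0;
unsigned long lastBendEnd = 0;
bool wasBent = false;

void setup() {
  Serial.begin(115200);
}

void loop() {
  bool bent = analogRead(FLEX_PIN) > BEND_THRESHOLD;
  unsigned long now = millis();

  if (bent && !wasBent) {          // bend just started
    bendStart = now;
  } else if (!bent && wasBent) {   // bend just ended: classify it
    if (now - bendStart >= LONG_BEND_MS) {
      Serial.println("Long bend -> Return Home");
    } else if (now - lastBendEnd <= DOUBLE_GAP_MS) {
      Serial.println("Double bend -> Select");
    } else {
      Serial.println("Single bend -> Scroll");
    }
    lastBendEnd = now;
  }
  wasBent = bent;
}
```

For simplicity, this version fires "Scroll" on the first bend and upgrades a quick second bend to "Select"; a proper version would wait out the double-bend window before committing, and would debounce the sensor reading.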
If flex sensors are confirmed for the final project, I will need to:
For this week's project development, since we learned about embedded systems, I wanted to make progress on my final project at the same time. Instead of starting something completely new, I decided to make a simple demonstration of the menu structure I had designed earlier.
I began by creating a basic simulation in Wokwi, using the draft menu layout I made last week.
In Wokwi, I added the components needed for the simulation: an ESP32-C3 microcontroller, LEDs, resistors, a breadboard, and an OLED display. The simulation itself is very simple: it just cycles through the different menu screens to give a rough idea of how the interface might look. At this stage, the goal was mainly to visualize the flow rather than build a fully interactive system. I plan to improve this later by adding proper inputs and navigation logic.
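The cycling logic was roughly along these lines. This is a simplified reconstruction from memory, assuming the 128x64 I2C SSD1306 OLED and the Adafruit libraries available in Wokwi:

```cpp
// Simplified version of the screen-cycling demo (no inputs yet).
// Assumes a 128x64 I2C SSD1306 OLED at address 0x3C with the Adafruit libraries.

#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

Adafruit_SSD1306 display(128, 64, &Wire, -1);

const char* screens[] = {
  "Hello!",
  "A > Start Focus",
  "B > Create Timer",
  "C > Focus Blueprint"
};
const int NUM_SCREENS = sizeof(screens) / sizeof(screens[0]);
int current = 0;

void setup() {
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);
}

void loop() {
  display.clearDisplay();
  display.setTextSize(1);
  display.setTextColor(SSD1306_WHITE);
  display.setCursor(0, 0);
  display.println(screens[current]);
  display.display();

  current = (current + 1) % NUM_SCREENS;  // advance to the next screen
  delay(2000);                            // hold each screen for two seconds
}
```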
This is the first simulation I made, which only had a few components:
This is the second video, which has more components; I also updated the code to cycle through the different menu screens for the demonstration:
After testing the simulation, I tried building the same circuit with real hardware. I first used a Seeed Studio XIAO ESP32-C3 (bare module), since there wasn't enough time to design and fabricate a custom board. I placed it on a breadboard and wired the connections similarly to the simulation. However, the setup didn't work as expected, so to keep the demonstration moving, I switched to an Arduino Metro board instead. The hardest part was programming the OLED display: instead of working like the simulation, it got stuck at 'Hello'. After a lot of time spent trying to get the OLED to cycle through the screens, it turned out the code was too long for the OLED to handle, so I used Claude AI to shorten it. And it finally worked.
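I can't confirm this was the exact failure, but a common reason long sketches break an SSD1306 OLED on AVR boards like the Metro is running out of RAM: the driver allocates a 1 KB frame buffer, and every plain string literal is also copied into RAM. One standard fix, which a shorter sketch benefits from anyway, is wrapping literals in the F() macro so they stay in flash:

```cpp
// Minimal example of keeping string literals out of RAM with the F() macro.
// Whether RAM was my exact problem is a guess; it's just a common OLED pitfall.

#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

Adafruit_SSD1306 display(128, 64, &Wire, -1);

void setup() {
  display.begin(SSD1306_SWITCHCAPVCC, 0x3C);
  display.clearDisplay();
  display.setTextColor(SSD1306_WHITE);
  display.setCursor(0, 0);
  // F() keeps the literal in flash instead of copying it into scarce RAM,
  // which matters on a 2 KB board once the display buffer is allocated.
  display.println(F("A > Start Focus"));
  display.display();
}

void loop() {}
```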
Here is the video:
This week, since the assignment involved browsing a microcontroller datasheet, I also went through the datasheet for the microcontroller I'll be using in my final project: the XIAO ESP32-C3.
The image below shows the ESP32 microcontroller's functional block diagram:
The chip is divided into 6 main sections:
For next week, I plan to work on both the exterior design and the internal assembly of the bot.