Development Log
This page tracks the ongoing development of my final project: EEG + AR Glasses with SNN processing. Each entry links back to the relevant weekly documentation where the work is described in more detail.
The Project
The idea is a wearable that combines two technologies that are being rapidly adopted: EEG and augmented reality. The glasses measure EEG from the temples, process it using a simple SNN (spiking neural network) on a small MCU, and display a minimal AR output via a micro-OLED mounted into the frame. At its core, this is about running neuromorphic and neuromorphic-like processing, both in software on the MCU and eventually in hardware, on something you actually wear. You could view your AR display while tracking your brain activity, all in one device instead of two.
The system breaks down into four main parts: the EEG sensing at the temples using soft-dry electrodes, the analog front-end that conditions and digitizes the signal, the MCU that does the spike encoding and SNN classification while also driving the display, and the AR display module itself. To keep things manageable, I am building each part separately first and connecting them as the project matures, which is the spiral development approach.
Here are the initial sketches from my proposal:




The full reasoning behind choosing this project over the other idea is in the initial proposal.
Week 02: Initial CAD Design
This was the first week where I started actually building something for the final project. The assignment was CAD, and I used it as an excuse to get a head start on the glasses frame design, both the 2D logo and the 3D model.
Why this matters for the project
The glasses frame is the physical backbone of everything. The EEG electrodes need to sit at the temples, the micro-OLED display needs to be mounted near one lens, and the electronics need somewhere to live. Getting the frame shape right early means I have something concrete to work from when I start designing the housings and mounts in later weeks.
2D Logo Design
I designed a logo in Inkscape that represents the project visually: AR glasses with a brain in the middle. This isn't just a logo, though; I used it as the basis for the glasses frame shape in Fusion 360. The design needed to look good as a logo but also translate into a shape that could actually fit a face, which meant a lot of back and forth between the two.

3D Glasses Frame
I started the 3D model in Fusion 360 by importing my logo SVG as a reference, then traced and built a parametric sketch on top of it. The key parameters I set up were frame_width (135 mm) and frame_sides (frame_width/11, about 12.3 mm), which I can tweak later as the design evolves.

The main challenge this week was getting the frame to sit on a curved surface rather than being flat. I created a construction curve, extruded it into a 3D surface, then projected the glasses design onto it. This is what makes the frame actually wearable instead of just a flat shape.

After patching and trimming the lens openings, I thickened the frame to give it actual volume. This is the version I will be iterating on in the weeks ahead as I start figuring out where the electrodes and display mount.

The full walkthrough of the 3D design process, including all 9 steps, is in Week 02.
Week 03: Vinyl Cutting the Project Logo
This week was computer-controlled cutting, and I used the vinyl cutter to produce physical stickers of my project logo. This served two purposes: learning the vinyl cutting workflow and creating something tangible that represents the project.
Why this matters for the project
While vinyl cutting isn't directly part of the final glasses build, the process taught me about layering, precise alignment, and working with the design files I created in Week 02. The multi-layer sticker required careful registration between colors, which is similar to the kind of precision I'll need when assembling the glasses components later.
Multi-layer Vinyl Cutting Process
The logo has multiple colors (brain, glasses frame, text), so I had to separate each layer in Inkscape, export them individually as DXF files, and cut them on separate vinyl sheets. The key challenge was adding alignment markers to each layer so they could be stacked accurately.

After cutting and weeding each layer (removing the excess vinyl), I used transfer tape to layer them one by one, using the alignment markers to ensure everything lined up correctly.

The full process, including the mistakes I made with blade depth and force settings, is documented in Week 03.
Week 04: Embedded Programming: First Electronics Week
This was the first week I actually worked with electronics hardware, and I deliberately used it as a way to prototype every major subsystem of the final project, on real hardware, before I commit to any PCB design.
Why this matters for the project
The final project has four demanding sub-systems that all need to coexist on a tiny board: EEG signal acquisition, BLE radio, on-device ML inference, and a camera. This week I validated each one independently, and some of them in combination, to establish that the architecture I've been planning is actually achievable before I spend weeks on custom PCB design.
Choosing the XIAO ESP32-S3 Sense
The first major decision this week was hardware selection. I went through all the available architectures (AVR, ARM Cortex-M, Xtensa LX7, RISC-V) and mapped my project requirements against each one. The constraints that matter most are: 8 MB PSRAM for TinyML inference, a native camera interface, BLE 5.0, and a sub-25 mm form factor. Only one board in the lab lineup satisfies all four: the XIAO ESP32-S3 Sense.
The reasoning is documented in detail in Week 04.
Project 01: EEG → Stepper Motor (Arduino + Python + Muse Headband)
The first build this week connected the Muse 2 EEG headband to an Arduino Uno via a Python script. The script receives the EEG stream from the headband over BLE, detects eye blinks in the frontal electrode signals using a peak + variance threshold, and sends a BLINK command over USB serial. The Arduino turns a stepper motor 90 degrees per blink.
This validated the blink detection pipeline end-to-end. The two-channel confirmation approach (spike must appear on both AF7 and AF8) proved effective at rejecting noise. I also 3D printed a part that attaches to the motor shaft to make the rotation visible during the regional review.
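To make the receiving end concrete, here is a minimal sketch of the Arduino side of that pipeline. The pin numbers, baud rate, and 200-step motor are placeholder assumptions rather than the exact wiring from this build; the actual detection logic lives in the Python script.

```cpp
// Minimal sketch: wait for "BLINK" on USB serial and turn the stepper 90 degrees.
// Pins, baud rate, and steps-per-revolution are assumptions, not the real wiring.
#include <Stepper.h>

const int STEPS_PER_REV = 200;                  // assumed motor resolution
Stepper stepper(STEPS_PER_REV, 8, 9, 10, 11);   // hypothetical driver pins

void setup() {
  Serial.begin(115200);                         // must match the Python script
  stepper.setSpeed(30);                         // RPM
}

void loop() {
  if (Serial.available()) {
    String cmd = Serial.readStringUntil('\n');
    cmd.trim();
    if (cmd == "BLINK") {
      stepper.step(STEPS_PER_REV / 4);          // 90 degrees per detected blink
    }
  }
}
```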

Project 06: Full EEG On-Device with BLE, Blink Counter, Live Waveform (XIAO ESP32-S3)
This is the result that directly advances the final project. The XIAO ESP32-S3 connects to the Muse headband directly using the NimBLE BLE stack, decodes the raw 12-bit EEG packets, applies high-pass filtering to remove baseline drift, runs adaptive blink detection, computes DFT-based frequency band powers (delta/theta/alpha/beta/gamma), and renders everything on an SSD1306 OLED, with no laptop involved.
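As a rough illustration of the decoding step, here is a minimal sketch of unpacking one EEG notification. It assumes the commonly documented Muse packet layout, a 20-byte notification holding a 16-bit packet index followed by twelve 12-bit samples packed big-endian; the function name and structure are illustrative rather than copied from my firmware.

```cpp
// Unpack 12 raw samples from one 20-byte EEG notification into `out`.
// Assumed layout: [index_hi][index_lo][18 bytes = 12 x 12-bit samples].
#include <stdint.h>
#include <stddef.h>

int decodeEegPacket(const uint8_t *pkt, size_t len, uint16_t out[12]) {
  if (len != 20) return -1;                     // unexpected packet length
  int index = (pkt[0] << 8) | pkt[1];           // 16-bit packet counter
  const uint8_t *p = pkt + 2;                   // start of the sample data
  for (int i = 0; i < 12; i += 2) {
    // Every 3 bytes hold two 12-bit samples: AAAAAAAA AAAABBBB BBBBBBBB
    out[i]     = (uint16_t)((p[0] << 4) | (p[1] >> 4));
    out[i + 1] = (uint16_t)(((p[1] & 0x0F) << 8) | p[2]);
    p += 3;
  }
  return index;
}
```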

The key architectural pattern I validated here — and that I will carry forward — is the ISR-safe ring buffer between the FreeRTOS BLE task (Core 0) and the main processing loop (Core 1). This is the pattern I'll need for the final project, where Core 0 handles BLE + WiFi and Core 1 handles EEG processing + inference + display.
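The shape of that pattern is a single-producer, single-consumer ring buffer: the BLE task only ever advances the write index and the processing loop only ever advances the read index. A minimal sketch with placeholder names and sizes follows; production code across cores should prefer a FreeRTOS queue or std::atomic indices with explicit memory ordering.

```cpp
// Single-producer / single-consumer ring buffer: the BLE task (core 0) writes,
// the processing loop (core 1) reads. Size and names are illustrative.
#include <stdint.h>

constexpr uint32_t RING_SIZE = 1024;            // power of two for cheap wrap

volatile uint16_t ring[RING_SIZE];
volatile uint32_t head = 0;                     // advanced only by the BLE task
volatile uint32_t tail = 0;                     // advanced only by the main loop

// Producer side: drop the sample if the buffer is full rather than block.
bool ringPush(uint16_t sample) {
  uint32_t next = (head + 1) & (RING_SIZE - 1);
  if (next == tail) return false;               // full
  ring[head] = sample;
  head = next;
  return true;
}

// Consumer side: returns false when there is nothing to process.
bool ringPop(uint16_t &sample) {
  if (tail == head) return false;               // empty
  sample = ring[tail];
  tail = (tail + 1) & (RING_SIZE - 1);
  return true;
}
```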
The adaptive blink threshold (noise floor tracker that adjusts dynamically) also replaces the fixed 100 µV threshold from Project 01, and it performs significantly better in real conditions.
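The idea is simple: track the noise floor with a slow exponential moving average of the rectified, high-pass-filtered signal and call anything several times above it a blink. A minimal sketch with placeholder constants (not the tuned values from the firmware):

```cpp
// Adaptive blink threshold: the floor follows the ambient signal level, so the
// detector keeps working as electrode contact and noise conditions change.
#include <math.h>

float noiseFloor = 50.0f;            // µV, initial guess
const float ALPHA = 0.001f;          // how quickly the floor adapts
const float K     = 4.0f;            // blink threshold = K * noise floor

bool detectBlink(float sampleUv) {
  float mag = fabsf(sampleUv);
  bool blink = mag > K * noiseFloor;
  if (!blink) {
    // Only adapt on non-blink samples so blinks don't inflate the floor.
    noiseFloor += ALPHA * (mag - noiseFloor);
  }
  return blink;
}
```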

The full technical writeup, including the DFT implementation, the BLE UUIDs, and the waveform renderer, is in Week 04.
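For orientation, the band-power step boils down to summing squared DFT magnitudes over each band. A simplified sketch, assuming a 256 Hz sample rate and a one-second window (the actual implementation in the writeup differs in detail):

```cpp
// Naive per-bin DFT band power over a short window; O(N) per bin is fine for a
// handful of 1 Hz bins per band. FS and N are assumed values.
#include <math.h>

const float FS = 256.0f;             // assumed Muse EEG sample rate
const int   N  = 256;                // one-second analysis window

float binPower(const float *x, float freqHz) {
  float re = 0.0f, im = 0.0f;
  for (int n = 0; n < N; n++) {
    float w = 2.0f * 3.14159265f * freqHz * (float)n / FS;
    re += x[n] * cosf(w);
    im -= x[n] * sinf(w);
  }
  return (re * re + im * im) / ((float)N * (float)N);
}

float bandPower(const float *x, float loHz, float hiHz) {
  float p = 0.0f;
  for (float f = loHz; f <= hiHz; f += 1.0f) p += binPower(x, f);
  return p;
}

// e.g. float alpha = bandPower(window, 8.0f, 12.0f);
//      float beta  = bandPower(window, 13.0f, 30.0f);
```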
Camera Streaming (Project 05)
I also validated the OV2640 camera interface on the XIAO Sense this week. The camera streams 640×480 JPEG frames at ~6 fps over 3 Mbaud USB serial to a Python viewer. This confirmed that the camera pipeline works and that the DVP interface on the ESP32-S3 is accessible through the board-to-board connector on the Sense variant.
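The frame loop itself is short. A minimal sketch using the standard esp_camera driver calls follows; the "FRAME" marker and length prefix are an illustrative framing convention, not part of any library and not necessarily the exact protocol my viewer uses.

```cpp
// Grab one JPEG frame and push it over USB serial with a simple length-prefixed
// header; assumes esp_camera_init() was already called with a VGA JPEG config.
#include <Arduino.h>
#include "esp_camera.h"

void sendFrame() {
  camera_fb_t *fb = esp_camera_fb_get();         // grab one JPEG frame
  if (!fb) return;
  uint32_t len = fb->len;
  Serial.write((const uint8_t *)"FRAME", 5);     // sync marker for the viewer
  Serial.write((const uint8_t *)&len, 4);        // little-endian length prefix
  Serial.write(fb->buf, fb->len);                // JPEG payload
  esp_camera_fb_return(fb);                      // hand the buffer back to the driver
}
// Called repeatedly from loop(); serial throughput, not the sensor, sets the ~6 fps.
```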

Before this week, the camera and BLE subsystems were theoretical. Now I've run them both in firmware. The EEG processing pipeline went from a Python prototype to running entirely on a microcontroller. The PSRAM is accessible and performant enough to hold real data structures.
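As a sanity check of that last point, allocating a multi-megabyte buffer in PSRAM on the Arduino-ESP32 core looks roughly like this; the 4 MB size is illustrative.

```cpp
// Put a large EEG history buffer in external PSRAM instead of internal SRAM.
#include <Arduino.h>
#include "esp_heap_caps.h"

float *eegHistory = nullptr;

void setup() {
  Serial.begin(115200);
  // ~4 MB: far too big for internal SRAM, comfortable inside the 8 MB PSRAM.
  eegHistory = (float *)heap_caps_malloc(1024UL * 1024UL * sizeof(float),
                                         MALLOC_CAP_SPIRAM);
  Serial.printf("PSRAM buffer %s, free PSRAM: %u bytes\n",
                eegHistory ? "allocated" : "FAILED",
                (unsigned)heap_caps_get_free_size(MALLOC_CAP_SPIRAM));
}

void loop() {}
```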
The biggest remaining unknowns for the final project are: (1) power budget when BLE + camera + OLED all run simultaneously, and (2) whether I can fit a useful TinyML model (or an SNN) into the 8 MB PSRAM alongside the EEG buffers and BLE stack. Both of those will get validated in later weeks as I start designing the full board.