Merin Cyriac
Maker. Cinephile. Chronic re-listener.
Kerala, India
I'm doing Fab Academy right now, which means I spend a lot of time breaking things and figuring out why they broke. Circuits, CAD, CNC -- stuff I had no idea how to do six months ago.
Outside of that I take photos, watch too many films, and have a habit of playing the same song on loop until everyone around me loses their mind.
What I'm into
Nothing fancy. I just like pointing a camera at things and seeing what comes out. I'm weirdly particular about light and composition but I couldn't really explain why... It's more of a feeling.
I watch a lot of films. A lot. I'll sit with a movie for days after, thinking about a shot or a line that didn't land until later. Subtitles are fine. Slow burns are fine. Bad endings are not.
I will find one song and play it 50 times. Not exaggerating. It's how my brain works.
I got into electronics because I wanted to understand how stuff works, and now I can't stop. There's something satisfying about building something physical that actually does a thing.
What I work with
- Arduino & Raspberry Pi
- Circuit design & embedded prototyping
- Python basics
- Linux workflow
- 3D printing, laser cutting, CNC (still learning)
- Git
Things I've built
My first real electronics project was a heartbeat monitor. The idea was simple -- read a heartbeat signal and make sense of it. In practice, small biological signals are incredibly noisy and finicky, so a lot of the time went into just getting clean data.
What it took was patience, mostly. You test, it fails, you adjust one thing, test again. Eventually something works and you're not even sure what you changed. Also: debugging hardware is a completely different skill from debugging code.
What it taught me is that a simple idea -- monitor a heartbeat -- can actually be hard to do well, and that the gap between "working" and "reliable" is where most of the real work is.
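To give a sense of what "getting clean data" meant in practice, here's a minimal sketch of the kind of smoothing involved. The function name, window size, and sample values are all mine for illustration -- the real project's filtering details aren't shown here.

```python
# Hedged sketch: a simple moving-average filter for noisy sensor readings.
# Sample values are illustrative, not real pulse-sensor data.
from collections import deque

def smooth(samples, window=5):
    """Return a moving average over the raw samples, softening spikes."""
    buf = deque(maxlen=window)  # keeps only the last `window` readings
    out = []
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

raw = [512, 530, 498, 900, 505, 515, 508]  # the 900 is a noise spike
print(smooth(raw, window=3))  # spike gets averaged down instead of passed through
```

Even something this basic makes the difference between a trace you can find a pulse in and one you can't.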
A wearable device for deaf and hard of hearing people that converts speech into tactile vibration patterns. The idea was to give people a way to sense sound through touch -- no audio, no screen, just feel.
It ran on a Raspberry Pi with a microphone and a vibration motor. It picked up spoken words offline and translated them into haptic feedback -- so it could work without internet, without sound, and without needing to look at anything.
Background noise wrecked the recognition accuracy constantly. I didn't have much training data, so I kept re-testing under different conditions. Getting the software and hardware to sync timing-wise was also more annoying than expected.
Building for a specific person's need changes how you make decisions. You stop asking "does this work" and start asking "does this actually help." That shift stuck with me.