Wildcard Week: Fish Recognition with Teachable Machine

Overview

For Wildcard Week, the goal is to design and produce something using a digital process not previously explored during the course. I decided to train a machine learning model using Google's Teachable Machine — a web-based tool that allows anyone to create simple ML models without coding.

Teachable Machine offers an interface for training image, audio, or pose classification models directly in the browser. It’s designed to be user-friendly and runs entirely client-side, making it surprisingly accessible even for absurd use cases — like training a model to recognize individual fish in a home aquarium. So I did exactly that.

Teachable Machine

This assignment documents the process of training a fish classifier model using Teachable Machine, including data collection, training, and live previewing. The results aren't flawless, but the workflow is real, reproducible, and just strange enough to belong in Wildcard Week.

This process is not covered in any other assignment, as it focuses on creating, training, and deploying a machine learning model. While I’ve worked with sensors, interfaces, and embedded boards in other weeks, this is the only assignment that deals with artificial intelligence — specifically image classification — and browser-based model training. The dataset creation, model logic, and previewing workflow are all unique to this week.

The Usual Suspects

These are the five fish who unwillingly participated in the experiment. Their names were chosen by my partner in crime — the 7-year-old mastermind of this joint venture — whose naming strategy lies somewhere between food cravings and cartoon chaos. I handled the tech; he ran branding and operations.

Tomatito
Limón
Fruta del Dragón
Gol
Fast Fast

Checklist

Teachable Machine and Fishes

Below is the full process, broken down into three simple steps — because nothing says scientific rigor like trying to photograph fish who actively avoid being helpful. It’s not exactly rocket science, but it did involve at least one species that moves faster than my camera’s autofocus.

Step 1: Capture and Upload

I took approximately 15 photos of each fish and uploaded them into Google's Teachable Machine under five different classes — one for each fish. The platform makes this part intentionally simple, because it assumes your data will be clean and your subjects will cooperate. It is wrong. Fish don't follow instructions.

Uploading 1 Uploading 2 Uploading 3 Uploading 4

Step 2: Train the Model

Using the default training settings, I generated a basic image classification model. I did test other configurations, but the default one produced the best balance of performance and not crashing. The model started to recognize the general shape, color, and existential despair of each fish.

Training in progress

It’s important to note that training happens entirely inside the browser. This means: if you close the tab, open TikTok, or even sneeze too hard — training stops. Teachable Machine will kindly warn you to “not switch tabs,” which is basically its way of saying “please respect my fragile state.”

warning

Step 3: Preview the Result

After training, I used the live preview mode to test the classifier. Teachable Machine gives you two options: you can upload a static image for testing or use your webcam for a real-time preview. I tried both — but come on, we do it live, mother fucker. The webcam mode was much more satisfying (and chaotic).

option preview preview image preview video

Each fish was consistently identified with decent accuracy — at least when they weren’t hiding behind plants, photobombing each other, or darting around like caffeinated toddlers. Still, the model held up surprisingly well under these highly uncooperative conditions.
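Under the hood, the live preview is just the network producing one score per class and picking the largest. A minimal sketch of that final step — the class names are from this project, but the scores are invented for illustration:

```python
from math import exp

# Raw scores ("logits") the network might produce for one webcam frame
# (values invented for illustration)
classes = ["Tomatito", "Limón", "Fruta del Dragón", "Gol", "Fast Fast"]
logits = [0.3, 2.9, 0.1, 0.4, 1.2]

# Softmax turns raw scores into probabilities that sum to 1
exps = [exp(z) for z in logits]
probs = [e / sum(exps) for e in exps]

# The prediction is simply the class with the highest probability
best = max(range(len(classes)), key=lambda i: probs[i])
print(classes[best])  # Limón
```

This is also why the preview bars always sum to 100%: the model never says "no fish here", it just distributes its confidence across the five suspects.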

Live Test

Test 1 Test 2

Future Integration: Giving the Fish a Voice

While this started as a Wildcard experiment, the plan is to extend this into the final project. Each fish will be assigned a distinct voice and personality, using the classification result to trigger audio output. The fish will be sarcastic, judgmental, and deeply suspicious of humans — until someone brings food, at which point they become slightly less hostile. This is not just a joke; it’s also an attempt to explore the boundary between AI, physical interaction, and ambient character design.

Problems and Fixes

Problem 1: Capturing the fish

Fish aren’t known for their cooperation. Trying to get good, clear images of each fish was by far the most annoying part of the process. They swim, they blur, they hide behind decorations, and sometimes they vanish entirely like they know you’re training a robot to track them.

Fix: Be patient. Take a ridiculous number of photos. Eventually you’ll get at least 10–15 usable ones per fish. It's not efficient. It's not elegant. It just works.

Capturing wrong 1 Capturing wrong 2

Problem 2: Tweaking training settings

Teachable Machine allows you to modify training parameters like Epochs, Batch Size, and Learning Rate. The temptation to play with these settings is high, but unless you’re trying to break your browser or burn out your GPU fan, I suggest leaving them at default.
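For intuition about what these knobs actually control, here's a back-of-the-envelope sketch — assuming this project's roughly 75 photos, and taking 50 epochs with batch size 16 as the default values shown in the Advanced panel:

```python
from math import ceil

# Assumed dataset: 5 fish classes, ~15 photos each
num_images = 5 * 15          # 75 training images
epochs = 50                  # assumed default epoch count
batch_size = 16              # assumed default batch size

# One "step" = one weight update computed on a single batch of images
steps_per_epoch = ceil(num_images / batch_size)
total_updates = steps_per_epoch * epochs

print(steps_per_epoch, total_updates)  # 5 250
```

So doubling the epochs doubles the total number of weight updates, while doubling the batch size roughly halves it — which is why fiddling with both at once makes results hard to compare.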

I tried adjusting these values — specifically reducing the learning rate and increasing the batch size — but the results were inconsistent. The model either became overconfident in nonsense (overfitting to my noisy photos) or failed to pick up the fish at all (underfitting), forgetting who it had just seen five seconds earlier.

In the end, the default settings worked best:

Model Performance Interpretation

Below are some of the training results from the final model. These charts and tables show how well the model learned over time and how each fish performed individually.

Accuracy and loss per epoch

The first graph shows accuracy per epoch — the model hit high accuracy early on and stayed there, with the held-out test accuracy tracking the training accuracy closely, which suggests the network learned to distinguish the classes without overfitting. The loss-per-epoch graph shows how "wrong" the model was over time — and as seen here, it dropped quickly and stayed low. That's a good sign.

Accuracy per class and confusion matrix

The accuracy per class was solid for all fish except Fruta del Dragón — who apparently refuses to play along even with AI. The confusion matrix shows the model predicted each fish almost perfectly, with a couple of misclassifications between Fruta del Dragón and Fast Fast (which honestly is understandable, they’re both fast and red).
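To make the confusion-matrix reading concrete, here's a toy matrix with invented counts that mirrors the behaviour described above — rows are the actual fish, columns the predictions, and per-class accuracy is just the diagonal divided by the row total:

```python
# Toy 5x5 confusion matrix (counts invented to mirror the results described:
# near-perfect everywhere, with Fruta del Dragón / Fast Fast mix-ups).
classes = ["Tomatito", "Limón", "Fruta del Dragón", "Gol", "Fast Fast"]
cm = [
    [15, 0,  0,  0,  0],
    [0, 15,  0,  0,  0],
    [0,  0, 11,  0,  4],   # Fruta del Dragón sometimes labeled Fast Fast
    [0,  0,  0, 15,  0],
    [0,  0,  1,  0, 14],
]

# Per-class accuracy = correct predictions / total actual examples of that class
per_class_acc = {}
for i, (name, row) in enumerate(zip(classes, cm)):
    per_class_acc[name] = row[i] / sum(row)

for name, acc in per_class_acc.items():
    print(f"{name}: {acc:.0%}")
```

Reading it this way makes the asymmetry visible too: Fruta del Dragón gets mistaken for Fast Fast far more often than the reverse.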

These metrics confirmed that the default settings were more than enough for this specific use case — quick, effective training with surprisingly high accuracy considering the limited and chaotic dataset.

Download Teachable Files

Teachable Machine allows you to export your trained model for use in different environments, depending on your preferred platform or programming context. Options include TensorFlow, TensorFlow.js, and TensorFlow Lite — covering Python apps, web-based deployment, and mobile/edge devices, respectively. In my case, I used the TensorFlow.js version, which runs directly in the browser and requires no back-end setup.

Export options from Teachable Machine showing download of TensorFlow.js

You can download the complete model as a ZIP and host it locally or connect to it from a project. The ZIP contains: