
10. Machine Week - David Vaughn, Evan Park, and Richard Shan

This week we were tasked to create a machine that includes a mechanism, actuation, automation, and application.

Work Distribution

| People | Description |
| --- | --- |
| Richard | All programming and software development |
| Richard | Raspberry Pi work |
| Richard | Microphone, keyboard, touchscreen setup and config |
| Evan | CNC |
| David | Motors setup |
| Richard, David | Motors programming, debugging |
| Evan, David | CAD |
| Evan, David | Assembly |
| Evan, David | Test laser cut |
| All | Ideation |
| All | Documentation |

Individual Documentation Pages

Richard Shan

Evan Park

David Vaughn

Research

While researching what type of machine we wanted to build, Mr. Dubick told us about an Automated ChatGPT Ouija Board by Jack Hollingsworth. That machine uses magnets and ChatGPT to create a self-moving ouija board.

We decided that we wanted to create this machine and add some modifications. The first thing we wanted to add was speech-to-text, so that you wouldn’t have to type your question into ChatGPT to get a response. We planned to use the Whisper API to accomplish this feature.

Designing

Fusion 360

To start creating the box, we designed a file in Fusion 360 for the box that would hold our gantry and the rest of our electronic components. The top of the box would also have the Ouija board engraved on it. We designed it in Fusion 360 so that we could use parameters to change the size of the box if needed. We added a small panel for a button that we would use to activate the microphone, which would pick up our speech and turn it into text for ChatGPT. We also added a hole for a USB cable for debugging and a hole to put your finger in and push the top off.

This is the file we created and a photo of the parameters

We then exported this file as a DXF.

Laser Cut Tests

We decided that we wanted to laser cut our design on cardboard first to make sure everything fit as we wanted.


After the first cut, we realized that we had not made the sides long enough for each screw to fit, so we made the sides longer by increasing the two parallel sides by two times the material thickness.
We fixed this issue then laser cut again. This time the box was assembled with no problems.
Mr. Dubick also suggested that we add joints to our design, so we went into Fusion and added joints.
We then laser cut this file to test our joints.
Now that our tests looked good, we started to create the toolpath for our file in Aspire.

Aspire

We then took our DXF file into Aspire. Using a tape measure, we found that the length and width of the stock were 96" and 48" respectively, and with calipers we measured the thickness as 0.46". This was our Job Setup in Aspire

Once our job was set up, we imported the vector for our box and started to create our toolpaths.
Not every vector was connected, so we used the Join tool to connect them.

Then we added dogbones using the fillet tool: we specified our tool radius and went along each finger joint adding dogbones so the joints would fit after milling. We then created a profile toolpath, starting by adding all the tabs we needed for the mill.

We used a 3/8" compression bit because it prevents chipping on both the top and bottom of the wood and is well suited for this job. We also added ramping to avoid harming the bit. This is a photo of our settings for the toolpath.
Then we clicked Calculate at the bottom. Using the “Preview Toolpath” tool, we ran a simulation of what the toolpath would do. This is what the cut would look like according to the simulation.
Since everything looked good, we sent the file over to the ShopBot to mill.

ShopBot

We were milling plywood that had already been partially milled, so we had to arrange our file to go around the already milled-out parts of the wood.

Once organized, we downloaded the toolpath, ran an air cut in the ShopBot software to make sure everything looked good, and then ran the full cut.

MILLING VIDEO

Post Mill

There were a few issues that we found out after we milled. The first issue was that we had milled the magnet holder on the box the wrong way.

This meant that we had to re-mill the long sides after flipping them horizontally in Fusion 360. This is us milling the long sides again; the process was the same as last time.

We were working around already-milled plywood, so we had to rearrange our file to fit on the wood. I used the same process as before to mill the file, but while it was milling, I realized that the tabs weren’t present. This was most likely due to my cut depth being too deep for the wood, which made the wood shift a little when the pocket toolpaths started. I stopped the ShopBot and put brads in each of the cut-out pieces so that they wouldn’t move during the pocket toolpaths. Unfortunately, I stopped the cut a little late, so one of the sides was a bit scuffed, but it didn’t cause any issues and we kept the piece.

Also, we realized we had not given a sufficient offset for the joints:
We needed to sand them down so that they would fit together.

Then we glued the magnets using hot glue onto the pieces so that they would snap the cover on the top whenever we put them together.

We also engraved our Ouija design onto our lid, modifying a design found online:
We converted this image to an SVG using CorelDraw’s Quick Trace tool, removed some of the unnecessary responses such as “good bye”, and added in some punctuation to make the ghoul a little more literate. We also laser cut a design on 1/8" plywood that we would use to cover the magnet we were moving.
SVG OF SUN (GET LATER)
Now our box was ready to be assembled.

Assembly

Assembly of Box

Now it was time to assemble our pieces to create our box.

The first thing I did was screw the magnet holders onto the sides of the boxes. I did this by using a measuring tape to figure out the center of the magnet holder on the other side, marking it with a pen, and then screwing a screw into that mark. I did two screws for each magnet holder.

I then screwed each side of the box together at the edges and also screwed into the bottom to hold the box together.

Gantry

For this project, we wanted a 2-axis gantry system that could be programmed using G-code.

Mechanics and Wiring

First, we set up the CNC Shield and Arduino to connect to the stepper motors using this video and a few others as reference. To sum up the process, we attached the CNC Shield to an Arduino Uno, attached stepper drivers to the CNC Shield, used ribbon cables to connect stepper motors to the CNC Shield, and connected power to the Arduino through some stripped wires. Eventually, we exchanged these stripped wires for jumper cables as these are more reliable.

I plugged the Arduino into my computer and uploaded the GRBL library to the Uno.
From here, we downloaded the Universal Gcode Sender and followed its setup directions to get connected. After much trial and error, we were able to move the stepper motors. We went back to the Arduino IDE, opened the Serial Monitor, and tested a line of G-code, which also moved the motors. However, I later realized that the motor connected to the z-axis did not move as far as the one connected to the x-axis when sent the same distance. I then tried connecting it to the y-axis instead, but that made both motors move, which also didn’t seem right. After swapping out many of the components trying to find the problem, I learned that the gantry moves in a scheme called CoreXY, which uses both stepper motors even when moving along a single axis. CoreXY only uses one stepper motor when moving at a 45-degree angle, since each motor moves both axes equally. It follows the basic equations shown below, with A and B being how much each of the stepper motors moves:
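In Python, the standard CoreXY relations can be sketched like this (a general CoreXY sketch, not our firmware code; the function name is ours, with da and db being the motor moves):

```python
def corexy_motor_moves(dx, dy):
    # CoreXY kinematics: each belt/motor contributes to both axes.
    # A move purely along x or y drives BOTH motors; a 45-degree move
    # (dx == dy) zeroes one of them, matching what we observed.
    da = dx + dy
    db = dx - dy
    return da, db
```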

3D Printing Parts for the Installation

Many of the 3D printed parts we used in this build came from the Sand Table video. Below is a table of which designs we used and how many.

| Part | Quantity |
| --- | --- |
| X-Carriage Mounts | 2 |
| Idle Mounts | 1 |
| Belt Grip | 1 |
| Magnet Mount | 1 |
| Motor Bases | 2 |
| Wire Loops | 2 |

*Note that one of the motor bases must be mirrored horizontally in PrusaSlicer or other slicing software before printing.

Although we downloaded many of the 3D files from the Jack Hollingsworth Instructables page (which got these files from the Sand Table video by DIY Machines), we wanted to make at least some of them ourselves. To hold the CNC Shield and the Arduino Uno, I created a simple box with holes in the sides for various cables and wires. I also created the part that holds the magnet. This part originally fit directly onto the top of the mount and went straight up, before we realized it would not fit there because the screws holding the belt grip would intrude on its space. I then recreated it with an offset; however, I forgot to rotate the slot that the protruding part fits into, and we neglected to consider that it ran right through the path of the belt, but these things turned out fine. Additionally, either my measurements were off or I was too cautious, because the holder came out about an inch short.

We eventually just printed a small cylinder to fit inside the magnet holder and hot glued the magnet on top of it. One of the box’s holes was slightly off, but this was easily fixed by using a soldering iron to melt away some filament, thereby extending the hole.

Assembling the Gantry

The gantry for this machine is based on one for a sand table by DIY Machines. Their video is really good for following the assembly of the gantry, but we will still outline what we did.

We started by attaching the X carriages onto the X rails using M3*8 screws. We then attached the smooth idlers using M5*20 screws.

We then attached the motors to our motor cases using M3*12 screws.

I then attached the X rails to the idler supports and motor mounts using M3*12 bolts. It was suggested that I start using heat inserts for my screws, so I used a tool to push a heat insert vertically into my 3D printed motor mount. We would later attach a toothed idler onto it using M5*20 screws.

This ended up clogging the hole with PLA, making it impossible to push a bolt through. I had to heat the hole with a hot air blower until I could push the heat insert out, then use an X-Acto knife to remove material from the hole until the bolt fit through, securing it with a nut on the end. I decided to give up on heat inserts after this.

At this point, we put everything in the box.

I then attached the Y axis by using M3*12 bolts.

We then attached the magnet carriage mount to the Y axis using M3*8 bolts.

We then ran our belt around the mount and around each motor, connecting everything with one belt.

We then screwed our Arduino and motor mounts into the box with wood screws. To make sure the motor mounts were square, we used a triangular ruler to check that every side was at a 90-degree angle. We then hot glued the Raspberry Pi to the side of the box near the outlet hole. We also attached the magnet holder to the magnet carriage mount using M3*8 screws.

We then installed some wire loops with wood screws for wiring organization.

Our box was ready to be programmed and run.

Installing the Gantry

To install the gantry system into the box (which we only did after we got all the programming working properly with the motors), we drove screws into the holes in the 3D printed parts at the ends of the x and y axes. We used a triangular ruler to make sure these were fairly straight and symmetrical. Additionally, we screwed down the wire holders and the box for the CNC Shield and Arduino.

Programming

Planning

The final program must be able to process a user’s question through voice and display an answer on the actual ouija board by moving the magnet to corresponding letter locations.

The whole program for the board can be broken down into a few key tasks: receiving audio input from the microphone, converting audio to text using Whisper API, querying ChatGPT (OpenAI API) for a response to that same text, sending the generated text from the chip to the Arduino, and moving the magnet to a specified set of coordinates that correspond to each letter within the text string.
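These tasks chain together end to end; below is a minimal sketch with placeholder stubs (all function names and return values here are hypothetical stand-ins for the real implementations developed later on this page):

```python
def record_audio_clip():
    # Stub: the real version records from the USB microphone to a .wav file
    return "output.wav"

def transcribe(path):
    # Stub: the real version runs Whisper on the saved clip
    return "Will it rain tomorrow?"

def ask_chatgpt(question):
    # Stub: the real version queries the OpenAI API
    return "YES"

def run_pipeline():
    path = record_audio_clip()            # 1. microphone -> audio file
    question = transcribe(path)           # 2. audio -> text
    answer = ask_chatgpt(question)        # 3. text -> ghostly response
    return [letter for letter in answer]  # 4./5. one magnet move per letter
```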

With these goals established, it directly follows that our chosen chip must be able to:

  • Connect to the internet to query APIs
  • Receive audio input from a USB microphone and/or keyboard
  • Use serial communication with an Arduino

As a personal goal, I wanted the entire machine to be self-contained and run without the need for a laptop. This self-imposed goal definitely made the entire process a lot more difficult, necessitating the following major changes (and many more minor ones!):

| With laptop | Without laptop |
| --- | --- |
| Inputting coordinates into Arduino IDE's Serial Monitor | Serial communication between Raspberry Pi and Arduino |
| Using computer capabilities for API calls | Chip connection to wireless network and querying APIs |
| Built-in keyboard for text input | Support for generic USB keyboards for text input (not PS/2 specific) |
| Built-in computer microphone for audio input | Script to record sound through a USB microphone and save the file to a static local path for processing |

However, I’m glad that I decided to run the whole machine without a laptop. First off, it simply is cooler. Secondly, even with this extra challenge, I still spent a lot of time waiting for the mechanical part of the machine to be built; without this modification, I likely would have finished the code on the first or second day. Lastly, running the code without a laptop (albeit with a Raspberry Pi) taught me important skills in networking, serial communication, and managing inputs that will be of massive help in later weeks.

Finding a Chip

ESP

I initially tried both an ESP32 and an ESP8266, but abandoned them due to issues connecting to a WiFi network. Although I know these chips are capable of connecting to a wireless network, I wasn’t able to get it working after about an hour of work.

Pico

My next (and more promising) chip was the Raspberry Pi Pico. I found a Pico with a built-in WiFi chip, which allowed me to connect to WiFi to query OpenAI. Using the Pico’s TX/RX pins, I was also able to confirm serial communication with this chip. Additionally, using a Pi-family chip let me easily integrate Python/MicroPython, which made querying the OpenAI and Whisper APIs much easier. The following code connects the Pico to WiFi, queries OpenAI, and sends the response to an Arduino connected through the TX/RX pins.

import network
import urequests
import ujson
import machine
import time

ssid = "NETWORKNAME"
password = "NETWORKPASSWORD"

api_key = 'USER API KEY'
prompt = 'PROMPT'
url = 'https://api.openai.com/v1/chat/completions'

def connect_to_wifi():
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    wlan.connect(ssid, password)

    while not wlan.isconnected():
        pass
    print('Connected to Wi-Fi')

def send_prompt_to_openai():
    headers = {
        'Authorization': 'Bearer ' + api_key,
        'Content-Type': 'application/json'
    }
    data = {
        'model': 'gpt-3.5-turbo',
        "messages": [{"role": "user", "content": "Answer the following query in an ominous manner, and keep your response to under 20 characters:" + prompt}],
        'max_tokens': 50
    }
    response = urequests.post(url, data=ujson.dumps(data), headers=headers)
    return response.json()

def main():
    connect_to_wifi()
    response = send_prompt_to_openai()
    response_text = response['choices'][0]['message']['content']
    print(response_text + "\n")
    arduinoify(response_text)

def arduinoify(response):
    # Forward the response to the Arduino over UART1 (TX = GP4, RX = GP5)
    uart1 = machine.UART(1, baudrate=9600, tx=4, rx=5)
    uart1.write(response)

if __name__ == '__main__':
    main()

However, the Pico had a critical flaw: I couldn’t easily integrate any USB-based input device - namely, in this case, a USB microphone or USB keyboard. After doing some research online, I discovered that in order to use a keyboard, I either needed to switch from MicroPython to CircuitPython (which still may or may not work) or buy an entirely new keyboard with PS/2 compatibility, neither of which was a realistic solution.

Raspberry Pi 4B

I was a little hesitant at first to use a Raspberry Pi, both because it would require an entire redesign of my current code (switching from querying OpenAI through a raw URL endpoint to using the Python client library), and because the Raspberry Pi itself could qualify as a computer. However, the Raspberry Pi seemed to check all of the boxes: it has built-in WiFi connectivity, runs Python, has 4 USB ports for inputs (keyboard, microphone), and can plug directly into the Arduino, effectively eliminating the need for TX/RX pin wiring for serial communication.

However, changing to a Raspberry Pi required a few modifications:

  • Since the Raspberry Pi is more a computer than a chip (it runs Linux), a code redesign was necessary. As mentioned earlier, I no longer used a URL endpoint to query OpenAI but rather assembled the entire prompt locally and sent it to a model (either GPT-3.5 or GPT-4; to keep my API costs low I used GPT-3.5, but there should be no significant difference given that all generated responses are soft capped at 20 characters).
  • A screen. Once more, because the Pi resembles an actual computer, I have to actually execute a script on the Pi as opposed to loading one onto it. I had always wanted to add a screen to this project, and with a Pi, a screen lets me run the program and view its status from the machine itself. The screen also functions as the HDMI output of the Pi; for many hours, I was coding on a small 5-inch screen.

Software Development

Now that I have decided on a chip and have a clearer idea of the entire system, it’s time to start programming. Again, envision the final program as a conglomeration of several key tasks:

  • Receiving audio input from the Microphone
  • Saving the audio file and converting to text (Whisper API)
  • Generating the ouija board’s response to the user query (OpenAI API + some minor prompt engineering)
  • Processing the response text into movements for the gantries
  • Moving the gantries and therefore the magnet

Audio Input

As I had little prior experience with both Linux and microphone input devices, I wrote the following handy program to view all connected audio devices along with some of their specs.

import pyaudio

audio = pyaudio.PyAudio()

def print_device_info(device_index):
    device_info = audio.get_device_info_by_index(device_index)
    print(f"Device {device_index}: {device_info.get('name')}")
    print(f"  Input Channels: {device_info.get('maxInputChannels')}")
    print(f"  Output Channels: {device_info.get('maxOutputChannels')}")
    print(f"  Default Sample Rate: {device_info.get('defaultSampleRate')}")

num_devices = audio.get_device_count()

print(f"Found {num_devices} device(s)\n")

for i in range(0, num_devices):
    print_device_info(i)

audio.terminate()

As a demonstration, when run on my computer, the above code yields the following output:

Device 0: Microsoft Sound Mapper - Input
  Input Channels: 2
  Output Channels: 0
  Default Sample Rate: 44100.0
Device 1: Microphone Array (Intel® Smart
  Input Channels: 4
  Output Channels: 0
  Default Sample Rate: 44100.0
Device 2: Microsoft Sound Mapper - Output
  Input Channels: 0
  Output Channels: 2
  Default Sample Rate: 44100.0

[...]

Device 19: PC Speaker (Realtek HD Audio 2nd output with SST)
  Input Channels: 2
  Output Channels: 0
  Default Sample Rate: 48000.0
Device 20: Stereo Mix (Realtek HD Audio Stereo input)
  Input Channels: 2
  Output Channels: 0
  Default Sample Rate: 48000.0

As is shown in the printout, this script finds the input channels, output channels, and default sample rate of any connected audio device, along with its device number.

Running the diagnostic script on the Raspberry Pi, I learned that the USB microphone is recognized as Device #2, with 2 audio input channels and a default sample rate of 44100.0. These specifications are important for recording audio.

Now that the device specifications are cleared up, the actual coding can begin. The following script records 10 seconds of audio and saves it to “output.wav”.

import pyaudio
import wave
import time  # needed for time.sleep in the overflow handler below

# Audio recording parameters
FORMAT = pyaudio.paInt16  # Audio format (16-bit PCM)
CHANNELS = 2              # The microphone has 2 audio input channels
RATE = 22050              # Reduced sampling rate from default of 44100 to minimize overflow errors (they are bad)
CHUNK = 2048              # Increased buffer size to minimize overflow errors (they are bad)
RECORD_SECONDS = 10        # Duration of recording
WAVE_OUTPUT_FILENAME = "output.wav"  # Output filename

def record_audio():
    audio = pyaudio.PyAudio()

    # Open stream
    stream = audio.open(format=FORMAT, channels=CHANNELS,
                        rate=RATE, input=True, input_device_index=2, # Device index of 2 because the microphone is recognized as Device #2 
                        frames_per_buffer=CHUNK)

    print("Recording...")
    frames = []

    # Record for specified number of seconds
    try:
        for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
            try:
                data = stream.read(CHUNK)
                frames.append(data)
            except IOError as e:
                if e.errno == pyaudio.paInputOverflowed:
                    print(f"Overflow at iteration {i}. Continuing...")
                    time.sleep(0.1) 
    except Exception as e:
        print(f"Error during recording: {e}")

    print("Recording finished.")
    stream.stop_stream()
    stream.close()
    audio.terminate()

    # Save the recorded data as a WAV file
    try:
        with wave.open(WAVE_OUTPUT_FILENAME, 'wb') as wf:
            wf.setnchannels(CHANNELS)
            wf.setsampwidth(audio.get_sample_size(FORMAT))
            wf.setframerate(RATE)
            wf.writeframes(b''.join(frames))
    except Exception as e:
        print(f"Error saving WAV file: {e}")

Audio to Text

Converting audio to text is surprisingly easy, although the process can take upwards of 20 seconds for a short clip.

import whisper

def transcribe_audio(file_path):
    model = whisper.load_model("base")
    result = model.transcribe(file_path)
    return result["text"]

Side note: the code is set up so that each new recording overwrites the previous one in output.wav. This means no storage is wasted on old audio clips; however, this could easily be tweaked if you want to keep recordings.
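For instance, a minimal sketch of that tweak, using a hypothetical helper that generates unique, timestamped filenames instead of reusing output.wav:

```python
import time

def timestamped_wav_name(prefix="recording"):
    # e.g. recording_20240713_153000.wav, so old clips are kept
    return "{}_{}.wav".format(prefix, time.strftime("%Y%m%d_%H%M%S"))
```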

Generative Text Response

Now that we have extracted text from the short microphone audio clip, we can send that text to OpenAI’s API to generate a response. The following function queries GPT3.5 with a parsed text prompt and returns the output of the query. This process is surprisingly fast.

from openai import OpenAI

def call_openai_api(prompt):
    client = OpenAI(api_key='REDACTED') 
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant trapped in a ouija board and are a ghost."},
            {"role": "user", "content": "Answer the following query in an ominous tone and under 25 characters: " + prompt}
        ]
    )
    return completion.choices[0].message.content

Serial Connection

This code snippet, in isolation, initializes a new serial connection. Since the Pi is wired straight to the Arduino’s USB port, writing to the serial here is equivalent to typing something into the Arduino IDE’s Serial Monitor. Obviously, the code is not used in isolation in the actual program.

from serial import Serial

ser = Serial('/dev/ttyUSB0', 115200, timeout = 1)
ser.write(("Whatever you want to write").encode())

The lsusb builtin command on Linux displays all connected USB devices. When I unplugged the Arduino and reran lsusb, I could see that the corresponding device had disappeared. Therefore, the Arduino is the ttyUSB0 device.
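Rather than hard-coding /dev/ttyUSB0, one could list candidate serial devices with the standard library; this helper is a sketch and not part of the original program:

```python
import glob

def find_serial_ports():
    # Linux names USB serial adapters /dev/ttyUSB* (FTDI/CH340 chips)
    # or /dev/ttyACM* (native-USB Arduinos)
    return sorted(glob.glob("/dev/ttyUSB*") + glob.glob("/dev/ttyACM*"))
```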

The .encode() call is important because it turns the string into bytes before sending it over serial; otherwise, the Arduino cannot interpret the string.
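A quick illustration of that conversion:

```python
cmd = "g1x0y0f500\n"
payload = cmd.encode()  # str -> bytes; this is what actually goes over the wire
print(payload)          # b'g1x0y0f500\n'
```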

Moving the Magnet

To move the magnet based on each character in the text string, I first needed to find the physical position of each character on the board and map the character to it. This took about 20 minutes of manual labor.

alphabet_dict = {chr(65 + i): (0, 0) for i in range(26)}

alphabet_dict['A'] = (1, 20)
alphabet_dict['B'] = (5, 25)
alphabet_dict['C'] = (10, 27)
alphabet_dict['D'] = (16, 28)
alphabet_dict['E'] = (20, 30)
alphabet_dict['F'] = (27, 30)
alphabet_dict['G'] = (34, 30)
alphabet_dict['H'] = (37, 31)
alphabet_dict['I'] = (41, 31)
alphabet_dict['J'] = (44, 28)
alphabet_dict['K'] = (51, 27)
alphabet_dict['L'] = (57, 25)
alphabet_dict['M'] = (64, 22)
alphabet_dict['N'] = (0, 10)
alphabet_dict['O'] = (3, 12)
alphabet_dict['P'] = (9, 17)
alphabet_dict['Q'] = (14, 20)
alphabet_dict['R'] = (19, 22)
alphabet_dict['S'] = (24, 23)
alphabet_dict['T'] = (30, 23)
alphabet_dict['U'] = (36, 23)
alphabet_dict['V'] = (41, 21)
alphabet_dict['W'] = (47, 20)
alphabet_dict['X'] = (55, 17)
alphabet_dict['Y'] = (59, 14)
alphabet_dict['Z'] = (64, 10)
alphabet_dict['1'] = (9, 5)
alphabet_dict['2'] = (15, 5)
alphabet_dict['3'] = (19, 5)
alphabet_dict['4'] = (24, 5)
alphabet_dict['5'] = (29, 5)
alphabet_dict['6'] = (34, 5)
alphabet_dict['7'] = (39, 5)
alphabet_dict['8'] = (43, 5)
alphabet_dict['9'] = (48, 5)
alphabet_dict['0'] = (53, 5)
alphabet_dict['.'] = (24, 0)
alphabet_dict[','] = (40, 0)
alphabet_dict[' '] = (33, 13)

The dictionary maps the physical coordinate values of each letter on the actual engraved ouija board to the character in the text string. Now to actually move the magnet to the right location:

def move(letter):
    time.sleep(4)
    x, y = alphabet_dict[letter.upper()]
    ser.write(("g1x" + str(x) + "y" + str(y) + "f2000" + "\n").encode())
    time.sleep(4)

The command format for moving the gantries is shown in the ser.write call. For example, writing g1x40y25f500\n would move the magnet to (40, 25) at a feed rate (speed) of 500.

The newline character is important for actually sending the command across. Without it, this is analogous to typing something into the Arduino IDE’s Serial Monitor and never pressing Enter.
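The string assembly inside move() can be captured in a tiny helper (the name format_move is hypothetical; the format matches the commands above):

```python
def format_move(x, y, feed):
    # GRBL-style move command: g1, target x/y, feed rate, newline to submit
    return "g1x{}y{}f{}\n".format(x, y, feed)
```

For example, format_move(40, 25, 500) yields the g1x40y25f500 command from above, newline included.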

With the move function defined, all I need is a simple for loop that iterates through the string we will display on the ouija board.

The delays are important so that the gantries actually have time to move. These values can be tuned up or down depending on how long you want the magnet to stay on each letter and how much leeway you want to give the machine.

for char in response:
    move(char)

Aggregation

To tie up all the individual parts:

import pyaudio
import wave
import time
import whisper
import os
import curses
from openai import OpenAI
from serial import Serial


alphabet_dict = {chr(65 + i): (0, 0) for i in range(26)}

alphabet_dict['A'] = (1, 20)
alphabet_dict['B'] = (5, 25)
alphabet_dict['C'] = (10, 27)
alphabet_dict['D'] = (16, 28)
alphabet_dict['E'] = (20, 30)
alphabet_dict['F'] = (27, 30)
alphabet_dict['G'] = (34, 30)
alphabet_dict['H'] = (37, 31)
alphabet_dict['I'] = (41, 31)
alphabet_dict['J'] = (44, 28)
alphabet_dict['K'] = (51, 27)
alphabet_dict['L'] = (57, 25)
alphabet_dict['M'] = (64, 22)
alphabet_dict['N'] = (0, 10)
alphabet_dict['O'] = (3, 12)
alphabet_dict['P'] = (9, 17)
alphabet_dict['Q'] = (14, 20)
alphabet_dict['R'] = (19, 22)
alphabet_dict['S'] = (24, 23)
alphabet_dict['T'] = (30, 23)
alphabet_dict['U'] = (36, 23)
alphabet_dict['V'] = (41, 21)
alphabet_dict['W'] = (47, 20)
alphabet_dict['X'] = (55, 17)
alphabet_dict['Y'] = (59, 14)
alphabet_dict['Z'] = (64, 10)
alphabet_dict['1'] = (9, 5)
alphabet_dict['2'] = (15, 5)
alphabet_dict['3'] = (19, 5)
alphabet_dict['4'] = (24, 5)
alphabet_dict['5'] = (29, 5)
alphabet_dict['6'] = (34, 5)
alphabet_dict['7'] = (39, 5)
alphabet_dict['8'] = (43, 5)
alphabet_dict['9'] = (48, 5)
alphabet_dict['0'] = (53, 5)
alphabet_dict['.'] = (24, 0)
alphabet_dict[','] = (40, 0)
alphabet_dict[' '] = (33, 13)

ser = Serial('/dev/ttyUSB0', 115200, timeout = 1)

FORMAT = pyaudio.paInt16
CHANNELS = 2
RATE = 22050
CHUNK = 2048
RECORD_SECONDS = 10
WAVE_OUTPUT_FILENAME = "output.wav"

def record_audio():
    audio = pyaudio.PyAudio()

    stream = audio.open(format=FORMAT, channels=CHANNELS,
                        rate=RATE, input=True, input_device_index=2,
                        frames_per_buffer=CHUNK)

    print("Recording...")
    frames = []

    try:
        for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
            try:
                data = stream.read(CHUNK)
                frames.append(data)
            except IOError as e:
                if e.errno == pyaudio.paInputOverflowed:
                    print(f"Overflow at iteration {i}. Continuing...")
                    time.sleep(0.1)
    except Exception as e:
        print(f"Error during recording: {e}")

    print("Recording finished.")
    stream.stop_stream()
    stream.close()
    audio.terminate()

    try:
        with wave.open(WAVE_OUTPUT_FILENAME, 'wb') as wf:
            wf.setnchannels(CHANNELS)
            wf.setsampwidth(audio.get_sample_size(FORMAT))
            wf.setframerate(RATE)
            wf.writeframes(b''.join(frames))
    except Exception as e:
        print(f"Error saving WAV file: {e}")

def transcribe_audio(file_path):
    model = whisper.load_model("base")
    result = model.transcribe(file_path)
    return result["text"]

def move(letter):
    time.sleep(1.5)
    x, y = alphabet_dict[letter.upper()]
    ser.write(("g1x" + str(x) + "y" + str(y) + "f2000" + "\n").encode())
    time.sleep(1.5)

def call_openai_api(prompt):
    client = OpenAI(api_key='REDACTED')
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant trapped in a ouija board and are a ghost."},
            {"role": "user", "content": "Answer the following query in an ominous tone and under 20 characters: " + prompt}
        ]
    )
    return completion.choices[0].message.content

def main():
    record_audio()

    transcription = transcribe_audio(WAVE_OUTPUT_FILENAME)
    print("Transcribed: ", transcription)

    response = call_openai_api(transcription)

    print("Response from OpenAI:\n", response)

    for char in response:
        move(char)

    t_end = time.time() + 10
    while time.time() < t_end:
        ser.write(("g1x0y0f500\n").encode())

if __name__ == '__main__':
    main()

The only discrepancy between this code and the sum of the individual tasks is the g1x0y0f500 command at the end, which brings the machine back to (0, 0). An interesting quirk: when the program ends, the machine treats its current position as (0, 0). As such, after the program finishes, it is hard coded to return to the true home point so it is ready for the next query.

Hero Shot

Next Steps

We had a couple ideas on how this project could be improved in the future:

  • Incorporate a magnet pointer instead of a pure magnet on the top of the board, like regular ouija boards often do
  • Use a longer belt; the current belt caused a lot of problems because it was just barely long enough to fit the entire track length when stretched out
  • Add more multimodal input methods

File Downloads

Our group’s files for this week can be downloaded here.


Last update: July 13, 2024