WEEK 10: Applications and implications

Assignment:


Propose a final project masterpiece that integrates the range of units covered, answering:

What will it do?

“The Fab Camera” is simply an instant translator, aimed at inspiring an easier, more enjoyable process of learning new languages. The camera-shaped device identifies objects in your surroundings with a simple click of a button, displaying the result in English and then in Arabic. The device also incorporates a speaker that pronounces the translated word to the user.

Fab Camera basic operation:

  1. Connect to the internet.

  2. Take a snapshot of the object of interest.

  3. Upload the image to the server.

  4. Use the Google Vision API to identify the object in the uploaded image.

  5. Return the labels.

  6. Correlate the term to its Arabic equivalent.

  7. Display the word on the screen.

  8. Pronounce the translated word through the speaker.
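
To make the first steps of this flow concrete, here is a minimal sketch, assuming an ESP32-CAM programmed from the Arduino IDE, of how a snapshot could be captured and uploaded to the server. The Wi-Fi credentials, the server URL, and the idea that the server accepts a raw JPEG POST are assumptions for illustration, not final decisions.

```cpp
// Minimal capture-and-upload sketch (assumptions: ESP32-CAM with the Arduino
// core; WIFI_SSID, WIFI_PASS and SERVER_URL are placeholders).
#include <WiFi.h>
#include <HTTPClient.h>
#include "esp_camera.h"

const char *WIFI_SSID  = "your-ssid";                    // placeholder
const char *WIFI_PASS  = "your-password";                // placeholder
const char *SERVER_URL = "http://example.com/identify";  // placeholder endpoint

void setup() {
  Serial.begin(115200);

  // Step 1: connect to the internet.
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(500);

  // The camera itself must be configured here with a camera_config_t matching
  // the ESP32-CAM pin map, then esp_camera_init(); omitted to keep the sketch short.
}

void loop() {
  // Step 2: take a snapshot of the object.
  camera_fb_t *fb = esp_camera_fb_get();
  if (!fb) return;

  // Step 3: upload the JPEG to the server.
  HTTPClient http;
  http.begin(SERVER_URL);
  http.addHeader("Content-Type", "image/jpeg");
  int status = http.POST(fb->buf, fb->len);
  if (status == 200) {
    // Steps 4-6 happen on the server; its reply should carry the label
    // and its Arabic equivalent (exact format still to be decided).
    Serial.println(http.getString());
  }
  http.end();
  esp_camera_fb_return(fb);

  delay(10000);  // placeholder pacing; the real device will wait for the button
}
```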


Who’s done what beforehand?

There are several applications and websites that provide direct translation of images, Google Translate for example. However, these tools are text based: the application scans the text in an image and translates its content. What “The Fab Camera” aims to do is different in operation, since it recognizes the object itself rather than reading text.


What will you design?

  1. The camera case itself (3D design).

  2. The electronic circuits needed for the processor, inputs and outputs.


What materials and components will be used?

The case

There are various material options when it comes to 3D printing the case. I have yet to decide between ABS, which provides more durability and resistance to changes in the environment but is harder to print and perfect, and PLA, which is easy to print, biodegradable, and versatile but prone to deformation at certain temperatures. Or I could go completely off center and create an unconventional transparent case using the resin 3D printer!

For the 3D printing part, I would use the Zortrax 3D printer to print the model in ABS, as it provides a controlled printing environment that gives the final product an excellent finish.

Alternatively, if PLA is what I decide upon, I would use the Ultimaker 3D printer, which is available in our lab.

I will also be using 16 mm aluminum blocks to create the top and bottom parts of the camera.

Printed circuit boards

I will be using the Eagle software to design the electronic boards I need for my final project, and the Roland milling machine to produce the PCBs on FR1 boards.

Electronics

The main electronic parts of the Fab Camera are the camera (input), which is attached directly to the ESP32 board (processor), and the TFT screen (output). The device also holds a single button to enable the user to take snapshots, a speaker to play the audio, an SD card to store the audio files, and a battery.
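
Below is a rough sketch, assuming the Adafruit ILI9341 library for the TFT and a push button on a spare GPIO, of how these parts might be initialized. The pin numbers are placeholders until the board is actually designed in Eagle.

```cpp
// Component setup sketch (assumptions: Adafruit ILI9341 driver over hardware SPI,
// button wired to ground with the internal pull-up; all pin numbers are placeholders).
#include <Adafruit_GFX.h>
#include <Adafruit_ILI9341.h>

const int BUTTON_PIN = 13;   // snapshot button (placeholder pin)
const int TFT_CS     = 15;   // TFT chip select (placeholder pin)
const int TFT_DC     = 2;    // TFT data/command (placeholder pin)

Adafruit_ILI9341 tft(TFT_CS, TFT_DC);

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);   // button pressed reads LOW
  tft.begin();
  tft.setRotation(1);
  tft.fillScreen(ILI9341_BLACK);
  tft.setTextSize(2);
  tft.setCursor(10, 10);
  tft.print("Fab Camera ready");
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {
    // Button pressed: run the capture/upload routine, then show the English
    // and Arabic results on the screen while the stored clip plays on the speaker.
  }
}
```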


Where will they come from? How much will they cost?

Most of the components are already available in the lab. Other elements of the project will be purchased online or from a local electronics shop if needed.

Material                 Source                        Cost (USD)
ABS printing filament    Fab Lab UAE                   $78
FR4 sheet                Fab Lab UAE, local supplier   $3
Aluminum block           Local supplier                $16.34 per meter
ESP32-CAM                Amazon                        $13.61
ILI9341 TFT display      Banggood                      $6.66
Speaker                  Amazon                        $11.44
Audio amplifier          Edwin Robotics                $0.82

What parts and systems will be made?

  1. The case.

  2. The electronics.

  3. The integrated system.


What processes will be used?

For the case, I will be using Fusion 360 to design the camera-shaped case, and print the 3D design using the Zortrax printer.

The top and bottom parts of the camera will be made out of aluminum; I will be using the ShopBot to mill and produce these parts.

The electronics will be designed in Eagle, and produced using the Roland milling machine.

For the programming part, I will be using several interfaces and environments: the Google Vision API for the object recognition part, Node.js to create and run the server, an online Ubuntu virtual machine with Node.js installed to host that server, and the Arduino IDE to program the ESP32 and the screen and to integrate all the systems together.
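
To illustrate how the device could consume what the Node.js server sends back, here is a hedged sketch using the ArduinoJson library. The JSON field names, the reply format, and the playFromSD() helper are assumptions made for illustration, not decisions that have been taken.

```cpp
// Integration sketch (assumption: the Node.js server replies with JSON such as
// {"english":"apple","arabic":"تفاحة","audio":"apple.wav"}; field names and the
// playFromSD() helper are placeholders).
#include <ArduinoJson.h>

void showResult(const String &serverReply) {
  StaticJsonDocument<512> doc;
  DeserializationError err = deserializeJson(doc, serverReply);
  if (err) {
    Serial.println("Could not parse the server reply");
    return;
  }

  const char *english = doc["english"];   // label returned by Google Vision
  const char *arabic  = doc["arabic"];    // Arabic equivalent of the label
  const char *audio   = doc["audio"];     // audio file name stored on the SD card

  // Step 7: display the word on the screen (tft object defined elsewhere).
  // tft.println(english);
  // tft.println(arabic);   // needs an Arabic-capable font, still an open question

  // Step 8: pronounce the translated word through the speaker.
  // playFromSD(audio);     // hypothetical helper driving the audio amplifier

  Serial.printf("%s -> %s\n", english, arabic);
}
```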


What questions need to be answered?

Many questions need to be answered.

  1. What languages will be chosen?

  2. Can I use languages with non-Latin characters in programming?

  3. Should I use built-in (on-device) processing or cloud processing to identify the objects?

  4. How much training does the system need?

  5. Can real time processing be done?

  6. What is the capacity of the processing?

  7. Should I use digitalized voices to pronounce the terms or my own voice?


How will it be evaluated?

“The Fab Camera” will be evaluated by its ability to identify different objects in a room, correlate the processed image with the correct terms, translate them correctly, display each word and pronounce it, and then repeat the process for different objects, eventually in several languages.


Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at http://academany.fabcloud.io/fabacademy/2020/labs/uae/students/meha-hashmi/