Assignment:

Individual

-Develop a final project that satisfies the standards of FabAcademy

 

 

Software :

-Sketchup

-AutoCad

-Fusion 360

-Arduino IDE

-Atom

-Movavi

 

Materials :

-MDF

-Acrylic sheet

-Electronic Components

-Sensor

 

Accomplished

 

-Developed a final Project

-Incorporated various processes such as additive and subtractive manufacturing, electronic design, programming, and interfacing.

 

 

 

 

 


To work on my final project I used the same schedule that I had prepared for my Applications and Implications week. This was helpful because I had structured it in such a way that each process was correlated to the next in line, and a change in one process would result in changes in the others that follow. The processes are as follows:

 

-Interface and Coding

-Designing

-Electronics

-Integration

 

 Schedule

 

Interface and Coding

 

The most difficult part of the whole project for me was the coding. I had used Python in Interface and Communication week to do background subtraction for the person sitting in front of the camera, but to use that code for the hologram, the video stream has to be rotated in four directions so that the hologram can be viewed in a pyramidal form. With this task pending in the coding section, I wanted to solve it first.

 

 

I went through a lot of tutorials and videos to find the right code to rotate the video. That was when I came across affine transformations, which use functions like warpAffine and getRotationMatrix2D to rotate images and videos. The functions got too mathematical and were hard to grasp. I tried using the code of Ron, a fellow FabAcademian, but was unable to follow the logic.

https://www.learnopencv.com/rotation-matrix-to-euler-angles/

https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/warp_affine/warp_affine.html

I finally sought help from a friend, Karthik. We tried figuring out what the problem was and how to sort it out. We used the cv2.getRotationMatrix2D function and tried rotating the feeds, and then used cv2.warpAffine to apply the transform and flip the feeds. This was the result.

GetRotationMatrix

I really had no idea what caused the images to flip the way they did. So I started deleting the code line by line to check what went wrong. The culprit was the numpy.absolute command. The function calculates the absolute value of each element it is given, and that was basically complicating the matter. I removed all those segments of the code and used just the cv2.getRotationMatrix2D and cv2.warpAffine functions.

GetRotationMatrix

This also resulted in failure. When the rotation angles are set to 0 and 90 the video rotates about its axis, and when they are set to 60 and 120 a diagonal formation appears in the center, yet the desired result is not achieved. The research took a lot of my time and I was still unable to figure it out. Another problem is the aspect ratio of the video: it streams at a fixed ratio, which may cause problems when streaming on different devices.
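In hindsight, the matrix itself is not that mysterious. Here is a minimal pure-Python sketch of the 2x3 affine matrix that cv2.getRotationMatrix2D builds, following the formula in the OpenCV documentation (no OpenCV needed to run it; the test points are illustrative):

```python
import math

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    """Build the 2x3 affine matrix that cv2.getRotationMatrix2D returns.

    cv2.warpAffine then applies it to every pixel (x, y):
        x' =  a*x + b*y + tx
        y' = -b*x + a*y + ty
    """
    cx, cy = center
    a = scale * math.cos(math.radians(angle_deg))
    b = scale * math.sin(math.radians(angle_deg))
    return [
        [a, b, (1 - a) * cx - b * cy],       # first row:  [a, b, tx]
        [-b, a, b * cx + (1 - a) * cy],      # second row: [-b, a, ty]
    ]

# Rotating 90 degrees about the origin: the matrix reduces to
# [[0, 1, 0], [-1, 0, 0]] (up to floating-point noise).
m = rotation_matrix_2d((0, 0), 90)
```

The translation terms exist so that the rotation happens about the given center rather than about the image's top-left corner, which is one reason a rotated feed can drift off-screen if the center is chosen wrongly.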

 

Coming from a background with zero programming skills, it would just have been a waste of time trying to figure out code that I was still unable to fully understand. Hence I had to think of alternatives.

 

Now, instead of using a pyramidal structure to project my hologram, the next alternative would be to use a single-plane hologram.

The code that I already have (background subtraction) would work perfectly fine with this; the only addition needed is to stream the data from the camera to a mobile phone.
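For context, my actual background-subtraction code uses OpenCV, but the core idea is simple frame differencing against a stored background. A minimal NumPy sketch of that idea (the threshold of 30 and the toy 4x4 "image" are illustrative assumptions, not values from my real code):

```python
import numpy as np

def subtract_background(frame, background, threshold=30):
    """Return the frame with background pixels zeroed out.

    Pixels whose absolute difference from the stored background
    exceeds the threshold are treated as foreground (the person);
    everything else is masked to black.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff > threshold
    return np.where(mask, frame, 0).astype(np.uint8)

# Toy example: a flat grey background with one bright "person" pixel.
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[2, 2] = 200
result = subtract_background(frame, background)
# Only the (2, 2) pixel survives; the rest of the frame is black.
```

Masking the background to black matters for the hologram: black regions reflect nothing off the acrylic, so only the subject appears to float.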

 

I tried googling how to do streaming using Python, and most of the options pointed to using Django, Tornado, or Flask. I had help from a friend, Jimesh, to figure this out. We finally used Flask to stream the data.

Flask:

Flask is a micro web framework written in Python. It is classified as a microframework because it does not require particular tools or libraries. It has no database abstraction layer, form validation, or any other components where pre-existing third-party libraries provide common functions.

I just didn't have the time to go through the complete documentation of Flask, as time was of the essence. Googling video streaming through Flask turned up some very good links that were super useful in helping me use it.

https://blog.miguelgrinberg.com/post/video-streaming-with-flask

http://www.chioka.in/python-live-video-streaming-example/

I pretty much used the same code from the above links and combined it with my background subtraction code. I first used a test code to check whether the data was being sent, and it worked perfectly fine.

Testing (Literally)

To load the HTML page one needs to use the IP address of the network that the device is connected to and specify the port number. In this case, I used 8080 as my port.
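The heart of the streaming approach from those tutorials is a generator that yields JPEG frames wrapped in a multipart/x-mixed-replace response (the MJPEG trick). This sketch shows the generator with the Flask wiring kept in comments so it runs standalone; the frame bytes here are placeholders, not real JPEG data:

```python
def mjpeg_stream(frames):
    """Wrap each JPEG frame in a multipart boundary, as expected by a
    multipart/x-mixed-replace response: the browser replaces the image
    every time a new part arrives, which it renders as video."""
    for jpeg_bytes in frames:
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpeg_bytes + b"\r\n")

# In the real app this generator feeds a Flask Response, roughly:
#   from flask import Flask, Response
#   app = Flask(__name__)
#   @app.route("/video_feed")
#   def video_feed():
#       return Response(mjpeg_stream(camera_frames()),
#                       mimetype="multipart/x-mixed-replace; boundary=frame")
#   app.run(host="0.0.0.0", port=8080)  # reachable via the machine's IP
#
# (camera_frames is a hypothetical generator producing encoded JPEGs.)

chunks = list(mjpeg_stream([b"fake-jpeg-1", b"fake-jpeg-2"]))
```

Binding to 0.0.0.0 is what makes the feed reachable from the phone over the local network, via the computer's IP address and the chosen port.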

 

With the help of Jimesh I was able to complete my code and also add a button on the browser page to make the video full screen.

Screenshot of phone and computer.

 

And the same was opened on my phone browser as well.

 

Designing

 

I started sketching the basic form of the prototype that I wanted to build. There are several "Z"-shaped devices on the market that work on the same principle, but I didn't want to use that form for my design. I used multiple software packages to develop my project, depending on how comfortable I am with each. I started forming rough shapes in SketchUp: I wanted to see what the design was going to look like first, and SketchUp, being one of the easiest tools to use, was what I used to develop my form.

SketchUp

I iterated the shape and size until I reached the desired form. The initial design was for a tablet, but later I changed the design to suit a phone with a maximum screen size of 9", whereas most phones today are 5.5"-6".

 

Once I was sure of the form I started drafting the panels in AutoCAD. AutoCAD is like a pen tool for me, coming from an architecture background: it's easier for me to realize my design in 2D first, and the best tool suited for that is AutoCAD.

AutoCAD

Now that I had drafted all the sizes and panels, I needed to convert them to 3D again and assemble them to make the press-fit notches. It's easier to do it this way, as drawing press-fit notches manually may result in errors. I could use Fusion or SketchUp to convert them to 3D; I used Fusion as it's accurate and exporting .dxf files is easier.

Fusion Assembly

I assembled all the parts in Fusion and also rendered it to see how the end product might look. I used black on all surfaces because the reflection of the image is seen better this way. Once I had assembled all the parts I created notches in Fusion using the Extrude command. This cuts away the material, resulting in notches and keys at the precise locations. I exported the DXFs and took them to AutoCAD again to add tolerance and nest the parts properly for cutting.

Tolerance and Nesting

Laser cutting

 

The design was made for 5 mm MDF, but unfortunately only cheap MDF was available and its thickness was quite uneven. The average thickness of the material was 4.4 mm, and I had provided a tolerance of 0.5 mm after testing a small piece first. Using power 65 and speed 15 I cut my parts.

RD Works

Laser Cut

I assembled the parts. Most of them fit in perfectly, but certain areas required slight hammering to fit the pieces in due to the uneven thickness of the material. The material was also bent, adding to the difficulty in press-fitting it, but once assembled the form looked neat.

Assembly

I tested the form by placing a scrap acrylic sheet as a screen and played videos to check the quality. The quality of the build was really fine and the projection was good, except that there was a double reflection. That might have been due to the thickness of the acrylic sheet; I had used 4 mm thick acrylic.

Apart from this, these are the design corrections that needed to be made:

 

-The viewing panel needs to be moved in order for the reflection to fall in the center.

-Punctures need to be provided to accommodate the LED strip lights.

-The height needs to be slightly increased to accommodate a 3D-printed cover for the camera and sensor.

-Patterns on the bottom panel for light.

-Area for the circuit boards to be placed.

-Cutouts for the camera and ultrasonic sensors.

 

Once all the corrections were noted I took it back to Fusion, made the corrections in the assembly file, and exported the DXFs to be cut.

Iterations

I used the same parameters as before to cut the file but this time slightly reduced the tolerance.

Laser Cut

Since I wanted the entire product in black, I used black (matt) acrylic spray paint to coat the MDF.

 

Note: Use spray paint with necessary safety precautions and in a well-ventilated environment.

Spray Paint

The next task was to put all the parts together. There was slight difficulty in press-fitting again due to the varying thickness, but nothing that could not be fixed.

Assembly

By doing so my final form was ready.

 

3D printing

 

Certain parts of the design required 3D-printed components: the screen holder and the casings for the camera and ultrasonic sensor, to name a few.

 

To design for the camera and ultrasonic sensor, I first measured the exact dimensions of the units and drew them up in CAD.

Designing

From here on things became very simple. I made an outer casing and an inner bracket to hold the camera, with an opening on one side for the wire. The sensor design was inspired by one of the open-source designs.

Designing

I also made an attachment to screw them to the front panel with 2.5 mm screws. The DXF was then imported into Fusion and extruded.

Designing

I took it to Cura to slice and then printed it. The first print could not fit the components properly, so I had to adjust the parameters and print again. This time they all came out pretty well.

3D Printing

I spray painted these as well to give the device a uniform look.

3D printed Component

The glass clamp was designed in AutoCAD first, then extruded in Fusion.

Designing

The print came out quite nicely but was too small and sat only at the extreme corners.

3D Printing

So I decided to modify the design and print it at full length, but in two parts.

Designing

Printing them posed no difficulty.

3D Printing

I spray painted them and fixed them onto the device with 2.5mm screws.

3D Printed Component

This sums up all the design needed for my project.

 

 

Electronics

 

The basic concept of the electronics is this: when the user comes in front of the device, an ultrasonic sensor (input device) measures the distance to the user. When the condition is satisfied, an LED strip light is switched on (output device). The components required are:

 

-Modular board (to control all the input and output functions)

-Ultrasonic sensor

-LED strip light

-Relay (to switch the supply voltage for the LED)

-Power board (to power the device and to convert 12 V to 5 V)

 

Of these, I designed the modular board in week 12 as part of my output week. The process can be found here.

Soldered Board

Now coming to the power board: I used an LM1117 to convert 12 V to 5 V. I designed the circuit in Eagle.

Trace, Drill and Cut

The drill bit used in the process was a very bad one, and unfortunately we were not able to source any more bits. The milling was not all that great and came out quite rough. However, the board was usable. This is the final board.

Power Board
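One thing worth keeping in mind with a linear regulator like the LM1117 is heat: the full 12 V to 5 V drop is dissipated as power in the regulator. A quick back-of-the-envelope check (the 300 mA load current here is an assumed figure for illustration, not a measured value from my board):

```python
def linear_regulator_dissipation(v_in, v_out, load_current_a):
    """Power dissipated by a linear regulator: P = (Vin - Vout) * I.

    The dropped voltage times the load current all turns into heat
    in the regulator package.
    """
    return (v_in - v_out) * load_current_a

# Assumed 300 mA load on the 5 V rail: 7 V drop * 0.3 A is about 2.1 W,
# enough that the LM1117 wants generous copper area or a heatsink.
p_watts = linear_regulator_dissipation(12.0, 5.0, 0.3)
```

This is why boards doing a large step-down at higher currents often switch to a buck converter instead; for the small load here, the linear regulator keeps the circuit simple.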

The pinout for the entire system is as follows.

Pin out

I tested the circuit by connecting all the components, and everything worked really well. The only thing that had to be calibrated was the distance sensor, to determine at what distance the LED was supposed to glow. The fixed distance is now 65 cm.

Code

// Pin numbers
const int trigPin = 11;
const int echoPin = 12;

// Variables
long duration;
int distance;
int led1 = 4;
int relay = 10;

void setup() {
  pinMode(trigPin, OUTPUT);  // Sets the trigPin as an output
  pinMode(echoPin, INPUT);   // Sets the echoPin as an input
  pinMode(led1, OUTPUT);
  pinMode(relay, OUTPUT);
  Serial.begin(9600);        // Starts the serial communication
}

int getDist() {
  int cm;

  // Send a clean 10-microsecond trigger pulse
  digitalWrite(trigPin, LOW);
  delayMicroseconds(5);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  // Read the signal from the sensor: a HIGH pulse whose
  // duration is the time (in microseconds) from the sending
  // of the ping to the reception of its echo off of an object.
  duration = pulseIn(echoPin, HIGH);

  // Convert the round-trip time into a one-way distance in cm
  // (sound travels roughly 29.1 microseconds per cm)
  cm = (duration / 2) / 29.1;
  delay(10);
  return cm;
}

void loop() {
  // Average five readings to smooth out sensor noise
  distance = (getDist() + getDist() + getDist() + getDist() + getDist()) / 5;
  Serial.println(distance);

  if (distance <= 65) {
    // User within 65 cm: LED on, relay energized (active-low module)
    digitalWrite(led1, HIGH);
    digitalWrite(relay, LOW);
  }
  else {
    digitalWrite(led1, LOW);
    digitalWrite(relay, HIGH);
  }
}

 

Integration

 

Now that all the systems are sorted, it's time to put everything together and test it. The following picture depicts all the components used in the project.

Parts

The first part is to attach the ultrasonic sensor and the camera.

Integration of electronics

The wires are routed along the sides and taken to the back through a channel specially made to clear the wire clutter. This makes the packaging better.

Final Packaging

I had made a sliding cover on top to conceal the product fully.

The led lights were stuck on the inner surface and on the bottom panel.

LED strip light at the bottom

The bottom panel holds all the circuits and the power board. The wires are well concealed there.

 

On integration, I noticed that the LED in the front was too bright, which disturbed the reflection on the acrylic. So I 3D printed a small cap in transparent PLA.

Printing in Flash Forge

I spray painted it on one side to match the tone of the device and double coated the top with tape to disperse the light.

LED cap

This is the comparison between the two lighting conditions.

LED cap

The bottom panel also lit up quite nicely.

 

Now the final part is to test the device and its functionality. The following video explains the making and the working of the device.

 What tasks have been completed, and what tasks remain?

The designing, electronics, and integration of systems have been done. The remaining tasks are to make it more aesthetically pleasing and to improve the packaging.

 

 What has worked? What hasn't?

The major thing that failed to work was the code for the pyramidal hologram, but I sorted that out by changing the form of the product, and everything so far seems to work fine.

 

What questions need to be resolved?

The coding and interfacing part of the project. Need to find a way to make it more efficient.

 

What will happen when?

11th June is the final cut-off for weekly assignments and also to complete the final project; 15th June is my presentation date. So once the basic information is pushed, I will continue to work on the project until the 14th to see how to make it more efficient.

 

 What have you learned?

Completing the project in such a short time was not easy. The main learning was how a minor change in one system affects all the other systems as well. Hence planning and understanding the project must be the first step. Once all the requirements are considered, it is then wise to start work on the individual systems. Also, as Neil always says, spiral development is a great way to go about developing such products.

Conclusion

 

I have almost completed my final project. The design, packaging, and all other aspects of the project are done. The areas that could use improvement are the aesthetics of the design, and the interface could be more refined. It was really crazy working on the final project, because integrating so many different processes was not easy at all. However, I shall continue to work on the project and try to make it a better product.

Files

 

All files can be downloaded from HERE

WEEK 20

This week is dedicated to the final project and its process. Neil spoke about different projects and how they were documented and presented. The time given for the final project is very limited, and this week integrates several weeks' work to form a product/project.