Interface and Application Programming

For this week's assignment, I needed to write an application with a user interface. So I decided to use this opportunity to make the hologram video for my final project. The main goal is to take the feed from the webcam, understand the object, blur or remove the background, and rotate it in four directions to make a video for projecting on the hologram.

 

 

I am completely new to programming and was stuck not knowing what to choose. As everyone suggested, Processing seemed like the easiest option, but since my project required a lot of video analysis, I came across OpenCV (Open Source Computer Vision Library), which would help me in the process. I found a lot of tutorials for it online, mostly using the Python language. So I started with my basic research on Python.

 

 

Assignment:

Individual

- Write an application that interfaces with an input &/or output device that you made

 

Group

- Compare as many tool options as possible

 

Software:

-Python

-Atom Editor

 

Materials:

-WebCam

 

Accomplished

 

-Learned the basics of a language

-Understood the syntax and functioning of the language

-Wrote a program to process a video

-Wrote a program to suit my final project needs.

 

 


What is Python?

 

Python is a powerful high-level, object-oriented programming language created by Guido van Rossum.

 

It has a simple, easy-to-use syntax, making it the perfect language for someone trying to learn computer programming for the first time. Python is a general-purpose language. It has a wide range of applications, from web development (Django, Bottle) and scientific and mathematical computing (Orange, SymPy, NumPy) to desktop graphical user interfaces (Pygame, Panda3D).

 

The syntax of the language is clean and the length of the code is relatively short. It's fun to work in Python because it allows you to think about the problem rather than focusing on the syntax.

Why Python?

 

 

The following are some of the reasons to use Python:

 

1. Simple, Elegant Syntax

Programming in Python is fun. It's easier to understand and write Python code. Why? The syntax feels natural. Take this source code as an example:

a = 2

b = 3

sum = a + b

print(sum)

Even if you have never programmed before, you can easily guess that this program adds two numbers and prints the sum.

 

2. Not overly strict

You don't need to declare the type of a variable in Python, and it's not necessary to add a semicolon at the end of a statement.

Python enforces good practices (like proper indentation). These small things can make learning much easier for beginners.
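For example, the same variable name can be rebound to a different type, with no declarations or semicolons needed:

x = 5          # x holds an integer; no type declaration
x = "five"     # rebinding x to a string is perfectly legal
print(x)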

 

3. Expressiveness of the language

Python allows you to write programs with greater functionality in fewer lines of code. Here's a link to the source code of a Tic-tac-toe game with a graphical interface and a smart computer opponent in less than 500 lines of code. This is just an example; you will be amazed how much you can do with Python once you learn the basics.

 

 

4. Great Community and Support

Python has a large supporting community. There are numerous active forums online which can be handy if you are stuck. Some of them are:

Learn Python subreddit

Google Forum for Python

Python Questions - Stack Overflow

Stack Overflow boiled it down to one main reason: the rise of Python can be connected to the rise of interest in data science. Additionally, Python has become a go-to language for data analysis. With data-focused libraries like pandas, NumPy, and matplotlib, anyone familiar with Python’s syntax and rules can deploy it as a powerful tool to process, manipulate, and visualize data.

 

As a beginner, I did not understand much of it, but it seemed like a good language to learn and start with. Furthermore, the internet is filled with tutorials for beginners to learn Python.

 

Downloading and Installing Python

 

The following link leads to the Python download page.

https://www.python.org/downloads/

But once on the download page, you are faced with another question: Python 2.x or 3.x? I didn't know it mattered much. As usual, I thought the bigger the version number, the more the improvements. In the case of Python, that wasn't true. There is a whole debate on the internet regarding this matter; here is a link to research done by a beginner like me, and also a video by an expert with his reasons.

https://learntocodewith.me/programming/python/python-2-vs-python-3/

https://www.youtube.com/watch?v=oVp1vrfL_w4

From all the research and arguments, what stuck with me was: "Python 2.x is legacy, Python 3.x is the present and future of the language." So I decided to use Python 3. I still don't fully understand how the choice will affect me, but it seems like the better option.

 

From there, installing the application was straightforward. Opening the .exe file brings up the installation dialogue box.

Using Python

 

IDE

An integrated development environment (IDE) is a software suite that consolidates the basic tools developers need to write and test software. Typically, an IDE contains a code editor, a compiler or interpreter and a debugger that the developer accesses through a single graphical user interface (GUI). An IDE may be a standalone application, or it may be included as part of one or more existing and compatible applications.

 

There are several IDE platforms with different features; some come with a GUI and others don't. The following link gives an overview of available IDEs:

https://wiki.python.org/moin/IntegratedDevelopmentEnvironments

Python comes with IDLE, which is quite boring.

 

I wanted to go with something that had a nicer GUI, so I went with Atom.

https://atom.io/

Atom can be downloaded from the above link. Installing Atom is quite simple, but Atom does not come with a way to run and debug programs out of the box. A package for that has to be installed manually.

 

Go to File > Settings

Opening Atom

Click on the Install tab and type "script" in the search box

Settings

Click on install

Install

Once installed, the script package can be accessed from the Packages menu. The shortcut to run a script is Ctrl+Shift+B.

Script

Learning to Code

 

I found a really good tutorial series on YouTube for beginners. The videos explain all about Python: basics, operations, and functions.

Furthermore, the following link accompanies the same videos with sample code that can be tried out:

https://pythonprogramming.net/beginner-python-programming-tutorials/

I went through some of the tutorials to understand the basics of Python. It is not possible to learn an entire language first and only then start coding the hologram video. It was actually quite difficult to follow, because the whole process of learning a language was new to me. I understood as much as I could and then shifted to exploring OpenCV.

OpenCV

 

What is OpenCV?

OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision. In simple language, it is a library used for image processing; it is mainly used for all operations related to images. OpenCV-Python is a library of Python bindings designed to solve computer vision problems. OpenCV-Python makes use of NumPy, a highly optimized library for numerical operations with a MATLAB-style syntax. All OpenCV array structures are converted to and from NumPy arrays. This also makes it easier to integrate with other libraries that use NumPy, such as SciPy and Matplotlib.

 

What it can do:

      1. Read and write images.

      2. Detection of faces and their features.

      3. Detection of shapes like circles, rectangles, etc. in an image, e.g. detecting coins in images.

      4. Text recognition in images, e.g. reading number plates.

      5. Modifying image quality and colors, e.g. Instagram, CamScanner.

      6. Developing augmented reality apps.

     and many more.....

 

Which languages it supports:

      1. C++

      2. Android SDK

      3. Java

      4. Python

      5. C

 

Source :

https://www.quora.com/What-is-openCV

Installing

 

Installing OpenCV was easy. You can download and install it from the following link:

https://opencv.org/releases.html

Once the .exe file is opened, the program self-extracts and installs all the necessary files.

 

As I mentioned earlier, OpenCV requires some libraries, such as NumPy and Matplotlib, to run, so installing them is necessary. Installing Python libraries is not the same as installing Windows applications; instead, a package manager called pip is used.

 

PIP

 

pip is a package management system used to install and manage software packages written in Python. Many packages can be found in the default source for packages and their dependencies, the Python Package Index (PyPI).

pip comes bundled with the Python 3 installer, so it is usually already available. To install a package, open the command prompt in Windows and run the following command:

pip install some-package-name

This will automatically download and install the named package along with its dependencies.

Installing PIP

In my case, pip prompted that a higher version of itself was available, so I updated it.
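If you see the same prompt, pip can upgrade itself with a single command (this is the standard invocation):

python -m pip install --upgrade pip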

Numpy

 

NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays.

To install NumPy, open the command prompt in Windows and run the following command:

pip install numpy

Installing Numpy

Matplotlib

 

Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK+. There is also a procedural "pylab" interface, based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged. SciPy makes use of Matplotlib.

To install Matplotlib, open the command prompt in Windows and run the following command:

pip install matplotlib

Installing Matplotlib

To check that all the libraries are loaded correctly, run python and type:

import cv2

import matplotlib

import numpy

If there are no errors then everything is installed correctly.
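To double-check which versions were picked up, you can also print them; each of these libraries exposes a standard version attribute:

import cv2, numpy, matplotlib
print(cv2.__version__)
print(numpy.__version__)
print(matplotlib.__version__)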

 

Understanding and Learning to Code

 

To learn OpenCV I followed these two tutorials and used their example code to build my own:

https://docs.opencv.org/3.4.1/d6/d00/tutorial_py_root.html

 This page explains what OpenCV is capable of, its features, and how to code them, with examples.

https://pythonprogramming.net/loading-images-python-opencv-tutorial/

This page provides videos and code examples of what can be done. The explanations were quite simple; however, it was not easy at first to understand what was happening. It took some time to comprehend the syntax.

 

 

Importing image

 

The first piece of code I learnt was how to import an image. The code is as follows; I'll break it down to explain it.

import cv2

import numpy as np

from matplotlib import pyplot as plt

These lines import the OpenCV, Matplotlib, and NumPy libraries into the program and also set the names by which they will be called in the course of the program.

 

It is common to convert an image to grayscale before performing analysis; it takes less processing time and memory.

img = cv2.imread('sunset.jpg',cv2.IMREAD_GRAYSCALE)

This line reads the specified image, in this case 'sunset.jpg', and converts it to grayscale.

cv2.imshow('image',img)

cv2.waitKey(0)

cv2.destroyAllWindows()

This shows the image that was read and closes the preview window when any key is pressed.
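Since pyplot is imported above but never used, here is the alternative way to show the same image with Matplotlib, a small sketch (note that for colour images you would first have to convert from OpenCV's BGR order to RGB):

plt.imshow(img, cmap='gray')    # img is the grayscale array read above
plt.xticks([]), plt.yticks([])  # hide the axis tick marks
plt.show()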

Result

Importing Videos:

 

The next part is to import video into Python. Video files are read much like image files. Since a video is made up of frames, OpenCV reads each frame as an individual picture and processes it. However, the code is slightly different.

Now let's break down the code

import numpy as np

import cv2

cap = cv2.VideoCapture(0)

First, we import numpy and cv2, nothing fancy there. Next, we say cap = cv2.VideoCapture(0). This returns video from the first webcam on your computer. The value 0 denotes which device to use: the built-in laptop cam is 0, and any externally connected cameras will be 1, 2, and so on.

while(True):

    ret, frame = cap.read()

This code initiates an infinite loop (to be broken later by a break statement) in which ret and frame are defined as the result of cap.read(). Basically, ret is a boolean indicating whether or not a frame was returned at all, and frame is the frame itself. If there is no frame, you won't get an error; you will get None.

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

Here, we define a new variable, gray, as the frame, converted to gray. Notice this says BGR2GRAY. It is important to note that OpenCV reads colors as BGR (Blue Green Red), where most computer applications read as RGB (Red Green Blue). Remember this.

    cv2.imshow('frame', gray)

Notice that, despite being a video stream, we still use imshow. Here, we're showing the converted-to-gray feed. If you wish to show both at the same time, you can do imshow for the original frame, and imshow for the gray and two windows will appear.

    if cv2.waitKey(1) & 0xFF == ord('q'):

        break

This statement runs once per frame. Basically, if we get a key press and that key is q, we exit the while loop with a break, which then runs:

cap.release()

cv2.destroyAllWindows()

This releases the webcam, then closes all of the imshow() windows.
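Putting the fragments together, the whole grayscale-webcam program is quite short; this is the assembled version of the pieces explained above:

import cv2

cap = cv2.VideoCapture(0)                # 0 = default laptop webcam
while True:
    ret, frame = cap.read()              # ret is False if no frame arrived
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()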

 

As mentioned earlier, the following tutorial is what I followed to learn all this; the documentation and the explanations are great.

Result

Background Subtraction

 

As explained, I want the program to detect the human, subtract the background, and rotate the video in four directions. Background subtraction can be done in multiple ways; the most common approach uses motion. OpenCV has several algorithms for this, such as:

BackgroundSubtractorMOG

BackgroundSubtractorMOG2

BackgroundSubtractorGMG

Each of these algorithms is based on a different research paper and was adapted into OpenCV for this purpose.
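For reference, this is how the three subtractors are created in OpenCV 3.x; MOG and GMG live in the extra cv2.bgsegm module, which ships with the opencv-contrib build, so this is a sketch rather than something I ran for all three:

fgbg_mog  = cv2.bgsegm.createBackgroundSubtractorMOG()  # needs opencv-contrib
fgbg_mog2 = cv2.createBackgroundSubtractorMOG2()        # in the main module
fgbg_gmg  = cv2.bgsegm.createBackgroundSubtractorGMG()  # needs opencv-contrib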

Actual Image

MOG

MOG2

GMG

I tried edge detection, and background subtraction by varying hue, saturation, and value (HSV).

 

Edge detection

 

The following link explains edge detection using the Canny edge detector:

https://docs.opencv.org/3.4.1/da/d22/tutorial_py_canny.html

Canny edge detection performs the following steps:

-Noise reduction

 

-Finding Intensity Gradient of the Image

-Non-maximum Suppression

-Hysteresis Thresholding

 

OpenCV puts all the above into a single function, cv2.Canny().

To explain the code: all other parts remain the same as importing and displaying a video. The difference is here:

edge = cv2.Canny(frame, 100,200)

edge2 = cv2.Canny(frame, 100,100)

 

cv2.imshow('sobely', edge)

cv2.imshow('sobely2', edge2)

where the syntax is cv2.Canny(image, threshold1, threshold2); the two numbers are the lower and upper thresholds used for hysteresis.

So, to compare different thresholds, I produced two outputs. The one with the lower upper threshold shows more grain.
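The surrounding loop is the same webcam loop as before; assembled, the comparison looks like this:

import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    edge  = cv2.Canny(frame, 100, 200)   # higher upper threshold: cleaner edges
    edge2 = cv2.Canny(frame, 100, 100)   # lower upper threshold: more grain
    cv2.imshow('sobely', edge)
    cv2.imshow('sobely2', edge2)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()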

Result

BackgroundSubtractorMOG2

 

As explained already, MOG2 is a Gaussian Mixture-based background/foreground segmentation algorithm. The code is very simple.

We first create the subtractor:

fgbg = cv2.createBackgroundSubtractorMOG2()

and then apply it to each frame to get a mask:

mask = fgbg.apply(frame)

To make things interesting I added this piece of code:

res = cv2.bitwise_and(frame, frame, mask = mask)

What it basically does is apply the mask to the actual frame, i.e. the original image. The result is that, alongside the mask output, I also get the mask applied to the real image.
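Assembled into a runnable loop (a sketch built from the fragments above):

import cv2

cap = cv2.VideoCapture(0)
fgbg = cv2.createBackgroundSubtractorMOG2()
while True:
    ret, frame = cap.read()
    if not ret:
        break
    mask = fgbg.apply(frame)                        # foreground mask
    res = cv2.bitwise_and(frame, frame, mask=mask)  # mask applied to the original frame
    cv2.imshow('mask', mask)
    cv2.imshow('res', res)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()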

Background Subtraction through HSV

 

 

To work on background subtraction by this method I used the following tutorial and code, modified it to suit my needs.

http://pysource.com/2017/06/02/tutorial-remove-background-opencv-3-2-with-python-3/


Apart from using HSV to isolate the subject from the background, slider bars have been introduced to adjust the thresholds. Here is the breakdown of the code.

import cv2

import numpy as np

 

cap = cv2.VideoCapture(0)

 

panel = np.zeros([100,700,3],np.uint8)

cv2.namedWindow('panel')

This part is the same as before, but introduces a panel, created with NumPy, where the sliders will sit.

def nothing(x):

    return x

cv2.createTrackbar('L - h', 'panel', 0,179,nothing)

cv2.createTrackbar('U - h', 'panel', 179,179,nothing)

 

cv2.createTrackbar('L - s', 'panel', 0,255,nothing)

cv2.createTrackbar('U - s', 'panel', 255,255,nothing)

 

 

cv2.createTrackbar('L - v', 'panel', 0,255,nothing)

cv2.createTrackbar('U - v', 'panel', 255,255,nothing)

Here I created the trackbars using cv2.createTrackbar(). The parameters are: the label displayed (in the first case, 'L - h' for the lower bound of hue), the window it belongs to ('panel'), the initial value, the maximum value (179 for hue), and a callback function. The same is repeated for the saturation and value channels.

while True:

 

    ret, frame = cap.read()

 

    crop = frame[0:640,0:860]

 

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

 

 

    l_h = cv2.getTrackbarPos('L - h', 'panel')

    u_h = cv2.getTrackbarPos('U - h', 'panel')

    l_s = cv2.getTrackbarPos('L - s', 'panel')

    u_s = cv2.getTrackbarPos('U - s', 'panel')

    l_v = cv2.getTrackbarPos('L - v', 'panel')

    u_v = cv2.getTrackbarPos('U - v', 'panel')

 

    lower_red = np.array([l_h,l_s,l_v])

    upper_red = np.array([u_h,u_s,u_v])

 

 

    mask = cv2.inRange(hsv, lower_red, upper_red)

    mask_inv = cv2.bitwise_not(mask)

 

    bg = cv2.bitwise_and(crop, crop, mask=mask)

    fg = cv2.bitwise_and(crop, crop, mask=mask_inv)

Here we are reading the video data and converting the BGR values (in OpenCV, RGB is read as BGR) into HSV values. Then we ask the program to obtain the bounds from the sliders that were created. The cv2.bitwise_and command calculates the per-element bit-wise conjunction of two arrays, or of an array and a scalar. The syntax is:

cv2.bitwise_and(src1, src2[, dst[, mask]]) → dst

    cv2.imshow('bg', bg)

    cv2.imshow('fg', fg)

    cv2.imshow('panel',panel)

 

 

 

    if cv2.waitKey(1) & 0xFF == ord('q'):

        break

 

cap.release()

cv2.destroyAllWindows()

The last part of the code displays the results as videos, displays the panel, and releases the camera when the wait key is pressed, which in this case is 'q'.

 

Of all the methods compared, I think HSV is the best, as I got the cleanest output. The only issue is setting the right threshold. Once the lighting conditions are finalized and fixed, I can work on this program to improve its subtraction capabilities.
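Once the lighting is fixed, the trackbars can be replaced with hard-coded bounds inside the loop; the numbers below are placeholders to be tuned for the actual setup, not values I have finalized:

lower = np.array([35, 60, 60])     # hypothetical lower HSV bound
upper = np.array([85, 255, 255])   # hypothetical upper HSV bound
mask = cv2.inRange(hsv, lower, upper)
fg = cv2.bitwise_and(crop, crop, mask=cv2.bitwise_not(mask))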

Video Manipulation

 

This part was quite tricky; as a beginner to Python, there was no proper video or tutorial explaining how to rotate videos. After googling for a long time, the only thing I was certain of was to use cv2.getRotationMatrix2D() for this kind of transformation. The functions in this section perform various geometrical transformations of 2D images: they do not change the image content but deform the pixel grid and map this deformed grid to the destination image. In fact, to avoid sampling artifacts, the mapping is done in reverse order, from destination to source.

 

The explanation for this function can be found in the following link

https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html?highlight=warpaffine
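The basic recipe there is to build a 2x3 rotation matrix and warp the image with it; for a still image, the documented usage boils down to this sketch (reusing the earlier test image):

import cv2

img = cv2.imread('sunset.jpg')
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D((w / 2, h / 2), 90, 1.0)  # centre, angle in degrees, scale
rotated = cv2.warpAffine(img, M, (w, h))
cv2.imshow('rotated', rotated)
cv2.waitKey(0)
cv2.destroyAllWindows()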

But the examples provided were for 2D images, not videos. I tried to modify those codes to suit my needs but was unsuccessful. Then I came across this file on GitHub, a video generator for holograms, and started modifying the code.

https://github.com/krzysztof-trzepla/hologram

Understanding the code.

SIZE = 600

 

cap = cv2.VideoCapture(0)

cap.set(cv2.CAP_PROP_FRAME_WIDTH, SIZE)

cap.set(cv2.CAP_PROP_FRAME_HEIGHT, SIZE / 2)

 

img = np.zeros((SIZE, SIZE, 3), dtype=np.uint8)

SIZE denotes the size of the video frame. cap.set() is used to change certain properties of the video stream, in this case the height and width of the video.

 

CV_CAP_PROP_FRAME_WIDTH Width of the frames in the video stream.

CV_CAP_PROP_FRAME_HEIGHT Height of the frames in the video stream.

 


 

np.zeros(shape, dtype=None, order='C'): returns a new array of the given shape and type, filled with zeros.

    ret, frame = cap.read()

    if ret:

        for i in range(SIZE // 2):

            row = frame[i, i:SIZE - i - 1]

            rowFlipped = cv2.flip(row, 0)

            np.copyto(img[i, i:SIZE - i - 1], row)

            np.copyto(img[i:SIZE - i - 1, i], rowFlipped)

            np.copyto(img[SIZE - i - 1, i:SIZE - i - 1], rowFlipped)

            np.copyto(img[i:SIZE - i - 1, SIZE - i - 1], row)

Here i iterates over the range SIZE // 2. For each i, a row of the frame is taken and a flipped copy of it is made. np.copyto() copies values from one array to another, broadcasting as necessary. So the loop copies the video stream to the four sides of the square image, rotating it at the same time.
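To see np.copyto() in isolation, here is a toy example separate from the hologram code:

import numpy as np

a = np.zeros(3, dtype=int)
np.copyto(a, [1, 2, 3])  # copies the values into the existing array a
print(a)                 # prints [1 2 3]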

        cv2.imshow('frame', img)

 

        if cv2.waitKey(1) & 0xFF == ord('q'):

            break

    else:

        break

 

cap.release()

 

cv2.destroyAllWindows()

This displays the result; the 'q' key terminates the operation.

Result

I was successful in doing the two different processes; the next step is to combine the two to give a final output. I will be doing that in the following weeks.

 

Meanwhile, having fun with the manipulated video.. =D

Building an App

1. Choosing a platform for the app

2. A Gmail account is a must to use this app

3. The next step is to create the app itself

Once the app opens up, this is the screen that you are greeted with.

Thunkable

Thunkable - UI

To explain the user interface of the app (refer to the image above):

 

1. The Palette:

This is where all the blocks are. The blocks are categorized by function: user interface, connectivity, sensors, etc. To use them, one simply drags and drops them onto the screen.

 

2. Screen:

This is an imitation of the phone screen itself. The user interface blocks placed here are what make up the app; the look of the app is arranged here.

 

3. Components:

All the blocks used in the making of the app are displayed here in the components section. If they are nested, that is also shown.

 

4. Properties:

The properties of each block can be controlled from here. The controls include color, size, orientation, images, etc.

 

5. Menu bar:

The App menu provides options for saving the app, creating a new app, etc., while the Test menu provides real-time testing options. The Export option exports the app as an APK or provides a QR code for downloading the app. The Help section links to the docs.

 

6. UI and Programming:

This bar switches between designing the app and programming how it works. The Blocks view shifts to a new window where the functioning of the app is configured.

Thunkable VS MIT

 This is a comparison between the UIs of MIT App Inventor and Thunkable. I found Thunkable's to be cleaner.

So now to actually make the app.

First, I wanted to change the color of the main screen. I did this with the option available in the Properties tab.

Screen

Now, to arrange icons on the screen: the app does not have a grid system. Instead, the palette has layout components that have to be configured, and anything placed within a layout can be arranged. So I made a horizontal layout and placed a list picker inside it for connecting Bluetooth.

Layout

Then I replaced the list picker with a custom image that I downloaded from

https://www.flaticon.com/

Making the App

Then I made another horizontal layout to show whether Bluetooth is connected or not, and put a label block in it. I added another layout and put an image there, just as an icon for the LED array I had designed. Next, I placed the buttons for the array: I placed a button, replaced it with a custom image, and spaced the buttons out with intermediate spacers. Once the first row seemed okay, I repeated the same for five rows.

There are also non-visible components; sensors and connectivity come under that category. They are part of the app but do not have a visible block or icon on the screen. Since my app uses Bluetooth, I have the Bluetooth client and a clock as non-visible components.

Non-Visible Components

The spacing, heights, and widths may not always be correct on the first try. I arrived at this layout by trial and error; it also depends on what one wants and how one intends to design it.

 Now that my user interface was done, I shifted to the blocks area to do the actual "programming" of the app.

 

This is the block screen

Block Editor

Block Editor

To explain the block window:

 

1. The Built-in section gives more options for content and conditions, and the Screen section lists all the components. The blocks are again basically dragged and dropped.

 

2. The screen is where the blocks are placed and the logic is built.

 

3. Any overlaps or warnings that occur while building the blocks are displayed here.

 

4. This provides options to switch between screens (if there is more than one) and to add or delete screens.

Block Editor

Placing block

First I started with making the Bluetooth connection. I couldn't understand the Bluetooth blocks myself, so I used a few references to educate myself:

https://www.youtube.com/watch?v=JQ3tDhpmSFE&t=314s

https://www.youtube.com/watch?v=evVRCL9-TWs

http://fab.academany.org/2018/labs/fablabcept/students/tanvir-khorajiya/week-13.html

https://appinventor.pevest.com/?p=520

Bluetooth Connection

So, to explain what I've done: I chose the list picker and set two behaviours on it, what it must do before a client is picked and what it must do after a client is picked. The first part: when the list picker is clicked, it shows all the available Bluetooth connections with their names and addresses.

 

Then, once the client is selected, the label displays "CONNECTED" in green; if the connection failed, it displays "NOT CONNECTED" in red. This establishes the connection with the Bluetooth device. Next, we have to configure how the received data should be handled.

Clock Configuration

Here we bring in the Clock component. By default, the clock fires every 1000 ms, which means it checks for incoming data at that interval. So, while Bluetooth is connected, we check on every tick whether any bytes are available to receive. If the number of available bytes is greater than 0, we read the data; in my case it indicates whether the LED is on or not, and the label displays "LED is on" or "LED is off" accordingly.

Button Configuration

Now to program the buttons. I wanted each button to send a character or a number when pressed, so that when the Arduino program reads what it received, it lights up that particular LED. So I made it such that when a button is pressed, a number is sent out, and I assigned a different number to each LED.

Button Configuration

This is how I made the app for controlling the LED array.

App & Charlieplexing

 

I used the app to control the LEDs on a board designed using charlieplexing. The design and making of the board can be accessed from here.

The following is the code I used to control the board via the Bluetooth application I made.

Code


 

#include <SoftwareSerial.h>

#define RxD 6

#define TxD 5

 

#define DEBUG_ENABLED  1

 

SoftwareSerial blueToothSerial(RxD,TxD);

#define LED_A 0

#define LED_B 1

#define LED_C 2

#define LED_D 3

#define LED_E 4

 

void setup()

{

  pinMode(RxD, INPUT);

  pinMode(TxD, OUTPUT);

  pinMode(LED_A, INPUT);

  pinMode(LED_B, INPUT);

  pinMode(LED_C, INPUT);

  pinMode(LED_D, INPUT);

  pinMode(LED_E, INPUT);

  blueToothSerial.begin(9600);

  delay(2000);

  blueToothSerial.println("bluetooth connected!\n");

  delay(2000); // This delay is required.

  blueToothSerial.flush();

 

}

 

void loop()

{

 

  int recvChar;

  while(1)

  {

 

    if(blueToothSerial.available())

    {

      recvChar = blueToothSerial.read();

 

      if(recvChar == 1)        { set_pins(LED_B, LED_A); }
      else if(recvChar == 2)   { set_pins(LED_C, LED_A); }
      else if(recvChar == 3)   { set_pins(LED_D, LED_A); }
      else if(recvChar == 4)   { set_pins(LED_E, LED_A); }
      else if(recvChar == 5)   { set_pins(LED_A, LED_B); }
      else if(recvChar == 6)   { set_pins(LED_C, LED_B); }
      else if(recvChar == 7)   { set_pins(LED_D, LED_B); }
      else if(recvChar == 8)   { set_pins(LED_E, LED_B); }
      else if(recvChar == 9)   { set_pins(LED_A, LED_C); }
      else if(recvChar == 10)  { set_pins(LED_B, LED_C); }
      else if(recvChar == 11)  { set_pins(LED_D, LED_C); }
      else if(recvChar == 12)  { set_pins(LED_E, LED_C); }
      else if(recvChar == 13)  { set_pins(LED_A, LED_D); }
      else if(recvChar == 14)  { set_pins(LED_B, LED_D); }
      else if(recvChar == 15)  { set_pins(LED_C, LED_D); }
      else if(recvChar == 16)  { set_pins(LED_E, LED_D); }
      else if(recvChar == 17)  { set_pins(LED_A, LED_E); }
      else if(recvChar == 18)  { set_pins(LED_B, LED_E); }
      else if(recvChar == 19)  { set_pins(LED_C, LED_E); }
      else if(recvChar == 20)  { set_pins(LED_D, LED_E); }
    }
  }

 

  }

void set_pins(int high_pin, int low_pin)

{

  // reset all the pins

  reset_pins();

 

  // set the high and low pins to output

  pinMode(high_pin, OUTPUT);

  pinMode(low_pin, OUTPUT);

 

  // set high pin to logic high, low to logic low

  digitalWrite(high_pin, HIGH);

  digitalWrite(low_pin,LOW);

}

 

 

void reset_pins()

{

  // start by ensuring all pins are at input and low

  pinMode(LED_A, INPUT);

  pinMode(LED_B, INPUT);

  pinMode(LED_C, INPUT);

  pinMode(LED_D, INPUT);

  pinMode(LED_E, INPUT);

 

  digitalWrite(LED_A, LOW);

  digitalWrite(LED_B, LOW);

  digitalWrite(LED_C, LOW);

  digitalWrite(LED_D, LOW);

  digitalWrite(LED_E, LOW);

}

 

 

Pinout

Connection

I connected my phone to the Bluetooth and I didn't have any problem.

Screenshot of App

And then I pressed each button and, YAY, the LED glowed. I tried each LED and it was successful!!!

Week 13 Group Work:

 

Here we were asked to compare as many tools as possible. I had used Thunkable to create my app, while the others in the lab used MIT App Inventor, so I was assigned to compare the two tools and see which one stands out. The comparison of the two apps and its documentation was my contribution to the group work. Instructions on how to use Thunkable can be viewed on my networking page, and the comparison of MIT App Inventor and Thunkable in the group assignment.

Conclusion

 

This was by far quite a challenging week. I struggled to get a handle on the new language. There are so many tutorials, but it was hard to grasp so much in such a short time, so I had to shift my focus to learning a very specific part of the language. I was quite successful in doing almost three-fourths of what's required for my final project, which is great. Coding is something I am interested in and haven't explored much yet; I will continue to explore this language.

Files

 

All files can be downloaded from HERE

WEEK 13

This week is all about programming. As I am totally new to programming, the lecture made little sense to me. Neil first spoke about different languages and how to communicate with device interfaces. Then he spoke about the functionality and performance of programs, and gave a brief introduction to cloud computing.