16. Interface and application programming¶
Visit the group website for more information. Click here.
Write an application that interfaces with an input and/or output device that you made
Task for Interface and Application Programming¶
- For my final project I need a dark theme: the pixel glow should be captured in a long-exposure shot, and that makes up my final project's output.
- To explain the project to non-technical people and newcomers, I try to make a visual with the Processing extension p5.js.
- For that I need to know image-processing and OpenCV concepts.
- So I decided to explore OpenCV and image processing; after researching the concept, I found the best project to learn it with: object detection.
Components I used:¶
- Raspberry Pi 3
- Camera
- Wireless Keyboard | Mouse
- Monitor
- Internet (to download the different modules)
Modules to install:¶
- Update Python
- Install pip
- Install OpenCV
- Install NumPy
- Other required packages come bundled with the packages above
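As a quick sanity check after installing, you can confirm that the modules import cleanly. A minimal sketch (the version numbers printed will depend on what you installed):

```python
# sanity check: confirm OpenCV and NumPy are installed and importable
import cv2
import numpy as np

print("OpenCV version:", cv2.__version__)
print("NumPy version:", np.__version__)
```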
Click here to try it yourself. The code is now open source: copy it! Try it! Download it!
Assignment Required Questions¶
- Input: for input I am using the Raspberry Pi. A powerful feature of the Raspberry Pi is the row of GPIO (general-purpose input/output) pins along the top edge of the board. A 40-pin GPIO header is found on all current Raspberry Pi boards (unpopulated on the Pi Zero and Pi Zero W). A minimal GPIO-read sketch follows this list.
- OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision. It was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products.
- Output: animation/visualization is the most important part, because animation gives life to the characters and elements, which helps communicate with the audience in an effective manner.
- In p5.js, createCanvas() creates a canvas element in the document and sets its dimensions in pixels; this method should be called only once, at the start of setup().
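To make the GPIO input idea concrete, here is a minimal sketch (separate from the detection project) that reads a push button wired between a GPIO pin and ground. It assumes the RPi.GPIO library that ships with Raspberry Pi OS; pin 17 is an arbitrary choice:

```python
# minimal sketch: read a push button on GPIO pin 17 (BCM numbering),
# wired between the pin and ground; uses the RPi.GPIO library that
# ships with Raspberry Pi OS (pin 17 is an arbitrary choice)
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        if GPIO.input(17) == GPIO.LOW:  # pulled to ground = pressed
            print("Button pressed")
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```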
What is the function of Raspberry Pi?¶
- The Raspberry Pi is a low cost, credit-card sized computer that plugs into a computer monitor or TV, and uses a standard keyboard and mouse. It is a capable little device that enables people of all ages to explore computing, and to learn how to program in languages like Scratch and Python.
What is the difference between Arduino and Raspberry Pi?¶
- Arduino is a microcontroller board, and it is not as powerful as a Raspberry Pi 3 single-board computer, but a microcontroller board can be great for quick setups. The Raspberry Pi 3 is designed to run an operating system, whereas the Arduino is not.
Why we use Raspberry Pi instead of Arduino?¶
- The Arduino is very easy to use but provides only a subset of the functionality of the Raspberry Pi. A Raspberry Pi is a general-purpose computer, usually running a Linux operating system, with the ability to run multiple programs; it is more complicated to use than an Arduino.
Object detection with deep learning and OpenCV¶
When combined, these methods can be used for super fast, real-time object detection on resource-constrained devices (including the Raspberry Pi, smartphones, etc.).

From there we'll discover how to use OpenCV's dnn module to load a pre-trained object detection network. This will enable us to pass input images through the network and obtain the output bounding box (x, y)-coordinates of each object in the image. Finally, we'll look at the results of applying the MobileNet Single Shot Detector to example input images.

When it comes to deep learning-based object detection, there are three primary object detection methods that you'll likely encounter:

- Faster R-CNNs (Girshick et al., 2015)
- You Only Look Once (YOLO) (Redmon and Farhadi, 2015)
- Single Shot Detectors (SSDs) (Liu et al., 2015)

Faster R-CNNs are likely the most "heard of" method for object detection using deep learning; however, the technique can be difficult to understand (especially for beginners in deep learning), hard to implement, and challenging to train.
Steps and Processes¶
In this section we will use the MobileNet SSD + deep neural network (dnn) module in OpenCV to build our object detector.

I would suggest using the "Downloads" code at the bottom of this blog post to download the source code + trained network + example images so you can test them on your machine.

Let's go ahead and get started building our deep learning object detector using OpenCV. Open up a new file, name it deep_learning_object_detection.py, and insert the following code:
```python
# import the necessary packages
import numpy as np
import argparse
import cv2

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
	help="path to input image")
ap.add_argument("-p", "--prototxt", required=True,
	help="path to Caffe 'deploy' prototxt file")
ap.add_argument("-m", "--model", required=True,
	help="path to Caffe pre-trained model")
ap.add_argument("-c", "--confidence", type=float, default=0.2,
	help="minimum probability to filter weak detections")
args = vars(ap.parse_args())
```
We import the packages required for this script; the dnn module is included in cv2, again under the assumption that you're using OpenCV 3.3 or later.
Then, we parse our command line arguments:
- --image: the path to the input image.
- --prototxt: the path to the Caffe prototxt file.
- --model: the path to the pre-trained model.
- --confidence: the minimum probability threshold to filter weak detections. The default is 20%.
Again, example files for the first three arguments are included in the "Downloads" section of this blog post. I urge you to start there while also supplying some query images of your own.
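For example, once those files are downloaded, the script can be invoked like this (the image filename here is just a placeholder for one of your own query images):

python deep_learning_object_detection.py --image images/example.jpg --prototxt MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel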
Next, let's initialize class labels and bounding box colors:
```python
# initialize the list of class labels MobileNet SSD was trained to
# detect, then generate a set of bounding box colors for each class
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
	"bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
	"dog", "horse", "motorbike", "person", "pottedplant", "sheep",
	"sofa", "train", "tvmonitor"]
COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))
```
We build a list called CLASSES containing our labels, followed by an array, COLORS, which contains corresponding random colors for the bounding boxes.
Now we need to load our model:
```python
# load our serialized model from disk
print("[INFO] loading model...")
net = cv2.dnn.readNetFromCaffe(args["prototxt"], args["model"])
```
The above lines are self-explanatory: we simply print a status message and load our model from disk.
Next, we will load our query image and prepare our blob, which we will feed forward through the network:
```python
# load the input image and construct an input blob for the image
# by resizing to a fixed 300x300 pixels and then normalizing it
# (note: normalization is done via the authors of the MobileNet SSD
# implementation)
image = cv2.imread(args["image"])
(h, w) = image.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 0.007843,
	(300, 300), 127.5)
```
Taking note of the comment in this block, we load our image, extract the height and width, and calculate a 300 by 300 pixel blob from our image.
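To make the blob's layout concrete, you can print its shape: cv2.dnn.blobFromImage returns a 4-D NCHW array, so with the fixed 300x300 resize it looks like this:

```python
# the blob is a 4-D NCHW array: (batch, channels, height, width)
print(blob.shape)  # (1, 3, 300, 300)
```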
Now we’re ready to do the heavy lifting — we’ll pass this blob through the neural network:
```python
# pass the blob through the network and obtain the detections and
# predictions
print("[INFO] computing object detections...")
net.setInput(blob)
detections = net.forward()
```
We set the input to the network and compute the forward pass for the input, storing the result as detections. Computing the forward pass and associated detections could take a while depending on your model and input size, but for this example it will be relatively quick on most CPUs.
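If you want to inspect what came back, the detections array from OpenCV's SSD output is laid out as follows (a quick sketch; N is the number of candidate detections):

```python
# `detections` has shape (1, 1, N, 7); each of the N rows holds
# [batch_id, class_id, confidence, left, top, right, bottom],
# with the box corners normalized to the range [0, 1]
print(detections.shape)
print(detections[0, 0, 0])  # one raw detection row
```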
Let’s loop through our detections and determine what and where the objects are in the image:
```python
# loop over the detections
for i in np.arange(0, detections.shape[2]):
	# extract the confidence (i.e., probability) associated with the
	# prediction
	confidence = detections[0, 0, i, 2]

	# filter out weak detections by ensuring the `confidence` is
	# greater than the minimum confidence
	if confidence > args["confidence"]:
		# extract the index of the class label from the `detections`,
		# then compute the (x, y)-coordinates of the bounding box for
		# the object
		idx = int(detections[0, 0, i, 1])
		box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
		(startX, startY, endX, endY) = box.astype("int")

		# display the prediction
		label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
		print("[INFO] {}".format(label))
		cv2.rectangle(image, (startX, startY), (endX, endY),
			COLORS[idx], 2)
		y = startY - 15 if startY - 15 > 15 else startY + 15
		cv2.putText(image, label, (startX, y),
			cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
```
We start by looping over our detections, keeping in mind that multiple objects can be detected in a single image. We also apply a check to the confidence (i.e., probability) associated with each detection. If the confidence is high enough (i.e. above the threshold), then we’ll display the prediction in the terminal as well as draw the prediction on the image with text and a colored bounding box. Let’s break it down line-by-line:
Looping through our detections, we first extract the confidence value.

If the confidence is above our minimum threshold, we extract the class label index and compute the bounding box around the detected object.

Then we extract the (x, y)-coordinates of the box, which we will use shortly for drawing a rectangle and displaying text.

Next, we build a text label containing the CLASS name and the confidence.

Using the label, we print it to the terminal and draw a colored rectangle around the object using our previously extracted (x, y)-coordinates.

In general, we want the label to be displayed above the rectangle, but if there isn't room, we'll display it just below the top of the rectangle.

Finally, we overlay the colored text onto the image using the y-value that we just calculated.
The only remaining step is to display the result:
```python
# show the output image
cv2.imshow("Output", image)
cv2.waitKey(0)
```
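One caveat: cv2.imshow needs a display, so if you are running the Pi headless (e.g. over SSH without VNC), a simple alternative is to write the annotated image to disk instead (the filename here is arbitrary):

```python
# headless alternative: save the annotated image instead of showing it
cv2.imwrite("output.jpg", image)
```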
Reference: PyImageSearch
For an instant run¶
obj_detect-master file: after downloading it, follow the steps below.
Steps to follow¶
- Download the file given above
- Install:
  - Python 2.7 (recommended)
  - Python 3 (optional)
  - pip
  - OpenCV 2
  - NumPy
- Set up the environment
How to set up the environment:¶
Four steps:

- Install Python
- Install pip
- Install virtualenv
- Install virtualenvwrapper-win
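Once an environment is activated, you can verify you are actually inside it with a small Python check (a sketch that works for both virtualenv and the built-in venv):

```python
# check whether the current interpreter is inside a virtual environment
import sys

in_venv = hasattr(sys, "real_prefix") or (
    getattr(sys, "base_prefix", sys.prefix) != sys.prefix)
print("virtualenv active:", in_venv)
```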
After the installation is complete, double-check to make sure you see Python in your PATH. You can find your path by opening Control Panel -> System and Security -> System -> Advanced System Settings -> Environment Variables -> selecting Path -> Edit.

Now you're looking at your Path. Be careful: if you accidentally delete from or add to the path, you may break other programs. You need to confirm that C:\Python27 and C:\Python27\Scripts are both part of your path.
To test that pip is installed, open a command prompt (Win+R -> 'cmd' -> Enter) and try 'pip help'.

Install virtualenv: now that you have pip installed and a command prompt open, installing virtualenv into your root Python installation is as easy as typing 'pip install virtualenv'.

Just as before, we'll use pip to install virtualenvwrapper-win: 'pip install virtualenvwrapper-win'.

Now you've got some work to do. Open up the command prompt and type 'workon fab' to activate the environment (this assumes an environment named fab was already created, e.g. with 'mkvirtualenv fab').
Time to run¶
- Open the downloaded folder
- Verify that all the needed things are installed (Python, OpenCV, ...)
- Open cmd
- Run the command given below
Open cmd and run this command¶
python real_time_object_detection.py --prototxt MobileNetSSD_deploy.prototxt.txt --model MobileNetSSD_deploy.caffemodel
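For reference, the core of a real-time script like this is just the static-image pipeline run once per camera frame. A minimal sketch, assuming the same net (and the same CLASSES/COLORS if you add the drawing loop) as in the walkthrough above; cv2.VideoCapture is used here, while the downloaded script itself may use a different camera wrapper:

```python
# minimal sketch of the real-time loop: run the SSD on each camera
# frame; assumes `net` has already been loaded as shown earlier
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # camera index 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    (h, w) = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
        0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()
    # ...the same per-detection filtering/drawing loop as above...
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```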
Outputs¶
a. You are using “pre-built code” for the RPi to perform Vision recognition¶
- Yes, I am using pre-built code by one of the contributors at PyImageSearch.
b. You run something off a handphone and the RPi recognises it¶
- Yes, the RPi recognizes the object classes pre-trained into the model (the 20 classes, plus background, listed in CLASSES above).
a. Explain where you obtained the code (from some location etc)¶
- We can obtain the model from the repo on GitHub published by its contributor, so we don't need to train the model again; by using the pre-trained model we can use its features by just changing some of the arguments passed.
b. How you interfaced your handphone to the RPi | What program you are running on your handphone¶
- I used the classic method of interfacing with the RPi: setting up a VNC server on the RPi and operating it with a VNC client (which is cross-platform) on my Android phone.
d. What happens to the program to recognise a “chair”, surely the RPi code is not smart enough to recognise objects if not taught.¶
- The RPi can only recognize the objects that the model has been trained on. Objects that were not taught to the model are ignored.
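This is easy to check against the CLASSES list from the script: a label the model was trained on can be reported, anything else never appears (a trivial sketch; "keyboard" is just an example of an untrained label):

```python
# the model only reports labels present in CLASSES; anything else
# is simply never detected
print("chair" in CLASSES)     # True  -> can be detected
print("keyboard" in CLASSES)  # False -> will never be reported
```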
Needed Answer¶
- Languages: p5.js, Python tutorial.
- Math: TensorFlow, TensorFlow.js.
- Performance: pi.py, numpi.py.
- Deploy: remote desktop.
Write an application¶
- To detect any object, we can train the model on that object and then detect it.
- Object detection can be scripted in the future to enable some automatic detection.
Advantages:

- First and foremost, OpenCV is available free of cost
- Since the OpenCV library is written in C/C++, it is quite fast
- Low RAM usage (approx. 60-70 MB)
- It is portable, as OpenCV can run on any device that can run C

Disadvantages:

- OpenCV does not provide the same ease of use when compared to MATLAB
- OpenCV has a FLANN library of its own, which causes conflict issues when you try to use the OpenCV library together with the PCL library
Must interface with a user¶
- The interface works like this: when the code runs, another window opens with the camera feed, and a terminal prompt is also open. It shows which objects are similar to those the model was trained on.