Object Detection with Raspberry Pi and ML

This project is an object detection program built on the Raspberry Pi. TensorFlow is used to train the neural network that identifies the objects.


Chirag M

Area of Interest

Computer Science


Monta Vista High School


Incoming Senior

Final Milestone

For my final milestone, I was able to get object detection running on the Coral Edge TPU, which allowed the continuous live stream to work. I loaded the model onto the Edge TPU, and the live stream ran at a consistent frame rate while identifying all the objects in the viewable frame.

What I did

I struggled with installing the packages on the Raspberry Pi because conflicting packages led me to use a virtual environment, an isolated environment with its own independent dependencies, so specific packages are available only where I need them. Once the environment was set up, I installed all the packages I needed and was able to run the model on the Edge TPU.
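A virtual environment can be created straight from Python's standard library. This is a minimal sketch of the idea; the directory name is just an example, and it is independent of the actual Coral packages:

```python
import os
import tempfile
import venv

# Create an isolated environment so project packages don't clash with
# system-wide ones (the "coral-env" name here is only an example).
env_dir = os.path.join(tempfile.mkdtemp(), "coral-env")
venv.EnvBuilder(with_pip=False).create(env_dir)

# The environment gets its own interpreter and site-packages directory.
print(os.path.isdir(env_dir))  # True
```

On the Pi itself this is usually done from the shell with `python3 -m venv coral-env` followed by `source coral-env/bin/activate`, after which `pip install` only affects that environment.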

How it works

The Edge TPU processes the object recognition much faster and runs independently of the live stream, so it acts as an accelerator that improves the processing ability of the object detection. The TPU is built to process tensor data, which is the kind of data the neural network uses for object detection.
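"Tensor" here just means a multi-dimensional array. For example, a camera frame and the batched input a detection model expects can be sketched with NumPy (the shapes below are illustrative, not the model's actual input size):

```python
import numpy as np

# A color camera frame is a rank-3 tensor: height x width x channels.
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Detection models typically expect a leading batch dimension (rank 4);
# this is the shape handed to the interpreter / Edge TPU delegate.
batch = np.expand_dims(frame, axis=0)
print(batch.shape)  # (1, 480, 640, 3)
```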

Google Coral USB Accelerator

Images for Smart Security Camera

Second Milestone

For my second milestone, I improved the frame rate of the live video feed by multithreading a method that reads each frame from the video feed and runs the object recognition model to recognize objects and draw boxes on the frame.

What I did

The object detection program multithreads the different processes to improve the FPS, which allows for a more effective live stream.

How it works

In the process of multithreading, the method that contains the object recognition runs in the background while the live video feed continues on the screen, which allows for higher frame rates. Additionally, I was able to process every frame of the video, which allows for a continuous and complete video.
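The pattern can be sketched in plain Python with a background reader thread and a queue; the frame source below is a stand-in for the camera, not the actual program:

```python
import queue
import threading
import time

def read_frames(out_q, n_frames=5):
    """Background reader: stands in for grabbing camera frames."""
    for i in range(n_frames):
        out_q.put(f"frame-{i}")
        time.sleep(0.01)          # simulate capture delay
    out_q.put(None)               # sentinel: stream ended

frames = queue.Queue(maxsize=2)
reader = threading.Thread(target=read_frames, args=(frames,), daemon=True)
reader.start()

shown = 0
while True:
    frame = frames.get()
    if frame is None:
        break
    shown += 1                    # stand-in for detection + drawing boxes
reader.join()
print(shown)  # 5
```

Because the reader never blocks on the display loop, the main thread can draw frames as fast as detection allows instead of waiting on camera I/O.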

Next Steps

My next goal is to implement the Google Coral USB Accelerator to increase the frame rate of my object detection.

First Milestone

For my first milestone, I created a working object recognition program using TensorFlow and OpenCV. I am able to view live video from a camera that identifies objects and draws a box around each one.

What I did

I had to install TensorFlow, an open-source machine learning platform that can train neural networks, such as the model that identifies objects; training the model took an extended amount of time. Then I installed OpenCV, which displays the objects and their detections on the screen. After that, I installed Protobuf, a mechanism for serializing structured data that TensorFlow also uses. Finally, with a camera I was able to identify objects with a box as well as the object's label and the percentage match. The frame rate of this program is around 1.1-1.3 FPS, which can be improved with a better model.

How it works

The object detection program identifies objects and places rectangles around them. Each detection also shows the object's name and the percentage match. The frame rate is shown in the top left corner of the program.
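The label drawn above each rectangle combines the class name and the detection score; a hypothetical helper for that formatting (not the program's actual code) might look like:

```python
def format_label(name, score):
    """Combine a class name and a 0-1 confidence score into a label."""
    return f"{name}: {int(score * 100)}%"

print(format_label("person", 0.87))  # person: 87%
```

In OpenCV this string is typically drawn onto the frame with `cv2.putText` just above the bounding rectangle.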

Next Steps

The next step is to research different ways to improve object detection by manipulating the different weights.

Starter Project

For the starter project, I created the Who Am I game, which is similar to the popular mobile game Heads Up. I used a breadboard as well as the RedBoard from SparkFun. In this game, the user holds the LCD and components to their head so they can't see the screen.

What I did

I connected jumper wires to create a circuit that powers the button and flows through the potentiometer, LCD screen, and speaker. A battery powers the Arduino. The first roadblock I faced was compiling the code onto the Arduino, which I solved by selecting the correct output target in the IDE. Then nothing was appearing on the screen, which I fixed by adjusting the potentiometer to reduce the resistance that was preventing anything from appearing on the screen or powering the speaker. Finally, my speaker was not working correctly; I fixed it by changing the wiring that was blocking current from flowing through the speaker.

How it works

Words are displayed on the screen, and the user has to guess each word with help from the people around them. Once they guess correctly, they press the button to move on to the next word. The user gets 15 seconds for each word, with a total of 25 words to guess. If the timer runs out on a word, a buzzer sounds and the game ends. If the user gets all 25 words in the set, they win and a buzzer sounds to signal the victory.
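The game rules above can be sketched in Python (the real sketch runs on the Arduino in C++; this is just the win/lose logic, with guess times supplied up front instead of a live timer):

```python
TIME_LIMIT = 15   # seconds allowed per word
NUM_WORDS = 25    # words in a set

def play(guess_times):
    """guess_times[i] = seconds the player took to guess word i."""
    for t in guess_times[:NUM_WORDS]:
        if t > TIME_LIMIT:
            return "lose"         # timer ran out: buzzer, game over
    return "win"                  # all 25 guessed in time: victory buzzer

print(play([5] * 25))                    # win
print(play([5] * 10 + [20] + [5] * 14))  # lose
```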

Next Steps

Now I must begin my main project by setting up the Raspberry Pi and downloading the correct software onto it.


BlueStamp Engineering