Smart Surveillance with People Detection
This is a Raspberry Pi surveillance project with facial recognition. Using OpenCV, a computer vision library, it takes pictures when it detects motion at the front gate. If it recognizes a face, it sends a text message letting the user know who is at their door.
Area of Interest
Lynbrook High School
My final project included integration with the Twilio API. When I run the code, the camera watches the front of the house for motion. Once it detects motion, it takes pictures and runs them through the object detection model to check for people. I used Twilio to manage the text messages over the cloud. If the model recognizes a family member, it sends a text message telling the user that person has arrived home safely; if it doesn't detect anyone it knows, it informs them of motion at their front door. Through this project I got a lot more familiar and confident with Python. Image processing with OpenCV was definitely my favorite part; even though it was difficult, I am proud of what I was able to accomplish with it. This was all a new experience, and I learned a lot while making this project. My next steps are to further refine the model to be more accurate, to send the user the pictures taken by the Picamera, and to expand the number of people the model can recognize.
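The notification step can be sketched with Twilio's Python helper library. The credentials, phone numbers, and the `compose_alert` helper below are illustrative placeholders, not the exact code from the project:

```python
def compose_alert(name=None):
    """Build the SMS body: a named greeting if a family member was
    recognized, otherwise a generic motion alert."""
    if name:
        return f"{name} has arrived home safely."
    return "Motion detected at your front door."


# Placeholder values from the Twilio console (assumptions).
ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
AUTH_TOKEN = "your_auth_token"


def send_alert(name=None):
    """Send the alert over Twilio (requires `pip install twilio`
    and real credentials)."""
    from twilio.rest import Client  # imported here so the rest works without it

    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    client.messages.create(
        body=compose_alert(name),
        from_="+15550000000",  # your Twilio number (placeholder)
        to="+15551111111",     # the user's phone (placeholder)
    )
```

The message body is built separately from the sending code so the logic that decides between the two alerts can be tested without a network connection.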
My second milestone was getting the Picamera to detect motion with OpenCV and to recognize members of my family with Nanonets. OpenCV is a computer vision library with a lot of dependencies; my Raspberry Pi didn't have enough swap space to compile it, so I had to increase the swap size temporarily. I am really proud of what I was able to accomplish with it. To have it recognize members of my family, I had to create a custom object detection model with Nanonets, which meant uploading and individually annotating pictures of my family, a very time-consuming process. Now when I run the code, it takes a picture whenever it detects motion and checks the image for anyone in my family.
The first milestone in my project was having the Raspberry Pi identify objects from a pre-trained sample model through the camera. I began by flashing Raspberry Pi OS onto a microSD card with the Raspberry Pi Imager and booting the Raspberry Pi from it. Then I set up the model through Nanonets. The Nanonets documentation was confusing and a bit difficult to understand at first, since I hadn't really worked with Python or a Raspberry Pi before. Eventually I got the hang of it and adjusted the sample code so it could recognize objects through the camera, not just pictures off the internet. In the end, the code was able to draw a bounding box around each object and predict it quite accurately with the provided model.
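Querying a Nanonets model from Python looks roughly like the sketch below. The endpoint URL, the response layout, and the `MODEL_ID`/`API_KEY` values are assumptions based on Nanonets' REST API, so treat them as placeholders:

```python
import requests

MODEL_ID = "your-model-id"  # placeholder
API_KEY = "your-api-key"    # placeholder


def predict(image_path):
    """POST an image to the Nanonets object detection endpoint (assumed URL)."""
    url = (f"https://app.nanonets.com/api/v2/ObjectDetection/"
           f"Model/{MODEL_ID}/LabelFile/")
    with open(image_path, "rb") as f:
        resp = requests.post(url, auth=(API_KEY, ""), files={"file": f})
    return resp.json()


def confident_labels(response, min_score=0.5):
    """Pull labels of detections above a score threshold out of the
    assumed response layout: result -> prediction -> label/score."""
    labels = []
    for result in response.get("result", []):
        for box in result.get("prediction", []):
            if box.get("score", 0) >= min_score:
                labels.append(box["label"])
    return labels
```

Separating the HTTP call from the response parsing makes it easy to check which family members were detected in a picture before deciding which text message to send.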