Smile Recognition with a Raspberry Pi
My project detects whether a person is smiling in a video. It uses OpenCV, Python, and scikit-learn to train the computer to predict smiles based on classified face images. It takes the classifications and trains a model on them. Finally, it breaks the video down frame by frame and detects whether each frame shows a smile or not.
Area of Interest
Electrical Engineering, Machine Learning
Homestead High School
Overall, I am excited that I was able to do such a complicated project in a short amount of time. There were times during the project when I wasn’t enjoying myself or was getting frustrated, and I am proud that I was able to persevere through the challenges and figure the project out. AI will soon become the future, and I am excited to get a head start in understanding facial recognition and prediction. To others doing this project, I would recommend researching the lines of code in the tutorial more carefully: I had to reclassify the images multiple times because one line of code was overwriting the classification file. I would also recommend learning about the different editors and how far OpenCV and Python have progressed, since both have changed since the original tutorial was written. For example, button widgets can now only be used in Jupyter Notebook, which was a problem for me since I was using a script.
My third milestone was creating a program that detects smiles live. It records a video with the Raspberry Pi camera and tries to detect a smile in it. The first thing I had to do was find the proper stretch coefficients so that, when the live detection found a face, it would stretch the face to match the Olivetti faces. That was difficult because many coefficient choices worked; I picked the ones that mapped the means of my smiling and non-smiling faces as closely as possible before starting on the live loop. For the live loop, I used OpenCV to start capturing video, and another OpenCV feature to write the video to a file. I had to break the video down frame by frame so the model could predict whether I was smiling or not. I used Haar cascades to detect a face and then drew a rectangle around it, detected whether the person was smiling, and used a feature called putText to write “smiling” or “not smiling” above the rectangle. The program ends when the user presses the ‘q’ key. When I tried to run the program, I learned that Jupyter Notebook doesn’t support video, so the kernel would restart every time I ran it; I had to copy and paste the program into a Python script first. Running the smile recognition program, I found it was 50%–98% accurate depending on the video. Lighting was also very important: if the lighting was different, the program was far less accurate.
My second milestone was using OpenCV and Haar cascades to detect faces and smiles in images I supplied. The first thing I had to do was download Jupyter Notebook so that I could use widgets to classify the images from the sklearn package. I then wrote a trainer program to classify all of the images. It took a while to classify all 400 images, but once I finished I saved the labels to an XML file. I trained on the dataset using cross-validation and learned that my program was right about 80% of the time. I then downloaded OpenCV and tried to load my own image so the program could predict whether I was smiling or not. I ran into snags when the program couldn’t open my image file; I had to download the image onto my Raspberry Pi so the program could open it. Another snag was that the program couldn’t read the XML file for the Haar cascades. Once it could read that file, I was able to begin detecting whether I was smiling based on my own face. The first step was using Haar cascades to detect a face in the image I had put in. It detected my face even though I was wearing glasses, which surprised me. I then had to zoom into my face and resize it to 64×64 pixels so that it matched the Olivetti faces. Matching the Olivetti faces was critical because that was the only way the model could predict whether I was smiling. When I ran the prediction on my face, it correctly predicted that I was not smiling. I then took a picture of myself smiling, and it predicted that correctly as well.
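The training and cross-validation step above can be sketched like this. It is a simplified reconstruction: the linear SVM is an assumed model choice, and `labels.npy` stands in for the 400 hand-made smiling/not-smiling labels (the original stored them in an XML file).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def train_smile_classifier(X, y, folds=5):
    """Cross-validate and then fit a linear SVM on flattened 64x64 face images."""
    clf = SVC(kernel="linear")
    scores = cross_val_score(clf, X, y, cv=folds)   # accuracy on held-out folds
    clf.fit(X, y)                                   # final fit on all the data
    return clf, scores.mean()

if __name__ == "__main__":
    from sklearn.datasets import fetch_olivetti_faces
    faces = fetch_olivetti_faces()       # 400 Olivetti images, 64x64, in [0, 1]
    # Placeholder for the hand-classified labels: 1 = smiling, 0 = not smiling.
    y = np.load("labels.npy")
    clf, acc = train_smile_classifier(faces.data, y)
    print(f"cross-validated accuracy: {acc:.2f}")   # about 0.80 in the write-up
```

The cross-validated accuracy is what gives the "right about 80% of the time" figure a fair basis: each fold is scored on images the model never saw during training.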
The first 10 images from the Olivetti faces dataset. These will be classified as smiling or not smiling.
My first milestone was assembling the Raspberry Pi and starting to use the sklearn dataset and matplotlib to display the images. On the first day, I set up the Raspberry Pi: I flashed the operating system onto an SD card, inserted the card into the Pi, and got the Pi to display on a monitor. I was then able to attach a camera to the Raspberry Pi and record videos with it. I started programming in Python using Visual Studio Code and ran into problems because the default language on the Pi was Python 2.7, while I was running Python 3.7. I set up SSH and, after some help, was eventually able to solve the problem. When I began coding I ran into problems again with matplotlib: I didn’t realize I had to install it separately. Once matplotlib worked, I realized that since I wasn’t using a notebook I had to use different syntax to get it to display anything. I also had to install scipy and sklearn to finally get the faces to show.
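The "different syntax" for matplotlib outside a notebook mostly comes down to the backend: over SSH there is no display, so saving the figure to a file works where `plt.show()` does not. A minimal sketch of displaying the first faces (the filename and grid layout are my own choices):

```python
import matplotlib
matplotlib.use("Agg")   # non-interactive backend for a script run over SSH
import matplotlib.pyplot as plt

def show_first_faces(images, path="olivetti_first10.png", n=10):
    """Save a grid of the first n face images; saving needs no display."""
    fig, axes = plt.subplots(2, (n + 1) // 2, figsize=(10, 4))
    for ax in axes.ravel():
        ax.axis("off")
    for ax, img in zip(axes.ravel(), images[:n]):
        ax.imshow(img, cmap="gray")
    fig.savefig(path)
    plt.close(fig)
    return path

if __name__ == "__main__":
    from sklearn.datasets import fetch_olivetti_faces
    faces = fetch_olivetti_faces()      # downloads the dataset on first run
    show_first_faces(faces.images)
```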