Emotion Recognition with Raspberry Pi
I will create a Smart Mirror with a facial recognition tool.
Area of Interest
Computer Science, Mechanical Engineering
Avon Old Farms School
My final milestone involved deploying my machine learning model on the Raspberry Pi and its camera through a Python program.
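A minimal sketch of what such a Python program on the Pi could look like: capture a frame with the camera, send it to the trained model, and pick the highest-confidence label. The endpoint URL, API key, and the exact response shape are placeholders of my own, not the author's actual code or the real Nanonets API details.

```python
def top_prediction(response_json):
    """Pick the highest-confidence label out of a prediction response.

    Assumes (hypothetically) a response shaped like:
        {"predictions": [{"label": "happiness", "probability": 0.83}, ...]}
    """
    preds = response_json["predictions"]
    best = max(preds, key=lambda p: p["probability"])
    return best["label"], best["probability"]


def classify_capture(image_path, model_url, api_key):
    """Upload a captured frame to the hosted model and return (label, confidence)."""
    import requests  # third-party; pip install requests

    with open(image_path, "rb") as f:
        resp = requests.post(model_url, auth=(api_key, ""), files={"file": f})
    return top_prediction(resp.json())


if __name__ == "__main__":
    # On the Pi, grab a frame with the camera first, e.g.:
    #   raspistill -o frame.jpg
    label, conf = classify_capture(
        "frame.jpg",
        model_url="https://example.invalid/your-model-endpoint",  # placeholder
        api_key="YOUR_API_KEY",  # placeholder
    )
    print(f"Detected {label} ({conf:.0%} confidence)")
```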
My second milestone came in expanding the smile recognition to look for other, more complex emotions. Instead of two classifications, the second iteration used six expressions: anger, disgust, fear, happiness, neutrality, and sadness. With the second iteration of the expression predictor, I needed a larger dataset to accurately cover all six classifications. For this, I used both stock images from Google and free databases online to collect and annotate 100-200 pictures of each expression. These pictures covered both genders and a wide range of races, ethnicities, and ages. With such a wide range of classifications, I found the accuracy was lower; even a machine struggled to recognize the nuances in each facial expression. While differentiating smiles and non-smiles came down to the shape of the mouth, more complex emotions required much subtler pattern recognition. For example, while sadness and disgust are very different emotions, I found that they shared similar facial expressions: a frown and a wrinkled nose could imply either one. As such, the machine often outputted mixed or low-confidence results.

In addition, I found the importance of thinking from the perspective of the machine. I needed to ensure that the only patterns the machine would try to recognize were within the face itself. I found that many of my annotated images of fear were in darker lighting, while pictures of joy were in brighter lighting. As a result, any particularly dark picture I fed to the machine was automatically classified as fear. I had to account for this by evening out the lighting across the dataset so that there was no bias between any of the classifications.
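One way to even out the lighting described above is to normalize every training image to a common mean brightness before uploading, so the model cannot use exposure as a shortcut for the label. This is a sketch of that idea with NumPy; the function name and target value are my own choices, not part of the original project.

```python
import numpy as np


def normalize_brightness(img, target_mean=128.0):
    """Shift a grayscale image (uint8 array) so its mean pixel value
    matches target_mean, reducing light/dark bias between classes."""
    shifted = img.astype(np.float64) + (target_mean - img.mean())
    return np.clip(shifted, 0, 255).astype(np.uint8)


# A dark "fear" photo and a bright "happiness" photo end up with the same
# overall brightness, so lighting no longer correlates with the label.
dark = np.full((64, 64), 40, dtype=np.uint8)     # underexposed photo
bright = np.full((64, 64), 215, dtype=np.uint8)  # overexposed photo
print(normalize_brightness(dark).mean(), normalize_brightness(bright).mean())
# → 128.0 128.0
```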
My first milestone came at the end of days of technical issues with Python. Originally, my plan was to integrate OpenCV into my smile recognition program. However, to use OpenCV, there were many dependencies that I had to download along the way, each of which had to be the correct version to work with OpenCV. In the end, I switched to Nanonets, an online machine learning tool that trains a model on pictures that are fed to it. Through this, I was able to create a smile recognition program with decent accuracy. The first step involved finding and annotating the datasets that would be used to train the machine. I used the Olivetti faces, a dataset of 400 smiling and non-smiling pictures, and labeled all of them as “smile” or “not_smile”. Then, I uploaded the pictures to Nanonets and tested the model. The model would recognize patterns among both the smiling and non-smiling pictures, and predict any picture given to it based on those patterns. Because I trained the machine with square, black-and-white images, it was not very accurate on colored pictures; with black-and-white pictures, though, the model was fairly accurate. In addition, I found that feeding more pictures to the model actually decreased its accuracy. This was likely because the new colored images confused the machine, contradicting some of the patterns it had already learned.
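The annotation step above amounts to sorting each labeled image into a per-class folder before upload. A small sketch of that workflow, assuming a folder-per-label layout; the function, file, and folder names are mine for illustration, not the project's actual script.

```python
import shutil
import tempfile
from pathlib import Path


def sort_into_classes(image_dir, labels, out_dir):
    """Copy each annotated image into a folder named after its label
    (e.g. out/smile/, out/not_smile/), a common layout for training uploads."""
    out_dir = Path(out_dir)
    for name, label in labels.items():
        dest = out_dir / label
        dest.mkdir(parents=True, exist_ok=True)
        shutil.copy(Path(image_dir) / name, dest / name)


if __name__ == "__main__":
    # Demo with placeholder files standing in for the 400 Olivetti faces.
    src = Path(tempfile.mkdtemp())
    for name in ("face_000.png", "face_001.png"):
        (src / name).touch()
    labels = {"face_000.png": "smile", "face_001.png": "not_smile"}
    out = Path(tempfile.mkdtemp())
    sort_into_classes(src, labels, out)
    print(sorted(p.relative_to(out).as_posix() for p in out.rglob("*.png")))
    # → ['not_smile/face_001.png', 'smile/face_000.png']
```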