A.I. Smart Mirror
For my final product, I created an AI smart mirror powered by my computer. The mirror combines components of my base project (the weather API code) with knowledge-based code that uses NLU data. Building on my earlier software, I tackled some of the more challenging APIs, such as the upcoming-holidays API, the weather-forecasting API, and the maps API. In the future, I hope to implement a directions feature in which the mirror gives directions to locations using map APIs such as those available on the Google Maps platform. I also hope to integrate my code with the user's personal calendar so that the mirror can announce upcoming events. In AI, the possibilities are endless, and I'm looking forward to exploring them.
While trying to implement OpenCV, I explored many parts of my computer and dug deep into its file system. I traced different pathways and debugged the entire setup, gaining a real understanding of how these libraries work. After weeks of coding, I finally got it working!
Throughout this project, I learned so much about Raspberry Pis, how AI works, and how to use JSON files with APIs. I encountered problems every step of the way, whether it was debugging my code when the Electron application took over my computer or thinking I had broken my Raspberry Pi when one of the pins was bent. This experience taught me how to push through when met with challenging issues, and there were so many times when I had to work through a problem one error at a time.
Toward the beginning, I was able to add facial recognition using OpenCV so that the mirror automatically turns on when it detects a person in front of it. Overall, this project taught me so much about working with AI tools available online as well as developing my own. I gained the confidence to take on more challenging projects and to develop my smart mirror further after BlueStamp. It was such a rewarding experience, and I'm so grateful to all the teachers and my classmates!
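The wake-on-face behavior can be sketched roughly as below. The OpenCV detection loop is illustrative only (cascade choice, camera index, and the `should_wake` debounce helper are my assumptions, not the exact code in the project):

```python
# Sketch of the wake-on-face behavior: the display turns on only after
# a face has been seen in several consecutive frames. should_wake() is
# plain Python; the OpenCV loop below it is an illustrative assumption.

def should_wake(recent_detections, threshold=3):
    """Wake only after `threshold` consecutive frames contained a face,
    to avoid flickering the display on single-frame noise."""
    return (len(recent_detections) >= threshold
            and all(recent_detections[-threshold:]))

if __name__ == "__main__":
    # cv2 is imported here so the helper above works without OpenCV installed.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cam = cv2.VideoCapture(0)  # default webcam
    history = []
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        history.append(len(faces) > 0)
        if should_wake(history):
            print("face detected -- wake the mirror display")
            history.clear()
```

The debounce in `should_wake` is a design choice: Haar-cascade detection is noisy frame to frame, so requiring a short run of positive frames keeps the display from toggling constantly.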
For my second milestone, I set up the Raspberry Pi smart mirror by cutting the mirror sheet to the correct size and mounting it in front of the display. To keep the small cracks on the sides of the mirror from propagating, I covered them with clear tape and wrapped the edges as well. I held off on building a cardboard frame because I will be receiving a larger display, a larger mirror sheet, and craft foam to construct it. This matters because my AI program is quite complex, and some features, such as maps, won't be visible on the smaller display.
While working on the code, I learned the basic pipeline that an AI follows (specifically the Magic Mirror AI), and I worked closely with most of the software steps along that path. By fixing code and researching many of the errors, I was able to integrate new APIs (e.g., weather and holidays) and write my own natural language interface program from scratch, since the JSON files in the readme.md of the GitHub repository were outdated.
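A natural language interface along these lines can be sketched with Wit.ai's HTTP `/message` endpoint. This is a minimal sketch, not the project's actual program: the intent name `get_weather`, the confidence threshold, and the token handling are all illustrative assumptions.

```python
# Sketch of querying Wit.ai's /message endpoint and picking the top
# intent from the JSON it returns. Intent names and the 0.7 threshold
# are placeholders for illustration.
import json
import urllib.parse
import urllib.request

def top_intent(response_json, min_confidence=0.7):
    """Return the name of the highest-confidence intent in a Wit.ai
    response dict, or None if nothing clears the threshold."""
    intents = response_json.get("intents", [])
    if not intents:
        return None
    best = max(intents, key=lambda i: i["confidence"])
    return best["name"] if best["confidence"] >= min_confidence else None

def ask_wit(utterance, token):
    """Send one utterance to Wit.ai and return the parsed JSON reply."""
    url = "https://api.wit.ai/message?" + urllib.parse.urlencode({"q": utterance})
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Offline example with a hand-written response, no network needed:
    sample = {"intents": [{"name": "get_weather", "confidence": 0.97}]}
    print(top_intent(sample))  # get_weather
```

Keeping `top_intent` separate from the network call makes the intent-routing logic easy to test without an API token.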
In the future, I'll continue adding to the Wit.ai program so that the model can understand more intents. I will also update the maps code and, after some research, integrate several of the Google Maps APIs (hopefully including directions). I will keep working on facial recognition with OpenCV as well, since I ran into many problems with it this week and wasn't able to install it inside the smart mirror's virtual environment. Once the actual display and mirror sheet arrive, I will set them up along with the webcam and speaker, and then build the frame from craft foam and cardboard.
For my first milestone, I was able to pull up an initial terminal display for the smart mirror that included a real-time clock module, local news, and a weather module. Using a tutorial and GitHub repository from HackerShack, I got the time and local news running with a few modifications after downloading all the dependencies. However, I had to write my own code for the weather module, since the original API, Dark Sky, shut down. After researching replacements, I decided on the OpenWeatherMap API, available through the RapidAPI database, calling it from Python with the http.client module. After reading the documentation, writing the appropriate code, and parsing the JSON responses, I was able to display the current temperature (by city or by longitude/latitude) along with an appropriate icon and weather description. However, right now I only have around ten icons implemented in my code, so the icons sometimes look inaccurate.
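The weather lookup described above can be sketched as follows. This is a hedged sketch, not my exact module: the RapidAPI hostname, header names, and response fields shown here follow OpenWeatherMap's documented format but should be checked against your own RapidAPI dashboard.

```python
# Sketch of fetching current weather from an OpenWeatherMap-style
# endpoint via RapidAPI using only http.client, then pulling out the
# fields the mirror displays. Host and header names are assumptions.
import http.client
import json

def summarize_weather(payload):
    """Extract (temp in Celsius, description, icon code) from an
    OpenWeatherMap-style JSON response; temps arrive in Kelvin."""
    temp_c = round(payload["main"]["temp"] - 273.15, 1)
    weather = payload["weather"][0]
    return temp_c, weather["description"], weather["icon"]

def fetch_current_weather(city, api_key):
    # Hypothetical RapidAPI host -- confirm the real hostname and
    # required headers in your RapidAPI subscription.
    host = "community-open-weather-map.p.rapidapi.com"
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", f"/weather?q={city}", headers={
        "X-RapidAPI-Key": api_key,
        "X-RapidAPI-Host": host,
    })
    return json.loads(conn.getresponse().read())

if __name__ == "__main__":
    # Offline example with a hand-written response, no API key needed:
    sample = {"main": {"temp": 293.15},
              "weather": [{"description": "clear sky", "icon": "01d"}]}
    print(summarize_weather(sample))  # (20.0, 'clear sky', '01d')
```

Separating `summarize_weather` from the request means the display logic can be tested with a canned JSON payload, without hitting the API.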
For my next milestone, I plan to add more icons to my arrays so that the display is accurate at all times of day. For example, my "clear sky" icon will refer only to clear sky during the day, and the description will be split into "clear sky day" and "clear sky night". I also plan to implement a real-time news feature in which the headlines refresh every few minutes. In addition, I plan to implement an A.I. feature in which the user says something or asks a question and the mirror "answers" using voice recognition software. If I still have time, I can add another relatively simple feature, such as a daily horoscope. By the end of next week, I want to finish all the software components so that I can build the frame for the mirror and display once they arrive.
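The day/night split can be handled with OpenWeatherMap's icon codes, which end in "d" for day and "n" for night (e.g. `01d` is clear sky during the day, `01n` at night). A minimal sketch, with placeholder file names standing in for whatever images the mirror actually uses:

```python
# Sketch of mapping OpenWeatherMap icon codes to day/night-specific
# images. The codes are real OpenWeatherMap codes; the .png names
# are placeholders for the mirror's actual icon files.

ICONS = {
    "01d": "clear_sky_day.png",
    "01n": "clear_sky_night.png",
    "02d": "few_clouds_day.png",
    "02n": "few_clouds_night.png",
    "10d": "rain_day.png",
    "10n": "rain_night.png",
}

def icon_for(code, fallback="unknown.png"):
    """Look up the image for an icon code, falling back to a
    generic placeholder for codes not yet mapped."""
    return ICONS.get(code, fallback)

print(icon_for("01n"))  # clear_sky_night.png
print(icon_for("13d"))  # unknown.png (snow not mapped yet)
```

The fallback means an unmapped code degrades to a generic image instead of crashing the display, which fits the "add icons gradually" plan.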
I learned more Python syntax as well as how to properly implement an API from the internet. After making many mistakes with variables and URLs, I got real practice reading and understanding complex documentation. I also learned more about Linux commands such as sudo pip install and vim/vi, among various others. I practiced these on different operating systems, such as macOS and Linux, and noticed some differences when cloning repositories.