Welcome to my project page! This summer I am working on a ball-tracking robot that follows a ball or other object of a certain color (red). Here is a link to my code for this project and my surveillance modification, and this is the bill of materials for this project.
Ball-tracking robot schematic:
Area of Interest
Software and Mechanical Engineering
The Harker School
Looking back on these past six weeks, I have discovered a new interest and passion in software engineering. Before, I had only worked on projects related to mechanical engineering. I will definitely be working on more projects that combine the two fields, like the ball-tracking robot I built this summer.
For my third milestone, I transitioned from a VNC connection over Ethernet to one over Wi-Fi, and I made a modification that turns the robot into a surveillance robot with live video streaming and remote control.
VNC and Surveillance Modification
Originally, I controlled the Raspberry Pi from my computer using a wired (Ethernet) connection. However, a color-tracking robot needs a wireless connection so that it is not limited by the length of the cable. My main issue with this transition was that Ethernet connections are much more reliable than connections over Wi-Fi; I solved this by moving the robot closer to the router.
I wrote a new Python script that wirelessly streams live video from my Raspberry Pi to my computer. It also takes input from my computer's keyboard to go forward, backward, left, or right, to stop, and to quit. The code is in the link at the top of the page.
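The keyboard-control side of the script can be sketched as a simple key-to-command lookup. This is a minimal illustration, not the actual script linked above: the function name `key_to_command` and the command strings are my own placeholders, and the key bindings follow the W/A/S/D layout described later on this page.

```python
# Hypothetical sketch of mapping key presses to motor commands.
# The real script linked at the top of the page may differ.

KEY_COMMANDS = {
    "w": "forward",
    "s": "backward",
    "a": "left",
    "d": "right",
    " ": "stop",
    "q": "quit",
}

def key_to_command(key: str) -> str:
    """Translate a single key press into a motor command.

    Unrecognized keys default to "stop" so the robot never keeps
    driving on bad input.
    """
    return KEY_COMMANDS.get(key.lower(), "stop")
```

Defaulting to "stop" on unknown keys is a deliberate safety choice: if the keyboard stream glitches, the robot halts instead of continuing on its last command.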
For my second milestone, I began controlling my motors to move the robot in the direction of the red object.
Motor Control Loop
In my first milestone, I wrote OpenCV code that draws the smallest enclosing circle around the largest blob of red in each frame. I have now updated the code to check whether the radius of this circle is greater than an upper threshold, within a target range, or smaller than a lower threshold. If the radius is too large, the robot moves backward; if it is within the range, it stays put; and if it is too small, it moves forward. I then improved the control loop to decide whether the robot should turn and in which direction: instead of the radius, I use the x-coordinate of the circle's center in each frame to determine whether to turn.
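The decision logic above can be sketched as two small functions: one that picks forward/backward/stay from the circle's radius, and one that picks a turn direction from the center's x-coordinate. The threshold values here are placeholders I chose for illustration, not the ones tuned on the actual robot.

```python
# Sketch of the control-loop decisions. All numeric thresholds are
# illustrative assumptions, not the robot's real calibration.

FRAME_WIDTH = 640                  # assumed camera frame width in pixels
MIN_RADIUS, MAX_RADIUS = 40, 80    # target range for the enclosing circle's radius
CENTER_DEADBAND = 60               # pixels the ball may drift off-center before turning

def drive_command(radius: float) -> str:
    """Decide forward/backward motion from the enclosing circle's radius."""
    if radius > MAX_RADIUS:
        return "backward"   # circle too big: ball is too close
    if radius < MIN_RADIUS:
        return "forward"    # circle too small: ball is too far
    return "stay"           # radius within the target range

def turn_command(center_x: float) -> str:
    """Decide turning from the x-coordinate of the circle's center."""
    offset = center_x - FRAME_WIDTH / 2
    if offset < -CENTER_DEADBAND:
        return "left"       # ball is in the left part of the frame
    if offset > CENTER_DEADBAND:
        return "right"      # ball is in the right part of the frame
    return "straight"
```

The deadband keeps the robot from oscillating left and right when the ball sits near the middle of the frame.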
I want to mount everything securely to the robot, increase the speed, and add surveillance capabilities (live video streaming and remote control using the "W", "A", "S", and "D" keys to go forward, backward, left, and right).
Rohan's 1st Milestone
My first milestone was to write a Python script that identifies the largest blob of red in each frame of the live stream coming from the Raspberry Pi camera module.
Original VNC Connection and Base Code
I used a VNC connection to remotely control the Raspberry Pi from my MacBook Pro. This connection uses the Remote FrameBuffer (RFB) protocol, which works by taking the pixels from the device running the VNC server and displaying them on the computer running the VNC viewer. Once VNC Server is installed on the computer you want to control and VNC Viewer is installed on the computer you want to control it from, you can view the server computer's screen remotely. The viewer software also reads your input, including mouse, keyboard, and touch, and injects it into the server computer to actually control it remotely.
The code is a modified version of a Python script I found on Stack Overflow. First, the code imports all of the necessary packages, including OpenCV (an image-processing library) and NumPy (a library commonly used in scientific computing with Python; in this project it provides the N-dimensional arrays that hold the frames coming in from the Raspberry Pi camera module). Then, I set the lower and upper bounds for the red color. In my case, the lower bound was (160, 160, 10) and the upper bound was (190, 255, 255).

For the image-processing part of the code: the program uses OpenCV to make a red mask, which isolates all red colors in the frame, and then draws a circle around the biggest blob of red in each frame coming in from the live video stream. Before making the mask, the code converts each frame to HSV, another color model, which stands for Hue, Saturation, and Value. Earlier, I had some issues with the code not recognizing some red objects because of the lack of light in the room I was testing in, since the HSV values of an object change with the amount of light it is exposed to. After all of the processing is done, the updated frame is displayed on the screen.
Next, I am going to set up my motors to react to input from the camera (turning in the direction of the center of the biggest red blob).