Hole in the Wall Game Using a Leap Motion Hand Gesture Controller
Using a Leap Motion controller, the player must use their physical hands to arrange virtual blocks so that they fit through a hole in a wall moving down a table. Each new wall travels faster than the last, giving the player less time to assemble the blocks as the levels progress.
Horace Mann School
For my second milestone, I completed the scenery of my game along with a few components of the game's mechanics. For the scenery, I went for a sci-fi, futuristic look. The game takes place in a large room with neon lights along the edges of objects and hanging from the ceiling. The room is also scattered with abstract decorations and simple furniture. The gameplay takes place on a table in the center of the room.

On the technical side, there are now two playable walls and a control panel. Through a simple wall behavior script, the walls move from a spawn point at one end of the table to a waypoint at the other end. Each wall has a specific shape that the cubes must fit through by stacking them in certain ways. To the left of the player's view, there is a control panel with 9 buttons and a slider. Currently, 2 of the 9 buttons are functional. The purpose of the buttons is to spawn different models of the walls, giving the player different shaped holes with varying degrees of difficulty. The buttons interact with the wall behavior script by calling a specific wall from an array of meshes called Wall. Each button has a number assigned to it, so when (for example) the first button is pressed, the script will call Wall[0] (in an array, the 0th component is the first component) because the first button has the number 0 assigned to it. The second button will call Wall[1], the third will call Wall[2], and so on.
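The wall behavior described above can be sketched as a Unity C# script along these lines. This is a minimal illustration, not the project's actual code: names like `WallBehavior`, `spawnPoint`, and `waypoint` are assumptions, and only the `Wall` mesh array comes from the description above.

```csharp
using UnityEngine;

// Sketch of the wall behavior: a button chooses a wall mesh from the
// Wall array by index, the wall spawns at one end of the table, and
// it then moves toward a waypoint at the other end.
public class WallBehavior : MonoBehaviour
{
    public Mesh[] Wall;              // wall meshes, each with a different hole
    public Transform spawnPoint;     // one end of the table (assumed name)
    public Transform waypoint;       // the other end of the table (assumed name)
    public float speed = 1f;         // raised each level to shorten the player's time

    private GameObject currentWall;

    // Hooked to a control-panel button's OnClick event; the first button
    // passes 0, the second passes 1, and so on.
    public void SpawnWall(int index)
    {
        currentWall = new GameObject("Wall");
        currentWall.AddComponent<MeshFilter>().mesh = Wall[index];
        currentWall.AddComponent<MeshRenderer>();
        currentWall.transform.position = spawnPoint.position;
    }

    void Update()
    {
        if (currentWall != null)
        {
            // Advance the wall toward the waypoint at a fixed speed per frame.
            currentWall.transform.position = Vector3.MoveTowards(
                currentWall.transform.position,
                waypoint.position,
                speed * Time.deltaTime);
        }
    }
}
```

In Unity, each button's OnClick event would be wired to `SpawnWall` with its own index in the Inspector, which matches the button-to-number assignment described above.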
For my first milestone, I managed to get a working demo of the mechanics that will make my game work. More specifically, I created a demo game in which a player can pick up blocks on screen and assemble them with their bare hands using a Leap Motion controller.

The controller works by using three infrared LEDs and two optical cameras to detect the presence of hands within an 8-cubic-foot range. The LEDs light up your hands, which, in turn, reflect the infrared light back to the cameras. The data from the cameras is transferred to the computer, which compiles it into a grayscale stereo image. By detecting only infrared light in the scene, the Leap Motion software is able to isolate your hands from any other nearby objects. The software then runs its algorithm to create a 3D model of what is in the image. The algorithm can also use the given information to approximate the position of hidden fingers or other parts of the hand, improving the accuracy of the 3D model. The positional data is then available to any application that calls for it. In my case, my game reads the data through Leap Motion's Unity plugin, which provides prefabricated hand models and scripts. In my demo, there are four cubes, each programmed with its own interaction script. The scripts recognize when your hand is making a pinching gesture and make their corresponding cubes follow the path of the pinching hand, making it appear as if the player is grabbing the cube and moving it around.
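A pinch-driven interaction script like the ones described above might look roughly like this in Unity C#. This is a hedged sketch, not the demo's actual code: the class name `CubeGrab` is invented, and while `PinchStrength` is part of the Leap Motion hand API, the exact provider setup and pinch-position call may differ between plugin versions.

```csharp
using UnityEngine;
using Leap;
using Leap.Unity;

// Sketch of a per-cube interaction script: when a tracked hand is
// pinching hard enough, the cube follows the pinch point, so the
// player appears to grab and move it.
public class CubeGrab : MonoBehaviour
{
    public LeapProvider provider;        // source of tracked hand frames
    public float pinchThreshold = 0.8f;  // how closed the pinch must be (0..1)

    void Update()
    {
        foreach (Hand hand in provider.CurrentFrame.Hands)
        {
            // PinchStrength runs from 0 (open hand) to 1 (thumb and
            // index fully pinched together).
            if (hand.PinchStrength > pinchThreshold)
            {
                // Follow the point between the thumb and index fingertips
                // (method name assumed from the plugin's hand extensions).
                transform.position = hand.GetPredictedPinchPosition();
            }
        }
    }
}
```

Attaching one such script per cube, as in the demo, lets each cube react independently to the pinching hand.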