Hi, my name is Jared, and I am a rising junior attending The Bay School of San Francisco. I have always loved engineering: from my younger years, when I played with toy trains and built tracks, to when I made cars out of Legos, to now, when I'm finally learning how to make something of my own from scratch and actually understand how each part works. Now, with my starter and main projects finished, I can confidently say that I understand how my projects were made, how they work, and how they could be, and already are, applied in the real world. For example, boost converters step 3 volts up to 5 volts using a power inductor, and voice-controlled assistants use sound levels and fluctuations to correctly identify words. In a six-week time span, I have come to understand much more about engineering than I did when I first entered the program.


Who is Jasper? Well, the real question should be not WHO per se, but rather WHAT. Jasper is a voice-recognition robot controlled by a Raspberry Pi that understands what I'm saying and can process requests given to him… it?

Jasper is a modified Raspberry Pi B+ with a custom-imaged SD card that allows it to run its programming. A Raspberry Pi is a small but powerful computer capable of running many CPU-intensive small projects; a few examples are micro-controllers, home automation computers, personal digital assistants (PDAs), and, in this instance, Jasper. The processing power of the Raspberry Pi lets Jasper process information, run a script, and say an answer relatively quickly. Not only that, but the Raspberry Pi has a variety of ports that let me add certain capabilities, including Wi-Fi and Bluetooth, and, most important of all, a microphone and speakers so Jasper can hear and speak.



Here I have the final version of Jasper up and working. Jasper uses wit.ai as his speech-to-text engine and Festival as his text-to-speech engine. This gives Jasper access to a wide variety of words and lets him tell me the answer to any question in a relatively natural-sounding voice.
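The flow above, listen, transcribe with wit.ai, then answer through Festival, can be sketched in Python. This is not Jasper's actual code; `record_audio`, `transcribe`, and `speak` are stand-ins for the real microphone, wit.ai, and Festival calls:

```python
# A simplified sketch of an assistant loop like Jasper's (not his real
# code): record audio, send it to a speech-to-text engine, pick the
# first module that understands the text, then speak the answer aloud.

def assistant_loop(record_audio, transcribe, modules, speak):
    """One pass of a listen -> understand -> respond cycle.

    record_audio, transcribe, and speak stand in for the microphone,
    the wit.ai speech-to-text call, and the Festival text-to-speech
    call; `modules` is a list of objects with is_valid() and handle().
    """
    audio = record_audio()              # raw sound from the microphone
    text = transcribe(audio)            # e.g. "what time is it"
    for module in modules:
        if module.is_valid(text):       # does this module understand it?
            answer = module.handle(text)
            speak(answer)               # text-to-speech reads the answer
            return answer
    speak("Sorry, I didn't catch that.")
    return None
```

The module list is what makes the design extensible: adding a new skill just means adding another object with those two methods.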

Jared R - Voice-Controlled Assistant Final Video (Main Project)


Jasper uses a Wolfram Alpha module that can retrieve the answer to almost any question I may ever have in my educational life. Unfortunately, because wit.ai requires internet access, Jasper can be very slow to respond, making conversations with him tiring and sometimes difficult. I intend to improve his transcription rate, and I believe a better internet connection, probably via Ethernet, is the best way to fix his issues.

Jared R - Voice-Controlled Assistant Final Video Demo (Main Project)


Here I try out Jasper for one of the first times. As you can tell, he responds to his name every time I say the word "Jasper" and tries to turn whatever I say afterwards into a command. Unfortunately, the first few iterations of Jasper could not quite understand a lot of words, so he sometimes triggered the wrong modules, as well as detecting his name when I didn't want him to. The results of this happening are… quite entertaining, to say the least.
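The wake-word behavior I just described can be illustrated with a tiny sketch: scan each transcription for the name "Jasper" and treat whatever follows it as the command. Jasper's real detection works on audio rather than finished text, so this is only an illustration, but it shows why stray mentions of the name can trigger him by accident:

```python
# Hypothetical sketch of wake-word handling: find the wake word in a
# transcript and return the words after it as the command. A transcript
# that merely mentions the name still triggers a (wrong) command, which
# is exactly the kind of false trigger described above.

def extract_command(transcript, wake_word="jasper"):
    """Return the command following the wake word, or None if absent."""
    words = transcript.lower().split()
    if wake_word not in words:
        return None                       # no wake word: keep listening
    after = words[words.index(wake_word) + 1:]
    return " ".join(after) or None        # name alone = nothing to do

print(extract_command("hey jasper what time is it"))  # "what time is it"
print(extract_command("my jasper gemstone is red"))   # false trigger!
```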

Jasper works through a series of programs, as described on the Jasper Documentation Page. To get Jasper working, I first had to flash a disk image onto an SD card and plug the SD card into a Raspberry Pi B+; this makes sure the RPi is running the correct software and operating system. After doing this, I installed Jasper's core modules and programs by cloning the Jasper GitHub repository. After doing so, I had a semi-working version of Jasper: he had a primitive offline speech-to-text engine, a robotic-sounding text-to-speech engine, and limited modules. But as I continued working with Jasper, I installed a new speech-to-text engine and a new text-to-speech engine. I also installed a new module that lets Jasper use Wolfram Alpha, a hybrid between Google and Wikipedia that answers any intellectual questions I may have. I also programmed a few of my own modules, and I am looking into adding conversation capabilities using Cleverbot.
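Writing my own modules follows the pattern described in the Jasper documentation: each module is a Python file with a `WORDS` list (a vocabulary hint for the speech-to-text engine), an `isValid()` check, and a `handle()` function. Here is a sketch of a simple, hypothetical greeting module; the word GREETING and the `first_name` profile key are just examples, not part of my actual modules:

```python
# A minimal custom module in the format the Jasper documentation
# describes: WORDS hints the vocabulary to listen for, isValid()
# decides whether this module should answer the transcribed text, and
# handle() speaks the response through the mic object.
import re

WORDS = ["GREETING"]  # vocabulary hint for the speech-to-text engine


def isValid(text):
    """Return True if the transcribed text is meant for this module."""
    return bool(re.search(r'\bgreeting\b', text, re.IGNORECASE))


def handle(text, mic, profile):
    """Respond to the user; mic.say is Jasper's text-to-speech output."""
    name = profile.get('first_name', 'there')
    mic.say("Hello, %s!" % name)
```

Jasper tries each installed module's `isValid()` against the transcription and hands the text to the first one that claims it.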
I also have a side project to make Jasper able to use Bluetooth. He can currently be connected to a wireless headset, but streaming audio to and from the headset has proven to be quite the challenge.

Jared R - Voice-Controlled Assistant Milestone 1 (Main Project)


My first version of Jasper used PocketSphinx as his speech-to-text engine and eSpeak as his text-to-speech engine. But as I became more and more confident in my ability to configure Jasper, I swapped out his speech-to-text engine for wit.ai, which lets Jasper understand a larger variety of words, and his text-to-speech engine for Festival, which makes Jasper's voice sound a little more human-like.

Here is a demo of Jasper responding to "How old is Jurassic Park?". Note that I did not hard-code the answer into his program; rather, he retrieves the information from the internet, specifically from Wolfram Alpha.

Jared R - Voice-Controlled Assistant Milestone 1 Demo (Main Project)


Starter project: Adafruit’s MintyBoost, a small battery-powered USB charger

Now you may be asking one of these questions:

#1 What is it?

It's a small, portable mint tin that charges any device you plug into it via a USB cable, using only batteries.

#2 How does it do that?

The charger works by boosting the 3 volts supplied by its two AA batteries (1.5 volts each) up to about 5 volts, just enough to charge my phone. It's a tad slow, but it works.

#3 That doesn't really answer my question: how does the MintyBoost actually work, on a more detailed level?

The power from the batteries is first fed through a series of capacitors to make sure the power is steady. Capacitors do this by storing a small amount of energy within them and feeding it back to the circuit whenever it's needed, keeping the supply of energy the same even when the voltage fluctuates. An example of this is when we turn the power on or off. When we plug in the batteries, the voltage would otherwise jump from 0 volts to 3 volts in a millisecond, which can damage the components. With a capacitor, the change is more gradual, both when the circuit is gaining power and when it's losing it, making sure the components don't break.
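The smoothing effect can be seen in the textbook formula for a capacitor charging through a resistance, V(t) = V_supply · (1 − e^(−t/RC)): the voltage ramps up gradually instead of stepping instantly. The R and C values below are made-up illustration numbers, not the MintyBoost's actual components:

```python
# Rough illustration of why a capacitor softens a sudden 0 V -> 3 V jump:
# a capacitor charging through resistance R follows
#     V(t) = V_supply * (1 - e^(-t / (R * C)))
# so the voltage rises smoothly. R = 100 ohms and C = 100 uF are made-up
# values chosen only to show the shape (time constant RC = 10 ms).
import math

def capacitor_voltage(t, v_supply=3.0, r=100.0, c=1e-4):
    """Voltage across the charging capacitor after t seconds."""
    return v_supply * (1.0 - math.exp(-t / (r * c)))

for t in (0.0, 0.01, 0.03, 0.05):
    print("t = %4.0f ms  ->  %.2f V" % (t * 1000, capacitor_voltage(t)))
```

After one time constant (10 ms here) the capacitor is only about 63% charged, which is exactly the gradual ramp-up that protects the components.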

The power is then fed into a power inductor. The inductor, switched on and off rapidly by the boost converter chip, converts a lower voltage into a higher one; in this case, it turns the battery's 3 volts into the 5 volts a phone needs to charge.
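For an ideal boost converter, the textbook relationship is V_out = V_in / (1 − D), where D is the duty cycle: the fraction of each switching period the inductor spends "charging." This is an idealization, not the MintyBoost's exact circuit, but it shows how 3 V becomes 5 V:

```python
# Ideal boost converter math: V_out = V_in / (1 - D), so the duty
# cycle needed to reach a target output voltage is D = 1 - V_in/V_out.
# This is the textbook idealization, not the MintyBoost's exact chip.

def boost_duty_cycle(v_in, v_out):
    """Duty cycle an ideal boost converter needs to step v_in up to v_out."""
    if v_out <= v_in:
        raise ValueError("a boost converter can only step voltage up")
    return 1.0 - v_in / v_out

print(boost_duty_cycle(3.0, 5.0))  # 3 V from the batteries -> 5 V for USB
```

So to get 5 V out of 3 V, the switch spends about 40% of each cycle letting the inductor store energy.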

#4 Why is there an attachment on your MintyBoost that isn't in other pictures of the device?

I put a switch on mine so I can turn the MintyBoost on and off, rather than having the batteries drain while it's not in use. It took a little extra splicing and another hole in the tin casing, but it is much more convenient and efficient than the original design, which should have had this addition in the first place.

Here I give a quick explanation of exactly how the MintyBoost works, which should give you an idea of how it was made and how it functions.

Jared R - MintyBoost (Starter Project)




Bluestamp Engineering