I haven’t finished the architecture post yet. It’s coming. I have a lot of handwritten notes to transcribe and a lot of thoughts to type, so it’ll be soon.
So much has happened, and fast, since I made the switch to Node.js for my work.
The biggest piece of news is that I finally had a successful, fully integrated test of the TTS, STT, and OpenCV since moving from Python. Python was OK for me, but I don’t understand the language fully; I get by. The implementations of the work I’ve done over the last two years have been one-offs: standalone code in test phases. Since I could focus on getting each individual framework to run, I did that instead of integrating the pieces. I figured I could come back and integrate later as long as the individual sections worked.
I implemented my own Command Factory and Command Processor/Implementer. For the time being these are the simplest possible forms, built on jQuery’s ability to make your own custom events. I’ll cover this more in-depth in the architecture post; for now, here’s a short overview. This is also the “for right now” implementation, so I don’t spend a fortune of time working on command processing when I should be working on features.
The user’s speech is converted to text on the page
The text is then sent to a route in Node.js, where it’s evaluated for any condition that maps to a command
A command is generated for the page
That command is sent back to the page
In the success callback of the ajax call, the command is handed to a command processor that fires our custom event.
All of this is done offline. That means this application does not hook into any online STT or TTS APIs like Google, Wit.ai, or any of the others. This is au naturel. Remember, one of my goals is to be able to use this application anywhere, without the internet. Eventually, configuration will let the user select which STT or TTS API they want to use and so on, but I’m the only one using it right now.
So, a successful test: what does that look like? Cheesy as all hell, that’s what. I say “ok bob show menu” or “ok bob show tesseract” or “ok bob hide video” and it does. You wanna know what though? It’s also cool AF.
What else has come up? I know that testing is coming, and I’m not going to be stuck in the chair. The last testing I did out of a chair was with an Arduino Mega 2560 and a custom iPhone app that fed it data so I could see the speed and other info. Since then my tests have been at or around my desk with multiple USB cameras, and I’ve not needed to do anything else. Since this involves the actual HUD portion, I’m gonna need to be portable.
I have five of the top recommended external batteries to test for running the Raspberry Pi rig and any displays. Each power source is supposed to be able to run a device for a long stretch.
I have an external monitor for filming and an HDMI splitter. This is important because I am also testing some Vufine displays. Had I read that they were only right-eye compatible, I would’ve ordered only one, but we’ll see what happens. I have an unboxing video of the Vufine display coming.
For the display, it could be the Vufine; if that isn’t going to work, I have an AR faceplate coming that reflects the display off the front so you can see it. I’m hoping one of these two, or both, will work for my tests. If not, then I’ve looked at the Leap Motion solution, though I’d modify it for what I want so I don’t look like a giant bug.
Sorry – rant started – I couldn’t stop it
If I haven’t mentioned this before, the reason I’ve chosen the make-your-own route instead of, say, a Blade, Google Glass, or many of the others is that they’re locked into their own proprietary software. This is designed for everyone. One set of glasses looked sweet until you find out you’re locked into a stupid Android device with a jacked-up controller you swipe your finger on. Dumb. I want something a user can plug in, order readily available parts for (granted, I’m experimenting with Vufine and this other AR faceplate), and go, or make their own stuff and improve it and do whatever. I’m not going to pay a grand for a device that locks me into proprietary software and hardware. Make a device I can use that gives me what I want. It’s cool that it’s wireless, but it only gives the user one view in one eye. Dumb. Sure, people will pay for it. I would certainly pay for it as well. The Blade is a fantastic piece of equipment I’d love to own. The reason I won’t? Because there are better ways to do it. Using Android and having only one view is not it. Pushing a device out to make money, that’s how a company does it.
I will admit running this device will have limitations at first. I mean, right now a Raspberry Pi 3 B may not have the power to make it happen. I may upgrade to a LattePanda. I don’t know yet… and I’ve digressed long enough. The point is there’s a lot of experimentation and a lot of companies trying to be first, but they’re producing shit products, or OK products that could’ve been much better if they hadn’t restricted users to dumbass tethered devices.
end of rant if you want to skip to here
Cameras – I have several cameras to try, a couple tethered and a couple WiFi. I have no idea if the Pi can handle any of them. Right now, I’m just putting together the installation files to get my project work onto the Pi so I can test out the interaction. After that, I’ll check out the cameras.
I should detail all of the pieces in a video: why I chose each of the products I’m using and how this will all be laid out.
In any case, so much has happened in the last week, and in the last two days alone.
My next steps are finishing the install on the Pi so I can test the HUD in the Vufine I have and see how that’s going to work. I already don’t like them because they obstruct a whole field of vision. If there were projector versions, I’d just go with that: a pico projector projects onto a mirror that reflects onto a surface. I’ve had lots of tutorials on that process since 2012, but the issue is finding a small display that’s HDMI. Clearly Vufine did some work to put that into an eyepiece with a battery and switchable displays. That’s a bit of overkill for me, I think. If I’m going to be tethered to an HDMI cable and possibly a USB cable for charging, then I might as well look at those options as a reality and not chase pie-in-the-sky options like full WiFi and all of that.
I’ve probably gone on long enough:
lots of cool stuff – fully integrated. Working on the Pi installation so I can test the new eyepiece, batteries, and cameras.