The HUD “goggles” I was expecting are delayed. No one knows where they are. Not FedEx. Not Amazon. Right now, Wilson is enjoying some HUD lenses and pretending to see, well, anything AR.
My MEAN.io seed turned out to be a bust. I worked too hard just to get it to run. So I found another seed that took minimal work on my part and is up and running. It’s an Angular 6 MEAN stack, and it’s enjoyable.
My only issue so far is that it’s different from AngularJS, so I’m still finding my way around. That means I’m watching Udemy courses and coding, which I try to do after I’ve done everything else for the day. If I can figure out my routing issue I’ll be set. My Express routing is no problem, and page routing doesn’t seem to be an issue either, so one of my components must be the culprit. Once I have more time, I’ll actually track that down.
I cannot get multiple cameras to work with opencv4nodejs. Something isn’t working with my release() call. I’ve tried several different approaches and it’s just not working. For the moment I’m going to have to stop working on that, because I need to add more functionality (read: convert from Python) so I can use what I’ve already done.
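For reference, this is roughly the pattern I’ve been attempting. It’s a minimal sketch, assuming two cameras at device indices 0 and 1, not my actual helper code, and the release() step is where things keep going sideways for me:

const cv = require('opencv4nodejs');

// Grab a single frame from one camera, then release the device so the
// next open (or the other camera) doesn't fail.
function grabFrame(deviceIndex) {
  const cap = new cv.VideoCapture(deviceIndex);
  try {
    const frame = cap.read(); // returns a Mat; frame.empty is true if the read failed
    if (frame.empty) {
      throw new Error('No frame from camera ' + deviceIndex);
    }
    return cv.imencode('.jpg', frame); // JPEG Buffer, ready to push over socket.io
  } finally {
    cap.release();
  }
}

// Device indices 0 and 1 are assumptions; they depend on how the USB cameras enumerate.
const frames = [0, 1].map(grabFrame);
frames.forEach((buf, i) => console.log('camera ' + i + ': ' + buf.length + ' bytes'));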
I’ve decided not to go ahead with OpenCV (opencv4nodejs) for gestures at this time. I’ve done gesture work in the past, and the lighting has to be just right. I might add some IR LEDs eventually to make gesture work easier. For now, though, I’ve ordered a Leap Motion. I like their AR work, and their mount makes it easy to attach to my rig. So we’ll see how that goes.
I zip-tied the USB camera to the side of the head rig today and tested out the optics. They’re not too bad. To try out a simulated HUD, I used a large video monitor (large being like 7″) and a CD case front to see through. The image was reversed, but it did well enough for a test. I took some pictures, but they’re quite embarrassing. They do not bring out my eyes at all.
I’m currently installing OpenCV into my WSL Ubuntu so I can attempt some Node-to-Python calls. I want to see if I can utilize multiple cameras in Python directly, then socket the feed back into the browser. That’s essentially what’s going on with socket.io now, just all in Node.js. If I can get multiple cameras, and maybe some overlays, working there, or if I keep running into issues on the Node side, then I can do it this way. As well, there are some deep learning, neural network, and other items that I’ve done in Python and that may be much better handled via Python (read: passed off to Python) to run behind the scenes and then send a result back (or post to a message queue for the front end to look up). We’ll see.
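As a rough sketch of what those Node-to-Python calls might look like: spawn a Python script from Node, let it own the camera work, and relay whatever it prints back to the browser over socket.io. The script name, arguments, and event name below are hypothetical placeholders, not anything I’ve written yet:

const http = require('http');
const { spawn } = require('child_process');
const socketIo = require('socket.io');

const server = http.createServer();
const io = socketIo(server);

io.on('connection', (socket) => {
  // Hand the heavy lifting to Python; 'camera_feed.py' is a placeholder script
  // that would print one result (JSON, a base64 frame, whatever) per line.
  const py = spawn('python3', ['camera_feed.py', '--camera', '0']);

  py.stdout.on('data', (chunk) => {
    // Relay whatever Python produced straight to the browser.
    socket.emit('py-result', chunk.toString());
  });

  py.stderr.on('data', (err) => console.error('python:', err.toString()));
  socket.on('disconnect', () => py.kill());
});

server.listen(3000);

The same channel could just as easily carry a message-queue key instead of the raw result, if the front end ends up looking things up rather than getting them pushed.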
Finally, I guess the last thing on my desk that I worked on is a module-esque plugin idea: a way for each of these plugins I’m working on to be installed, removed, activated, deactivated, etc. without my having to do it manually. Right now, I edit a few files for an Express route, add to a command list, add an HTML file, add supporting JavaScript for the HTML file, add to the menu, and add to a helper JavaScript file that actually runs the opencv4nodejs code. Looking to the future, it would be nice for some of this to be an Angular component, with some automatic way to hook in the rest, but that’s a long way off. I just need to think about it now. Even if some of those files were autogenerated during the build, it’d be OK. We already know any change to the Express routes means the app is going to have to restart.
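To make that concrete (for future me, mostly), this is the kind of shape I’m imagining: each plugin exports a little manifest and registers its own Express routes, and a loader scans a plugins directory, mounts whatever is enabled, and builds the menu from the manifests. Every name here is hypothetical; none of this exists in my code yet:

// plugins/barcode-scanner/index.js — a self-describing plugin manifest (hypothetical)
module.exports = {
  name: 'barcode-scanner',
  enabled: true,
  menu: { label: 'Barcode Scanner', href: '/barcode' },
  registerRoutes(router) {
    // The plugin owns its Express routes instead of me editing the main route files.
    router.get('/barcode', (req, res) => res.sendFile(__dirname + '/barcode.html'));
  },
};

// loader.js — scan plugins/, mount each enabled plugin, collect menu entries
const fs = require('fs');
const path = require('path');
const express = require('express');

function loadPlugins(app, pluginsDir = path.join(__dirname, 'plugins')) {
  const menu = [];
  for (const dir of fs.readdirSync(pluginsDir)) {
    const plugin = require(path.join(pluginsDir, dir));
    if (!plugin.enabled) continue;
    const router = express.Router();
    plugin.registerRoutes(router);
    app.use(router);
    menu.push(plugin.menu);
  }
  return menu; // whatever renders the nav can use this
}

module.exports = loadPlugins;

Adding or removing a plugin would then just mean adding or deleting a directory, and the Express restart on route changes would happen at the same point it does now.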
Hopefully, tomorrow I get this camera to run the barcode code. What I’d really like to do is switch over to the new MEAN stack as soon as possible and run multiple cameras, but I’d settle for being able to run the barcode code from PyImageSearch, which I can’t find at the moment. If you’ve never checked out Adrian’s work over at PyImageSearch, do so. I’ve been following it for years, and there aren’t too many questions that I don’t find answered there. Full disclaimer: I own his 4th Edition book, but he doesn’t pay me.