Osiyo. Dohiju? ᎣᏏᏲ. ᏙᎯᏧ? Hey, welcome back.
Let’s start with motivation. I wanted to add 3D objects to my visual scene and be able to manipulate them: labels, some HUD-like graphics, information panels, and more. I also have a LeapMotion to integrate, and JavaScript seems to provide better examples for it than Python. I also need to include OpenCV in my scene and rewrite my Python plugins as JavaScript. I know I said I didn’t want to do this, but after a lot of experimentation I think it’s the better choice. When OpenCV has points to provide for HUD elements, I can use them directly instead of making expensive calls to the server, and the updates will be near instantaneous.
LeapMotion has LeapJS, and its examples, like leap-widgets, use ThreeJS for WebGL rendering. I spent quite a bit of time working with ThreeJS, and I kept at it because of those LeapJS examples. Then I did more research into BabylonJS.
Here’s what sold me on BabylonJS over ThreeJS: you can export directly from Unity, Blender, 3DS Max, and more to a BabylonJS scene, and it ships with a 2D canvas, HUD, GUI, 3D GUI, and other tools. The biggest selling point was the advanced playground and the extensive examples in the documentation. Here’s an example of a scene I put together for SERINDA. I’m wiring it up with my LeapMotion now so I can rotate it around and manage each panel. If you click Panel 11 or Panel 21, an input box appears; if you click on the input box, a virtual keyboard shows up and you can type in that box. The input box and keyboard came from an example on the site, and I modified a 3D scene to make the globe-like grid. I’m still tweaking a few things so it’s exactly the way I want it, but this gives you a pretty good idea of how the playground works. ThreeJS has something like this as well.
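To give a sense of how that input-box-and-keyboard pattern fits together, here’s a minimal sketch based on the Babylon.js GUI docs. The plane here is just a placeholder standing in for Panel 11; the names, sizes, and layout are my own stand-ins, not the actual playground scene.

```javascript
// Minimal Babylon.js scene with a clickable panel, an input box, and a virtual keyboard.
const canvas = document.getElementById("renderCanvas");
const engine = new BABYLON.Engine(canvas, true);
const scene = new BABYLON.Scene(engine);
const camera = new BABYLON.ArcRotateCamera("cam", -Math.PI / 2, Math.PI / 2.5, 10,
    BABYLON.Vector3.Zero(), scene);
camera.attachControl(canvas, true);
new BABYLON.HemisphericLight("light", new BABYLON.Vector3(0, 1, 0), scene);

// Placeholder plane standing in for one of the panels in the scene.
const panel = BABYLON.MeshBuilder.CreatePlane("panel11", { size: 2 }, scene);

// Fullscreen GUI layer for the HUD controls.
const hud = BABYLON.GUI.AdvancedDynamicTexture.CreateFullscreenUI("hud");

// The input box that appears when the panel is clicked.
const input = new BABYLON.GUI.InputText();
input.width = "300px";
input.height = "40px";
input.color = "white";
input.background = "black";
input.isVisible = false;
hud.addControl(input);

// Default QWERTY layout; connect() ties it to the input box so the virtual keys
// type into it, and the keyboard shows/hides as the input gains and loses focus.
const keyboard = BABYLON.GUI.VirtualKeyboard.CreateDefaultLayout();
hud.addControl(keyboard);
keyboard.connect(input);

// Clicking the panel reveals the input box; clicking the input box brings up the keyboard.
panel.actionManager = new BABYLON.ActionManager(scene);
panel.actionManager.registerAction(new BABYLON.ExecuteCodeAction(
    BABYLON.ActionManager.OnPickTrigger, () => { input.isVisible = true; }));

engine.runRenderLoop(() => scene.render());
```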
I’ve tested this with both the Johnny Mnemonic headset and the Roy Orbison headset, and it looks great. Once I’m done taking the LeapMotion coordinate results and applying them to the scene, I’ll have a better feel for what I still need to do. This is also just one example scene; I have another that is a video overlay, where OpenCV does the object detection, the points are sent to a method, and a scaled HUD element appears over the video. I have a JavaScript SLAM implementation I’ve been interested in trying out, and two Intel RealSense cameras I need to hook up for SLAM as well. I’m hoping to have SLAM set up, with or without an IMU, by the end of this week. Then I’ll remove the portions of the SERINDA codebase I no longer need.
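As a rough illustration of that detection-to-HUD step, here’s a hypothetical sketch. The function name, the detection format ({x, y, w, h, label} in video pixels), and the frame dimensions are my assumptions, not SERINDA’s actual plugin API; it just scales an OpenCV bounding box into fullscreen-GUI pixels and draws a labeled rectangle.

```javascript
// Assumes `engine` already exists from the scene setup (as in the earlier snippet).
const detectionHud = BABYLON.GUI.AdvancedDynamicTexture.CreateFullscreenUI("detectionHud");

// det: { x, y, w, h, label } in OpenCV frame pixels (hypothetical format).
function showDetection(det, videoWidth, videoHeight) {
    // Scale factors from the OpenCV frame to the rendered canvas.
    const sx = engine.getRenderWidth() / videoWidth;
    const sy = engine.getRenderHeight() / videoHeight;

    // Box anchored to the top-left so left/top offsets are plain pixel positions.
    const box = new BABYLON.GUI.Rectangle(det.label);
    box.horizontalAlignment = BABYLON.GUI.Control.HORIZONTAL_ALIGNMENT_LEFT;
    box.verticalAlignment = BABYLON.GUI.Control.VERTICAL_ALIGNMENT_TOP;
    box.left = det.x * sx + "px";
    box.top = det.y * sy + "px";
    box.width = det.w * sx + "px";
    box.height = det.h * sy + "px";
    box.thickness = 2;
    box.color = "lime";
    detectionHud.addControl(box);

    // Label drawn inside the box.
    const label = new BABYLON.GUI.TextBlock("detLabel", det.label);
    label.color = "lime";
    box.addControl(label);

    return box; // caller can dispose() it when the detection goes stale
}

// Example: showDetection({ x: 120, y: 80, w: 200, h: 150, label: "cup" }, 640, 480);
```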
A lot of exciting work coming up.
Until next time. Dodadagohvi. ᏙᏓᏓᎪᎲᎢ.