Osiyo. Dohiju? ᎣᏏᏲ. ᏙᎯᏧ? Hey, welcome back.
The last couple of weeks have been focused on the 3D GUI/HUD (we’ll just call it the HUD from now on). I already have LeapMotion, OpenCV, and BabylonJS integrations, but all three are separate pieces. With the HUD I wanted the pieces in place so that the OpenCV detection coordinates could be sent directly to BabylonJS to display 3D elements. With LeapMotion I wanted the detection coordinates sent to BabylonJS to drive hit interactions and collision detection, just like a mouse would. I’m part of the way there with both. In the screenshots below you can see the BabylonJS view alongside the OpenCV view. I created the 3D sphere based on an existing example of a 3D curved panel. I call it cerebro because it’s how I view things stored in my head; I’ll explain that another time. It’s shown here as an example of BabylonJS working with OpenCV and LeapMotion (you can’t see LeapMotion working in the screenshots, but it’s there).
In the images you can see my wall, my ceiling, a lamp, and a photo. I would’ve shown a picture of me, but I’m having an Alfalfa hair day. When testing is complete, the OpenCV portion will be a transparent overlay: the detection data will be processed by BabylonJS, and whatever data is relevant will be displayed.
For example, if OpenCV detected the lamp, those coordinates would be sent to BabylonJS, which would put up a HUD with data about that lamp. That’s a simplistic example. If OpenCV detected that a chair was from IKEA, it could tell you the chair’s name and maybe how much it cost. There are many, many applications.
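The lamp/chair idea boils down to a lookup step: once OpenCV reports a label and a bounding box, BabylonJS just needs a small payload to render in the HUD. Here's a hedged sketch of that step; the catalog names and data are entirely made up for illustration.

```javascript
// Hypothetical catalog mapping detection labels to HUD display data.
const hudCatalog = {
  lamp:       { title: "Desk lamp",  details: "60W, warm white" },
  ikea_chair: { title: "IKEA chair", details: "approx. $129" },
};

// Build the payload BabylonJS would use to place and fill a HUD panel.
// The bounding box anchors the panel near the detected object.
function hudPayload(detection) {
  const info = hudCatalog[detection.label];
  if (!info) return null; // unknown object: nothing to display
  return { title: info.title, details: info.details, anchor: detection.box };
}

const payload = hudPayload({ label: "lamp", box: { x: 120, y: 80, w: 60, h: 90 } });
console.log(payload.title); // "Desk lamp"
```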
At this point everything is set up, so now it’s just a matter of wiring it all together: take the coordinates shown in the detection square and link them to BabylonJS.
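That wiring step can be sketched too. OpenCV gives a rectangle in pixel coordinates with the origin at the top-left; a common way to hand that to a 3D engine like BabylonJS is to convert the rectangle's centre into the normalized [-1, 1] screen range. The frame size here is an assumption, and the real pipeline would feed this into the engine's own unprojection.

```javascript
// Convert an OpenCV detection rectangle (pixels, origin top-left)
// into normalized device coordinates in [-1, 1].
function rectCenterToNDC(rect, frameW, frameH) {
  const cx = rect.x + rect.w / 2;
  const cy = rect.y + rect.h / 2;
  return {
    x: (cx / frameW) * 2 - 1,  // left edge -1 .. right edge +1
    y: 1 - (cy / frameH) * 2,  // flipped: OpenCV's y grows downward
  };
}

// A 640x480 frame with a detection square centred at (320, 120):
console.log(rectCenterToNDC({ x: 290, y: 90, w: 60, h: 60 }, 640, 480));
// -> { x: 0, y: 0.5 }
```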
Until next time. Dodadagohvi. ᏙᏓᏓᎪᎲᎢ.