Osiyo. Dohiju? Hey, welcome back.
I tend to do a lot of research on a subject until I've narrowed down my options. That means if I'm looking for a way to read USB data, I'll look at every library out there, starting with what I actually want: for Python 3, I'd Google for a Python 3 library first. If I didn't find something I liked, I'd look for a C library I could maybe use from Python. And so on. Eventually I get to the point where all of the articles read about the same, since they all carry the same information and I've read it a bunch of times.
That's exactly what happened with headsets. I was working on my headset and code when I decided to search for AR glasses on Amazon. Up came the Mad Gaze Glow Plus, which I'd been waiting on for a while – turns out it came out last year. I did some searching, read reviews, watched videos, read all of their documentation, and ordered one. Then I kept going with their SDK documentation.
You can learn a lot from an SDK if you know what you're doing. For instance, from the SDK notes and documentation, and from which libraries get pulled into an Android app, I can tell you how they achieve the results they do. The first thing I noticed is that they rely on a few readily available libraries for handling USB data. One is the Felhr USBSerial library, which they've provided in the SDK and you can find here. The next is the libuvccamera library, also provided for you; the code is here. Finally, they provide the GlowSDK AAR (an Android Archive, as opposed to a Java Archive, or JAR). This doesn't have any source that I can find on their site. I'm guessing it's an API wrapper for their code so they can provide high-level commands you can use in your own app without having to write the serial-connection and session-management code yourself. You can read the SDK documentation here, which provides the code to set up pretty much every element of your application.
I apologize for not citing some sources. If I'd thought I'd use them, I would've kept better track. Here's what I've learned so far, just from the libraries used, the code in the SDK, and comments in their videos and on their site. One source (read: article or video) stated that the iPhone doesn't do screen mirroring the way Android does. That got me thinking about the display. At first I thought the headset used Bluetooth to connect to the phone and report its metrics, and then the libraries sent video back – that's what I'd do on other projects. However, they say they use USB-C OTG, and that can carry everything needed bidirectionally: USB data in to the Android app, and video out, as screen mirroring, from the Android app to the binocular displays. So I was looking at the included libraries and wrapping my brain around what it would take to get at the USB data.
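On the Linux side, at least, seeing what the headset even presents over USB is simple enough. Here's a minimal sketch with pyusb (pip install pyusb) – I don't know the headset's vendor or product IDs yet, so this just lists everything on the bus:

```python
# List every USB device currently on the bus so the headset is easy to spot
# once it's plugged in. Requires pyusb (pip install pyusb) and a libusb
# backend; on Linux you may need sudo or a udev rule to read the bus.
import usb.core

for dev in usb.core.find(find_all=True):
    # idVendor/idProduct identify each device; I don't know the headset's
    # IDs yet, so this just prints everything.
    print(f"ID {dev.idVendor:04x}:{dev.idProduct:04x} "
          f"(bus {dev.bus}, address {dev.address})")
```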
Would I have to write Java code that starts a Java server, connect to that server from Python, and then pass display information back? No, that would be dumb – but I was tired. At that point I realized they've provided a dummy device. Essentially, it's what I already have with the LattePanda and the cameras: all of my USB cameras and sensors are connected to a single USB hub, which is then plugged into the LattePanda. I hope you're following, because this is where I got excited. I already do what they're doing with the USB ports, just with OpenCV where they're using different libraries to get the USB data from the sensors and cameras. I don't need anything extra; I'm already doing it.
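For what it's worth, "already doing it" looks roughly like this on the LattePanda – grab a frame from each camera on the hub with OpenCV. The device indices 0 and 1 are assumptions; they depend on enumeration order:

```python
# Grab one frame from each USB camera on the hub with OpenCV.
# Indices 0 and 1 are assumptions; they depend on enumeration order.
import cv2

for index in (0, 1):
    cap = cv2.VideoCapture(index)
    ok, frame = cap.read()
    if ok:
        print(f"camera {index}: got a {frame.shape[1]}x{frame.shape[0]} frame")
    else:
        print(f"camera {index}: no frame (not connected or busy)")
    cap.release()
```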
What about the display? That's a bit trickier for me. I thought I'd look at how to use Python 3 to send display data over USB-C with DisplayPort. Then it occurred to me that I've done this before in different ways: I have adapters that connect HDMI and Micro-USB or Apple USB plugs into other devices. Looking at the three-way connector on the Mad Gaze site, that may be what they're doing too, so I ordered a USB-C and HDMI to USB-C connector. The idea is that I plug the USB-C from the connector into the Mad Gaze headset, then plug the HDMI out from the LattePanda and a USB-C power cable into the connector. If my hypothesis is correct – and not having seen the product first-hand, I don't know – I should be able to get USB data from the headset and send HDMI output to it. That also means I can plug it into my laptop for initial testing.
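If the adapter works the way I hope, the headset should simply show up as an extra monitor on whatever it's plugged into. Here's a quick check, assuming the screeninfo package (pip install screeninfo) – nothing here is Mad Gaze specific:

```python
# If the HDMI out from the LattePanda (or laptop) really reaches the headset
# through the adapter, the headset should appear as another monitor here.
from screeninfo import get_monitors

for monitor in get_monitors():
    print(f"{monitor.name}: {monitor.width}x{monitor.height} "
          f"at ({monitor.x}, {monitor.y})")
```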
So, the only thing I'll have to figure out once I get the headset and plug it in is what the different devices are. For example, in OpenCV I'd do VideoStream(0). As far as I can tell there are no sensors in the schematics; any sensing comes from the phone, and whatever is figured out for gestures and the like is done in software. Since I don't have access to the code, I can't tell you how they handle things like placement of AR items in space. We'll see for sure on Monday what the sensor packages are… the documentation lists none, though.
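Mapping devices to indices is mostly brute force anyway. Here's a small sketch using cv2.VideoCapture (which is what VideoStream wraps underneath); probing six indices is an arbitrary choice:

```python
# Probe the first few video device indices to see which ones actually open.
# Handy for mapping "IR camera vs RGB camera vs webcam" to an index once
# the headset is plugged in. The range of 6 is arbitrary.
import cv2

for index in range(6):
    cap = cv2.VideoCapture(index)
    ok, frame = cap.read()
    if ok:
        print(f"index {index}: opens, frame size "
              f"{frame.shape[1]}x{frame.shape[0]}")
    cap.release()
```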
So, here's what I'm going to do. I know it has an IR camera, an RGB camera, and the displays. I know my software works with the LeapMotion. I'm going to have to work on an OpenCV library to enhance the gesture recognition I already have. In the interim, though, I'll mount the LeapMotion to the top front of the glasses so I get the gesture work I want. Since there are speakers but no microphone, I'll add one that goes to the LattePanda.
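I don't know yet exactly what that OpenCV enhancement will look like, but here's a rough sketch of the kind of thing I mean: find a hand by skin-color thresholding, then take the largest contour and its convex hull. The HSV range and camera index below are assumptions and would need tuning for the actual camera and lighting:

```python
# Very rough hand detection with OpenCV: skin-color threshold, largest
# contour, convex hull. The HSV range is an assumption and needs tuning
# for the actual camera and lighting.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)          # camera index is an assumption
lower_skin = np.array([0, 30, 60], dtype=np.uint8)
upper_skin = np.array([20, 150, 255], dtype=np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_skin, upper_skin)
    mask = cv2.medianBlur(mask, 7)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)
        if cv2.contourArea(hand) > 5000:      # ignore small skin-colored blobs
            hull = cv2.convexHull(hand)
            cv2.drawContours(frame, [hull], -1, (0, 255, 0), 2)
    cv2.imshow("gesture sketch", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```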
I’m going to read more of the SDK and find out how they use the sensor data and how the sensors work exactly.
I'm a little excited for Monday. I'll do a live unboxing and review, and then hook the LattePanda and my code up to the headset live. It could go well, if I'm right, or I could go down in flames.
Until next time. Dodadagohvi.