This was just posted not long ago: https://www.youtube.com/watch?v=thhH2pQvQXw. He uses a Vufine display and modifies it. I posted a question about the resolution he's seeing in the glasses. I had such a horrible time with the Vufine HUD that I couldn't use it as it was. He takes an innovative approach to projecting onto the glasses.
This is the comment I posted: "I've used the Vufine for my HUD/HMD before and the resolution was so bad for my eyes that I couldn't do much of what I wanted – primarily because of the tiny display. I just couldn't get it to sit correctly all of the time to be usable. I saw the part of the video where you kind of show the view – but you do all of your work on the computer as opposed to trying to type using the glasses – which is why the Vufine didn't work for me (for those wondering: https://www.amazon.com/Vufine-006011-Wearable-Display/dp/B01MZ89QXF/). What is it like for you with this setup to use the Pi with the view in the glasses? My biggest problem was resolution. I could not get the Vufine to display large enough to do what I wanted. I have a much different setup now that uses a 3.5″ LED reflected and mirrored off of a display for the AR/MR portion, plus three different viewers – two are way too bulky and you have to hold them (the Vufine AR Kit, HoloKit, and the third is the AR Box AR Glasses). In any case, I'm curious about the display on the glasses you've chosen and how well you can see the screen at the Vufine resolution. I do have blogs about my work as it stands today. I was also inspired by your tOLED use – I bought one and am going to wire it to the LattePanda I have (because it has a built-in Arduino). If your resolution is good enough, or you have a workaround that makes it better, I'd probably get another one for the AR display since, like I said, the other three are way too bulky. Thanks"
I'm really hoping he has a mod that makes the Vufine work well on the glasses. I had a lot of issues – you can read about those here https://winkdoubleguns.com/2019/02/04/serinda-why-the-framework-choices/ – and here's the relevant quote: "I tested the Vufine eyepiece. Fantastic piece of hardware. I could not use it. My son loved it. The eyepiece blocked too much of my vision to do what I want. Again, not knocking the hardware – it was great. I couldn't use it. It's built for the right eye only. I have to use my left eye. I flipped the computer image and was able to kind of use it then. But it isn't what will work for this project. I really need that AR projection."
I had more issues than I apparently listed there. In addition to what I mentioned, I had to move it very close to my eye, and even with the helmet rig I couldn't keep it stable enough to use. Between the computer display and what I eventually did with OpenCV4NodeJS and the web page, the 720p HDMI input and the display resolution on the Pi – even after making everything bigger so I could see it all – didn't work the way I wanted. It just wasn't good enough. Since then I've found that the AR Box, Vufine AR Kit, and HoloKit are great for testing, but they aren't good enough for a full HUD or HMD. The tOLED I received the other day isn't full picture quality, so it's just going to be an intermediate step (which I mentioned in my last blog).
In any case, I'm curious to see where this goes for a HUD/HMD. I can do everything he's doing, I just do it locally instead of using Dropbox and Google – that's not a big deal. He could use Python to pull the image, save it locally, and then use the Cloud Vision API to identify objects – though the pricing can be a downside, which is why I was going with options I could install directly on the Pi (and now the LattePanda). I wanted as much of my AI/IPA as possible to run on the computer itself rather than rely on network transmissions (Alexa, Siri, Google, etc.).
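The pull-the-image-locally-then-call-Cloud-Vision idea could be sketched roughly like this. This is a minimal sketch, not his code: it saves a captured frame to disk and builds the JSON body that the Cloud Vision REST `images:annotate` endpoint expects for label detection. The function names and file path are my own placeholders, and you'd still need an API key and an HTTP POST to actually send the request.

```python
import base64

def save_frame(image_bytes, path="frame.jpg"):
    """Save a captured frame locally instead of round-tripping through Dropbox/Drive."""
    with open(path, "wb") as f:
        f.write(image_bytes)
    return path

def build_vision_request(image_bytes, max_results=10):
    """Build the request body for Cloud Vision's images:annotate endpoint.

    The image is sent inline as base64; LABEL_DETECTION asks the API to
    identify objects in the frame.
    """
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    }
```

Posting that body (with an API key) to `https://vision.googleapis.com/v1/images:annotate` returns label annotations – which is where the per-request pricing he'd want to watch comes in.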
It's a very cool concept, and I'm curious to see where it goes from here and whether it can help me fix a couple of issues without the ginormous headgear I'm currently building.