Osiyo. Dohiju? ᎣᏏᏲ. ᏙᎯᏧ? Hey, welcome back.
There’s a write-up making the rounds about a Raspberry Pi HUD that someone is receiving praise for. I always read these with anticipation, hoping for some breakthrough that will advance my own work. I am almost always disappointed.
This work is a start, but it is far from noteworthy. You can read any of my past posts where I’ve gone through the boards, from microcontrollers to SBCs (Single Board Computers): the Arduino Uno, Arduino Mega, Arduino Yun, [the failed] LinkIt, Raspberry Pi 2, Raspberry Pi 3, Raspberry Pi 3 B+, and Latte Panda. I’ve also worked with multiple headsets (my own and others’), viewers, and software ranging from proprietary to open source to my own open-source code. I’ve interfaced with a LeapMotion, used two cameras to capture gestures, and more.
Everyone has to start somewhere, and most projects start this same way; mine did. My first HUD was an Arduino Mega driving a small OLED to display speed while riding in a car, with a cardboard tube and an angled mirror so I could see the road and the speed at the same time. In 2014, that was pretty cool. You can check that out here.
What I am saying is that I’d like to see more projects build on wearable HUD technologies that are more advanced than this. Triton used the same style of headset but printed his own housing to accommodate the eye “pieces”; then, instead of driving two screens, he split a single screen programmatically into left and right views. I’ve done this as well with my own headsets and software.
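To make the single-screen split concrete: the idea is to divide one framebuffer down the middle and render the same (or slightly offset) content into each half, one per eye. The sketch below is a minimal, hypothetical illustration of that viewport math, not Triton’s code or mine; the function name, the `ipd_offset_px` parameter, and the 800×480 display size (a common Pi touchscreen) are all assumptions for the example.

```python
def split_viewports(width, height, ipd_offset_px=0):
    """Divide one screen into (left, right) eye viewports.

    Returns two (x, y, w, h) rectangles. An optional horizontal
    offset nudges each eye's drawable region inward, a crude way
    to approximate interpupillary-distance alignment on a fixed
    single panel behind two lenses.
    """
    half = width // 2
    # Left eye: shifted inward from the left edge by the offset.
    left = (ipd_offset_px, 0, half - ipd_offset_px, height)
    # Right eye: starts at the midline, trimmed on its outer edge.
    right = (half, 0, half - ipd_offset_px, height)
    return left, right


# Example: an 800x480 panel split into two 400x480 eye views.
left, right = split_viewports(800, 480)
```

In practice you would render your HUD overlay once and blit it into both rectangles each frame (e.g. with `Surface.subsurface` in pygame, or two scissor regions in OpenGL), which is why one screen can stand in for two.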
Do your research. Find out what exists. If you’ve seen other projects, include attribution. If you build something better, then post it. Build on existing projects or contribute to them. Don’t just come out with something simple that everyone who has built a HUD with a Raspberry Pi has done and expect high praise. See what others have done in the last eight years and don’t reinvent the wheel.
Until next time. Dodadagohvi. ᏙᏓᏓᎪᎲᎢ.