SERINDA V2 Progress 23Dec20

Osiyo. Dohiju? Hey, welcome back.

I was sick last week and forgot to publish this update, so I’m releasing it now so that I can put together my next one.

I have managed to update the code so that you can simply drop a plugin, a filter, and a couple of other items into a folder and they’ll be included automatically. The upside is that when I write a new filter for the cameras and enable it, it propagates on its own. The downside is that the code is slower because it goes through two different loops; I need to fix that. Despite that, the front-end display is just as fast, as expected.
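As a rough sketch, that folder-based discovery can work like this in Python (the directory layout and module shape here are illustrative, not SERINDA’s exact structure):

```python
import importlib.util
from pathlib import Path

def discover_plugins(folder):
    """Load every .py file in `folder` as a plugin module, keyed by file name."""
    plugins = {}
    for path in sorted(Path(folder).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)  # run the plugin's top-level code
        plugins[path.stem] = module
    return plugins
```

Dropping a new filter file into the folder then makes it show up on the next scan, with no registration code needed.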

In addition to the filters and plugins, I added dynamic pages. When a plugin is added, its JavaScript, menu, and HTML files are automatically populated on the page for use. This is still lacking front-end callbacks, so you cannot manipulate the front end with a command because there’s nothing coming back to tell it that it should… except for translations. If you ask for a translation to Spanish, the computer voice will be Spanish and sound correct.
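Conceptually, the page assembly looks something like this (the file names plugin.js, plugin.html, and menu.txt are placeholders for illustration, not the real SERINDA layout):

```python
from pathlib import Path

def build_page_assets(plugin_dir):
    """Collect the js/html/menu pieces each plugin folder contributes
    so the page loader can inject them automatically."""
    assets = {"scripts": [], "panels": [], "menu": []}
    for plugin in sorted(Path(plugin_dir).iterdir()):
        if not plugin.is_dir():
            continue
        if (plugin / "plugin.js").exists():
            assets["scripts"].append(
                f'<script src="/plugins/{plugin.name}/plugin.js"></script>')
        if (plugin / "plugin.html").exists():
            assets["panels"].append((plugin / "plugin.html").read_text())
        if (plugin / "menu.txt").exists():
            assets["menu"].append((plugin / "menu.txt").read_text().strip())
    return assets
```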

I don’t know if I covered this before, but each plugin and filter can have its own set of commands defined for SNIPs. Those commands are automatically merged into one commands.yml file, which is then turned into a JSON file for use with SNIPs. I also fixed the Java/Groovy startup for the JVM, so if I want to write Groovy or Java for tasks I’d be more comfortable doing in those languages, or even just to experiment, I can. I can’t imagine using this much. In fact, I’d like to turn some of the plugins and tasks into faster-running C/C++ code, but I don’t know yet what the impact of that would be. It is on my list, though.
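The merge step itself is simple. Here’s a sketch using plain dicts standing in for the parsed per-plugin commands.yml files (in the real pipeline they’d be parsed with something like PyYAML first, then the merged result dumped to JSON):

```python
import json

def merge_commands(per_plugin_commands):
    """Merge each plugin's command map into one flat dict,
    rejecting duplicate command names across plugins."""
    merged = {}
    for plugin, commands in per_plugin_commands.items():
        for name, intent in commands.items():
            if name in merged:
                raise ValueError(f"duplicate command '{name}' from {plugin}")
            merged[name] = intent
    return merged

def commands_to_json(merged):
    """Serialize the merged command map for SNIPs to consume."""
    return json.dumps(merged, indent=2)
```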

Here is my current todo list of things I’d like to have done before I release SERINDA V2. Not all of these may be done before the release, but I’d like to at least wire up the abilities. There are a few items on this list that, if they were done (like the installers, the NN factory, and documentation), I’d probably release V2 and then create the rest of this list in Bitbucket as features to add. I am still quite happy with the work. I’d really like to get it on the LattePanda to do some larger-scale testing with a battery pack instead of sitting at a desk, but I’m willing to wait until I have more features. If I get too antsy, I can always hook everything to the laptop and walk around with the three cameras.

Until next time. Dodadagohvi.

Current Todo List:

make a set of installers to cover Windows, Linux, and Mac
    look up the SERINDA OS recognizer for ideas

Move video 2, left 50 pixels
    save in database
    load gui
    hide video 2
    
requirement for pip3

computer create new class
    find code examples
    create stub

Dynamic camera detection
    database?
    iteration and initialization
   
Keras, tensorflow, pytorch, caffe?, theano?
    Neural network class to interface
    pretrained models wired, darknet, kaggle, yolo
    can use whichever one
    load and unload as needed
    PDF load vox
    Scroll PDF vox
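For the load-and-unload-as-needed part, a minimal registry sketch; the loader callables are placeholders for whatever would wrap Keras, PyTorch, Darknet/YOLO, etc. behind one interface:

```python
class ModelRegistry:
    """Lazily load recognition models and unload them to free memory."""

    def __init__(self):
        self._loaders = {}  # name -> zero-arg callable that builds the model
        self._loaded = {}   # name -> live model instance

    def register(self, name, loader):
        self._loaders[name] = loader

    def get(self, name):
        # Load on first use; reuse afterward.
        if name not in self._loaded:
            self._loaded[name] = self._loaders[name]()
        return self._loaded[name]

    def unload(self, name):
        # Drop the instance so memory can be reclaimed; next get() reloads.
        self._loaded.pop(name, None)
```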

OpenVX
OpenPose, skeleton pose, micro expressions, body language, human poses, eye tracking, human activity
object recognition
gesture recognition

PyTTsTest -> Python TTS instead of the browser voice

Detect barcode on camera 1

TesseractOCR Streaming

Kivy

take realtime photos and video to train NN recognition

vox select grid system to train with like captcha images select traffic signal

update wiki and readme

update code comments

language lessons

read pdf aloud

turn the HTML into React; that should make it easier to do a lot of the stuff I want to

fix commands for menu

speech commands instead of Python? later, when optimizing, we'll look more into this

SNIPs can also be used with TensorFlow.js

track object and select grid coord (until gestures work)

screenshot a grid number 

track an object by grid coordinates

select an object by grid coordinates and train a new recognition object. Basically: select grid 150 to 200 by 100 (x from 150 to 200, and 100 high), then you can rotate or walk around the object and it will continuously get input of the object it's tracking and generate the training files needed for that object to be added to the recognition framework. Real-time object recognition training
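The selection step reduces to a coordinate mapping from the spoken grid numbers to a pixel rectangle that a tracker can consume. A sketch, where cell_px is a hypothetical scale factor for the grid overlay (cell_px=1 assumes grid units are already pixels):

```python
def grid_to_roi(x0, x1, height, y0=0, cell_px=1):
    """Convert a vox grid selection ("grid 150 to 200 by 100") into a
    pixel rectangle (x, y, w, h), e.g. for an OpenCV tracker ROI."""
    x = x0 * cell_px
    y = y0 * cell_px
    w = (x1 - x0) * cell_px
    h = height * cell_px
    return (x, y, w, h)
```

With that rectangle in hand, each incoming frame’s crop of the tracked region becomes one more training sample for the new object.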

Oculus with 2 cameras where eyes are to introduce whole AR to work with and see what happens

test C/C++ code for some OpenCV items and see if that increases speed.

test the write to a file portion - the writing works, but the encoding doesn't seem to be complete for some reason.

speed up camera processing with the filters (look at the loops)
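To make the loop problem concrete, here are the two shapes side by side with toy stand-ins for frames and filters; SERINDA’s actual loop structure may differ, but this is the general idea:

```python
def apply_filters_two_pass(frames, filters):
    """Current shape of the problem: select enabled filters, then make a
    full extra pass over the frames (with an intermediate list) per filter."""
    enabled = [f for f in filters if f.get("enabled")]
    for flt in enabled:
        frames = [flt["fn"](frame) for frame in frames]
    return frames

def apply_filters_one_pass(frames, filters):
    """Single pass: resolve the enabled filter functions once, then
    touch each frame exactly one time with no intermediate lists."""
    fns = [f["fn"] for f in filters if f.get("enabled")]
    out = []
    for frame in frames:
        for fn in fns:
            frame = fn(frame)
        out.append(frame)
    return out
```

Both produce the same result; the one-pass version just avoids rebuilding the frame list once per filter.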
