Severely disabled children controlling the mouse
Posted: Wed Oct 17, 2007 3:36 am
I am co-inventor of EagleEyes, a Discover Award-winning system that lets severely disabled children move the cursor with their eyes (in one version) or with a feature of their face (or their toe) that we can lock onto with a camera (version 2). The application is currently PC-only, and in its present state it is not easily ported or improved upon, so I am considering redoing it in Revolution. The existing application captures live video from an attached webcam. First, a 25x25-pixel area around a prominent feature is "grabbed". The mouse then tracks that rect: in each frame, the grabbed rect is compared against all the other 25x25 rects in the image, and the mouse moves to the location of the match with the highest correlation. I'd like to do something similar in Revolution.
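For anyone trying to picture the matching step: it's essentially normalized cross-correlation template matching. This isn't Revolution code, just a minimal Python/NumPy sketch of the idea, assuming the frame and the grabbed patch arrive as 2-D grayscale arrays (the function name and brute-force search are mine, not from EagleEyes):

```python
import numpy as np

def track_patch(frame, template):
    """Find the window in `frame` (2-D grayscale array) that best matches
    `template` (e.g. the grabbed 25x25 patch), using normalized
    cross-correlation. Returns ((x, y) of top-left corner, score)."""
    th, tw = template.shape
    t = template - template.mean()          # zero-mean template
    tnorm = np.linalg.norm(t)
    H, W = frame.shape
    best_score, best_pos = -np.inf, (0, 0)
    # Brute-force scan of every candidate window, as the post describes.
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            w = frame[y:y + th, x:x + tw]
            wc = w - w.mean()               # zero-mean window
            denom = np.linalg.norm(wc) * tnorm
            if denom == 0:                  # flat region: no correlation defined
                continue
            score = float((wc * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score
```

In practice you would only search a small neighborhood around the previous match rather than the whole frame, which also makes the tracking far cheaper per frame.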
Now my question. Can anyone suggest ways in which to use Revolution to analyze each frame in a captured camera image by dividing it up into a bunch of 25x25 rects? I have investigated the revvideo transcript commands but have no experience with them.
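Just to make the "divide it up into 25x25 rects" part concrete, here is a small Python/NumPy sketch (again not Revolution, and the function name is mine) that splits a grayscale frame into a non-overlapping grid of 25x25 tiles, cropping any edge pixels that don't divide evenly:

```python
import numpy as np

def split_into_tiles(frame, size=25):
    """Split a 2-D grayscale frame into non-overlapping size x size tiles.
    Returns an array of shape (rows, cols, size, size)."""
    H, W = frame.shape
    H2, W2 = H - H % size, W - W % size     # crop ragged edges
    return (frame[:H2, :W2]
            .reshape(H2 // size, size, W2 // size, size)
            .swapaxes(1, 2))
```

Note the distinction: a fixed grid like this gives coarse, tile-sized positions, whereas the correlation match in the existing application compares the grabbed patch against every (overlapping) 25x25 window, which is what gives pixel-level cursor movement.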
Thanks in advance for any suggestions, help or alternate approaches.