There are a lot of research papers I found, but nothing hardware-agnostic, unfortunately.
Hand tracking in particular is a difficult beast; we'd like to just use the new Ultraleap module for it, but they don't support Linux yet.
Eye tracking is relatively simple because it's a closed/controlled environment. Just some IR LEDs, an IR camera, and some edge detection and math.
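To make the "edge detection and math" part concrete, here's a minimal sketch of the core step, assuming a thresholded IR frame where the pupil is the darkest region (pure NumPy with synthetic data; a real pipeline would add edge detection and ellipse fitting on top of this, and the threshold value is a placeholder you'd tune per hardware):

```python
import numpy as np

def pupil_center(frame, threshold=40):
    """Estimate the pupil center as the centroid of dark pixels.

    Under IR illumination the pupil shows up as the darkest blob, so
    thresholding plus a centroid is a crude but workable first pass.
    """
    mask = frame < threshold        # dark pixels = pupil candidates
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                 # no pupil found in this frame
    return float(xs.mean()), float(ys.mean())

# Synthetic 64x64 IR frame: bright background, dark "pupil" at (40, 24)
frame = np.full((64, 64), 200, dtype=np.uint8)
yy, xx = np.ogrid[:64, :64]
frame[(xx - 40) ** 2 + (yy - 24) ** 2 < 36] = 10

print(pupil_center(frame))  # ~ (40.0, 24.0)
```

Mapping that 2D pupil position to a gaze direction is then just a per-user calibration (look at a few known points, fit a polynomial), which is why the closed, controlled environment makes this so much easier than the other tracking problems.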
SLAM (positional tracking) has a lot of different approaches. There's open-source software, but it generally runs on a normal computer, which isn't particularly efficient (especially with our GPU already loaded). Some research papers use an FPGA, but the code is rarely available, so at best you have a starting point.
You could probably crib the software from DepthAI or similar? We could integrate the AI coprocessor they're using and adapt the code. I haven't looked closely enough yet to see whether that's a good use of resources.