
Another founder here, with some more comments on the tech side of things:

1. The software is relatively usable, and you can try it out right now on https://github.com/SimulaVR/Simula

2. The hardware is still being worked on, and the website is mostly a placeholder listing expected specs in that regard:

2.a. The compute unit is tested and works, but requires a custom carrier board to fit the form factor. This is a blocker for the final product, but relatively low priority for the prototype.

2.b. Lens system design is scheduled to be complete in early November, with first prototypes available in early December. We're planning to use Valve Index lenses as a stopgap right now for prototyping etc.

2.c. We're currently solving a few challenges in driving the displays. We're pushing the boundaries of the available technology, and at our volumes getting support from manufacturers is like pulling teeth. BOE supplies the 2880x2880 panels, and there isn't even enough documentation to figure out how to drive the (non-trivial, local-dimming-based) backlight.

2.d. We're also experimenting with different approaches to tracking as our original plan (RealSense) became end-of-life recently. I'm interested in an mmwave based solution, but we might just use RGB cameras instead.

2.e. The mechanical design for the front part is reasonably advanced, but we're still working on the back part.

There's a lot more going on right now that's probably not coming to mind immediately, but that should provide a good overview.



What's the best off the shelf inside-out tracking system you can get now? Does anything compete with Quest yet?


Nothing that's fully satisfactory; everything falls short in one way or another. Probably Luxonis DepthAI?

The main problem with off-the-shelf solutions is that they add another set of cameras, and afaik nothing exists that allows custom cameras.

We're gonna need an FPGA anyway due to the large amount of IO (2 cameras for AR, 2 for eye tracking, IMU, whatever other sensors we need, plus potentially mmwave radar if we decide to go that way) so it's tempting to put the processing on the FPGA as well.


Interesting - I guess I assumed the hurdle is both hardware and software. Oculus's hand tracking was a huge lift. Is there any commercially available software stack being worked on that is at least hardware generic? Or is everyone forced to build from scratch?


There are a lot of research papers that I found, but nothing hardware-generic unfortunately.

Hand tracking especially is a difficult beast. We would like to just use the new Ultraleap module for that, but they don't support Linux yet.

Eye tracking is relatively simple because it's a closed/controlled environment. Just some IR LEDs, an IR camera, and some edge detection and math.
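To make the "some edge detection and math" part concrete, here's a minimal sketch of the first step: finding the pupil center in an IR eye image. This is an illustration, not Simula's actual pipeline; it assumes the pupil shows up as the darkest blob under IR illumination, and real trackers would refine the estimate with ellipse fitting and glint (corneal reflection) detection.

```python
import numpy as np

def pupil_center(ir_frame: np.ndarray, threshold: int = 40):
    """Estimate the pupil center in a grayscale IR eye image.

    Under IR illumination the pupil is the darkest region, so a
    fixed threshold plus a centroid gives a usable first-pass
    estimate of its center.
    """
    mask = ir_frame < threshold          # dark pixels = pupil candidates
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                      # no pupil found in this frame
    return float(xs.mean()), float(ys.mean())

# Synthetic frame: bright background with a dark "pupil" disc at (80, 60).
frame = np.full((120, 160), 200, dtype=np.uint8)
yy, xx = np.mgrid[:120, :160]
frame[(yy - 60) ** 2 + (xx - 80) ** 2 < 15 ** 2] = 10

print(pupil_center(frame))  # (80.0, 60.0)
```

The controlled environment is what makes this tractable: known LED positions, fixed camera, and no ambient-light surprises.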

SLAM (positional tracking) has a lot of different approaches. There's open source software, but it's generally written to run on a normal computer, and that's not particularly efficient (especially with our GPU already loaded). Some research papers use an FPGA, but the code is rarely available, so the papers are just a starting point.

You could probably crib the software from DepthAI or similar? We could implement the AI coprocessor they're using and adapt the code. I haven't looked closely enough yet to see whether that's a good use of resources.


Cool, that's helpful, thanks!


I recommend QP if you are going to do FPGA processing using a softcore or hardcore processor. It's an event-based state machine framework that handles IO really well. A hardcore processor would be more performant and use fewer LUTs, but a softcore will give you more flexibility as far as sourcing FPGAs.
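For anyone unfamiliar with the pattern QP implements, here's a minimal sketch of an event-driven state machine with run-to-completion dispatch. This is not QP's actual API (QP is a C/C++ framework with active objects and hierarchical states); the state names below are hypothetical, purely for illustration.

```python
from collections import deque

class EventStateMachine:
    """Minimal event-driven state machine in the spirit of QP:
    each state is a handler that receives an event and returns the
    next state (or None to stay); posted events are queued and then
    dispatched one at a time, run-to-completion."""

    def __init__(self, initial):
        self.state = initial
        self.queue = deque()

    def post(self, event):
        self.queue.append(event)         # events may arrive from ISRs/IO

    def dispatch_all(self):
        while self.queue:
            event = self.queue.popleft()
            self.state = self.state(self, event) or self.state

# Hypothetical camera-capture states for illustration.
def idle(sm, event):
    if event == "FRAME_READY":
        return capturing

def capturing(sm, event):
    if event == "FRAME_DONE":
        return idle

sm = EventStateMachine(idle)
sm.post("FRAME_READY")
sm.post("FRAME_DONE")
sm.dispatch_all()
print(sm.state is idle)  # True
```

The run-to-completion model is why this handles IO well: each event is processed atomically, so handlers never see half-updated state even with many asynchronous sources (cameras, IMU, radar) posting events.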


Appreciated. FPGAs are something I've been aware of for a long while now but haven't used before, so recs are always good.


What are the potential advantages of an mmwave tracking system? The only previous commercial application I can think of was the Pixel 4, which was very range- and accuracy-limited and power hungry.


You get position/velocity/angle data directly, and it's less power hungry than running high-res cameras specifically. Also, some research papers show better tracking accuracy with mmwave+IMU than with RGB+IMU.

So less processing + potentially less power + better performance, in theory.
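To illustrate the "directly" part: a camera pipeline has to infer depth and motion across frames, whereas a radar detection already carries range, angle, and radial (Doppler) velocity, so turning it into a position and velocity estimate is just trigonometry. A hedged sketch, with a made-up detection, not any particular radar's output format:

```python
import math

def mmwave_to_state(range_m, azimuth_rad, elevation_rad, doppler_mps):
    """Convert one mmwave radar detection (range, azimuth, elevation,
    Doppler) into a Cartesian position and a radial velocity.

    No frame-to-frame matching or depth inference is needed; the
    chirp measurements provide these quantities per detection.
    """
    x = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z), doppler_mps

# Target 2 m straight ahead, closing at 0.5 m/s.
pos, v = mmwave_to_state(2.0, 0.0, 0.0, -0.5)
print(pos, v)  # (0.0, 2.0, 0.0) -0.5
```

In a real tracker these per-detection states would feed a filter (e.g. fused with the IMU), but the point stands: the expensive perception step the camera pipeline needs is simply absent.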


Does the device have a CPU, or does it need to connect to a PC?

What is the predicted price point?

How do you fit prescription lenses?


A compute module based on a NUC will be included, pluggable into the back of the headset.

The predicted price point is about $2-2.5k.

For prescription lenses we'll figure something out. We're trying to keep enough eye relief to support glasses, and we'll have at least provisions for mounting prescription lenses.

If we can, we'll be able to supply prescription lenses with the headset (for a surcharge) or collaborate with an existing vendor to provide lenses.


2.5k price point, ouch.

On a side note... are you on Kickstarter?


We will be, once we've sorted out all the blocker issues and our prototype is complete.



