I know we've made significant progress with on-device compute, but I would be surprised to find out they are stitching in real time in that amount of space... Even doing this in real time on a commodity laptop would be pretty impressive to me.


It’s plausible to do this on device. We were discussing this product concept 8 years ago at one of the video/AI companies I was at. Between the relatively low res of thermal cameras and the cues you can get from various sensors about orientation, it should be doable.


When the camera settings do not change, the positions are locked, etc., you can quickly do a "stitch" with a template that doesn't really take a lot of compute at all. It's by no means a clean stitch. Even in the demo video, you can see the edges from each camera. However, this is a product embracing the "know your audience" mantra, and it serves the purpose as designed.
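
A rough sketch of what that kind of fixed-template stitch can look like, assuming a rig where the relative camera poses never change: the warp maps are computed once offline from a calibrated homography, and each frame then costs only a per-pixel table lookup. The homography values, sensor sizes, and names below are illustrative, not taken from the product.

    import numpy as np
    import cv2

    # Assumed fixed rig: a homography H mapping camera B's pixels into
    # camera A's frame, calibrated once offline. Values are placeholders.
    H = np.array([[1.0, 0.0, 150.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

    CAM_W, CAM_H = 160, 120   # assumed low-res thermal sensor
    OUT_W, OUT_H = 320, 120   # output canvas (illustrative)

    # Precompute the "template" once: for every output pixel, where should
    # it sample from in camera B? No per-frame feature detection at all.
    H_inv = np.linalg.inv(H)
    xs, ys = np.meshgrid(np.arange(OUT_W, dtype=np.float32),
                         np.arange(OUT_H, dtype=np.float32))
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1) @ H_inv.T
    map_x = (pts[..., 0] / pts[..., 2]).astype(np.float32)
    map_y = (pts[..., 1] / pts[..., 2]).astype(np.float32)

    def stitch(frame_a, frame_b):
        """Per-frame cost: one remap plus a copy. No feature matching."""
        canvas = cv2.remap(frame_b, map_x, map_y, cv2.INTER_LINEAR)
        canvas[:, :CAM_W] = frame_a  # paste camera A directly; the hard
        return canvas                # seam is the edge visible in the demo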


Thermal cameras normally have a much lower resolution and only 4- or 8-bit color depth. More than enough for locating people in a scene, I assume.
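
Rough back-of-the-envelope numbers on why that matters (the 160x120 sensor at 9 Hz is an assumed figure, typical of low-cost thermal cores, not a spec of this product):

    # Assumed: 160x120 @ 8-bit, ~9 Hz (a common export-limited frame rate)
    thermal_rate = 160 * 120 * 1 * 9   # 172,800 bytes/s per camera
    hd_rate = 1920 * 1080 * 3 * 30     # 186,624,000 bytes/s, raw 1080p30 RGB
    print(hd_rate // thermal_rate)     # -> 1080: ~3 orders of magnitude less
                                       #    data to move and process on-device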


“The man from Box 500 [UK MI5] proudly announced that the thermal-imaging camera at the rear had detected a pattern of movement. Every morning, he reported, at exactly 08:00, the hostages gathered in the commercial section on the ground floor. An SAS officer tartly pointed out the central heating was timed to go on every morning at that hour: the “hostages” detected by MI5’s machine were radiators warming up.”

Excerpt from: The Siege by Ben Macintyre

Very good read.


YES! That's the neat trick. Clever of you to get that without prompting! So I figure you must have some imaging background. We're hundreds to thousands of times more efficient than traditional SIFT/SURF or other feature detection methods, which is the only way we can do this at the edge on the camera itself (without melting it or running out of battery in five minutes).
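
For contrast, a minimal sketch of the traditional per-frame pipeline being compared against here: generic OpenCV SIFT usage, not the vendor's method. Detection, descriptor matching, and a RANSAC homography fit are repeated on every single frame, which is exactly what a fixed template avoids.

    import numpy as np
    import cv2

    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    def align_classical(frame_a, frame_b):
        """Re-derives the alignment from scratch every frame."""
        kp_a, des_a = sift.detectAndCompute(frame_a, None)
        kp_b, des_b = sift.detectAndCompute(frame_b, None)
        matches = matcher.knnMatch(des_b, des_a, k=2)
        # Lowe's ratio test to keep only distinctive matches
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H  # then warp frame_b with H, as in the template version

Low-texture thermal frames only make this worse, since detectors find few stable keypoints to match in the first place.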



