The intermediary stages take more information than the final picture, so by moving part of the processing to the device you increase bandwidth requirements.


The thought crossed my mind, but here is an example that I'm not sure actually ends up being helpful, though it illustrates the point.

Imagine some kind of game with rain where water is collecting on your lenses. You could render and compress the frame, send it over, and distort it further on the device. You could even render the base image at 60fps and the rain at 120fps. Or the base image comes over at a lower resolution, is upscaled, and then the rain effect is rendered at full resolution. The same could apply to synthetic film grain: compressing a more pristine image and adding the film grain later should allow for significantly better compressibility. Decoupling the rendering into two layers could possibly allow for more resolution and framerate tricks like this, or even color space upscaling at display time. Would it be possible to send half the color depth in even frames and half in odd, and have a NN restore both frames to their original color depth?
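A minimal sketch of the film-grain half of that idea, assuming the decoded frame arrives as a numpy float array in [0, 1]; the add_film_grain helper and the usage below are hypothetical, just to show that the grain can be generated entirely on the client so the encoder only ever sees the clean, compressible image:

    import numpy as np

    def add_film_grain(decoded_frame, strength=0.04, seed=None):
        # Generate grain locally on the device; it never passes
        # through the encoder, so it costs no bandwidth.
        rng = np.random.default_rng(seed)
        grain = rng.normal(0.0, strength, size=decoded_frame.shape)
        return np.clip(decoded_frame + grain, 0.0, 1.0)

    # Hypothetical usage: a stand-in for a frame decoded from the stream.
    frame = np.zeros((1080, 1920, 3), dtype=np.float32)
    display_frame = add_film_grain(frame, strength=0.05)

The rain-on-lens distortion would work the same way, just with a warp applied per display refresh instead of additive noise.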



