I'm not going to mince words: NeRF is rubbish compared to traditional photogrammetry.
> NeRF is the SotA method for these novel view synthesis benchmarks
That's an interesting statement; uh, I don't know what 'novel view synthesis benchmarks' you're referring to, but the parent post didn't mention them, and the poster, like me, probably doesn't care what they are.
If the state of the art is an 800x800 pixel image... uh, well, bluntly, that's really very unimpressive.
> Compared to that, I find "AI rendering" which is blurry and much slower (15fps @ 800px) somewhat underwhelming.
^ This.
It's very much a 'watch this space' technology, because it does have some really interesting and promising features, and it's changing quickly, but I think finding it 'somewhat underwhelming' is a pretty fair response.
No, I'm sorry but this blows photogrammetry out of the water. The original NeRF paper photorealistically handled complex occlusions (like foliage) and reflective and refractive caustics. No other technique comes close. That is the entire reason it's interesting, and believe it or not there are practical applications for it right now. Forget gaming - this lets you capture lightfields for VR with a cell phone in 5 minutes. And if the NeRFs themselves can be rendered in real time, it solves the problem of compressed light field scene representation. Buckle up for photorealistic VR.
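For anyone wondering why a trained NeRF doubles as a compressed light-field representation: the scene is just a network mapping a 5D coordinate (position + view direction) to density and color, and images are produced by alpha-compositing samples along each camera ray. Here's a minimal numpy sketch of that compositing step (function and variable names are mine, not from the paper's code; the real implementation uses a learned MLP and stratified/hierarchical sampling):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite N samples along one ray, NeRF-style.

    sigmas: (N,) volume densities at each sample point
    colors: (N, 3) RGB emitted at each sample point
    deltas: (N,) distances between adjacent samples
    """
    # Opacity contributed by each segment: 1 - exp(-sigma * delta)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    # Final pixel color is the transmittance-weighted sum of sample colors
    return (weights[:, None] * colors).sum(axis=0)
```

The per-sample weights fall off smoothly with accumulated density, which is exactly why semi-transparent stuff like foliage comes out right instead of being forced onto a hard surface the way photogrammetry meshes do.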
I just gave you one. You can now cheaply and rapidly capture dense lightfields of highly specular objects for VR display. Get yourself a camera array (100 cameras is not infeasible!) and you can capture them instantaneously. That's totally game changing compared to the current state of the art of scanning camera gantries (slow) or photogrammetry (fails on complex or highly specular geometry).
If you're asking me for an example of it being publicly used in production, well I think you're asking a lot considering the technique is only a few months old.
> If you're asking me for an example of it being publicly used in production, well I think you're asking a lot considering the technique is only a few months old
That is what I explicitly asked for.
Your failure to provide an example isn't because it's new; it's because it's not actually useful in practice at the moment.
NeRF has been around since March 2020 (https://arxiv.org/abs/2003.08934); you are simply wrong. Traditional techniques are better right now, have better implementations, and are widely used.
NeRF is a promising technology that is categorically worse in its current implementation and maturity.