
That’s not really true, though. Unless you’re talking about a trivial distortion, like inverting the colors, there’s always some loss of information. In the case of blurry text, we’re still making an assumption that the paper holds some form of human writing, and not literally a blurry pattern. Maybe there’s external context that confirms this, but solely based on the image itself, you can’t know it. It’s basically a hash function: there are multiple possible “source” images of what’s on the paper that would end up looking exactly the same in the blurry, low-res, or otherwise degraded output video. Human-readable text is likely the most plausible, but it’s not 100%.

You can’t reverse an operation that loses information with absolute certainty unless you are using other factors to constrain the possible inputs.
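
A toy illustration of that point, with made-up 4x4 arrays standing in for images (not any real imaging pipeline):

  import numpy as np

  def downsample_2x2(img):
      # Average each non-overlapping 2x2 block: a lossy, many-to-one
      # operation, much like a blur.
      return img.reshape(2, 2, 2, 2).mean(axis=(1, 3))

  a = np.array([[0, 8, 0, 8],
                [8, 0, 8, 0],
                [0, 8, 0, 8],
                [8, 0, 8, 0]], dtype=float)   # a checkerboard

  b = np.full((4, 4), 4.0)                    # a flat gray patch

  # Both collapse to the same 2x2 output, so nothing in the output alone
  # can tell you which input it came from.
  print(np.array_equal(downsample_2x2(a), downsample_2x2(b)))  # True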



> You can’t reverse an operation that loses information with absolute certainty unless you are using other factors to constrain the possible inputs.

Ok, but there are other factors you can use to constrain the possible inputs.

For example with license plates. You know the possible letters and how they appear when "blurred" so you can zoom-enhance them.
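
A rough sketch of what I mean, with toy 1-D "glyphs" standing in for plate characters and a made-up blur (nothing like a real ALPR system, just the idea):

  import numpy as np

  GLYPHS = {  # crude stand-ins for the known set of plate characters
      "A": np.array([0, 1, 0, 1, 1, 1, 1, 0, 1], dtype=float),
      "B": np.array([1, 1, 0, 1, 1, 1, 1, 1, 0], dtype=float),
      "7": np.array([1, 1, 1, 0, 0, 1, 0, 1, 0], dtype=float),
  }

  def blur(x):
      # The known degradation: a simple 3-tap moving average.
      return np.convolve(x, np.ones(3) / 3, mode="same")

  def best_match(observed):
      # Instead of inverting the blur, compare the observation against
      # blurred versions of every allowed character and keep the closest.
      return min(GLYPHS, key=lambda s: np.sum((blur(GLYPHS[s]) - observed) ** 2))

  noisy = blur(GLYPHS["7"]) + np.random.default_rng(0).normal(0, 0.05, 9)
  print(best_match(noisy))  # "7" - the constraint set does the "enhancing"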


> For example with license plates. You know the possible letters and how they appear when "blurred" so you can zoom-enhance them.

And even STILL, you can't be SURE.

Let's imagine some license-plate system optimized to give the biggest Hamming distance for visual recognition (for example, no O and 0, only one of them; no I and 1, only one of them; and so on) to make it as unambiguous as possible.

Now you take some blurry picture of a license plate and ask the AI to figure out what it says. One of the symbols is beyond the threshold of what can be determined, and the AI applies whatever it has learned to conclude (correctly, per the rules) which symbol it must be. Thing is, the license plate was a fake and the unrecoverable symbol didn't conform to the rules: there was actually a 1 printed there, but the AI reports an 'I', since that's the only allowed symbol that fits. It just made up something plausible.
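
Take the same kind of matcher as in the sketch above and feed it a glyph that isn't in the allowed set (again toy 1-D glyphs, made up for illustration):

  import numpy as np

  ALLOWED = {  # the rules say only these can appear; "1" is deliberately excluded
      "I": np.array([0, 1, 0, 0, 1, 0, 0, 1, 0], dtype=float),
      "O": np.array([1, 1, 1, 1, 0, 1, 1, 1, 1], dtype=float),
  }
  FAKE_1 = np.array([0, 1, 0, 0, 1, 0, 1, 1, 1], dtype=float)  # what the fake plate actually shows

  def blur(x):
      return np.convolve(x, np.ones(3) / 3, mode="same")

  observed = blur(FAKE_1)
  guess = min(ALLOWED, key=lambda s: np.sum((blur(ALLOWED[s]) - observed) ** 2))
  print(guess)  # "I" - perfectly plausible under the assumed rules, and wrong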

You cannot extract what's not there. You can guess, you can come up with things that _COULD_ be there, but it makes no difference: it's not there. It's the same with colorized vintage videos. We can argue that it wouldn't be wrong to assume this jacket was brown, since we have lots of data on that model, but we _CAN_NOT_ know whether that particular jacket was indeed brown; it might have been any other color that made the same impression on the monochrome film. The information is _GONE_.
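
The monochrome case in one toy calculation (standard Rec.601-style luma weights; the specific colors are made up):

  def to_gray(r, g, b):
      # The usual luma approximation for converting RGB to grayscale.
      return round(0.299 * r + 0.587 * g + 0.114 * b)

  brown = (139, 69, 19)   # a plausible jacket color
  green = (0, 143, 0)     # a very different color with the same luma
  print(to_gray(*brown), to_gray(*green))  # 84 84 - indistinguishable on film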


> And even STILL, you can't be SURE.

That's why I said "with reasonable accuracy and consistency". Humans can't be SURE either. Nothing is ever SURE if we want to stretch it ad absurdum.

My entire point is that computers can be better than people at a given visual recognition task. Therefore we might discover that some information is present in the data even though we previously thought that information was not recorded.

That's literally the entire argument. I'm not sure what you are opposing.



