
I work at Axis, which makes surveillance cameras.[0] This comment is my own, and is not on behalf of the company. I'm using a throwaway account because I'd rather not be identified (and because surveillance is quite controversial here).

Axis has developed a way to cryptographically sign video, using TPMs (Trusted Platform Modules) built into the cameras and embedding the signature into the H.264 stream.[1] The video can be verified on playback using an open-source video player.[2]
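Conceptually, the scheme looks something like this toy sketch (names hypothetical; the real implementation signs with an asymmetric key that never leaves the TPM, while HMAC stands in here so the sketch runs with nothing but the standard library):

```python
import hashlib
import hmac

# Hypothetical stand-in for a per-device key held inside the camera's TPM.
# A real scheme uses an asymmetric key that never leaves the TPM; HMAC is
# used here only so the sketch runs with just the standard library.
DEVICE_KEY = b"secret-held-by-tpm"

def sign_frames(frames):
    """Hash a group of frames and sign the digest, roughly as a
    signed-video scheme might do before embedding the tag in the stream."""
    digest = hashlib.sha256(b"".join(frames)).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()

def verify_frames(frames, signature):
    digest = hashlib.sha256(b"".join(frames)).digest()
    expected = hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

frames = [b"frame-1", b"frame-2"]
sig = sign_frames(frames)
assert verify_frames(frames, sig)                         # untouched video verifies
assert not verify_frames([b"tampered", b"frame-2"], sig)  # any edit breaks it
```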

I hope this sort of video signing will be mainstream in all cameras in the future (e.g. cellphones), as it will pretty much solve the trust issues deep fakes are causing.

[0] https://www.axis.com/ [1] https://www.axis.com/newsroom/article/trust-signed-video [2] https://www.axis.com/en-gb/newsroom/press-release/axis-commu...



It shouldn't be too hard to film a deepfake movie off a screen or projection in a way that doesn't make it obvious it was filmed. That way, the cryptographic signature will even lend extra authenticity to the deepfake!


>> I hope this sort of video signing will be mainstream in all cameras in the future (e.g. cellphones), as it will pretty much solve the trust issues deep fakes are causing.

> It shouldn't be too hard to film a deepfake movie off a screen or projection in a way that doesn't make it obvious it was filmed. That way, the cryptographic signature will even lend extra authenticity to the deepfake!

Would you even have to go that far? Couldn't you just figure out how to embed a cryptographically valid signature in the right format, and call it good?

Say you wanted to take down a politician, so you deepfake a video of him slapping his wife on a streetcorner, and embed a signature indicating it's from some rando cell phone camera with serial XYZ. Claim you verified it, but the source wants to remain anonymous.

I don't think this idea addresses the problems caused by deepfakes, unless anonymous video somehow ceases to be a thing.

Similarly, it could have serious negative consequences, such as people being reluctant to share real video because they don't want to be identified and subject to reprisals (e.g. are in an authoritarian country and have video of human rights abuses).


The point of signing is to compute a checksum of the video's content, not just to add a signature next to the stream (which would be pretty useless).

The signature of the politician on a street corner would only be valid if it actually verifies the content of the video; and the only entity that can produce that signature is the holder of the private key.


> The point of signing is to compute a checksum of the video's content, not just to add a signature next to the stream (which would be pretty useless).

That's what I meant: sign the video content with some new key in a way that looks like it was done by a device.

> The signature of the politician on a street corner would only be valid if it actually verifies the content of the video; and the only entity that can produce that signature is the holder of the private key.

The hypothetical politician's encryption keys are irrelevant. The point is that even authentic video is going to be signed by some rando device with no connection to the people depicted, so a deepfake signed by some rando keys looks the same.

IMHO, cameras that embed a signature in the video stream solve some authenticity problems in some narrow contexts, but that's nowhere near a panacea that "will pretty much solve the trust issues deep fakes are causing."


It's not about the keys of the politician. It's about the keys of the manufacturer of the camera. The aim of the signature is to prove that a video did in fact come from such camera at such location, at such time.

Of course, if one has physical access to the camera/sensor it's possible to make a fake video with a valid signature. But it's a little more difficult than simply running a deepfake script on some machine in the cloud.


> It's not about the keys of the politician. It's about the keys of the manufacturer of the camera. The aim of the signature is to prove that a video did in fact come from such camera at such location, at such time.

If the goal is to allow a manufacturer to confirm video came from one of their cameras, I think it's somewhat more helpful than I was originally thinking, but it doesn't change my opinion that this technology would only "solve some authenticity problems in some narrow contexts." Namely, stuff like a burglar claiming in court that security camera video was forged to frame them. I don't think it addresses cases of embarrassing/incriminating video filmed by bystanders with rando cameras and other stuff like that.

> Of course, if one has physical access to the camera/sensor it's possible to make a fake video with a valid signature. But it's a little more difficult than simply running a deepfake script on some machine in the cloud.

If you're faking a video like I described, you certainly would have "physical access to the camera/sensor" that you claim made it. You're making a fake, which means you can concoct a story for the fake's creation involving things that are possible for you to acquire.


A screen with double the resolution and twice the framerate should be indistinguishable. Moreover, if you pop the case on the camera and replace the sensor with something fed by DisplayPort (you'd probably need an FPGA to convert DisplayPort to LVDS, SPI, I2C, or whatever those sensors use, at speed), that should work too.


This is still a lot harder than just using the deepfake application in the OP. But I’ll admit the arms race might not be over yet.


You can raise the bar a bit. No one checks to see if a $5 bill is fake. There is no real upper limit on what the payout for a deepfake could be. TPM isn't going to save facial-recognition ID like the IRS's from being obsoleted by deepfakes. But for things that go to trial, a TPM dashcam with tamper-evident packaging (that can be inspected during trial) is probably good enough for small claims court. You could add GPS spoof detection, put as much as you can on the TPM chip (like the accelerometer), and sign all sorts of available data along with the video, but that would up the unit price a lot, and for the kind of fraudulent payouts you'd be trying to stop, you wouldn't make it enough harder to keep it from being cost-effective for the fakers.


Not if the camera includes metadata like focus, shutter speed, accelerometer readings, GPS, etc. I don't really know, but I imagine the hardware security required wouldn't be too far from what's common now. Cameras are already unrepairable, so I suppose the arguments would have to come more from the privacy and who-controls-the-chipmakers perspectives.
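To make that concrete, here's a toy sketch of binding such metadata to the image data so neither can be swapped out independently (field names hypothetical; HMAC stands in for the TPM's asymmetric key so this runs with the standard library alone):

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"hypothetical-tpm-key"  # stand-in; a real key never leaves the TPM

def sign_clip(video, metadata):
    """Sign the video hash together with the sensor metadata, so neither
    part can be swapped out independently (sketch, not a real protocol)."""
    payload = json.dumps(
        {"video_sha256": hashlib.sha256(video).hexdigest(), **metadata},
        sort_keys=True,  # canonical ordering so verification is deterministic
    ).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()

clip = b"...h264 bytes..."
meta = {"gps": [57.7, 11.9], "accel": [0.0, 0.0, 9.8], "shutter_us": 4000}
sig = sign_clip(clip, meta)
# Tampering with any field (say, the GPS fix) invalidates the signature:
assert sig != sign_clip(clip, {**meta, "gps": [0.0, 0.0]})
assert sig == sign_clip(clip, dict(meta))  # same inputs, same signature
```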


GPS spoofers are available legally; you just replace the GPS antenna with the spoofer, so no FCC violation. You'd have to break the law if you don't want to open the case to get to the antenna. I don't have any answers for the accelerometer or focus other than replacing those sensors too, and if you put the accelerometer on the same TPM-enabled SoC, it would make moving shots, like from a dash cam, hard to fake.


It's not like TPMs are infallible. And even if they are thought secure today, older encryption becomes trivial to crack with time. But like you said, it's about raising the bar. You can do a lot to mitigate the threat of deepfakes, to the point where they are eventually pushed back into the realm of those who really know what they are doing. That's not ideal, but well-funded and talented groups have been able to falsify evidence to discredit people since the beginning of time. So the nature of the problem doesn't change; they just have another tool.


You could include LIDAR with the camera and use that data to verify that there is depth to the image.
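A toy version of that check (threshold and numbers made up): a re-filmed screen is essentially a plane, so depth readings across the frame should cluster tightly, while a real scene varies.

```python
import statistics

def looks_flat(depth_samples, tolerance_m=0.05):
    """Hypothetical depth check: LIDAR returns from a flat screen vary far
    less across the frame than returns from a real three-dimensional scene."""
    return statistics.pstdev(depth_samples) < tolerance_m

assert looks_flat([1.50, 1.51, 1.49, 1.50])   # screen at roughly 1.5 m
assert not looks_flat([0.8, 3.2, 12.0, 1.1])  # depths from a real scene
```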


Isn't it fun how the analog hole works both ways? :)


That will just move the hack one level further down and will create even more confusion because then you'll have a 'properly signed video stream' as a kind of certificate that the video wasn't manipulated. But you don't really know that, because the sensor input itself could be computer generated and I can think off the bat of at least two ways in which I could do just that.


"That will just make it more difficult to make fakes."

Yes that's kind of the point. Plus I'm sure they could put the whole camera in a tamper resistant case. They could make it very difficult to access the sensor.

Including focus data should make "record a screen" a bit harder too. I guess recording a projection would be pretty hard to detect, but how likely is it that people would go to those lengths, vs using a simple deep fake tool?


> "That will just make it more difficult to make fakes."

Please don't use quotes when you are not quoting.


Isn't the point to prove physical access to the camera? As in, this stream originated at this TPM, which was sold as part of this camera.

So, the best you get is that the stream shows it was produced with access to a particular camera. Then impersonating a YouTuber, say, requires access to their physical camera.


> Isn't the point to prove physical access to the camera?

Yes. But since you can't really prove who had and who did not have physical access to the camera (what are you going to use, video evidence ;) ) that carries little weight.

Let's say we're talking about a death penalty or a multi billion $ lawsuit, the stakes would be so high that to rely on something as trivial as a signature on a video stream would be way too risky because of the consequences.

Which expert would sign on the dotted line to the statement that 'the equipment has not been accessed by someone who wasn't authorized to do so'? And that's before we get into the question of whether this camera really is the only way that this particular video could be generated (can duplicates of the TPM be made by the manufacturer?), you'd have to do a lot of work to make guarantees that would stand the test of time.

All it takes is one 'DVD Jon' and you can kiss your carefully constructed scheme goodbye.

Finally, the whole chain of that video would have to preserve the signatures, which in a world that tries to balance privacy with other concerns may not be feasible and in fact could well have downsides all its own.

Personally I would much rather see the level of skepticism against all kinds of digital evidence go up than to see the trust go up due to supposedly magic signatures and other tricks to pretend that we have a perfect lock on this stuff. Every time we believe we do someone comes along that proves us wrong, usually after the damage is done. See also: fingerprints, face recognition, DNA evidence, lie detectors, bomb detectors and so on.


TPM is supposed to be a secure enclave that's resistant to electronic access, so non-duplicable (in theory). So as much as anything, you're putting a price on access to signing a file with the TPM. That means continuity of 'proof' -- perhaps we'll get to the point where famous people perpetually film themselves using a third-party verifiable module to sign the data, just so they can show "that wasn't me" (i.e. technological deniability).

I don't think you can show the reverse (it was you), nor can you make a 'proof' without an established chain (you're only proving that you verified something with the same equipment you previously used).

Reputation seems like it could become much more a part of social media, like, this video was signed by all these witnesses to endorse it as a true account.
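As a toy sketch of that endorsement idea (witness names and keys hypothetical; HMAC stands in for what would really be per-witness asymmetric keypairs, with only public keys published):

```python
import hashlib
import hmac

# Hypothetical witness registry; in practice each witness would hold an
# asymmetric keypair, and only public keys would be published.
WITNESS_KEYS = {"alice": b"key-1", "bob": b"key-2", "carol": b"key-3"}

def endorse(video, witness):
    """A witness countersigns the hash of the footage they saw happen."""
    digest = hashlib.sha256(video).digest()
    return hmac.new(WITNESS_KEYS[witness], digest, hashlib.sha256).digest()

def count_endorsements(video, endorsements):
    """Count how many known witnesses validly endorsed this exact video."""
    return sum(
        1 for name, sig in endorsements.items()
        if name in WITNESS_KEYS and hmac.compare_digest(endorse(video, name), sig)
    )

clip = b"event footage"
sigs = {"alice": endorse(clip, "alice"), "bob": endorse(clip, "bob")}
assert count_endorsements(clip, sigs) == 2
assert count_endorsements(b"doctored footage", sigs) == 0  # endorsements don't transfer
```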


> Which expert would sign on the dotted line to the statement that 'the equipment has not been accessed by someone who wasn't authorized to do so'?

You have a lot more trust in the ethics of expert witnesses than I do.


Well, expert witnesses that give mistaken testimony are a lot more common than expert witnesses that are going to go on the record for things that they know to be false.

Expert witnesses and liability are a very interesting area of the legal system. There are all kinds of variations on this theme ranging from blanket immunity to various forms of liability. Knowingly giving false testimony would likely lead to problems, especially if say a panel of experts would in turn testify that said expert should have known and could have known that what they testified was wrong.

I'm not aware of a case that went that far, but I would love to read more about such conflicts between expert witnesses on two sides of a lawsuit, usually the experts agree in broad terms on their testimony, and may differ in the details.

Maybe some who have served in such a capacity can chime in here?


Just as this post is about making a software bypass for a security check, there are hardware guys making bypasses for security checks.

Now you create a situation where hardware bypassed cameras have immense value to bad actors.


> it will pretty much solve the trust issues

I'm not so sure. How will you verify a signature if you see a video on TV or on social media? Do you believe these devices are 100% secure and the keys will never be extracted?


Nothing is 100% secure, but the point of using a TPM is to prevent extraction of the keys.


TPMs are great at preventing extraction of keys via the exploitation of software bugs. They are not so good at preventing extraction if you have physical access to the machine, and especially not if you don't need the machine to survive intact, and can afford to destroy as many machines as you need to get the keys out...


I agree that it's better than nothing. Sooner or later deepfakes will be pretty much perfect and detection will be more or less impossible.


It will be 6 months before hardware bypass devices show up on Alibaba.


I don't see how that would be useful here. I'm not giving out info about my webcam to anyone. There would be no way to verify the signature.


Manufacturer signature, not your signature.


That will help someone determine that a fake video didn't come from their own camera.

But most videos where we're worried about deep fakes are videos that come from other people's cameras, where we don't know which signature should be valid, nor whether it should have a signature.


I believe they're saying that the manufacturer will sign the video, not the filmer. Those signatures can then be validated by platforms that the video is uploaded to. The signature isn't supposed to say "this video came from person X"; it's supposed to say "this video in fact came from the real world".


But there are many manufacturers of cameras. How do I know that a particular clip should even be signed?

Signatures don’t change the provenance problem.
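To illustrate the gap: even a verifier holding a trust store of manufacturer keys (the way browsers ship root CA certs) can only say "valid" or "not verifiable" -- an unsigned clip proves nothing either way. A toy sketch, with HMAC secrets standing in for what would really be public keys, all names hypothetical:

```python
import hashlib
import hmac

# Hypothetical trust store of manufacturer root keys.
TRUSTED_MANUFACTURERS = {
    "axis": b"axis-root-key",
    "other-vendor": b"other-root-key",
}

def verify_provenance(video, manufacturer, signature):
    """Return True only if a trusted root vouches for this exact clip.
    A clip from an unknown vendor, or with no signature at all, simply
    fails -- but absence of a signature doesn't prove the clip is fake."""
    root = TRUSTED_MANUFACTURERS.get(manufacturer)
    if root is None:
        return False
    digest = hashlib.sha256(video).digest()
    expected = hmac.new(root, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

clip = b"raw clip bytes"
good_sig = hmac.new(b"axis-root-key", hashlib.sha256(clip).digest(),
                    hashlib.sha256).digest()
assert verify_provenance(clip, "axis", good_sig)
assert not verify_provenance(clip, "unknown-vendor", good_sig)
```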


It seems like the goal would be to make this ubiquitous enough that eventually you'd expect all clips to be signed, although this would of course take quite a long time to achieve, and in the meantime would undermine the authenticity guarantees a system like this could provide.

It seems like official channels like CSPAN, white house press briefings, and other world governments could get immediate benefit by applying these methods to all of their official communiques.


>it will pretty much solve the trust issues deep fakes are causing.

It's a nice piece of tech, I can see it being used in court for example, to strengthen the claim that a video is not a deepfake.

However, that's not "the" problem with deepfakes. Propaganda of all sorts has demonstrated that "Falsehood flies, and the Truth comes limping after it". As in, with the proper deepfakes, you can do massive damage via social media, for example. People re-share without validation all the time, and the existence of deepfakes adds fuel to this fire. And I think that we can't do anything about either.


That's really cool! I've been waiting for tech like this to finally come to light. Honestly expected either Google or Apple to lead the way on it. Have you all worked with the content authenticity initiative at all? It seems like they're looking at ways to develop standards around tech like this to ensure interoperability in the future.

https://contentauthenticity.org/


The surveillance state's wet dream, where you can just look up who took that ‘unfair’ video of your agent breaking the rules.


You're being uncharitable. I read it as a way for a person to prove they took the video, not as a database of person<->signing key. In other words, the government would only be able to see that video 123 was signed by signature 456. It would be up to the poster to prove they own the camera that produced signature 456.


So as long as you never post any video you made with that camera on any account that is not anonymous, you’re safe. In other words, completely impractical.

This is exactly why all these identifiers and all this tracking are harmful: you can never be anonymous, there’s always someone watching who can couple everything you do to your identity. And it happens with you knowing as little about it as possible.


Hey, off topic, but I love your cameras and the attention to detail you put into them, especially the details that help the tech installing them.



