I had pretty much the exact same experience as the peer comment in this thread. They were videos I was interested in, or at least got clickbaited into, but the fact that they were AI-generated destroyed their value. At scale, I think the most likely outcome of this is not that people embrace AI for this sort of stuff, but rather that it also destroys the value of genuine content by making people doubt the authenticity of anything that seems improbable.
The website is a mess (broken links, broken UI elements, no about section).
There is no history in the Web Archive. There is no information outside of this website, and their "customers" are crypto exchanges and some Japanese payment provider.
This seems a bit fishy to me - or am I too paranoid?
− Accenture plc
− Amazon Web Services EMEA SARL
− Bloomberg L.P.
− Capgemini SE
− Colt Technology Services
− Deutsche Telekom AG
− Equinix (EMEA) B.V.
− Fidelity National Information Services, Inc.
− Google Cloud EMEA Limited
− International Business Machines Corporation
− InterXion HeadQuarters B.V.
− Kyndryl Inc.
− LSEG Data and Risk Limited
− Microsoft Ireland Operations Limited
− NTT DATA Inc.
− Oracle Nederland B.V.
− Orange SA
− SAP SE
− Tata Consultancy Services Limited
The source for this is a Reddit post from 7 years ago?
Reminder that rasengan, before running vp.net, also owned Private Internet Access (PIA) [0], which was also allegedly involved in spreading rumors about ProtonVPN years ago [1].
If I understand correctly, it only shows stats for the current front page (i.e. the top 30). So it's the "Most Upvoted Story (on the front page right now)".
> So, if traditional game worlds are paintings, neural worlds are photographs. Information flows from sensor to screen without passing through human hands.
I don't get this analogy at all. Instead of passing through a human, the information flows through a neural network, which alters it.
> Every lifelike detail in the final world is only there because my phone recorded it.
I might be wrong here, but I don't think this is true. It might also be there because the network inferred it based on previous data.
Imo this just takes the human out of an artistic process (creating video game worlds), and I'm not sure that is worth archiving.
>I don't get this analogy at all. Instead of passing through a human, the information flows through a neural network, which alters it.
These days most photos are also stored using lossy compression which alters the information.
You can think of this as a form of highly lossy compression of an image of this forest in time and space.
Most lossy compression is 'subtractive', in that detail is subtracted from the image in order to compress it, so the kinds of alterations are limited. However, there have been non-subtractive forms of compression (e.g. fractal compression) that have been criticised for making up details, which is certainly something a neural network will do. However, if the network is only trained on this forest data, rather than being trained on other data and then fine-tuned, then in some sense it only represents this forest, rather than giving an 'informed impression' the way a human artist would.
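To make the "alters the information" point concrete, here is a minimal Python sketch (using Pillow and NumPy; purely illustrative, not from the article) showing that even a single JPEG round-trip changes the pixel data it stores:

    # Illustrative only: a JPEG round-trip is lossy, so decoded pixels
    # differ from the originals even at a fairly high quality setting.
    import io
    import numpy as np
    from PIL import Image

    # Synthetic "photo": random noise compresses poorly, so the loss is obvious.
    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

    buf = io.BytesIO()
    Image.fromarray(original).save(buf, format="JPEG", quality=75)  # subtractive, lossy
    decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())))

    diff = np.abs(original.astype(int) - decoded.astype(int))
    print("mean per-pixel error:", diff.mean())            # > 0: information was altered
    print("bytes:", original.nbytes, "->", len(buf.getvalue()))

The difference with a neural world model is only in degree and kind: JPEG throws detail away, while a generative network may also invent detail that was never in the source footage.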
>These days most photos are also stored using lossy compression which alters the information.
I noticed this in some photos I see online starting maybe 5-10 years ago.
I'd click through to a high-res version of the photo, and instead of sensor noise or JPEG artefacts, I'd see these bizarre snakelike formations, as though the thing had been put through style transfer.
So there will be laws, because not everyone can be trusted to host and use this "dangerous" new tech.
And then you have a few "trusted" big tech firms forming an AI oligopoly, with all of the drawbacks that entails.