Since you seem like you have practical knowledge here, I hope you don't mind me asking:
Would it change the equation, meaningfully, if you didn't offer any transcoding on the server and required users to run any transcoding they needed on their own hardware? I'm thinking of a wasm implementation of ffmpeg on the instance website, rather than requiring users to use a separate application, for instance.
Would you think a general user couldn't handle the workload (mobile processing, battery, etc), or would that be fairly reasonable for a modern device and only onerous in the high traffic server environment?
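To make that concrete, here's roughly what I'm picturing, sketched with the ffmpeg.wasm project (@ffmpeg/ffmpeg). Untested, and the API shown is the older pre-1.0 one (newer releases changed it), so treat it as illustrative only:

    // Rough sketch of in-browser rendition generation with ffmpeg.wasm.
    // Assumes the pre-1.0 @ffmpeg/ffmpeg API; newer releases differ.
    import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

    const ffmpeg = createFFmpeg({ log: true });

    async function makeRendition(file: File, height: number): Promise<Uint8Array> {
      if (!ffmpeg.isLoaded()) await ffmpeg.load();
      ffmpeg.FS('writeFile', 'input.mp4', await fetchFile(file));
      // Scale to the target height; -2 keeps aspect ratio with an even width.
      await ffmpeg.run('-i', 'input.mp4', '-vf', `scale=-2:${height}`, `out${height}.mp4`);
      return ffmpeg.FS('readFile', `out${height}.mp4`);
    }

    // Produce the renditions the server would otherwise make, then upload each:
    // for (const h of [1080, 720]) await upload(await makeRendition(file, h));

The uploader's browser would then push each rendition to the instance, so the server never transcodes anything.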
That's very much not what transcoding is for. You don't want transcoding so a client can render the video at a comfortable resolution; you need transcoding to save bandwidth. If you want the client to do the transcoding, you must send them the full raw video file, and either end of the connection may not have enough free bandwidth for that. The client also may not be able to transcode at all, depending on size and format.
You can, of course, do this anyway. PeerTube allows you to completely disable transcoding. But again, that means you're streaming the full resolution. Your client may not like this.
If realtime performance is your concern, I think PeerTube allows you to pre-transcode to disk. If there is a transcoded copy matching the client's request, the server streams it directly with no extra transcode.
To answer your question: shifting transcode onto the client won't improve performance and will greatly increase bandwidth requirements in exchange for less compute on the server. You almost certainly do not want this.
Yep. As OP said: I meant the user could transcode the various versions on their machine and then upload each to the server. Sorry about the wording; I can see that it's vague.
> Would it change the equation, meaningfully, if you didn't offer any transcoding on the server and required users to run any transcoding they needed on their own hardware?
I think the user experience would be quite poor, enough that nobody would use the instance. As an example, a 4k video will be transcoded at least twice, to 1080p and 720p, and depending on server config often several more times. Each transcode job takes a long time, even with substantial hwaccel on a desktop.
Very high bitrate video is quite common now since most phones, action cameras etc are capable of 4k30 and often 4k60.
> Would you think a general user couldn't handle the workload (mobile processing, battery, etc), or would that be fairly reasonable for a modern device and only onerous in the high traffic server environment?
If I had to guess, I would expect it to be a poor experience. Say I take a 5 minute video; that's probably around 3-5 GB (4k60 phone footage runs on the order of 100 Mbps, so 300 seconds is roughly 3.75 GB). I upload it, then need to wait - in the foreground - for this video to be transcoded and then uploaded to object storage 3 times on a phone chip. People won't do it.
I do like the idea of offloading transcode to users. I wonder if it might be suited for something like https://rendernetwork.com/ where users exchange idle compute to a transcode pool for upload & storage rights, and still get to fire-and-forget uploads?
I really appreciate you walking through that; it's an eye-opener! It seems like you not only deal with a considerable number of five-minute-or-greater videos, but with much higher quality than I was expecting, too.
I also like the idea of user-transcoding because, honestly, I think it's better for everyone? I would love it if every place I uploaded video or audio content offered an option to "include lower-quality variants" or something. Broadly, it's my product; I should have the final say on (and take responsibility for) the end result.

And for high-quality stuff, the people who make it tend to have systems equipped to do that better anyway, so they could probably get faster transcoding times by using their own systems rather than letting the server do it. Seems like a win-win, even outside of the obvious benefit of "make a whole lot of computers do only the work they each need done, instead of making a few computers do the work that everyone needs done".

The only slight downside is the "average user" having some extra options they don't understand, which cause them to use it wrong, and then everyone hates your product. Yay, app development.
I think offering client-side transcode as an option, with server-side transcode available for those who don't want to do it client-side, is compelling. I would probably do it, as I have a powerful home system that can transcode much faster than my cloud host (I do use the remote transcoding feature in PeerTube though).
Very neat! And I completely respect the skill. I respect the effort even more!
That said, it's not 'hands down, one of the coolest 3D websites', at least that I've seen. It's all "technical", very little "design". For example, why is it 'isometric overhead'? There's no particular benefit in the view, and it's specifically harder to control than it would be with a 'chase'/'third-person' camera. It's not like this is an RTS or a city-builder-ish thing, where having an overhead layout works to your benefit. Rather, it's just easier to program a camera that never changes angles and input controls that never have to re-interpret camera position/rotation (lookat vector) to function correctly. And there's a kind of symmetry between a flat page and the "ground" that the truck drives on, so some parts of the web forms have been ported over to that.
Again, none of that is bad and especially none of it is wrong. It's very cool that it works and works so well (technical)! It's just that the design feels more "portfolio" than it does "best ux for interacting with the environment I've created and the paradigms I've invoked (vehicle control)".
That's exactly what design is. There's no technical obstacle to making it over-the-shoulder instead, but it changes the aesthetic. The animations focus on what the jeep does to things, so a racing view that helps you avoid running into things wouldn't be appropriate. It also changes how you see the assets. And you'd lose that 'RC Pro-Am' feel.
> Rather, it's just easier to program a camera that never changes angles and input controls that never have to re-interpret camera position/rotation (lookat vector) to function correctly.
Not really, you just put the camera on a spring arm attached to the vehicle. Vehicle movement isn't harder either. You get this stuff practically for free with any game engine.
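For the curious, the spring-arm pattern is only a few lines even without an engine. A minimal sketch in Three.js terms (purely illustrative; I have no idea what the site's actual code looks like):

    // Minimal spring-arm chase camera, Three.js flavored.
    import * as THREE from 'three';

    // Offset behind and above the vehicle (the "arm").
    const ARM = new THREE.Vector3(0, 3, -8);

    function updateChaseCamera(
      camera: THREE.PerspectiveCamera,
      vehicle: THREE.Object3D,
      dt: number,
    ) {
      // Desired position: the arm offset rotated into the vehicle's frame.
      const target = ARM.clone()
        .applyQuaternion(vehicle.quaternion)
        .add(vehicle.position);
      // The "spring": ease toward the target instead of snapping to it.
      camera.position.lerp(target, 1 - Math.exp(-5 * dt));
      camera.lookAt(vehicle.position);
    }

Call it once per frame after the vehicle updates and you have an over-the-shoulder camera.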
You're welcome to your counter-opinion about the design, but you haven't convinced me. I've played plenty of games with third-person views where the gameplay was quite conducive to running into things. I can also appreciate that the design is faux-retro, but that's kind of my whole issue with it: sticking to a design because it is nostalgic is not user-focused. It's demographically limiting, by design. It's specifically niche-targeting. That's the opposite of trying to make the best kind of thing for the most kinds of people, which is a business interest of a portfolio site.

Building a little game for people who like those types of games? Sweet! More power to you. But if you're showcasing a demo for wide audiences, a critique of the niche-targeting is valid. Not nearly as important as the people claiming they can't even play the game, for sure! But if you bounce one person because they press up on the keyboard and the truck moves "forward", and they don't like that, it's a marked negative for the site's intent.
You can't worry about pleasing everyone, and you especially can't worry about broad, overall, two-paragraph critiques on literal months of dedicated work. But neither of those make the critiques, themselves, improper or even wrong.
You seemed to imply that the developer chose isometric to make development easier. I'm rebutting that as unlikely; the two are equally easy with an engine (and if you're not using an engine, you're skilled enough that they're still equally easy).
> But neither of those make the critiques, themselves, improper or even wrong.
Are you referring to my critique of your critique of razzmatak's critique ("Handsdown one of the coolest 3D websites")? Surely if you're allowed to disagree with them, I am with you.
ah, easy enough then: mistaken inference on your part.
> Are you referring to[...]
I'm referring to critique, in general, for the former, and my specific two paragraphs of critique on the project - not the commentary - for the latter. Your being "allowed" to disagree with me is what is meant by the sentence "You're welcome to your counter-opinion about the design, but you haven't convinced me."
I can't see Bruno's site and I assume it's because of the HN hug of death, but an impressive 3D website that always comes to mind is acko.net, with its 3D rendered tubular logo. He even describes how it was done in a blog post.
acko.net is one I thought of immediately too. The front page for Three.js usually has some nice examples too.
Of course, with WebGL and WebGPU support becoming ever more ubiquitous I'm not sure when 'impressive 3D website' just becomes either 'impressive website' or 'impressive 3D'.
I agree with you: it's not that it isn't impressive, but it functions poorly as a website. The design innovation the HN title led me to expect is something where the 3D enhances the user experience of the website itself, navigation interfaces feel natural, and so on.
This is a very well made little game that also showcases some of their work. I was hoping for something that would make me think "now I wish all websites were like this."
Fwiw, if you want a simple pastebin, I've been running pinnwand for a couple of years without any issues off of a single short docker compose file. I think running it on the host also shouldn't be complicated.
I appreciate that this writeup takes care to call out use cases when they help with understanding!
I do have a semi-unrelated question though: does using the recursive approach prevent it from being calculated efficiently on the GPU/compute shaders? Not that it matters; there's plenty of value in a CPU-bound version of a solution, especially one that is easy to understand when recursive. I was just wondering why the prominent examples used a non-recursive approach, but then I was like "oh, because they expect you to use them on the GPU". ...and then I was like "wait, is that why?"
> I do have a semi-unrelated question though: does using the recursive approach prevent it from being calculated efficiently on the GPU/compute shaders?
Historically speaking, the use of recursion in shaders and GPGPU kernels (e.g. OpenCL "C" prior to 1.2) was effectively prohibited. The shader/kernel compiler would attempt to inline all function calls, since traditional GPU execution models had little support for the call stacks that normal CPU programs rely on, and thus recursion was simply banned in the shader/kernel language.
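The standard workaround, then and now, is to rewrite the recursion as a loop with an explicit stack, which inlines trivially. A sketch of the pattern, in TypeScript for readability; a real shader would do this in GLSL/WGSL with a fixed-size array, since dynamic allocation isn't available there either:

    // Shader-friendly rewrite: recursion -> loop + explicit stack.
    type Node = { value: number; children: Node[] };

    function sumTree(root: Node): number {
      const stack: Node[] = [root]; // in a shader: fixed-size array + depth cap
      let total = 0;
      while (stack.length > 0) {
        const n = stack.pop()!;
        total += n.value;
        for (const c of n.children) stack.push(c);
      }
      return total;
    }

BVH traversal in GPU ray tracers is the classic example of this transformation.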
It's the town square: it's not about the two people talking, it's for everyone reading what they said, instead of controlling the narrative by only speaking to the people you want in the places you want. They don't even have to answer everyone, so the only benefit of losing access to thousands/millions of people is getting to make an article like this for Pride.
I just mean that in the way that this site is a town square, or all of the RFCs are, in that anyone can (for free) see all of the discussions and participate. Just like a town square, not every comment has to be addressed or promoted equally, but I don't know of a better way to release information and allow anyone to discuss it with others.
You only need a placeholder if you think the platform matters enough to hold space for. For example: they don't have a placeholder on MySpace.
But if your goal is to prevent other people from having the name altogether, the move I personally enjoyed was getting my account blocked. That forces them to hold your account only to prevent anyone from using it, lest you sneak back in and say something "harmful" like "stonetoss is hans kristian graebener".
No, I described a way of getting blocked that I personally enjoyed. Haters (me) gonna hate, after all.
I think the FSF should not be on Twitter at all. Sorry if I was unclear about that in my previous comment, but the first paragraph was meant to contradict OP's suggestion.
Stonetoss is a well known comic by an alt-right neo-Nazi. He kept his identity secret for years (for obvious reasons) but was outed a few years back. He received a lot of hate over this and got fired from his tech job.
The comic was antitrans, antisemitic (with full-on Holocaust denial), racist, and sexist... but Graebener himself is a Latino, so he gets hated on by both the Left and the Right.
Websites that cater to the alt-Right ban users for saying his real name and ban people who make Stonetoss memes that shit on Graebener for being a Nazi.
And you know why HN is actually a great place? dang isn't going to ban me for repeating verifiable facts.
> People argue and disagree here but somehow in pleasant way I haven’t seen anywhere else.
Turn on `showdead` in your settings (or don’t, probably for the best) and be prepared to read some nasty comments. No substance, only hate. There are a few on this very submission.
> I think it's a mistake to imply that just because a comment is dead because it was flagged that it is hateful.
I wish people would stop inventing arguments and “reading between the lines” when interpreting comments from people they don’t know. There was no implication. Whatever you think you read is only in your head.
Of course not every flagged and dead comment is hateful. But hateful comments do get flagged so that’s where you’ll find them.
I'm about to do what you just asked people not to do. Perhaps we're so used to dishonest interlocutors online that we search for intentions in people's statements?
That was the thing that got me blocked enough for it to stick. But it was right during the height of that meme, so I doubt it would go far now. Unless a bunch of people all started doing it or something.
If you're looking for something that might actually work right now, though, I think there's still some weird libertarian-ish "principle" they're pretending makes it wrong to post elon's (or others') flight information. At least that would be where I would start, because I don't like to bother people that don't deserve it, so general abusiveness is out, and it's funny to throw their free speech bullshit back in their face.
Same. I had a little library I wrote to wrap indexedDB, and deno wouldn't even compile it because it referenced those browser APIs. I'm sure it's a simple flag or config file property, or x, or y, or z, but the simple fact is, bun didn't fail to compile.
Between that and the Discord, I have gotten the distinct impression that deno is for "server javascript" first, rather than just "javascript" first. Which is understandable, but not very accommodating to me, a frontend-first dev.
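(For anyone hitting the same wall: I believe the knob is telling deno's type-checker about the browser libs, either via compilerOptions.lib in deno.json or a lib reference directive in the file itself. Hedging because I never went back and verified, but a sketch of what I mean:)

    // Assumed fix, not verified: point the type-checker at the DOM types.
    /// <reference lib="dom" />

    export function openDb(name: string, version = 1): Promise<IDBDatabase> {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open(name, version);
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }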
Even for server ~~java~~typescript, I almost always reach for Bun nowadays. Used to be because of typestripping, which node now has too, but it's very convenient to write a quick script, import libraries and not have to worry about what format they are in.
I imagine that's mostly embeddings actually. My database has all the posts and comments from Hacker News, and the table takes up 17.68 GB uncompressed and 5.67 GB compressed.
Wow! That's a really great point of reference. I always knew text-based social media(ish) stuff should be "small", but I never had any idea if that meant a site like HN could store its content in 1-2 TB, or if it was more like a few hundred gigs or what. To learn that it's really only tens of gigs is very surprising!
Scraped reddit text archives (~23B items according to their corporate info page) are ~4 TB of compressed JSON, roughly 175 bytes per item on average, and that includes metadata and not just the actual comment text.
That’s crazy small. So is it fair to say that words are actually the best compression algorithm we have? You can explain complex ideas in just a few hundred words.
Yes, a picture is worth a thousand words, but imagine how much information is in those 17GB of text.
I don’t think I would really consider it compression if it’s not very reversible. Whatever people “uncompress” from my words isn’t necessarily what I was imagining or thinking about when I encoded them. I guess it’s more like a symbolic shorthand for meaning which relies on the second party to build their own internal model out of their own (shared public interface, but internal implementation is relatively unique…) symbols.
It is compression, but it is lossy. Just like the digital counterparts like mp3 and jpeg, in some cases the final message can contain all the information you need.
But what’s getting reproduced in your head when you read what I’ve written isn’t what’s in my head at all. You have your own entire context, associations, and language.