One of the best open-source tools out there. I'm a frequent user of Plex, Jellyfin, Tunarr, local music files, etc. I use it weekly to extract subtitles, trim videos, convert music formats, and remove audio tracks. After writing the previous paragraph, I realized I've never donated to the project; it's time to change that.
As an FFmpeg API user (e.g. through libavcodec etc.) I would definitely not say "simple". It constantly breaks and deprecates things from one version to the next, and basically requires reading the source constantly to be sure of what's happening and which backend / API each function can operate on. Just today, when I was trying to implement Vulkan video decode in ossia score (https://ossia.io):
```c
/**
 * Copy data to or from a hw surface. At least one of dst/src must
 * have an AVHWFramesContext attached.
 */
int av_hwframe_transfer_data(AVFrame *dst, const AVFrame *src, int flags);
```
Well, unlike what the very first sentence of that comment block hints at, it is actually only implemented for host<->device copies, not device<->device, on many backends.
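For contrast, the host<->device direction is the one that is broadly wired up; it's what the CLI's hwdownload filter uses under the hood. A minimal sketch (the input file name is a placeholder, and the pixel format depends on the content):

```sh
# Decode on the GPU, then pull the frames back to system memory;
# hwdownload calls av_hwframe_transfer_data() internally.
ffmpeg -hwaccel vulkan -hwaccel_output_format vulkan -i input.mp4 \
       -vf 'hwdownload,format=nv12' -f null -
```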
Big difference for ffmpeg especially (but I imagine for curl too): it's not just one guy in Nebraska. Seems to have a very healthy community of devs involved in it.
Building ffmpeg itself from source is actually quite easy.
The hardest part IMO is getting the necessary codecs to work; this can take a little while. If you know which audio and video codecs you want and need, and you get them installed properly, then compiling ffmpeg is really simple and straightforward. It almost always works for me, and I have been compiling ffmpeg from source for 10 or even 15 years.
For reference purposes, my current configure options are:
Probably more codecs could be added, and some options may not be necessary anymore (I last changed this ... years ago, too), but for the most part this works fairly well.
My focus is mostly on a few .mp4 files, and for these I think you kind of want x264, x265 and so forth (I think one more codec from Google too, or so). But it is really quite trivial once you are past the codecs step. You can also start simple with just a few codecs, e.g. one good audio codec and one good video codec. One reason I like ffmpeg to support many codecs is so I can use mpv, which is itself really awesome; I like it more than vlc, which is also ok though.
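To make the "start simple" route concrete, here is a minimal sketch of such a build; the flags are standard FFmpeg configure options, but the selection is illustrative, not my actual config:

```sh
# One good video codec family and one good audio codec, nothing else.
# --enable-gpl is required for the x264/x265 wrappers.
./configure --enable-gpl --enable-libx264 --enable-libx265 \
            --enable-libopus --disable-doc
make -j"$(nproc)"
```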
Getting the stock ffmpeg to compile/build might be "easy", but once you start adding additional codecs and other features, you get into dependency nightmares, and "easy" is not the word I would use. I have not been able to use the stock ffmpeg since forever. For example, I see no openssl enabled in your config. I see no freetype. I see you've disabled openjpeg. Clearly, you and I use ffmpeg differently, which just goes to show your "easy" is very misleading.
Building ffmpeg can be simple or complex, depending on how you configure the dependencies, whether it's a dynamic or static build, and of course its target outputs.
I'm currently working on a cross-platform builder that runs within GitHub Actions runners, but the Mac and Windows builds take up so many of my monthly minutes.
I had to build from source because of that CVE that dropped. I couldn't get it to build, so I just wrapped the whole thing and injected my own -version command; it passed the scanners cleanly.
For anyone vaguely familiar with ffmpeg, don't sleep on this video. Quite funny, and everything from `yadif` (which I dealt with today!) to mkvtoolnix to "But then it will explode if you have an apostrophe in your file name. Because it doesn't understand that."
I've had great results using JPEG-XS to transport video for colour grading in feature film & TV post production. At a 3:1 or 4:1 compression ratio it is effectively lossless.
It is patent-encumbered though, you have to pay license fees to deploy it.
We use JXS when latency is critical. Most h264/h265 decodes will have a 10-frame glass-to-glass delay; JXS drops that to 3 or 4, at a cost of bandwidth (our UHD JXS streams are 1.5 Gbit rather than 200 Mbit for HEVC).
That's pretty depressing to read. x264 was handling the encoding side with sub-frame latency 15 years ago, and sub-frame decoding is significantly easier. "with --tune zerolatency, single-frame VBV, and intra refresh, x264 can achieve end-to-end latency (not including transport) of under 10 milliseconds for an 800×600 video stream"
But for some reason you can't make use of that and have to burn bandwidth instead.
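For what it's worth, the knobs that x264 quote refers to map onto ffmpeg's libx264 wrapper roughly as below; the bitrate, test source, and destination are made-up placeholders:

```sh
# zerolatency tune, single-frame VBV (bufsize of about one frame at the
# target rate: 2 Mbit/s / 30 fps ~= 66 kbit), and periodic intra refresh
# instead of keyframes.
ffmpeg -f lavfi -i testsrc2=size=800x600:rate=30 \
       -c:v libx264 -preset ultrafast -tune zerolatency \
       -b:v 2M -maxrate 2M -bufsize 66k -intra-refresh 1 \
       -f mpegts udp://127.0.0.1:5000
```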
In theory it's a small part. But if you got that many frames of latency difference by changing the codec, then it wasn't a small part.
It's not that you should have gotten a magical 10 ms glass-to-glass latency; it's that you should have been able to get 4 frames of latency on h.264. But something prevented that, so I'm sad about it.
(And if you say the bandwidth was fine in your situation I won't argue, but using more than a gigabit extra is not usually thought of as free.)
Yeah, we've been deploying JPEG-XS for high bitrate streaming for a while.
A lot of our customers are moving their grading systems into data centres and streaming the images over IP back to their grading suites.
I've got it down to less than 1 frame for encode-transport-decode, but you've still got to copy the image to an SDI card and wait for that to clock out.
Isn't the point of JPEG to have lossy compression for your photos that still looks fine? As opposed to something like PNG, which has lossless compression.
"JPEG" is short for Joint Photographic Experts Group, an ISO/ITU group that creates a lot of imaging standards. The JPEG image format you're thinking of is only one of the formats they've created.
The Joint Photographic Experts Group manages many standards, generally each called "JPEG [something]". The one we most commonly call "JPEG" is just one of them.
Reading that it looks like the point of JPEG-XS is to have near-lossless compression for raw photo and video data while having extremely high throughput.
> gfxcapture: Windows.Graphics.Capture based window/monitor capture
> This source provides low overhead capture of application windows or entire monitors. The filter outputs hardware frames in d3d11 format; use hwdownload,format= if system memory frames are required.
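Going by the changelog text alone, usage presumably mirrors the existing d3d11 lavfi sources such as ddagrab; the invocation and the bgra format below are assumptions, not documented behaviour:

```sh
# Assumption: gfxcapture is consumed like other d3d11 lavfi sources
# (cf. ddagrab): capture, hwdownload to system memory, then encode.
# The pixel format and output name are guesses/placeholders.
ffmpeg -f lavfi -i 'gfxcapture,hwdownload,format=bgra' \
       -c:v libx264 -t 10 capture.mp4
```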
This would strongly alter my plans if I were to develop an OSS Discord alternative. Chromium originally looked like the better core to start with, largely due to its mature screen-capture API. WebRTC is the other big piece, but there are other ways to do that. Native desktop apps (i.e., not browser-based) are starting to look much more compelling to me now.
FFmpeg is really great. The only wish I have is for the usage to become simpler, both for regular stuff and for advanced filtering.
If anyone remembers, avisynth was pretty cool back in the day. You could script video/audio manipulations, a bit like a UNIX/Linux pipe, but simpler in my opinion. FFmpeg allows many similar operations, but remembering anything here is ... hard. I'd love for the whole usage API to become much simpler, but it seems nobody on the ffmpeg dev team is considering this. :(
I can't be the only one with that wish though ...
None of this diminishes how great ffmpeg is in general, but I think it could be better.
Once I got over the "-filter_complex is, well, complex" phobia, it isn't that bad. The command line makes it look daunting, to be sure, but thinking of it as what the name suggests, filter chains, makes it less daunting. It is still cumbersome, since everything needs to be in the command.
Debugging commands gets hairy the more complex they get, but you build muscle memory for search/replacing in line breaks to make them easier to read, similar to breaking up gnarly SQL. The worst part about debugging is that the error messages can be misleading when ffmpeg interprets the filter chain incorrectly because of something you've typoed in there somewhere. Even those start to become recognizable as "it thinks this, which means I probably messed up this other thing instead". To be fair, I work with ffmpeg daily, using some commands that would make your eyes bleed; for someone using it every now and then, the practice from repetitive use just takes longer.
Also, saving things as shell scripts helps a lot. A simple script that does the same thing with a few adjustable params can be done with $1, $2 usage, or even more cleanly with getopts. You can then change small things within a tested complex command.
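For instance, a sketch of such a wrapper; the file names and the filtergraph itself are placeholders for whatever tested command you're reusing:

```sh
#!/bin/sh
# $1 = input video, $2 = logo image, $3 = output file.
in=$1; logo=$2; out=$3

# Newlines inside the filtergraph are legal and make it much easier
# to read and diff, like formatting a gnarly SQL query.
ffmpeg -i "$in" -i "$logo" -filter_complex "
    [0:v]scale=1280:-2[base];
    [base][1:v]overlay=W-w-10:10[v]
" -map '[v]' -map '0:a?' -c:v libx264 -c:a copy "$out"
```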
An LLM won't always give you a perfect answer the first time, but it's much better than memorizing the manual or interpreting a forum discussion. I haven't used one for ffmpeg, but I have for lots of other command lines.
I would far rather look at the manual or a forum discussion, because then I know I'm getting something real. With LLMs, odds are decent that I'm getting something which doesn't actually exist, but it sure would be nice if it did.
I've had someone post a problematic ffmpeg command into a prompt to ask why it wasn't working. It didn't go so well. By the time that someone had rejiggered their prompt, I had found the issue.