Hacker News

Audio is real-time and performance is everything. Freezing tracks should take as little time as possible, and playback shouldn't skip unless you're running the most complex modular VST out there. With WASM running at something like 1/3 of native speed and having only limited SIMD support, I wouldn't expect it to hold up for serious work.

Quite frankly, even native performance is often not enough.

That said, I can see it being relevant for learning audio, synthesis and how signal processing works. And of course, just for fun!

Source: Worked with DAWs for a decade. Also currently writing a paper on the role of native performance.



I've played around with Ableton before. I'm wondering: what are the high-level aspects of a DAW that take up that compute? Off the top of my head, if you have, say, 10 channels of synths, what in there is super intensive? And what does freezing tracks mean, and why is it so expensive?


It's not the high-level aspects, but the low-level ones.

Audio DSP is doing a lot of math. CPUs are good at it, sure, but modern synths and effects are legitimately pushing up against how much math a CPU core can evaluate in the few milliseconds you have to render (in the worst case, low latency realtime rendering time is actually dominated not by how much DSP you can do, but how long it takes to move audio from userland to kernel and out to the hardware and back).
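For a sense of scale, the budget is easy to compute (numbers below are illustrative, not from the comment): a 128-sample buffer at 48 kHz gives you under 3 ms per callback, and every synth voice and every effect on every track has to fit inside it.

```cpp
#include <cassert>

// Time available to render one audio callback, in milliseconds.
// A common low-latency setting: 128-sample buffer at 48 kHz.
constexpr double callback_budget_ms(int buffer_samples, double sample_rate_hz) {
    return 1000.0 * buffer_samples / sample_rate_hz;
}
// 128 samples at 48 kHz is about 2.67 ms for the whole mix graph,
// minus whatever the userland/kernel round trip already ate.
```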

Some of the DSP algorithms are really hard to optimize with SIMD; in fact most common audio DSP operations can't be trivially converted to SIMD form (and when they can, they aren't N times faster for N more lanes). Filters are especially tricky because converting the math from one form to another changes the topology of the signal flow, and the forms are only equivalent in steady state for linear, time-invariant filters. DAWs use non-linear, time-variant filters that are modulated in realtime, so your super fast SIMD-optimized biquads might not sound as good as the equivalent SVF, which can't be trivially optimized (there are tricks, but it's a game of tradeoffs).
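To make the "hard to SIMD" point concrete, here's a minimal sketch (my own illustration, not code from any DAW or textbook) of a transposed direct-form-II biquad. The loop-carried dependency on z1/z2 means sample n needs the state produced by sample n-1, so you can't naively vectorize across time; vectorizing across independent channels or filter instances (one lane per filter) is the usual workaround.

```cpp
#include <vector>

// One biquad section, transposed direct form II.
// Coefficients are assumed normalized so a0 == 1.
struct Biquad {
    double b0, b1, b2, a1, a2; // filter coefficients
    double z1 = 0.0, z2 = 0.0; // state carried between samples

    void process(std::vector<double>& buf) {
        for (double& x : buf) {
            double y = b0 * x + z1;    // output needs the previous state...
            z1 = b1 * x - a1 * y + z2; // ...and the new state needs y,
            z2 = b2 * x - a2 * y;      // so iterations can't run in parallel
            x = y;
        }
    }
};
```

The dependency chain y -> z1 -> next y is exactly what a compiler's auto-vectorizer gives up on.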

And there's another aspect of the scene: there's just a lot of bad or naive code out there. There is a lot of know-how floating around, but many tools are designed by folks without it to begin with. That's a good thing in that it produces a lot of interesting and cool tools, but it also means that institutional knowledge is kind of locked away. It doesn't help that some of the largest examples for newcomers (JUCE's DSP module, RackAFX/ASPiK with the accompanying text), as well as classic (and new!) textbooks, teach people to do things in the least performant way possible, and those algorithms make it into production.


Thanks for the informative comment. Are there any resources you'd recommend for learning more about performant algorithms? At the moment I'm just messing around with JUCE.


> you have like 10 channels of synths, what in there is super intensive

The synths themselves. Samplers, hardware emulations, and effects can eat a lot of memory and CPU, to say nothing of a monster 100+ voice synth patch (very easy to reach with unison, as used in supersaw-type sounds).
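As a rough illustration of why unison adds up (all numbers assumed): a naive supersaw voice advances every detuned oscillator on every sample, so 8 held notes with 16 unison voices each is already 128 oscillators ticking 48,000 times a second, before any filtering or effects.

```cpp
#include <cstddef>
#include <vector>

// Illustrative only: a naive (aliasing) supersaw. Each unison voice is
// a detuned saw oscillator, and every one is advanced on every sample,
// so per-sample cost grows linearly with notes_held * unison.
struct SuperSaw {
    std::vector<double> phase, inc; // one entry per unison voice
    SuperSaw(double freq_hz, int unison, double detune_hz, double sr) {
        for (int v = 0; v < unison; ++v) {
            double spread = detune_hz * (v - (unison - 1) / 2.0);
            phase.push_back(0.0);
            inc.push_back((freq_hz + spread) / sr);
        }
    }
    double tick() { // one output sample: average of all saw voices
        double s = 0.0;
        for (std::size_t v = 0; v < phase.size(); ++v) {
            s += 2.0 * phase[v] - 1.0; // ramp in [-1, 1)
            phase[v] += inc[v];
            if (phase[v] >= 1.0) phase[v] -= 1.0;
        }
        return s / static_cast<double>(phase.size());
    }
};
```

A real synth also runs an envelope, filter, and modulation per voice, which multiplies this further.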

> What does freezing tracks mean and why is it so expensive?

Freezing a track means rendering its output to a WAV and using that file as a stand-in for the live processing. Freezing isn't expensive in itself; it's what you reach for when a plug-in is too expensive and you want to reduce your CPU load.
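Conceptually, freezing is just an offline render: pull buffers from the track's plug-in chain as fast as the CPU allows and keep the result. A sketch (with an assumed callback interface, not any real DAW's API):

```cpp
#include <functional>
#include <vector>

using Block = std::vector<float>;

// freeze(): pull `blocks` buffers from a track's render callback and
// concatenate them. `render` stands in for the whole plug-in chain.
// Playback then streams the returned buffer instead of re-running
// the plug-ins on every callback.
Block freeze(const std::function<Block()>& render, int blocks) {
    Block frozen;
    for (int i = 0; i < blocks; ++i) {
        Block b = render(); // offline: no realtime deadline here
        frozen.insert(frozen.end(), b.begin(), b.end());
    }
    return frozen;
}
```

Since there's no realtime deadline, the render can take as long as it needs; the cost is paid once instead of on every playback callback.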



