My other thing is... why does something like this need to run in a browser? What's wrong with desktop apps?
I have FLStudio mobile and some other mobile DAW on my iphone and ipad. And to be honest, I never use them, other than as a tapping BPM counter. Do people actually feel productive tapping this stuff in on an iPad?
Shoddy support for platforms other than Windows, not to mention VSTs going out of support by the developers and then unusable when something changes (e.g. Apple dropping support for 32-bit x86 years ago).
WASM is at least a common denominator supported everywhere... basically Java, just better.
Bitwig's pretty good as a cross-platform DAW, from what I hear.
There's entire ecosystems of plugins out there. Yes they can be reimplemented but if the authors use the same quick algorithms as everyone else it's going to sound plastic-y and dull. IMO plugin dev is one of those things where every hour spent makes a cumulative difference in output quality.
WASM may be great, but unless they can run as their own binary in their own windows you're going to have browser overhead for each window, nevermind those of us that spread our DAWs out across multiple screens.
Ummm ... Audacity is fully cross-platform (Windows, macOS, GNU/Linux and other operating systems) and open source [0]. ProTools (the elephant in the room) was originally developed, and still is developed, on the Mac [1].
By the strictest/most literal definition, yes, it is a DAW. But it does not do what people expect of DAWs.
* no real time effects playback
* it doesn’t support VSTi’s (long standard and expected)
* no sequencer
* can’t use MIDI controllers
It’s a fantastic platform for recording audio and post-processing, but it is a terrible music production program, a job even the most basic DAWs can handle.
To make a video comparison, it’d be like an NLE that can’t sync audio or multi-cam, has an unintuitive and semi-bloated UI, doesn’t support captions, and can’t support multiple timelines under one project.
Except nearly all of them support both main desktop platforms, so it's not exactly a valid critique.
True, Linux is often forgotten. But is there a real market there? And to my knowledge, Linux’s low-level audio performance is quite bad, which doesn’t help.
The only thing I can think of is that it would be great if you could easily record with people on the same interface over the internet. Maybe not live jamming or anything, since the delay would be murderous at best, but it would be great to have your drummer lay down a track at 3am, your guitarist lay down a rhythm to it at 7, and then do vocals yourself whenever you're ready, instead of having to schedule time to all meet together.
>why does something like this need to run in a browser?
Same reason I like Squadcast and other browser-based podcast recording software - works on (virtually) every machine, no one needs to download anything, cloud storage baked into it, easy for my other producers/colleagues to hop on and check things out, etc.
Audio is real-time and performance is everything. Freezing tracks should take the least amount of time possible, and no skipping should occur unless you are using the most complex modular VST out there. With WASM running at something like 1/3 of native speed and having extremely limited SIMD support, I would expect it to not work at all for serious work.
Quite frankly, even native performance is often not enough.
That said, I can see it being relevant for learning audio, synthesis and how signal processing works. And of course, just for fun!
Source: Worked with DAWs for a decade. Also currently writing a paper on the role of native performance.
I’ve played around with Ableton before. I’m wondering: what are the high-level aspects of a DAW that take up that compute? Off the top of my head, if you have like 10 channels of synths, what in there is super intensive? What does freezing tracks mean and why is it so expensive?
It's not the high level aspects, but the low level ones.
Audio DSP is doing a lot of math. CPUs are good at it, sure, but modern synths and effects are legitimately pushing up against how much math a CPU core can evaluate in the few milliseconds you have to render (in the worst case, low latency realtime rendering time is actually dominated not by how much DSP you can do, but how long it takes to move audio from userland to kernel and out to the hardware and back).
Some of the DSP algorithms are really hard to optimize with SIMD; in fact, most of the common audio DSP operations can't be trivially converted to SIMD forms (and when they are, they aren't N times faster for N more lanes). Filters are especially tricky because converting the math from one form to another changes the topology of the signal flow, and the two forms are only equivalent in the steady state of linear, time-invariant filters. DAWs use non-linear, time-variant filters that are being modulated in realtime, so your super fast SIMD-optimized biquads might not sound as good as the converted SVF that can't be trivially optimized (there are tricks, but it's a game of tradeoffs).
And there's the other aspect of the scene that there's just a lot of bad or naive code out there. There is a lot of know-how floating around, but a lot of tools are designed by folks without it to begin with. That's a good thing because it makes a lot of interesting and cool tools, but it also means that institutional knowledge is kind of locked away. It doesn't help that some of the largest examples for newcomers (JUCE's DSP module, RAFX/Aspik with the accompanying text), as well as classic (and new!) textbooks teach people to do things in the least performant way possible, and those algorithms make it into production.
Thanks for the informative comment. Are there resources you would recommend for learning more about performant algorithms? At the moment I'm just messing around with JUCE
> you have like 10 channels of synths, what in there is super intensive
The synth itself. Samplers, hardware emulations, and effects can eat a lot of memory and CPU, to say nothing of a monster 100+ voice synth patch (very easy to achieve with unison, used in supersaw-type sounds)
> What does freezing tracks mean and why is it so expensive?
Freezing a track means rendering its output to a WAV and using that render as a stand-in for the real thing. Freezing itself isn't expensive; it's what you use when a plug-in is too expensive and you want to reduce your CPU load.