Whether it matters depends on the workload: for some it matters a ton, for others not a bit. And if you run n=cores processes, you can get some of the colocation benefits and similar things the OS scheduler will do for you.
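To make the n=cores point concrete, here's a rough sketch of what I mean (Python, since that's where I live): one worker process per core, optionally pinned with os.sched_setaffinity where that call exists (Linux). The work itself is a stand-in stub, and the pinning is optional, the OS usually does a decent job on its own.

    import os
    from multiprocessing import Process

    def do_work():
        pass  # stand-in for whatever the workload actually is

    def worker(core):
        # Pin this worker to one core so the OS keeps it (and its caches) put.
        # sched_setaffinity only exists on Linux, so guard it.
        if hasattr(os, "sched_setaffinity"):
            os.sched_setaffinity(0, {core})
        do_work()

    if __name__ == "__main__":
        procs = [Process(target=worker, args=(c,)) for c in range(os.cpu_count() or 1)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()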
I’m not saying performance is identical between these kinds of solutions, or that the dev work is never different.
I’m saying that the penalties (including development effort) for work co-ordinated between processes compared to threads can vary from nearly zero (sometimes even net better) to terrible depending on the nature of the workload. Threads are very convenient programmatically, but also have some serious drawbacks in certain scenarios.
For instance, fork()’d sub-processes do a good job of not segfaulting/crashing/deadlocking everything all at once if there’s a memory issue or other problem while doing the work. It’s also very difficult in native threading models to do per-work resource management or quotas (like a maximum memory footprint, or a max number of open sockets or open files), since at the OS level everything is grouped into one process, and you’d have to reinvent all of that accounting yourself if you went with threads. And the same shared-memory convenience can cause some pretty crazy memory corruption in otherwise isolated parts of your system if you’re not careful, which simply isn’t possible with separate processes.
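A concrete (hypothetical) version of the quota point: with worker processes you can put hard per-worker limits in place with plain old setrlimit in a pool initializer, something you can’t scope to a single thread. Rough sketch, Unix-only, limits are made up:

    import resource
    from concurrent.futures import ProcessPoolExecutor

    MAX_MEM = 512 * 1024 * 1024   # made-up 512 MB address-space cap per worker
    MAX_FILES = 64                # made-up cap on open fds per worker

    def apply_limits():
        # These limits cover the whole process, which here is exactly one worker.
        # In a threaded model one limit would cover every thread at once.
        resource.setrlimit(resource.RLIMIT_AS, (MAX_MEM, MAX_MEM))
        resource.setrlimit(resource.RLIMIT_NOFILE, (MAX_FILES, MAX_FILES))

    def handle(job):
        # A runaway allocation here hits MemoryError (or the worker dies)
        # without corrupting or taking down the parent or its siblings.
        return job * 2

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=4, initializer=apply_limits) as pool:
            print(list(pool.map(handle, range(10))))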
I do wish the Python concurrency model were better. Even with its warts, you can still do a LOT with it in a performant way. Some workloads are definitely not worth the trouble right now, however.
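For what it’s worth, my usual rule of thumb (nothing official, just what works for me with stock CPython): pure-Python CPU work goes to processes so each worker has its own GIL, and I/O waits go to threads, both through concurrent.futures. The URL is a placeholder.

    from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
    import urllib.request

    def cpu_bound(n):
        # Pure-Python number crunching; threads would serialize on the GIL.
        return sum(i * i for i in range(n))

    def io_bound(url):
        # Blocking network I/O releases the GIL, so threads are fine here.
        with urllib.request.urlopen(url) as resp:
            return len(resp.read())

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            print(list(pool.map(cpu_bound, [2_000_000] * 4)))

        with ThreadPoolExecutor(max_workers=8) as pool:
            print(list(pool.map(io_bound, ["https://example.com"] * 2)))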