There are not many reasons why new analyses should default to ROOT instead of more user-friendly and sane options like uproot [1]. Maybe some people have a legacy workflow, or their experiments maintain many custom patches on top of ROOT (common practice) for other things, but for physics analysis you might just be torturing yourself.
Also I really like their 404 page [2]. And no it is not about room 404 :)
One common criticism of uproot is that it's not flexible when per-row computation gets complicated, because for-loops in Python are too slow. For that one can either use Numba (when it works) or, here's the shameless plug, use Julia: https://github.com/JuliaHEP/UnROOT.jl
That's true, and Julia might be a solution, but I don't see the adoption happening anytime soon.
But this particular problem (per-row computation) now has several options in the HEP Python ecosystem. One approach is to leverage array programming with NumPy to vectorize operations as much as possible. By operating on entire arrays rather than looping over individual elements, significant speedups can often be achieved.
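As a toy illustration of that trade (the physics-flavored names here are invented), the vectorized form replaces the per-element loop with one whole-array expression:

```python
import numpy as np

# Hypothetical per-event quantities; names are illustrative only.
pt = np.array([10.0, 20.0, 30.0])
eta = np.array([0.5, -1.2, 2.1])

# Loop version: one element at a time (slow in pure Python)
pz_loop = [p * np.sinh(e) for p, e in zip(pt, eta)]

# Vectorized version: one whole-array operation in compiled NumPy code
pz_vec = pt * np.sinh(eta)

assert np.allclose(pz_loop, pz_vec)
```

Both compute the same numbers; the vectorized one does it in a single pass through compiled code.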
Another possibility is to use a library like Awkward Array, which is designed to work with nested, variable-sized data structures. Awkward Array integrates well with uproot and provides a powerful and flexible framework for performing fast computations on e.g. jagged arrays.
Uproot already returns Awkward Arrays, so both of the things you mentioned are different ways of saying the same thing. The irreducible complexity of data analysis is there no matter how you do it, and "one-vector-at-a-time" sometimes feels like shoehorning (other terms people have come up with include "vector-style mental gymnastics").
For the record, vector-style programming is great when it works; I mean, Julia even has dedicated syntax for broadcasting. I'm saying that when the irreducible complexity arrives, you don't want to NOT be able to just write a for-loop.
A great alternative to Numba for accelerated Python is Taichi. It's trivial to convert a regular Python function into a Taichi kernel, which can then target CUDA (among a variety of other backends). No need to worry about block/grid/thread allocation etc. At the same time, it's super deep, with great support for data classes, custom memory layouts for complexly nested classes, autograd, etc. I'm a huge fan - it makes writing code that runs on the GPU and integrates with your Python libraries an absolute breeze. Super powerful. By far the best tool in the accelerated-Python toolbox IMO.
>they made a lame excuse that Pytorch didn't support 3.12
how is this a lame excuse
>but it fails on a bunch of PyTorch-related tests. We then figured out that PyTorch does not have Python 3.12 support
they have a dep that was blocking them from upgrading. you would have them do what? push pytorch to upgrade?
>Later, even when Pytorch added support for 3.12, nothing changed (so far) in Taichi.
my friend, that "Later" is Feb/March of this year, i.e. 2-3 months ago. Exactly how fast would you like this open source project to service your needs? Not to mention there is a PR up for the bump.
[1] https://github.com/scikit-hep/uproot5
[2] https://root.cern/404/