I initially considered using Pandas to work with community collections of Elite: Dangerous game data, specifically those published first by EDDB (RIP) and now by Spansh. However, I quickly ran into process memory limits because my naïve attempts at manipulating even the smallest of those collections had Pandas loading GB-scale JSON data files entirely into RAM. That's why I'm intrigued by Polars' stated support for data streaming. On the professional side, I support the work of bioinformaticians, statisticians, and data scientists, so I like to stay informed.
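To give a feel for what that streaming support looks like, here's a minimal sketch. It assumes the dump has been converted to newline-delimited JSON (a hypothetical `systems.jsonl`) with a hypothetical `population` column; the point is that `scan_ndjson` builds a lazy query plan rather than reading the whole file into memory:

```python
import polars as pl

# Lazily scan newline-delimited JSON; nothing is loaded into RAM yet.
# (File name and column names are hypothetical placeholders.)
lazy = pl.scan_ndjson("systems.jsonl")

populated = (
    lazy
    .filter(pl.col("population") > 0)  # filter is pushed down into the scan
    .select(["name", "population"])    # only materialize the columns we need
    .collect(streaming=True)           # run with the streaming engine;
                                       # newer Polars spells this
                                       # collect(engine="streaming")
)
print(populated.head())
```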
I like how in Pandas (and in R) I can quickly load up data sets in a way that lets me run relational queries with familiar syntax. For my Elite: Dangerous project, because I couldn't get Pandas to work for me (which the reader should chalk up to my ignorance and not to any deficiency of Pandas itself), I ended up using the SQLAlchemy ORM with Marshmallow to load the data into SQLite or PostgreSQL. Looking back at the work, I probably ought to have thrown it into a JSON-aware data warehouse somehow, which I think is how the guy behind Spansh does it, but I'm not a big data guy (yet) and have a lot to learn about what's possible.
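For the curious, that pipeline looked roughly like the sketch below. The `System` model and its fields are hypothetical stand-ins (the real Spansh records have far more fields, many of them nested); the shape of the approach is Marshmallow validating raw JSON records and handing ORM instances to SQLAlchemy:

```python
import json

from marshmallow import Schema, fields, post_load
from sqlalchemy import Column, Float, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

# Hypothetical ORM model; a placeholder for the much richer real schema.
class System(Base):
    __tablename__ = "systems"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    x = Column(Float)
    y = Column(Float)
    z = Column(Float)

class SystemSchema(Schema):
    """Validate a raw JSON record and emit a System instance."""
    id = fields.Integer(required=True)
    name = fields.String(required=True)
    x = fields.Float()
    y = fields.Float()
    z = fields.Float()

    @post_load
    def make_system(self, data, **kwargs):
        return System(**data)

engine = create_engine("sqlite:///elite.db")  # or a PostgreSQL URL
Base.metadata.create_all(engine)

# Load and validate the whole JSON array, then bulk-insert the rows.
schema = SystemSchema(many=True)
with open("systems.json") as fh:  # hypothetical dump file
    records = schema.load(json.load(fh))

with Session(engine) as session:
    session.add_all(records)
    session.commit()
```

Note that `json.load` here still pulls the whole file into memory, which is exactly the kind of thing a streaming engine or an incremental JSON parser would avoid.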