
At first read this seems really promising. Getting into the Elixir/Erlang ecosystem from Python has seemed too hard to be worth the time, and once there, I wouldn't be able to leverage all the Python knowledge I've built up. With Pythonx, gradual learning now seems much more achievable.

It wasn't mentioned in the article, but there's an older blog post on fly.io [1] about Livebook, GPUs, and their FLAME serverless pattern [2]. Since there seems to be some common ground between these companies, I'm now hoping Pythonx support is coming to a FLAME-enabled Erlang VM. I'm just going off the blog posts, and am probably using the wrong terminology here.

For the GIL problem mentioned in the article, I wonder if they have experimented with Python's free threading [3].

[1] https://fly.io/blog/ai-gpu-clusters-from-your-laptop-liveboo...

[2] https://fly.io/blog/rethinking-serverless-with-flame/

[3] https://docs.python.org/3/howto/free-threading-python.html



FLAME runs the same codebase on another machine, so FLAME with Pythonx should just work. FLAME is a set of nice abstractions on top of a completely regular Erlang VM.
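To make that concrete, here's a minimal sketch of what "just works" would look like: because FLAME ships the whole release to the remote machine, Pythonx is available inside the call. The pool name `MyApp.PythonRunner` is hypothetical, and the `Pythonx.eval/2` call shape follows the Pythonx docs as I understand them; treat the details as assumptions.

```elixir
# Evaluate Python on a FLAME-provisioned node (sketch).
# Assumes a FLAME pool named MyApp.PythonRunner is already supervised,
# and Pythonx is a dependency of the same release.
result =
  FLAME.call(MyApp.PythonRunner, fn ->
    # Pythonx.eval/2 takes Python source and a globals map,
    # returning {result, updated_globals}.
    {result, _globals} =
      Pythonx.eval(
        """
        import math
        math.sqrt(x)
        """,
        %{"x" => 16}
      )

    # Convert the Python object back into an Elixir term.
    Pythonx.decode(result)
  end)
```

The caller blocks until the remote function returns, so from the rest of the app's perspective this looks like an ordinary function call.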

Chris Grainger who pushed for the value of Python in Livebook has given at least two talks about the power and value of FLAME.

And of course Chris McCord (creator of Phoenix and FLAME) works at Fly and collaborates closely with Dashbit who do Livebook and all that.

These are some of the benefits of a cohesive ecosystem. Something I enjoy a lot in Elixir. All these efforts are aligned. There is nothing weird going on, no special work you need to do.


Yeah, looks like it works fine, here's an example: https://pastebin.pl/view/a10aea3d

I'll add: FLAME is probably a great addition to Pythonx. While a NIF can crash the node it is executed on, FLAME calls are executed on other nodes by default. So a crash here would only hard-crash processes on the same node (FLAME lets you group calls, so a FLAME node can have many being executed on it at any time).

Errors bubble back up to the calling process (and crash it by default but can be handled explicitly), so managing and retrying failed calls is easy.
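As a sketch of what "handled explicitly" can look like: by default an error inside the FLAME call re-raises in the caller, so you can wrap the call and retry. The `call_with_retry/3` helper below is hypothetical, not part of FLAME's API.

```elixir
defmodule MyApp.Retry do
  # Hypothetical helper: retry a FLAME call a few times before giving up.
  # FLAME.call/2 re-raises remote errors in the calling process, so an
  # ordinary rescue is enough to intercept a failed remote execution.
  def call_with_retry(pool, fun, attempts \\ 3) do
    FLAME.call(pool, fun)
  rescue
    error ->
      if attempts > 1 do
        call_with_retry(pool, fun, attempts - 1)
      else
        reraise error, __STACKTRACE__
      end
  end
end
```

For anything fancier (backoff, dead-letter queues), the usual OTP tools apply, since the caller is just a normal process.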


Well, this seems nice and easy. Thank you for the example. From a short search, FLAME has local, Kubernetes, and Fly.io backends. I envision running the main Erlang VM on a lightweight server continuously, and starting beefier machines for Python tasks as needed.
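That setup maps pretty directly onto a FLAME pool definition: `min: 0` keeps no runners alive when idle, and the backend options request a bigger machine only when work arrives. The pool name and the exact Fly backend option names (`cpu_kind`, `cpus`, `memory_mb`) are taken from the FLAME README as I recall it; treat them as assumptions.

```elixir
# In the application's supervision tree (sketch):
# a pool that scales from zero and boots beefier Fly machines on demand.
{FLAME.Pool,
 name: MyApp.PythonRunner,          # hypothetical pool name
 min: 0,                            # no runner stays up while idle
 max: 5,                            # cap on concurrent runner machines
 max_concurrency: 10,               # calls allowed per runner
 idle_shutdown_after: :timer.minutes(5),
 backend: {FLAME.FlyBackend,
   cpu_kind: "performance", cpus: 8, memory_mb: 16_384}}
```

The lightweight "main" node just holds this pool definition; the heavy machines only exist while Python work is in flight.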


That's a really good option - it neatly sidesteps the risk of a NIF crash with practically no extra code.


You do still need some infrastructure - the FLAME LocalBackend is mostly for dev, and I'm pretty sure it just runs in the same VM as the parent.

But yeah, if you're doing ML tasks it makes a lot of sense to farm those out to beefier or GPU-equipped nodes anyway, so at that point it's just a natural synergy, AND you get the crash isolation.



