Hacker News | daghamm's comments

Email addresses are already visible in commit messages, so nothing new at all.


How will this affect tourism?

I can't see people paying thousands of dollars for a US vacation if there is a small chance of them being randomly detained and locked up for days.


No idea, but technically some phones are now water-cooled.


I like Liberapay; I wish more people would use it.

I also hope they can reduce the cut paid to PayPal/Stripe. More money should go to devs.


> What are the payment processing fees?

> The fees vary by payment processor, payment method, countries and currencies. In the last year, the average fee percentages have been 3.1% for the payments processed by Stripe and 5% for the payments processed by PayPal.

Passing roughly 97% of the money on to devs is pretty amazing. Liberapay itself also doesn't take a cut of donations.
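
To make the split concrete, a quick sketch using those quoted average rates (the $10 donation is a made-up figure; actual fees vary by country, currency, and payment method, as the FAQ says):

    # Net payout under the quoted average fee rates
    for processor, fee in [("Stripe", 0.031), ("PayPal", 0.05)]:
        net = 10.00 * (1 - fee)
        print(f"{processor}: $10.00 donated -> ${net:.2f} to the dev")
    # Stripe: $10.00 donated -> $9.69 to the dev
    # PayPal: $10.00 donated -> $9.50 to the dev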


I don't think that would be possible, as Liberapay is essentially just a UI for accepting donations directly into the developers' Stripe/PayPal accounts.


Good catch, it is not a payment processor.


But maybe they can negotiate a better deal for their payments?


Doubtful. It's 100% funded by its own Liberapay page.[0] It's the second-most-donated-to recipient, but at about $650 a week that's not even enough to hire a full-time dev, let alone someone with the business skills needed to pull something like that off.

[0] https://en.liberapay.com/Liberapay/


That's more than the median dev salary in most European countries.

https://helloastra.com/blog/article/average-software-develop...


That's about $33.8k a year, before taxes and with no benefits.

The numbers in your source also contradict your statement, but point taken that European dev salaries are nominally much lower than American ones. It's still obviously not enough to pay both a dev and a business expert.

EDIT: wow I just checked and the weekly amount they receive just went up to ~$875!


They don't even have the ability to group transactions together for US payments (or payments in Australia, Japan, Mexico, Malaysia, Hong Kong, New Zealand, Singapore, and others [1]) to lower fees, so I think any hope that a project like this would have any leverage is misplaced.

[1]: https://liberapay.com/about/global


I asked Stripe back in 2018 if we could expect their unexplained same-region limitation on transfers to be lifted. They said it would be lifted soon. It's 2025, the limitation still exists and I still don't know why.


Payment processing is dominated by a massive monopoly, leaving no room for negotiation. The industry is controlled by global elites and financial powerhouses who gatekeep access. Only stablecoins have the potential to disrupt this cycle and offer an alternative.


Do you know where to find statistics on accidents and fatalities per vehicle or brand?

Do insurance companies, for example, provide such data?


I like this approach.

But can you safely do this with multi-room devices?


The US is pushing for these to replace classical crypto in commercial applications by 2030.

The C in CNSA 2.0 is for "Commercial".
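
To make "these" concrete: the primitive CNSA 2.0 specifies for key establishment is ML-KEM (Kyber), at the -1024 parameter set. A rough sketch of the encapsulate/decapsulate flow using the liboqs-python bindings; the "ML-KEM-1024" identifier assumes a recent liboqs build (older ones call it "Kyber1024"):

    # pip install liboqs-python (requires the liboqs C library)
    import oqs

    with oqs.KeyEncapsulation("ML-KEM-1024") as receiver:
        public_key = receiver.generate_keypair()
        with oqs.KeyEncapsulation("ML-KEM-1024") as sender:
            # sender derives a shared secret plus a ciphertext from the public key
            ciphertext, secret_tx = sender.encap_secret(public_key)
        # receiver recovers the same shared secret from the ciphertext
        secret_rx = receiver.decap_secret(ciphertext)
    assert secret_tx == secret_rx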


Sorry, can you expand on the issues with KEM?


At his level, the only thing he cares about is tax breaks.



When running on Apple Silicon you want to use MLX, not llama.cpp as this benchmark does. Performance is much better than what's plotted there, and it seems to be getting better, right?

Power consumption is almost 10x lower for Apple.

VRAM is more than 10x larger.

Price-wise, for running same-size models, Apple is cheaper.

The upper limit (larger models, longer context) is far higher for Apple (with Nvidia you can easily fit 2x cards; beyond that it becomes a complex setup that no ordinary person can build).

Am I missing something, or is Apple simply better for local LLMs right now?
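
For reference, trying MLX yourself is only a few lines. A rough sketch with the mlx-lm package (the model name is just one example from the mlx-community hub, not something from the benchmark):

    # pip install mlx-lm
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
    text = generate(model, tokenizer,
                    prompt="Explain KV caches briefly.",
                    max_tokens=256,
                    verbose=True)  # verbose=True prints tokens/sec stats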


Unless something changed recently, I’m not aware of big perf differences between MLX and llama.cpp on Apple hardware.
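
One way to check is to push the same prompt through both stacks and compare the reported tokens/sec. A sketch of the llama.cpp side via the llama-cpp-python bindings (the GGUF filename is a placeholder for whatever quant you downloaded):

    # pip install llama-cpp-python (built with Metal support on macOS)
    from llama_cpp import Llama

    llm = Llama(model_path="mistral-7b-instruct-q4_k_m.gguf",
                n_gpu_layers=-1)  # offload all layers to the GPU
    out = llm("Explain KV caches briefly.", max_tokens=256)
    print(out["choices"][0]["text"])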


I'm under the same impression. Llama.cpp's readme used to start with "Apple Silicon as a First Class Citizen", and IIRC Georgi works on Mac himself.


There is a plateau where you simply need more compute and the M4 cores are not enough, so even if they have enough RAM for the model, the tokens/s is not useful.


For all models that fit in 2x 5090s (2x 32 GB) that's not a problem, so you could say that if you have this problem, RTX is also not an option.

On Apple Silicon you can always use MoE models, which work beautifully. On RTX it's honestly kind of a waste to run MoE; you'd be better off running a single dense (fully active) model that fills the available memory (with enough space left for the context).


I'm trying to find that out as well, as I'm considering a local LLM for some heavy prototyping. I don't mind which HW I buy, but I'm on a relative budget, and energy efficiency is also not a bad thing. It seems the Ultra can do 40 tokens/sec on DeepSeek, and nothing even comes close at that price point.


The DeepSeek R1 distillations onto Llama and Qwen base models are also, unfortunately, called “DeepSeek” by some. Are you sure you’re looking at the right thing?

The OG DeepSeek models are hundreds of GB even when quantized; nobody is using RTX GPUs to run them anyway…


You are missing something. This is a single stream of inference. You can load up the Nvidia card with at least 16 inference streams and get much higher throughput in tokens/sec.

This is just a single-user chat-experience benchmark.
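
To illustrate the gap, a sketch of batched inference with vLLM (my example stack, not the benchmark's; the model name is just a placeholder): submitting 16 prompts at once lets the engine batch them on the GPU, so aggregate tokens/sec ends up far higher than a single chat stream.

    # pip install vllm (NVIDIA GPU assumed)
    from vllm import LLM, SamplingParams

    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
    prompts = [f"Question {i}: summarize batching." for i in range(16)]
    # all 16 requests are scheduled together via continuous batching
    outputs = llm.generate(prompts, SamplingParams(max_tokens=128))
    for o in outputs:
        print(o.outputs[0].text[:60])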

