I don't have a reddit account, but that is pretty much what etcd was built to do.
IIRC it was built for CoreOS to handle fleets of Linux machines. Later Kubernetes started using it for its metadata storage. It is a distributed key-value store, with clients that can map parts of the keyspace to a filesystem or do pub-sub.
Dunno, I still think it can be characterized as the low end of average, even if it's closer to 2-3 years of experience. Maybe not in Stockholm, but there are places that still offer like 30k for entry-level hires.
As the article says, talking about salaries is avoided in Sweden, so there are probably many offers in that range.
The recommendation is for Sweden in general though, not Stockholm. It is for any kind of engineer, with IT above the average.
What are you referring to in the article? All taxable income (which would include salary) is public information in Sweden and the majority are unionized and do talk about salaries.
Yeah, that's what I thought. I think you read too much into that. He's just being the humble and cool guy he is. It's not as taboo here as it is in the US.[1]
IIRC HVIF is also designed to render in one pass, targeting both small size and speed. It also uses a non-standard floating-point format to be even more compact, and has more than two levels of detail. So my guess is HVIF is a lot better, except for tooling outside Haiku :)
* Two different Prestos, prestodb and prestosql, for maximum confusion. (I think one of them renamed)
* Making the coordinator highly available by default is hard
* Autoscaling workers is not simple
* Code is very dependent on its own web framework, which tries to do everything and lacks docs.
* Resource planner for multiple queries is lacking
* Worker configuration takes a lot of skill
All of these could be solved, but in most cases you can find other solutions where you get a simpler set of problems.
Hey I'm a contributor to prestosql (the one that renamed to Trino). I'll provide a few of my opinions into some of these from the vantage point of our project.
* It's definitely confusing, but it's pretty common in open source projects to see the original creators split off when corporate oversight interferes with the open-source governance model (https://www.computerworld.com/article/2746627/hudson-devs-vo...). This is especially true when, as the OP mentioned, it's a pretty cool tech and there's a lot of interest in it. Now that the names are different, it is clearing up a bit. We're hoping in a few years there will be one project standing so that you won't have to choose. I don't have to tell you which one I think it is.
* Active-active HA is not really necessary IMO, as Trino is designed for low-latency interactive queries in general. It can handle longer-running batch queries, but it gives up fault tolerance to fail fast, and you just resubmit the query. Predecessors like Hive, Spark, etc. handle ETL and long-running batch processes efficiently, but that adds complexity, since the query has to checkpoint its work. I could see the need for an active-passive HA setup to have on deck during a failure. Setting up your own active-passive HA is as simple as putting two coordinators behind a proxy and pointing your workers to the proxy address. Then you basically have the proxy run health checks and flip over in the event of an outage. Here's the issue to track native HA though: https://github.com/trinodb/trino/issues/391.
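For what it's worth, that active-passive setup can be sketched as an HAProxy config along these lines (hostnames, ports, and timeouts are made-up values; `/v1/info` is the coordinator's info endpoint used here as a health check):

```
# Sketch of an active-passive proxy in front of two Trino coordinators.
# coord1/coord2 hostnames and all ports are hypothetical.
defaults
    mode http
    timeout connect 5s
    timeout client  5m
    timeout server  5m

frontend trino
    bind *:8080
    default_backend coordinators

backend coordinators
    option httpchk GET /v1/info
    server coordinator1 coord1.internal:8080 check
    # "backup" means this server only receives traffic when the first fails its checks
    server coordinator2 coord2.internal:8080 check backup
```

Workers would then point their discovery URI at the proxy address instead of a single coordinator.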
* I'm not sure why autoscaling is said to be difficult. This is exactly the kind of workload Kubernetes and Docker are meant to manage.
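As a concrete sketch, on Kubernetes this could be a stock HorizontalPodAutoscaler over the worker pods (the `trino-worker` Deployment name and the thresholds are assumptions, not anything the projects ship):

```yaml
# Sketch: CPU-based autoscaling for a hypothetical "trino-worker" Deployment.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: trino-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: trino-worker
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

One caveat: scaling down can kill in-flight queries unless workers are drained first, so scale-down policy needs more care than scale-up.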
* The only reason this is a pain to me is that engineers wanting to join our community and commit face a bit of a learning curve, and depend heavily on us mentoring and guiding them on how the REST API works, which we don't mind. However, I agree with this choice from a design perspective for the user. If you want to use Trino, it's better not to be exposed to this implementation detail or to mess with how it works. It will likely cause you more pain.
I think so. Since v2 and v3 are two different things, they made a mess of it. They should have been two different specs. Continuing with the docker-compose name is just bad; release any improvements as a clean spec.
"Users on Ubuntu 20.04 LTS (Focal) can take advantage of additional optimizations found on newer ARM-based processors. The large-system extensions (LSE) are enabled by using the included libc6-lse package, which can result in orders of magnitude performance improvements."
From https://ubuntu.com/blog/ubuntu-aws-graviton2
Another benefit is it removes a lot of otherwise unnecessary imports.
I think var should be used anywhere the type of the right-hand-side expression is clear. Compare
var str = new String();
to
var waat = HERE.be().dragons("purple");
And with Spring-like classes like 'StaticFactoryBananaMakerBuilder', spelling out the class name makes it less readable.
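To make that concrete, a toy sketch (the builder class here is made up purely for illustration, not a real Spring type):

```java
// Sketch: with long Spring-style names, the explicit type on the left
// repeats what the right-hand side already says.
public class VarDemo {
    static class StaticFactoryBananaMakerBuilder {
        private String color = "yellow";
        StaticFactoryBananaMakerBuilder color(String c) { this.color = c; return this; }
        String build() { return "banana:" + color; }
    }

    public static void main(String[] args) {
        // Explicit type: the class name appears twice and adds nothing.
        StaticFactoryBananaMakerBuilder verbose = new StaticFactoryBananaMakerBuilder();

        // With var (Java 10+): the right-hand side still names the type.
        var builder = new StaticFactoryBananaMakerBuilder().color("green");
        System.out.println(builder.build()); // prints "banana:green"
    }
}
```

Either way the type is obvious; var just drops the repetition.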