
Yeah, uv is cool and a step above conda, but that business model doesn’t look very profitable…

They did say they want their thing to have an understanding of the code, so maybe they’ll sell semgrep-like features and SBOM/compliance on top. Semgrep is reasonably popular, and if it were bundled into something else (like the package registry itself), that might get enough people over the line to buy it.

Private registries and “supply chain security” tools individually aren’t the hottest market, but maybe together the bundle could provide enough value. Let’s see how it goes.


Yeah, but they're still >50% below SFBA salaries. SFBA comp for a sr dev can easily be $200k+ (and can go higher; lots of anecdotes on here about $350k+ salaries at BigTechCos), while for an EU dev, scraping €90k is considered "good". Devaluing the dollar by 10% and raising EU salaries by 10% doesn't really change the picture.


Oh, this is great! I always have this problem; I find it's one of my biggest barriers when reading queueing theory content. I'm only doing it intermittently, so I haven't memorized the meanings of ρ, σ, μ, λ...

Visually, I also often confuse rho and sigma, and math texts will use psi (ψ) and phi (φ) in weird fonts that I can never tell apart.


I too made a version of this (just a small Go DNS resolver + port forwarding proxy) that lets you do a similar thing: https://gitlab.com/amedeedabo/zoxy

I used the .z domain bc it's quick to type and it looks "unusual" on purpose. The dream was to set up a web UI so you wouldn't need to configure it in the terminal and could see which apps are up and running.

Then I stopped working the job where I had to remember 4 different port numbers for local dev and stopped needing it lol.

Ironically, for once it's easier to set this kind of thing up on macOS than on Linux, because configuring a local DNS resolver on Linux is a mess (cf. the Tailscale blog post "The Sisyphean Task Of DNS Client Config on Linux": https://tailscale.com/blog/sisyphean-dns-client-linux), whereas on the Mac it's a couple of commands.
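
For flavor, the DNS half of a tool like this is tiny. Here's a minimal sketch using the miekg/dns library (an assumption for illustration, not zoxy's actual code), answering every A query under .z with 127.0.0.1; on macOS you'd then create /etc/resolver/z containing "nameserver 127.0.0.1" and "port 5353" to route the TLD to it:

  package main

  import (
      "log"
      "net"

      "github.com/miekg/dns"
  )

  func main() {
      // Answer every A query under .z with the loopback address.
      // A real tool would also run the port-forwarding proxy side.
      dns.HandleFunc("z.", func(w dns.ResponseWriter, r *dns.Msg) {
          m := new(dns.Msg)
          m.SetReply(r)
          for _, q := range r.Question {
              if q.Qtype == dns.TypeA {
                  m.Answer = append(m.Answer, &dns.A{
                      Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA,
                          Class: dns.ClassINET, Ttl: 60},
                      A: net.ParseIP("127.0.0.1"),
                  })
              }
          }
          w.WriteMsg(m)
      })
      log.Fatal((&dns.Server{Addr: "127.0.0.1:5353", Net: "udp"}).ListenAndServe())
  }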

I think Tailscale should just add this to their product; they already do all the complicated DNS setup with MagicDNS, so they could sprinkle in port forwarding and be done. It'd be a real treat.


Fastest growing, but is that because they participated in a pay-for-play kickback scheme? [1][2]

So that number isn't really signal. Now that they're not paying CISOs to adopt the product, they're not going to grow as fast.

[1] https://www.bankinfosecurity.com/blogs/cyberstarts-program-s... [2] https://www.calcalistech.com/ctechnews/article/b1a1jn00hc


Idk man. It's a nice idea, but it has to be 10x better than what we currently have to overcome the ecosystem advantages of the existing tech. In practice, people in the frontend world already use Apollo/Relay/TanStack Query to do data caching and querying, and don't worry too much about the occasional overfetching/unoptimized-ness of the setup. If they need to do a complex join, they write a custom API endpoint for it. It works fine. Everyone here is very wary of a "magic data access layer" that will fix all of our problems. Serverless turned out to be a nightmare because it only partially solves the problem.

At the same time, I had a great time developing on Meteor.js a decade ago, which used Mongo on the backend and then synced the DB to the frontend for you. It was really fluid, so I look forward to things like this being tried. In the end, though, Meteor is essentially dead today, and there's nothing to replace it. I'd be wary of depending so fully on something so important. Recently FaunaDB (a "serverless database") went bankrupt and is shutting down after only a few years.

I see the product being sold is pitched as a "relational version of Firebase", which I think is a good idea: good for starter projects/demos all the way up to medium-sized apps (and it might even scale further than Firebase by being relational), but it's not "The Future" of all app development.

Also, I hate to be that guy, but the SQL in the example could be simpler. When aggregating into JSON it's nice to use a LATERAL join, which essentially turns the join into a for loop and synthesises rows "on demand":

  SELECT g.*,
         COALESCE(t.todos, '[]'::json) AS todos
  FROM goals g
  LEFT JOIN LATERAL (
    SELECT json_agg(td.*) AS todos
    FROM todos td
    WHERE td.goal_id = g.id
  ) t ON true;
That still proves the author's point that SQL is a very complicated tool, but I will say the query itself looks simpler (only 1 join vs 2 joins and a group by) if you know what you're doing.


> Meteor is essentially dead today

Care to explain what you mean by "dead"? Just today v3.2 came out, and the company, the community, and their paid-for hosting service seem pretty alive to me.



That’s awesome, thanks for the pointer.


As a regular programmer, I got really into queuing theory thinking I was going to learn secrets of performance tuning, then was slightly disappointed.

But it turns out the simple parts of it go a long way! E.g. at work we have a single deployment queue for a monorepo. To a first approximation this is an M/D/1 queue (the deploy job takes roughly the same amount of time every time, though arrivals are actually way spikier than Poisson), and I realized that wait time is inversely proportional to the remaining headroom, 1 − utilization, so it blows up as you approach full capacity.

While the infra team was saying "we do X deploys per day out of a possible 4X, so we're only at 25% capacity", I realized that even hitting 2X would triple the already bad wait time.
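
For reference, the standard M/D/1 mean-wait formula (Pollaczek-Khinchine with deterministic service, with arrival rate λ, service rate μ, and utilization ρ = λ/μ) is

  W_q = \frac{\rho}{2\mu(1-\rho)}

so going from ρ = 0.25 to ρ = 0.5 takes the ρ/(1−ρ) factor from 1/3 to 1: triple the queueing delay.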

What happened is that a few initiatives were under way to increase capacity, but then out of nowhere the queue got logjammed (because of high arrival-rate variability) and we had to switch to GitLab merge trains, which run CI concurrently on the optimistic result of merging. I wrote about it here: https://engineering.outschool.com/posts/doubling-deploys-git...

I’m planning on writing a blog post about the math of CI/CD deploy queues as G/D/1 queues.

For a programmer's view of queueing theory and great performance-testing foundations, I highly recommend "Analyzing Computer System Performance with Perl::PDQ" (don't worry about the Perl, the book is very relevant): http://www.perfdynamics.com/iBook/ppa_new.html. It shows examples of queues inside computer systems and how to model them, and the author ships a nice little library (PDQ) for modeling the different computer systems you come across.
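
In that spirit, the nonlinear blow-up is easy to see even with a hand-rolled simulation (a throwaway sketch, not the book's PDQ library):

  package main

  import (
      "fmt"
      "math"
      "math/rand"
  )

  // md1Wait simulates an M/D/1 queue: Poisson arrivals at rate lambda,
  // a fixed service time, one server. Returns the mean wait in queue.
  func md1Wait(lambda, service float64, n int) float64 {
      rng := rand.New(rand.NewSource(1))
      var clock, free, totalWait float64
      for i := 0; i < n; i++ {
          // Exponential inter-arrival times => Poisson arrival process.
          clock += -math.Log(1-rng.Float64()) / lambda
          start := math.Max(clock, free) // wait if the server is busy
          totalWait += start - clock
          free = start + service
      }
      return totalWait / float64(n)
  }

  func main() {
      // Service time 1.0, so utilization rho equals lambda.
      for _, rho := range []float64{0.25, 0.5, 0.75} {
          fmt.Printf("rho=%.2f  mean wait=%.3f\n", rho, md1Wait(rho, 1.0, 1_000_000))
      }
  }

Running it prints mean waits of roughly 0.17, 0.50, and 1.50 service times, matching ρ/(2(1−ρ)).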

I liked realizing that the dreaded "coordinated omission" problem in load testing (when you can only generate X rps and the server can handle more than that, your numbers are bunk) is actually what happens when you think you're modeling an open system but don't have enough resources, and you end up seeing the closed-system behaviour.


That's not what coordinated omission is. CO happens when your load generator can do, say, 500 requests/second, and you're aiming for something well below that, e.g. 100, but! the top throughput for the server is 80 requests per second, and instead of building up an infinite queue, the load generator throttles itself to roughly 80 requests per second.

Why would you build a load generator like this? Normally because you run out of threads -- you have 800 parallel requests in flight and you can't open a new one until one has returned.

Correcting for CO takes a mathematical sleight of hand.
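
To make the failure mode concrete, here's a throwaway Go sketch (doRequest is a hypothetical stub standing in for a real HTTP call): the closed-loop generator silently degrades to whatever rate the server sustains, while the open-loop one keeps its schedule so the delay shows up in the numbers.

  package main

  import (
      "fmt"
      "time"
  )

  // doRequest is a hypothetical stand-in for a real request; the
  // pretend server takes a fixed 20ms per call.
  func doRequest() {
      time.Sleep(20 * time.Millisecond)
  }

  func main() {
      const target = 100 // req/s we believe we're offering

      // Closed loop: one request in flight at a time. At 20ms each
      // this tops out near 50 req/s, and nothing errors out -- the
      // generator just throttles itself to what the server can do.
      n := 0
      stop := time.Now().Add(time.Second)
      for time.Now().Before(stop) {
          doRequest()
          n++
      }
      fmt.Printf("closed loop: %d req in 1s (wanted %d)\n", n, target)

      // Open loop: fire on a fixed schedule regardless of outstanding
      // requests, so a slow server accumulates visible queueing delay
      // instead of silently reducing the offered load.
      tick := time.NewTicker(time.Second / target)
      defer tick.Stop()
      sent := 0
      stop = time.Now().Add(time.Second)
      for time.Now().Before(stop) {
          <-tick.C
          go doRequest() // per-request latency would be recorded here
          sent++
      }
      fmt.Printf("open loop: %d req sent on schedule\n", sent)
  }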


That was a neat blog post! Can you clarify this bit: "If a pipeline fails, the associated MR is kicked from the train and all preceding pipelines are restarted." - Do you mean all the pipelines _already started_ are restarted? Because I'm guessing that if a pipeline fails, the preceding pipelines shouldn't be affected, strictly speaking.


Off-topic: I'm toying with GPT-3 and I used it to extend rockmeamedee's comment: https://gist.github.com/cool-RR/12b0fc8106f6df18925de632f2b6...


The number one thing that got me to senior engineer, and will get me to staff as I improve (though staff has more of a relationship-management component), is emotional management.

Staying in a long debugging session without ragequitting and getting to the fix.

Pushing through when I feel really dumb for not knowing how a React hook works or how to test a particular feature.

Not getting bored (or recognising it and pushing through) when fixing a bug that is "trivial" but complicated.

Not getting angry when the codebase is not architected well or is hard to understand.

In every one of these situations, better emotional management has made me more productive and calmer, and left me with a better outlook on the situation in the end, so my suggestions to external stakeholders about what to do next were more accurate.

And I'm still learning! I get bored and take 3x too long all the time.

This is maybe an exaggeration, but after you learn about for loops and functions, the rest of the job is emotional management.

Ok, maybe EQ becomes important a little bit after for loops, but way earlier than you think. Maybe after 1 year of experience.


> The number one thing that got me to senior engineer, and will get me to staff as I improve (though staff has more of a relationship management component)

I love the little rat race we’ve created for ourselves. When did staff engineer become a thing? I’ve just noticed it but feel like it’s probably been around for a few years.

Next question: when you get to staff engineer, what's next? Do you become a lead engineer or staff engineer II?

Jokes aside

> Not getting angry when the codebase is not architected well or is hard to understand.

This is the most important job hack for a software engineer of any artificial title. Even more important when your title starts with a C.


Usually I see Staff Engineer followed by a title like Architect, never by anything about Lead or Manager, because that's the whole point: you should be able to increase your scope so you're making tech decisions across the whole org without also having to become a good people manager.


Staff engineer is the name for a “rank”, but the rank has existed for a long, long time. Staff is the first “leadership” engineering role, roughly equivalent to a manager in scope.

Some places have a manager equivalent of senior engineer, but that's also sometimes a role you're expected to leave quickly: either via promotion or back to IC.


This spoke to me. Thank you for the check on my own imposter syndrome.


The term "crisis response" will get you info on PR crises and brings to mind the TV show Scandal.

The term you want for our field is "Incident Response", and the practice of 1) preventing them, 2) handling them, and 3) learning from them is Resilience Engineering. It's about investigating airplane crashes, nuclear meltdowns, errors during surgery, etc., and learning how humans keep complex systems running.

I recommend "Behind Human Error" by David Woods as a great starting point. A key insight of this field is that incidents aren't just "some idiot didn't follow the safety checklist"; often the safety checklist itself causes the issue. At some level, errors happen because of complicated interactions between the system and even its safety mechanisms.

An interesting tech-industry document is the STELLA report [1], in which a few tech companies compare notes on incidents.

[1] https://snafucatchers.github.io/

