nocman's comments

> sed is not "stream editor" as it says above, it's "stream ed"

Well, according to the man page, it is indeed "stream editor":

https://man.cat-v.org/unix_8th/1/sed

I was already aware of its relation to 'ed' (having had to actually use 'ed' in ancient times). However that doesn't change the fact that it does stand for "stream editor".

After reading your post, I thought "That doesn't seem right, I remember it specifically being referred to as 'stream editor'", so I went looking.


> There was strong cultural pressure to be able to write perl in as few bytes as possible

Hard disagree. Many Perl programmers enjoyed engaging in code golf (always just for fun, in my experience), but in my nearly 30 years of programming Perl, I never encountered anything that I would call pressure to do so -- not from anyone.


I don't think in this case that most people who know what Arduino is would be at all misled by the title. Being "dead" doesn't have to mean that a company ceases to exist. There are plenty of what I would call "dead" companies that still make money every year. "Dead" can be used figuratively. In this case, it means that though the company continues to exist, the reason for which many people bought their products is now gone.


And Randall deserves EVERY single one of them, IMHO!


Agree but it can definitely be redistributed. The overwhelming majority of users should better think about #936 than #538.


Oh, good, I'm not the only one then.


"With traditional solutions (such as OpenVPN / IPSec) starting to run out of steam" -- and then zero explanation or evidence of how that is true.

I can see an argument for IPSec. I haven't used that for many years. However, I see zero evidence that OpenVPN is "running out of steam" in any way shape or form.

I would be interested to know the reasoning behind this. Hopefully the sentiment isn't "this is over five years old so something newer must automatically be better". Pardon me if I am being too cynical, but I've just seen way too much of that recently.


Seems like you just haven’t been paying attention. Even commercial VPNs like PIA and others now use Wireguard instead of traditional VPN stacks. Tailscale and other companies in that space are starting to replace VPN stacks with Wireguard solutions.

The reasons are abundant, the main ones being performance is drastically better, security is easier to guarantee because the stack itself is smaller and simpler, and it’s significantly more configurable and easier to obtain the behavior you want.


I use and advocate for wireguard, but I don't see its adoption in bigger orgs, at least the ones I've worked in. I appreciate this situation will change over time, but it'll be a long tail.


It’ll take a little bit of time. But for example Cloudflare’s Warp VPN also uses Wireguard under the hood.

So while corp environments may take a long time to switch for various reasons, it will happen eventually. But for stuff like this corp IT tends to be a lagging adopter, 10-20 years behind the curve.


Warp actually uses MASQUE (UDP/IP over QUIC) by default


I hadn’t realized they changed the default. It originally was Wireguard but you’re right - they added MASQUE as a tunnel option and it’s the default.


Bigger orgs for the most part use whatever VPN solutions their (potentially decade-old) hardware firewalls support. Until you can manage and terminate a Wireguard tunnel on Cisco, Juniper, Fortigate (etc.) hardware, it's going to take a while to become more mainstream.

Which is a shame, because I have a number of problematic links (low bandwidth, high latency) that wireguard would be absolutely fantastic for, but neither end supports it and there's no chance they'll let me start terminating a tonne of VPNs in software on a random *nix box.


If you use Kubernetes and Calico you can use Wireguard to transparently encrypt in-cluster traffic[1] (or across clusters if you have cluster mesh configured). I wonder if we'll see more "automatic SDN over Wireguard" stuff like this as time goes on and the technology gets more proven.

Problem is IIRC if you need FIPS compliance you can't use Wireguard, since it doesn't support the mandated FIPS ciphers or what-have-you.

[1]https://docs.tigera.io/calico/latest/network-policy/encrypt-...
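For reference, enabling this in Calico is a one-line patch to the Felix configuration (a sketch based on the Calico docs; assumes `calicoctl` is installed and pointed at your cluster, and that your nodes have WireGuard kernel support):

```shell
# Turn on node-to-node WireGuard encryption cluster-wide
calicoctl patch felixconfiguration default --type='merge' \
  -p '{"spec":{"wireguardEnabled":true}}'
```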


Sure, but I mean the "road warrior" client -- typical, average company VPN users. Ironically, getting a technology like wireguard into k8s is easier than replacing an established vendor/product that serves normal users.


The anti-FIPS position of the wireguard implementors is a big problem for adoption.


Exactly. We've looked at using Wireguard at my company, but because it can't be made FIPS compliant, it makes it a hard sell. There is a FIPS Wireguard implementation by WolfSSL, interestingly enough.


Yeah, it'll be running out of steam not only when regulators _understand_ wireguard, but when it's the recommendation and orgs need to justify their old VPN solution.


OpenVPN makes SNAT relatively trivial, from what I can tell. So I can VPN into a network, use a node on the network as my exit node, and access other devices on that network, with source-based NAT set up on the exit node to make it appear as if my traffic is coming from the exit node.

Wireguard seems to make this much more difficult from what I can tell, though I don't know enough about networking to know if that's fundamental to wireguard or just a result on less mature tooling.


WG is no different really, but you'll have to set it up yourself unless you use a client like tailscale. WG is just bare bones and you're supposed to use a proper client.

Add SNAT rule, enable forwarding, add allowedIPs to WG config.
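Concretely, those three steps might look something like this on the exit node (a rough sketch; `wg0`, `eth0`, and all addresses are placeholders for your actual interfaces and subnets):

```shell
# 1. Enable IPv4 forwarding so the exit node will route for the tunnel
sysctl -w net.ipv4.ip_forward=1

# 2. SNAT (masquerade) tunnel traffic leaving via the LAN interface,
#    so LAN devices see it as coming from the exit node
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

# 3. On the client, route the remote LAN through the tunnel by widening
#    AllowedIPs in the [Peer] section of wg0.conf, e.g.:
#      AllowedIPs = 10.8.0.1/32, 192.168.1.0/24
```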


Right, so my understanding is essentially correct. OpenVPN makes it trivial to set up a VPN which lets you access a remote LAN, without having to involve third-party SaaS products like Tailscale.


It was just an example, and you could run headscale if you want the mesh feature. There's simple gui clients like wireguard-gui as well.


And wireguard-gui has an easy GUI for source-based NAT?


Wireguard is slowly eating the space alive, and that's a good thing.

Here's a very educational comparison between Wireguard, OpenVPN and IPSec. It shows how easy wireguard is to manage compared to the other solutions and measures and explains the noticeable differences in speed: https://www.youtube.com/watch?v=LmaPT7_T87g

Highly recommended!


I wouldn't say they're running out of steam (they never had any) but OpenVPN was always poorly designed and engineered and IPSec has poor interop because there are so many options.


Unfortunately (luckily?) I don’t have enough knees about IPsec, but usually things make a lot more sense once you actually know the exact architecture and rationale behind it


Knowledge *


Interestingly, I tried this out just now on one of my devices, and Wireguard VPN speed was 5x faster than OpenVPN with the same configuration.


IPSec isn’t running out of steam anytime soon. Every commercial firewall vendor uses it, and it’s mandatory in any federal government installation.

WireGuard isn’t certified for any federal installation that I’m aware of and I haven’t heard of any vendors willing to take on the work of getting it certified when its “superiority” is of limited relevance in an enterprise situation.


OpenVPN has both terrible configuration and performance compared to just about anything else. I've seen it really drop off to next to no usage both in companies and for personal use over the past few years as wireguard based solutions have replaced it.


Same here. With openvpn my somewhat modern cpu takes out a whole core @100% at like 200 megabits/s.

With WireGuard I instead max out the internet bandwidth (400 megabits/s) with like 20% cpu usage if that.

I really don’t understand why. We have AES acceleration. AES-NI can easily do more bps… why is openvpn so slow?


Is there a particular pain point (or set of pain points) that you have using git which is removed when you use Jujutsu?

I am interested to know, because there seem to be a small number of people who really seem to like it, and up to this point I haven't been able to understand what it is that they are all so excited about.


For me, it's two things:

1. I understood git better after ten minutes of jj than after fifteen years of git. Git just doesn't expose its underlying data model as well as jj does. I guess, if you already know git well, this isn't going to make a difference for you.

2. This question is a bit like asking what can I do with a calculator that I can't do with pen and paper? Technically, nothing, but everything will be so much easier that you'll be much more likely to use it. Even though I can, technically, stash my worktree and jump to another commit with git, it's so fiddly to unstash (especially with multiple stacked switches/stashes) that I just never did it.

With jj, I leave commits in the middle and jump to other commits (to fix a bug or make a small change I noticed I need while working on a larger change) all the time, because there's zero friction.

jj just removes all the friction that's prevalent in git. Things are easy in jj that in git are merely possible.


> With jj, I leave commits in the middle and jump to other commits (to fix a bug or make a small change I noticed I need while working on a larger change) all the time, because there's zero friction.

For git users who are wondering "What friction? I just git stash and jump to another branch":

In jj, you just jump without needing to type any command like git stash.


git stash is not that simple. you'd need to remember what branch that stash applies to to get back to where you were.

I'm new to jj. I'm still mixed on whether I like it or not. I think it's mostly familiarity. For example, switching to a commit puts things in the state before the files were committed. All my projects have a presubmit step that says "hey! commit your files!", so they are all incompatible with jj at the moment, or at least with the defaults. I end up having to do temp stuff like `jj new` (ok, now they're committed), then run my presubmit scripts, then `jj undo` so I don't have this unneeded commit. That said, I'm sure there's a better way, I just haven't gotten used to jj yet.

Others have said this, `jj undo` and `jj op restore` have been lifesavers though. No matter what I do I can get back to where I was before I messed up.
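For anyone following along, the stash-free "jump and come back" workflow described above is just a couple of commands (a sketch; change IDs like `xyz` and the `presubmit.sh` script are placeholders):

```shell
# Working copy is always a commit in jj, so jumping needs no stash:
jj new xyz      # start a new change on top of commit xyz
# ...fix the bug there, then jump back to where you were:
jj edit abc

# The presubmit workaround from above:
jj new          # snapshot current work as a committed parent
./presubmit.sh  # presubmit now sees everything committed
jj undo         # drop the extra commit again
```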


> git stash is not that simple. you'd need to remember what branch that stash applies to to get back to where you were.

I use the stash for changes I like or for small experiments, not tied to anything. For any other changes, I just create a wip commit and switch. It’s trivial to switch back and soft reset.


This is easy in jj too, I’ll often try something, jj new @- to try something else, and jj abandon whichever version I don’t end up using.

The nice thing is that all of this is part of the commit graph, not buried in stashes hidden from sight.


> git stash is not that simple.

It's a stack of diffs.

Anyway, I've probably used this to transfer changes between branches thousands of times. Once you grasp the underlying data model all these abstractions introduced by jujutsu seem more confusing.

That said, I do understand most people aren't going to take the day or so to read through any of the hundreds of detailed "explain the data model" articles online.


Yes I know git stash is a stack of diffs. I’m responding to this

>> With jj, I leave commits in the middle and jump to other commits (to fix a bug or make a small change I noticed I need while working on a larger change) all the time, because there's zero friction.

> For git users who are wondering "What friction? I just git stash and jump to another branch"

git stash is not equivalent to jumping to another commit in jj


> Anyway, I've probably used this to transfer changes between branches thousands of times.

You could just use cherry pick.


That involves having to commit


> git stash is not that simple. you'd need to remember what branch that stash applies to to get back to where you were.

This quote confused me for a while. I was thinking "git stash isn't branch specific, it's just a single bucket". But I realize you must be making lots of little changes that are highly branch specific and then not wanting to commit those, but instead stashing them. Which would leave you with a hellscape of stashes that can't just be unstashed.

The biggest problem with git is people just inventing asinine ways to do things and ending up with absolutely stupid problems like that. No sane person does these things, but yet I do keep encountering people digging holes and falling in them. It's a bit like people who invent the clever idea of having one repository with multiple code bases on different root branches. It's possible, but you don't deserve to be working in this industry if you think it's a good idea.

Git is simple. It's stupid simple. That's its problem.


> But I realoze you must be making lots of little changes that are highly branch specific and then not wanting to commit those, but instead stashing them

No, I'm specifically responding to the person above who claimed "git stash" is the same as switching to another commit in jj. It's not.


I've taken to summing it up as "jj makes everyone the git guru".


I would think the obvious answer is how jj deals with merge conflicts.

In git, if you get a conflict, you feel like you have to resolve it now.

With jj, most of the times I get merge conflicts, I simply ignore them and deal with them later. A conflict is not at all a blocker.


> In git, if you get a conflict, you feel like you have to resolve it now.

I guess I view that as a positive rather than a negative. I'm not saying that dealing with merge conflicts is a picnic -- it isn't. I just find it difficult to believe that ignoring them and resolving them later will improve the situation in the long run.


It's not about "ignoring" conflicts. In jj you're often working with stacked diffs, and merge conflicts can impact a dozen "branches" all at once. This is trivial to resolve in jj and a nightmare in git. It lets you work on them one piece at a time and upon resolving it, instantly see the conflicts fixed across all your branches at once.


It’s about giving you choice. If you want to then fix them immediately, you can. But you don’t have to if you don’t want to.

But really, it’s about something deeper: rebase is a first-class operation in memory, and not a series of patch applications via the file system. Rebases are therefore lightning quick, and will always succeed, which is nice. If you get partway through resolving a conflict and want to change to something else for some reason, that’s possible and simple.


I think the word "later" is unhelpful in this situation because it implies all sorts of different timescales. You don't want to be resolving merge conflicts three weeks after you've created them, you're right!

Typically for me, "later" means "immediately after the rebase command has finished", which is very similar to git's "while the rebase command is running", but has some subtle and important differences.

For example, because the rebase completes first, I get to see roughly what the end-state of the rebase is before I start doing the hard work of fixing conflicts. This is useful as a sanity check - sometimes the reason I'm getting a bunch of merge conflicts is because I was rebasing the wrong things in the first place and included an extra commit somewhere. Seeing the result first gives me the chance to sanity check what I'm doing.

Another thing is that my repository is never in a broken state where I can't continue doing other things. There's no way in git to stash a rebase, say because I've realised a different approach might work better or just because I urgently need to work on something different. I either need to cancel the rebase and start it again later, or keep on going until it's over. In jj, because the conflicts are automatically checked in as part of the commits, I can easily jump backwards and forwards between the work I'm doing resolving my conflicts, and anything else.

Another way of thinking about it is that git is very modal. You check out a branch and are in the "branch" mode, and then you start a rebase and are in the "rebase" mode, and then you do a bisection and are in the "bisect" mode - it's difficult to move freely between these modes, and there's certain things you can only do in some modes and can't do in others. In contrast, jj has exactly one mode: "there is a commit checked out". The different operations like rebasing all happen atomically, so you never see the halfway state. But because things like conflicts can be encoded in the commit itself, you still have all the same power that the modal approach had, just with a simpler conceptual model.


the thing about rebasing/cherry-picking (including just popping the stash) in git is that you only have 2 choices: fix the conflict now or abandon the entire rebase/cherry-pick

with jj, you have the option to fix half of it and come back later. you can take a look and see how bad the conflicts are if you go a certain route and compare to another option


Are you using your editor or special software/plugins for resolving conflicts? I used Sublime Merge in the past, but now I’m using Magit+ediff. Resolving conflicts is quite trivial in that case. And I can always ‘rebase -i’ to revert some of the decisions I’ve taken.


> With jj, most of the times I get merge conflicts, I simply ignore them and deal with them later.

Sorry? You what? How do you know which bit from which source goes where?


Here's a typical scenario.

You do a git pull, just so your branch isn't so out of sync. Immediately you get merge conflicts. You then tell jj "Hey, I'll deal with this later", and make a branch off of the last commit that was conflict free and continue your work there. jj stores the conflict as is, but your branch is conflict free.

When you feel you have the energy to deal with the conflict, you switch to the branch that has the conflict, and fix the issue(s). Then you can manipulate the graph (rebase, whatever) so you can have everything in one branch - your changes and the changes you pulled in.
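As a rough sketch of that flow (change IDs like `xyz` are placeholders; the exact rebase target depends on your setup):

```shell
jj git fetch                 # pull upstream changes
jj rebase -d main@origin     # rebase; any conflicts are recorded in the
                             # affected commits, not thrown in your face
jj new xyz                   # keep working from the last conflict-free commit
# ...later, when you have the energy:
jj edit cfl                  # jump to the conflicted commit
# resolve the conflict markers; descendants are rebased automatically
```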


> When you feel you have the energy to deal with the conflict

So you just kick the can down the road and end up with possibly an even more difficult conflict resolution?


> So you just kick the can down the road and end up with possibly an even more difficult conflict resolution?

That sentiment is true for pretty much anything in life one may decide to defer till later :-)

More concretely, it's often not hard to tell if deferring it will make it worse, or merely the same.

The whole point of version control is to give your mind some peace in not worrying about things ("Let's make this change and we can always revert if it doesn't work out"). Conflicts are no different. There's no fundamental reason a conflict needs to be treated like an emergency.


Or none at all, if you decide to abandon that work (as has happened to me a bunch).


Isn't this equivalent to simply hard resetting to before the pull? (Or the commit before the conflict?) Plus then you don't end up with an extraneous branch.


So you essentially do a fast-forward until the first commit with a merge conflict and then take the previous one?

Sounds like something that could also become a flag for git merge.


No. The GP making a commit off of the first non-conflict isn’t the essence of the feature he’s talking about, just an example of what he can do next.

He’d also be free to edit upstream to not commit, or split a change in two so that parts unrelated to the conflict are in their own change. The big idea is that he doesn’t need to blindly decide that before seeing what the conflict is.


Git won’t let the portion of the branch that’s still conflicted remain in conflict while you go and work on the other part.


I would just abort the conflict resolution.


Then you’re back to the state before the rebase. Which is fine, the point is just that they’re not equivalent!


Yes. The equivalent would be to just keep the successfully rebased part. Right now you need to keep the ref yourself.

> Sounds like something that could also become a flag for git merge.


Yes and no.

Before I started using Jujutsu, I didn't have any pain points with using Git. I didn't understand what all the fuss was about. Git works well! So I totally understand how most Git users have that same reaction when hearing about Jujutsu.

I think the reason I even tried it out in the first place was because Steve Klabnik wrote a tutorial about it. I have a lot of respect for him, because the Rust book is really good. So I thought: if Steve thinks it's worth it, I should probably check it out.

Now that I'm used to jj, going back reveals like 100 things that are immediately super annoying when using git. I don't feel like writing it all down TBH. :-) In a general sense, Jujutsu gets out of your way much better than Git. There are a lot of situations where Git blocks you from continuing to work. Have a merge conflict? Stop working, fix it right now. Want to check out another branch? Nu-uh, clean up your dirty worktree first. jj doesn't do that. Have a conflict? I'll record it in the commit, fix it whenever you like. Checking out another branch? No worries, I'll keep your work in progress safe in a commit.


For me, everything git is a pain point. But it's not so much pain points that it addresses as that it makes entirely new things dead-simple to do, especially via jjui.

"Megamerges" are one such example. I've shared many links, here and in other posts.


> for me, everything git is a pain point

Yeah, I was looking for something (or "things") specific. An "I hate everything about it" explanation doesn't really compel me to try out the alternative.

> "megamerges" are one such example. ive shared many links, here and in other posts

I read through one megamerge link you shared ( https://v5.chriskrycho.com/journal/jujutsu-megamerges-and-jj... ). So the argument seems to be (forgive me if I'm reading this wrong): if you have multiple versions of a single set of source files that all have differing changes, for you Jujutsu makes it easier (easier than git, that is) to merge them into the final commit you want to end up with. Is that correct?

Just trying to make sure I understand. Honestly, after reading that article I am still not feeling the need to try Jujutsu out. I'm still open to being convinced, but have yet to see anything that makes me go "wow, I need to try that!".


"multiple versions" = feature branches, possibly all in progress, probably all related. In a couple seconds, you can create a merge on top of all of them to join up their combined functionality/changes, work on top of that ON ALL OF THEM AT ONCE, and then squash all the relevant changes into the respective PRs that others (who are just using git) can review and merge into main.
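The mechanics are only a couple of commands (a sketch; branch names and the `--into` squash target are placeholders, and `jj squash --into` assumes a reasonably recent jj version):

```shell
# New change with all three in-progress branches as parents: the megamerge
jj new feat-a feat-b feat-c
# ...work on top of the combined tree of all three at once...

# Push the relevant hunks back down into the branch they belong to
jj squash --into feat-a path/to/file-a.rs
```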

At this point A LOT has been written in this and other threads, as well as lots of essays and tutorials about how jj just completely transforms your workflow. If you're curious, you'll seek it out. If not, that's fine as well.


What does the history look like then? Just a single merge from a ton of branches or?


The parent is describing the megamerge pattern, which is a way to work on multiple branches at once.

You don't have to do that, and you rarely push it to others. History looks the same as git, usually, although I end up rebasing more than I ever did in git, since it's easier and safer.


> I'm still open to being convinced, but have yet to see anything that makes me go "wow, I need to try that!".

You might not find that feature, but I'd suggest giving it a go anyway. The list of jj technical superiorities is short, but the numerous quality-of-life DX improvements all add up to pleasant, fearless version control.

Even without editor support or a UI, I abandoned git forever last year after using jj for a couple weeks.

Just my $.02.


Jjui is an incredible TUI for jj. It's the only way I interact with jj.


My read on jj so far has always been "convenience wrapper over a preexisting Git concept" and simultaneously "alternate CLI that might be easier to learn".

Much like BitKeeper was sort of like an automated set of conventions on top of SCCS, (and Git and BitKeeper are near-interchangeable if you don't look at any of the details,) jj is like an automated set of conventions on top of Git.

(I personally wish the jj project had leaned harder into "it's just Git operations, made easier" instead of the whole "abstraction over storage layers" spiel, which needlessly scares a person familiar with Git, and makes the project sound very wishy-washy. When you peek under the hood, it's just Git! If it wasn't, I probably wouldn't use it!)


> Some people just enjoy being contrarian.

And some people just happen to disagree - doesn't automatically mean they just like "being contrarian". I took the "Yup..." to mean "this is what I was expecting, because it agrees with what I have seen before on this topic".

> I always enjoy how on jj articles, 90% of commenters tried it and switched, 10% never bothered to try it, and 0% tried it but decided not to switch.

And some unknown quantity of readers don't see anything compelling enough to either try it and/or comment on it after they have (or have not) tried it.


Spot on to both.


> LLMs are smart enough to make the result seem "organic"

I would never describe the output I've seen from LLMs as "organic".


I'm not sure if you are being serious about not understanding "the whole centering a div meme". Your example handles a trivial case, but does not address the whole of the problem.

As others have pointed out, vertical centering is often the problem being discussed (although difficulties with horizontal centering do happen). Anyone I know that has written any non-trivial web application has run into the situation where they spent way more time than they thought they should have to getting some element in a web application centered on the page the way they wanted it to be.

This article is a good example of the complexity, I think:

https://css-tricks.com/centering-css-complete-guide/

The author makes a decision tree, which illustrates the complexity fairly well, and then there's a conversation in the comments between the author and a reader about whether parts of the decision tree are correct.

CSS is extremely complicated. It's easy to get lost in the complexity, and it can be very frustrating when you know how you want something to look, but can't quite figure out how to get it to happen.

That's why the meme is so popular. LOTS of people who deal with CSS can relate.

