It was threatening to rain, but I thought I could make it to my destination, so I bolted out the door. Halfway there it started pouring. I stopped in the middle of the sidewalk, under a tree. I was watching a cat further down the sidewalk that was in the same situation, so I didn't notice a lady pull up in her car.
She just handed me an umbrella and drove off after I said thank you.
Nano was originally TIP, which stood for "TIP Isn't Pico", but was later changed to Nano so as not to conflict with another Unix utility called tip [0]. Presumably nano was chosen because it's the next metric prefix up from pico.
Personally, I'd prefer choosing a random string of 3-8 letters for command line tools. At least that would be better than naming programs using generic names (Keep, Bamboo, Chef, Salt) which leads to all sorts of name collisions.
From the article:
> This would be career suicide in virtually any other technical field.
The mascot for an $8.8 trillion (supply side) software industry, larger than Google, Microsoft, and Apple combined, is a cartoon penguin [1].
"never had it in the first place" is absolutely correct.
> "never had it in the first place" is absolutely correct.
To be clear: I didn't mean to imply this is a bad thing.
GNU's Not Unix, Pine Is Not Elm, TIP Isn't Pico all share one important characteristic — their audience is expected to know what Unix, Elm, Pico are, and saying "X is not Y" implies "X is specifically, deliberately an alternative to Y, in the same style as Y".
If you know what GNU and YACC are, you probably don't need to be told twice that "Bison" is GNU's YACC implementation — the pun makes it instantly memorable.
One of my personal favourites is Ubuntu's version naming scheme. The "alliterative animal" form is highly memorable and gives you two different words to latch on to, either of which is enough to uniquely identify a version. The fact that they're alphabetical also makes it easy to check which version is newer (letter collisions happen only on a 13-year cycle, which makes them unlikely to be a source of confusion).
> their audience is expected to know what Unix, Elm, Pico are
Of course, the context for these references is all kind of anchored in the 90s. Someone first discovering Bison in the year of our lord 2025 is unlikely to have the foggiest clue what YACC was...
I would expect that most people, with enough time and resources, could tell a better story. Isn't part of the point of giving a talk to convey the ideas so that other people can use them? If they've internalized the ideas and seen your presentation, can't they then improve on it and give a better talk? Haven't you failed if they can't do that?
Does me being the best person to teach them matter? Doesn't it matter more that I am the person teaching them when no one else is?
There's room for personalization, making sure the talk complements your style and gives insight into why you think it's important and how you solved it, but none of this really relies on the uniqueness of the person.
If Stallman got up and gave a talk on "what it's like to be me", I would find it much less interesting than a talk about "how to invent free software and build a movement around it".
Stallman can give a talk about "how to invent free software and build a movement around it" because Stallman has invented free software and built a movement around it. For Stallman, there is a significant overlap between "what it's like to be me" and "how to invent free software" - his version of that story is exactly the story nobody else can tell.
It's not about telling a better story. It's about telling a story better.
> We also suspect that the geographical and cultural factors may have influenced interaction patterns, given that all our participants were residing in Türkiye.
There's no link to the paper, so I can only infer, but if I understand correctly the idea is very simple: take a single Gaussian splat "tile", place two copies near each other so they overlap, and use dynamic programming to find a cut, varying the size of the overlap and where the cut should fall. Generate a variety of cuts to break up the uniformity of the tiling (the Wang tiles part), and now you have different tiles with different nearest-neighbor constraints that you can use to tile the plane.
Probably a lot of details to be worked out in how to stitch Gaussian splats together, but I imagine it's pretty doable.
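Since the paper isn't linked, the sketch below is only a guess at the flavor of that cut: it's the classic image-quilting dynamic program for a minimum-error vertical seam through the overlap between two tiles. The cost grid and its resolution are hypothetical stand-ins; presumably the real method measures mismatch between the overlapping splats themselves rather than rasterized cells.

    import numpy as np

    def min_error_seam(overlap_cost):
        """Image-quilting style DP: given an (H, W) grid of per-cell mismatch
        costs over the overlap region, return one column index per row that
        traces the minimum-cost vertical cut."""
        H, W = overlap_cost.shape
        cum = overlap_cost.astype(float)          # cumulative cost table
        for i in range(1, H):
            for j in range(W):
                lo, hi = max(0, j - 1), min(W, j + 2)
                cum[i, j] += cum[i - 1, lo:hi].min()
        # backtrack from the cheapest cell in the bottom row
        seam = [int(np.argmin(cum[-1]))]
        for i in range(H - 2, -1, -1):
            j = seam[-1]
            lo, hi = max(0, j - 1), min(W, j + 2)
            seam.append(lo + int(np.argmin(cum[i, lo:hi])))
        return seam[::-1]

    # Hypothetical usage: splats falling left of the seam keep tile A's content,
    # splats falling right of it keep tile B's.
    cut = min_error_seam(np.random.rand(64, 16))

Keeping several distinct cuts per edge (from different overlap offsets) is presumably what gives you the Wang-tile variety of edge constraints.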
I think one of the problems with Gaussian splatting is generating content. You can take a static picture of something but it's hard to know how to use it for interaction. This is a way to generate 3d textured sheets, like sunflower fields, walls, caves, etc.
Restricting licenses in this way stops it from being libre/free/open source. A fundamental aspect of libre/free/open source is that it's possible to use in a commercial setting. The FSF FAQ addresses this point specifically [0].
If the author wants to abandon libre/free/open source licenses, they should state so explicitly. As it stands, the blog post is ambiguous about whether the author wants to abandon libre/free/open source for a proprietary license or whether they want to strip libre/free/open source licenses of their freedom. I don't follow alternative licenses of this sort but I've seen licenses that allow gratis use up until some threshold of users or income is reached. For example, the Unreal engine license has something along these lines [1].
If the author wants to remain libre/free/open source while mitigating bad behavior by large corporate actors, the AGPL is a fine choice, as it extends the copyleft obligations even to software offered over a network. I'm not sure I have any hard evidence, but I've heard that large corporate actors avoid the AGPL for exactly this reason.
I'm a little incredulous that authors choose one of the most "business friendly" licenses, with the weakest protections for user freedom (while still being FOSS), and then are shocked when businesses use their software without any thought of remuneration. I've seen a few instances of people providing software under an MIT license, such as the helmet.js package discussed in this blog post, and then regretting their decision.
The MIT license is used as a "business friendly" license that is still libre/free/open but doesn't have the copyleft clause to mitigate bad behavior. Why did you choose the MIT license in the first place? Why abandon other libre/free/open source license alternatives and go straight to a proprietary solution?
I don't even know how to begin to address the issue of who gets to decide who the "bad guys" are and who the "good guys" are.
In my opinion, the reason for the success of FOSS is because it's an answer to overly restrictive copyright by enriching the commons. The commons, by definition, is free for public use. If you don't agree with creating a rich commons so that everyone can benefit, that's absolutely your right, just please don't call it open source.
The idea is that if you're winning you can just do a binary search, but if you're losing, it's better to take some risks by making narrower guesses.
For example, let's say it's the last turn and your opponent is about to win. Say your opponent is down to 2 options but you still have 4. Instead of whittling your 4 down to 2, it's better to just guess one of the four. How outrageous your guesses should be is the content of the result and the paper.
> For example, let's say it's the last turn and your opponent is about to win.
Or lose. Last month I played Guess Who with my Indian wife who hadn’t encountered it before, and in a couple of rounds she made mistakes in eliminating tiles, so that my wild guess saved her from losing to her own incorrect final guess.
I find it somewhat surprising that the optimal play when you're ahead is still just binary search. Is there an intuitive reason why it's not productive to make riskier guesses? Why not use my lead to have some chance of sealing my victory immediately, while still maintaining my lead if I'm wrong?
An entropy argument for optimal strategy when winning? Finite size/bounds arguments for losing?
If you have 100 options available to you, the maximum expected information gain comes from a question that eliminates half of them. So, if you can, that's the strategy you always want to employ.
The problem comes when you're losing: you might get the maximum entropy gain by eliminating half, but because of the finite effects of the game ending, that might not matter.
Take the example I gave: the next move you lose and you have 4 options to choose from. Eliminating half (2 in this case) will give you maximum entropy gain but guarantee a loss, since you're not whittling down the remaining list to 1 option. Better to take the hit on entropy in order to at least have a chance at winning.
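Here's a minimal sketch of that endgame logic, under an assumed toy model (a question splits your remaining candidates into sizes (k, n-k), the answer lands on each side with probability proportional to its size, and reaching a single candidate wins on the spot); the paper's exact model may differ:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def win_prob(n, m):
        """Chance the player to move wins with n candidates left,
        against an opponent holding m, assuming optimal play by both."""
        if n == 1:
            return 1.0  # we can simply name the answer
        def after(j):
            # our set shrank to j; if j == 1 we've won, otherwise the opponent moves
            return 1.0 if j == 1 else 1.0 - win_prob(m, j)
        return max((k / n) * after(k) + ((n - k) / n) * after(n - k)
                   for k in range(1, n // 2 + 1))

    # The 4-vs-2 endgame above: the long-shot (1, 3) split ("is it this person?")
    # wins a quarter of the time, while the even (2, 2) split loses for sure.
    print(win_prob(4, 2))   # 0.25

The even split would maximize expected information, but with the opponent one move from winning, the long-shot split is the only line with any winning chance at all.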
I don't claim to have deep knowledge, but this looks like a finite-size scaling effect. There's a kind of "continuum limit" of these processes, but when you get to actual real-world, finite instances there are issues "at the edges", where the continuum becomes finite, and the finite size of the problem creates a kind of non-linearity there. All of this is very hand-wavy, so don't take it too seriously, but that's the intuition I have, at least.
You can see an edge effect in bidding-based card games when someone is close to victory.
Say you're in a game to 500 points and you're losing 460 to 480. There are 13 tricks and a trick is worth 10 points if you bid it.
The other team bids 5 tricks. Assuming they can make this (very safe) bid, they will have 530 points. You are collectively good for about 6 tricks. What should you bid?
If you bid reflecting your hand, you'll score 60 points and lose the game 520 to 530. You could go higher: taking 8 tricks would put you at 540 and convert your loss into a win, without even needing to set the other team. But it's extremely unlikely that you'll be able to make those 8 tricks.
If you're playing duplicate and getting scored on how good your result was compared to other teams playing the same hands, you should bid 6. If you're playing this as a one-off and getting scored on whether you win or lose, you should bid 8, despite the fact that you almost certainly can't make 8.
This becomes a manners issue in some games where your bid is an important input into later players' bids.
> This becomes a manners issue in some games where your bid is an important input into later players' bids.
Yes, I learned bridge playing duplicate where preemptive bids[1] are totally fine, but at some rubber bridge tables you won't get invited back if you have a habit of bidding them.
If you're behind and you're using the same strategy as your opponent, you'll never catch up. If you're behind and using the risky-bet strategy, most of the time you still won't catch up, because your risky bets don't pay off, but occasionally they will.
This is largely how all complex competitive games work. At some point there is a shared valuation of which player is ahead, and the player who's behind must take steps outside of optimal play to attempt to leapfrog ahead. The GWENT card game was particularly well designed for this, e.g. how many extra cards to play or sacrifice for round control if you drew badly, based on meta-matchups.
I have always asserted that some games (like Heroes of the Storm) suffer from not having catch-up mechanics beyond player skill. This is problematic when player skill can effectively be quantized to an average value, because that same average is what led to the losing state in the first place. That makes skill much less likely to ever be a useful catch-up mechanic, in comparison to some intrinsic gamble mechanic.
The lack of catch-up mechanics also makes the games less interesting, because risks are only worth taking once the state of the game is already known, not casually along the way as a chaotic factor that might be capitalized on.
Sure, I think it makes intuitive sense to me that you should play riskier when you're behind. The surprising part to me is that when you're ahead, even if you know that your opponent will play "sub-optimally", that doesn't change your own optimal move.
Binary search minimizes the number of expected moves until you find the target. If you are already ahead, this is a natural thing to want to do. The reason why this doesn't work when you're behind is that your opponent can also do that and probabilistically maintain their lead.
I know that it minimizes the expected number of moves. But, the goal is to maximize the probability that you win in fewer moves than your opponent, not minimize the expected number of moves. Given that your opponent is playing some riskier strategy, it's not intuitively obvious to me that your optimal moves for those two objectives are the same.
If it helps your intuition: even with 3 or 4 candidates remaining, halving still wins you the game on your next turn. Above that, your chance of a lucky direct guess is too low compared to the guaranteed reduction (assuming there is a question that eliminates enough).
This could be made more complicated/interesting if you play a series of games and are awarded points based on either how many rounds it took to win or how many remaining cards you still had.
In addition to other answers, one way to think about it is that the options are symmetric around the midpoint: a guess that partitions the space into (1/4 of options, 3/4 of options) is the same as one that does (3/4, 1/4). So (1/2, 1/2) is special in some way – it has to be either a local minimum or local maximum. And if the function is convex (or close enough), then (1/2, 1/2) is a global minimum/maximum.
But (1/2, 1/2) is clearly a better choice than just guessing a specific individual. So it must be the best choice.
This is all right, but it just kicks the intuition into the assumption that the function is convex. As far as I can tell from the paper, this turns out to be exactly the argument they use to prove that (1/2, 1/2) is the optimal guess. But the majority of that proof is dedicated to showing that the function is indeed convex.
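For what it's worth, you can poke at that numerically with the same kind of toy recursion (proportional-probability splits, reaching a single candidate wins immediately; the paper's exact model may differ) and print the value of every split when you're ahead:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def win_prob(n, m):
        # chance the player to move wins holding n candidates vs the opponent's m
        if n == 1:
            return 1.0
        return max(split_value(n, m, k) for k in range(1, n))

    def split_value(n, m, k):
        # value of asking a question that splits our n candidates into (k, n-k)
        def after(j):
            return 1.0 if j == 1 else 1.0 - win_prob(m, j)
        return (k / n) * after(k) + ((n - k) / n) * after(n - k)

    # Holding 4 candidates against the opponent's 5 (i.e. you're ahead):
    for k in range(1, 4):
        print(k, round(split_value(4, 5, k), 3))   # prints 0.7, 0.8, 0.7

In this toy model the values are symmetric about the midpoint and peak there, which is consistent with the symmetry-plus-convexity argument above.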
I think this is getting traction because of the new Odyssey movie coming out.
I find Mary Beard satisfying to watch. I'm having trouble finding it, but she was on a panel and was asked about the fall of Rome, and her response was something to the effect of "Asking why Rome fell is the wrong question. A better question is why it was so successful in the first place."
Her reasons were, if I remember correctly, that though the Romans were brutal, for a long time and for the most part, they provided a better quality of life to many of the subjugated people and provided a path to citizenship. Further, they were adaptable about the places they governed, at least relative to other options at the time, keeping established powers in play, so long as they pledged allegiance to the Roman empire.
From what I gather, Mary Beard's reason for why Rome eventually fell was that it became too insular, eventually denying citizenship to larger cohorts of people and succumbing to corruption. I remember her saying that Rome was on the knife's edge of collapse many times, and that it was more its successes that pulled it through than its avoidance of failure.
Just as an aside, I've heard that the concept of cyclops might have been from finding old mammoth skulls. The hole in the middle is for the nose cavity could be mistaken for an eye socket. Many pictures show cyclops as having tusks.
> Rome eventually fell was that it became too insular, eventually denying citizenship to larger cohorts of people
It's weird to say they were too insular and that, at the same time, there were large cohorts of non-Romans. It reads more like an opinion based on modern sensibilities than on history.
This sounds a little off since Roman citizenship expanded until 212 when it was granted to all free men in the empire. But perhaps she was talking about the failure to absorb "barbarian" tribes that came over the border later, that wanted to become Roman and sometimes thought of themselves as Roman.
The sack of Rome in 410 was a shock, but the end of the western Roman empire later that century probably wasn't understood as such at the time since they didn't know that decentralization would be permanent; after terrible civil wars, another emperor would usually reunite the empire. And even much later there were often claims to be a continuation.
Contrast with China where new dynasties would rise after the old one falls.
> they provided a better quality of life to many of the subjugated people and provided a path to citizenship
That varied. The taxation was very oppressive, and there is some evidence (based on skeletal remains) that QoL actually improved for a time in quite a few places after the empire collapsed.
> for a long time and for the most part, they provided a better quality of life to many of the subjugated people and provided a path to citizenship. Further, they were adaptable about the places they governed, at least relative to other options at the time, keeping established powers in play, so long as they pledged allegiance to the Roman empire.
Sounds quite a lot like Genghis Khan, who oversaw the largest empire in history until the British one.
I keep reading this online and I find it to be nonsense. Over a thousand years earlier, the Romans developed all the lands they conquered. They built massive infrastructure projects: roads, ports, aqueducts, buildings. And they brought sanitation and education. Genghis Khan only brought peace and trade networks, something Rome brought as well.
Next up, how Carthaginians were actually the good guys and child sacrifice was not that bad.
He didn't say Genghis Khan and the Mongols did everything the Ancient Romans did.
He said both had their rise to power rooted in a (for the time) unique meritocratic element, where people would join you over the alternative options because of the ability to advance.
To be clear: if the quality and quantity of output of the hard worker exceeds that of someone who works hard and is smart about it, smart-and-hard would still be preferable?
Here's my counter theory: People's moral righteousness on whether they think a person can be judged by a morally neutral and inconsequential action sheds light on their true moral character. Especially so if the judged action is insignificant but socially frowned on.
I know this is all in half-jest but the article and discussion seem pretty mean-spirited to me.
>People's moral righteousness on whether they think a person can be judged by a morally neutral and inconsequential action sheds light on their true moral character.
You call it moral righteousness. I call it emotional intelligence. You realize as you grow up that your small actions shape and reflect your larger self. And you can see it in others too.
We call it "work ethic" in white collar jobs, and I'm sure you wouldn't defend someone who's otherwise an excellent programmer submitting sloppy reports, having inconsistent time estimates, or simply making snarky PR's. It's a shame we don't value it when it's not about maximizing shareholder value.