> Lower-level languages don’t have this same problem to the same extent.
Of course they do.
If the computer directly executed what you write down in a so-called "low-level language", it would be slow as fuck.
Without highly optimizing compilers, even stuff like C runs pretty slow.
If something about the optimizer or some other translation step of a compiler changes, this often has a significant influence on the performance of the resulting compilation artifacts.
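To make that concrete, here's a toy illustration of my own (not a rigorous benchmark): a plain reduction loop that an optimizing compiler can vectorize and hoist, while an unoptimized build performs every load and add naively.

```c
/* Toy example of how much the optimizer matters even for plain C.
 * Try:  gcc -O0 sum.c -o sum0 && time ./sum0
 *       gcc -O2 sum.c -o sum2 && time ./sum2
 * The -O2 build is typically several times faster. */
#include <stdio.h>

#define N (1 << 20)
static int a[N];

int main(int argc, char **argv) {
    (void)argv;
    for (int i = 0; i < N; i++)
        a[i] = i * argc;               /* argc keeps the data opaque */

    long long sum = 0;
    for (int rep = 0; rep < 1000; rep++)
        for (int i = 0; i < N; i++)
            sum += a[i];

    printf("%lld\n", sum);
    return 0;
}
```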
"Priced in" I guess. I mean look at Warner Bros stock, steadily climbing the last couple months until it hit basically exactly the price shareholders will get in exchange for their shares as part of this deal.
Whenever one of my friends says they're thinking about getting into daytrading, all I can think is: good luck beating the funds... they can either predict the future or they just write it themselves.
It's less the fact that someone owns JS's trademark, and more that it's specifically Oracle (they got it when they bought Sun).
Oracle is an incredibly litigious company. Their awful reputation in this respect means that the JS ecosystem can never be sure they won't swoop in and attempt to demand rent someday. This is made worse by the army of lawyers they employ; even if they're completely in the wrong, whatever project they go after probably won't be able to afford a defense.
> Oracle is an incredibly litigious company. Their awful reputation in this respect means that the JS ecosystem can never be sure they won't swoop in and attempt to demand rent someday. This is made worse by the army of lawyers they employ; even if they're completely in the wrong, whatever project they go after probably won't be able to afford a defense.
That is why, on one level, I am surprised by the petition. They are talking to a supercharged litigation monster and asking it "Dear Oracle, ... We urge you to release the mark into the public domain". You know what a litigation-happy behemoth does in that case? It goes and asks some AI to write a "Javascript: as She Is Spoke" junk book on Amazon just so they can hang on to the trademark. Before, they didn't care, but now that someone has pointed it out, they'll go out of their way to assert their usage of it.
On the other hand, maybe someone there cares about their image and would be happy to improve it in the tech community's eyes...
> It goes and asks some AI to write a "Javascript: as She Is Spoke" junk book on Amazon just so they can hang on to the trademark.
IANAL, but I don't think that would be enough to keep the trademark.
Also, the petition was a "we'll ask nicely first so we can all avoid the hassle and expense of legal proceedings". They are now in the process of getting the trademark invalidated, but Oracle, illogically but perhaps unsurprisingly, is fighting it.
I was just using it as an example of doing the absolute minimum. They could write a dumb Javascript debugger or something with minimal effort.
But yeah, IANAL either and am just guessing; I just know Oracle is shady, and if you challenge them legally they'll throw their weight around. And I'm not sure if responding to a challenge with a new "product" is enough to reset the clock on it. Hopefully the judge will see through their tricks.
Trademark law is kind of about hypotheticals, though. The purpose of a trademark is to prevent theoretical damages from potential confusion, neither of which you ever have to show to be real.
In this case, the trademark existing and belonging to Oracle creates more confusion than no trademark existing, so deleting it is morally right. And because Oracle isn't actually enforcing it, it is also legally right.
Imho this is just the prelude to getting better press. "We filed a petition to delete the JavaScript trademark" doesn't sound nearly as good as "We collected 100k signatures for a letter to Oracle and only got silence, now we formally petition the USPTO". It's also a great opportunity to find pro-bono legal counsel or someone who would help fund the petition.
The other aspect here is that general knowledge (citation needed) says that if a company doesn't actively defend their trademark, they often won't be able to keep it if challenged in court. Or perhaps general knowledge is wrong.
Assuming Oracle did decide to go down that route, who would they sue? No one really uses the JavaScript name in anything official except for "JavaScriptCore" that Apple ships with Webkit.
My bad: after reading more, it seems Deno is trying to get Oracle's trademark revoked. But I found out that "Rust for Javascript" devs have received a cease and desist from Oracle regarding the JS trademark, which may have triggered Deno to go after Oracle.
The incredibly litigious company here is Deno. Deno sued on a whim, realized they were massively unprepared, then asked the public to fund a legal campaign that will benefit Deno themselves, a for-profit, VC-backed company.
This personal vendetta will likely end with the community unable to use the term JavaScript. Nobody should support this.
1. Oracle is the litigious one here. My favorite example is that time they attacked a professor for publishing less-than-glowing benchmarks of their database: https://danluu.com/anon-benchmark/ What's to stop them from suing anyone using the term JavaScript in a way that isn't blessed by them? That's what Deno is trying to protect against.
2. Deno is filing a petition to cancel the trademark, not claim it themselves. This would return it to the public commons.
It should be obvious from these two facts that any member of the public that uses JavaScript should support this, regardless of what they think of Deno-the-company.
The fact that you wrote it wrong is hilariously ironic.
JavaScript is simply the better term, and marketing is everything. Reminds me of Java's POJOs: a very simple pattern that no one used until someone gave it a fancy name.
ECMAScript is a horrible technical name. Might as well call it ACMEScript considering how Wile E. Coyote it feels to develop with it...
> POTS = Plain Old Telephony System
I worked for NY Telephone for years in the '80s, and it was referred to there as "Plain Old Telephone Service" not System. Not that it's a big deal at this point!
This is extremely ironic, given that JavaScript was so named precisely because people do give a damn about names: Netscape/Sun leveraged Java's success to push JS, naming it JAVAscript despite it having nothing to do with Java.
Not everybody knows. People who learn JavaScript don't know; in fact, they must learn this. And from my experience, most learning resources don't mention it, let alone teach it. It took me a really long time to understand what ECMAScript is and how it relates to JavaScript, and given the effort that understanding took... I would have preferred not to have needed it.
Technology alone rarely wins a market ... success usually comes from marketing, referrals, and network effects.
It’s the same reason why vibe coding a better version of Airbnb (even if it’s just a simple CRUD app) wouldn’t actually threaten Airbnb as a business. The product isn’t the moat; the ecosystem is.
What I've learned is that using as few flags as possible is the best path for any long-lived project.
-O2 is basically all you usually need. As you update your compiler, it will keep tweaking exactly what that general optimization level does, based on what the compiler authors know today.
Because that's the thing about these flags: you'll generally set them once at the beginning of a project. Compiler authors will reevaluate them far more often than you will.
Also, a trap I've observed is setting flags based on bad benchmarks. This applies more to the JVM than to a C++ compiler, but nevertheless, a system's current state is somewhat random. 1-2% fluctuations in performance for even the same app are normal. A lot of people don't realize that and ultimately add flags based on those fluctuations.
But further, how code is currently laid out can affect performance. You may see a speed boost not because you tweaked the loop-unrolling variable, but because your tweak relocated a hot path to be slightly more cache-friendly. A change in the code structure can eliminate that benefit.
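If you want to see that noise for yourself, a crude harness like the following sketch (entirely mine, not from the thread) is enough: time identical work a few times and look at the spread before crediting any flag tweak with a 1-2% win.

```c
/* Run the same work several times and print the spread; on a normal
 * desktop the run-to-run variation alone is often in the 1-2% range. */
#include <stdio.h>
#include <time.h>

static double work(void) {
    double s = 0.0;
    for (int i = 1; i < 50 * 1000 * 1000; i++)
        s += 1.0 / i;                  /* arbitrary floating-point busywork */
    return s;
}

int main(void) {
    for (int run = 0; run < 5; run++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);   /* POSIX monotonic clock */
        volatile double s = work();            /* volatile keeps the work alive */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("run %d: %.1f ms (s = %f)\n", run, ms, (double)s);
    }
    return 0;
}
```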
That's great if you're compiling for use on the same machine or those exactly like it. If you're compiling binaries for wider distribution it will generate code that some machines can't run and won't take advantage of features in others.
To support multiple arch levels in the same binary, I think you still need to do the manual work of annotating specific functions for which several versions should be generated and dispatched at runtime.
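For GCC (and recent Clang) that annotation is the target_clones attribute; here's a minimal sketch, assuming an x86-64 target (the function and the chosen ISA list are made up for illustration):

```c
/* The compiler emits one clone of dot() per listed target and installs an
 * ifunc resolver that picks the best one at load time. */
#include <stddef.h>

__attribute__((target_clones("default", "avx2")))
double dot(const double *a, const double *b, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i] * b[i];              /* vectorized in the avx2 clone */
    return s;
}
```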
A CPU produced after a certain date is not guaranteed to have every ISA extension, e.g. SVE for Arm chips. Hence things like the microarchitecture levels for x86-64.
I don't understand whether your comment is ironic. Intel is notorious for equipping different processors produced in the same period with different features, sometimes even among different cores on the same chip. Sometimes later products even have fewer features enabled (see e.g. AVX512 for Alder Lake).
You should at a minimum add flags to enable dead object collection (-fdata-sections and -ffunction-sections for compilation and -Wl,--gc-sections for the linker).
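A minimal sketch of how those flags fit together (file and function names are mine):

```c
/* dead.c -- with -ffunction-sections each function gets its own section,
 * and -Wl,--gc-sections lets the linker drop the unreferenced ones:
 *
 *   gcc -O2 -ffunction-sections -fdata-sections -Wl,--gc-sections dead.c -o app
 *
 * Compare `size app` with and without the flags: unused() disappears. */
int unused(void) { return 42; }        /* never called, so collected */

int main(void) { return 0; }
```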
-O3 gained a reputation of being more likely to "break" code, but in reality it was almost always "breaking" code that was invalid to start with (invoked undefined behavior). The problem is C and C++ have so many UB edge cases that a large volume of existing code may invoke UB in certain situations. So -O2 thus had a reputation of being more reliable. If you're sure your code doesn't invoke undefined behavior, though, then -O3 should be fine on a modern compiler.
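A classic illustration of that kind of "breakage" (my example, not from the comment) is signed-integer overflow:

```c
/* UB: i *= 2 eventually overflows a signed int. At -O0 it typically wraps
 * negative and the loop exits; at -O2/-O3 the compiler may assume the
 * overflow never happens, conclude i > 0 is always true, and emit an
 * infinite loop. The code was invalid all along; the optimizer merely
 * exposed it. */
#include <stdio.h>

int main(void) {
    for (int i = 1; i > 0; i *= 2)
        printf("%d\n", i);
    return 0;
}
```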
Oh, there are also plenty of bugs. And Clang still does not implement the aliasing model of C. For C, I would definitely recommend -O2 -fno-strict-aliasing.
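The kind of code that -fno-strict-aliasing papers over looks roughly like this (a hedged sketch of my own):

```c
/* Type punning through a pointer cast violates C's aliasing rules, so an
 * optimizer that exploits them may reorder or cache the loads; memcpy is
 * the well-defined alternative, and compilers optimize it away. */
#include <stdio.h>
#include <string.h>

static unsigned bits_bad(float f) {
    return *(unsigned *)&f;            /* UB: unsigned* aliasing a float */
}

static unsigned bits_ok(float f) {
    unsigned u;
    memcpy(&u, &f, sizeof u);          /* defined behavior, same codegen */
    return u;
}

int main(void) {
    printf("%08x vs %08x\n", bits_bad(1.0f), bits_ok(1.0f));
    return 0;
}
```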
That's a little vague; I'd put it more pointedly: they don't understand how the C and C++ languages are defined, have a poor grasp of undefined behaviour in particular, and mistakenly believe their defective code to be correct.
Of course, even with a solid grasp of the language(s), it's still by no means easy to write correct C or C++ code, but if your plan is to go with "this seems to work", you're setting yourself up for trouble.
Compiler speed matters. I will confess to not having as much practical knowledge of -O3, but -O2 is usually reasonably fast to compile.
For cases where -O2 is too slow to compile, dropping a single nasty TU down to -O1 is often beneficial. -O0 is usually not useful - while faster for tiny TUs, -O1 is still pretty fast for them, and for anything larger, the increased binary size bloat of -O0 is likely to kill your link time compared to -O1's slimness.
Also debuggability matters. GCC's `-O2` is quite debuggable once you learn how to work past the possibility of hitting an <optimized out> (going up a frame or dereferencing a casted register is often all you need); this is unlike Clang, which every time I check still gives up entirely.
The real argument is -O1 vs -O2 (since -O1 is a major improvement over -O0 and -O3 is a negligible improvement over -O2) ... I suppose originally I defaulted to -O2 because that's what's generally used by distributions, which compile rarely but run the code often. This differs from development ... but does mean you're staying on the best-tested path (hitting an ICE is pretty common as it is); also, defaulting to -O2 means you know when one of your TUs hits the nasty slowness.
While mostly obsolete now, I have also heard of cases where 32-bit x86 inline asm has difficulty fulfilling constraints under register pressure at low optimization levels.
You have to profile for your specific use case. Some programs run slower under O3 because it inlines/unrolls more aggressively, increasing code size (which can be cache-unfriendly).
Yeah, -O3 generally performs well in small benchmarks because of aggressive loop unrolling and inlining. But in large programs that face icache pressure, it can end up being slower. Sometimes -Os is even better for the same reason, but -O2 is usually a better default.
Most people use -O2 and so if you use -O3 you risk some bug in the optimizer that nobody else noticed yet. -O2 is less likely to have problems.
In my experience a team of 200 developers will see 1 compiler bug affect them every 10 years. This isn't scientific, but it is a good rule of thumb and may put the above in perspective.
The estimate includes Visual Studio and other compilers that are not open source, for whatever optimization options we were using at the time. As such, your question doesn't make sense (not that it is bad, but it doesn't make sense).
In the case of open source compilers the bug was generally fixed upstream and we just needed to get on a newer release.
People keep saying "O3 has bugs," but that's not true. At least, no more bugs than O2. It did and does more aggressively expose code that invokes UB, but that isn't why people avoid O3.
You generally avoid O3 because it's slower. Slower to compile, and slower to run. Aggressively unrolling loops and larger inlining windows bloat code size to the degree it impacts icache.
The optimization levels aren't "how fast do you want to code to go", they're "how aggressive do you want the optimizer to be." The most aggressive optimizations are largely unproven and left in O3 until they are generally useful, at which point they move to O2.
Sure. All I am saying is that there are still plenty of compiler bugs related to optimization, which is reason enough for me to recommend being careful with optimization in contexts where correctness is important.
Sure, I guess? In my experience I turn on the optimizer mostly without fear because I know that if, in the rare case I need to track down an optimizer bug, it would look the same as my process for identifying any other sort of crazy bug and in this case it will at least have a straightforward resolution.
More aggressive optimization is necessarily going to be more error prone. In particular, the fact that -O3 is "the path less traveled" means that a higher number of latent bugs exist. That said, if code breaks under -O3, then either it needs to be fixed or a bug report needs to be filed.
Doesn’t that strategy only work in games like Clue, where everyone is trying to uncover the same hidden character?
In Guess Who, you’re identifying your opponent’s character, not a shared one, so any misdirection only hurts you … because it doesn’t generate extra signal for your opponent, so there’s no strategic benefit to misleading them.
The problem is when you bisect an odd sized group. You necessarily have to make one half larger than the other. So you're not trying to misdirect, you're trying to avoid creating a signal. But to do this you have to sometimes put your character in the smaller half, which trades off against your other goal of shrinking the pool as fast as possible.
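A quick back-of-the-envelope (my sketch, not from the thread) shows the trade-off: splitting n candidates into halves of k and n-k leaves an expected pool of (k*k + (n-k)*(n-k))/n, which is smallest when the split is as even as possible.

```c
/* Expected remaining pool size after a yes/no question that splits an
 * odd-sized group of n candidates into k and n-k. */
#include <stdio.h>

int main(void) {
    int n = 5;                         /* an odd-sized pool */
    for (int k = 1; k < n; k++) {
        double expect = (double)(k * k + (n - k) * (n - k)) / n;
        printf("split %d/%d -> expected remaining %.2f\n", k, n - k, expect);
    }
    return 0;
}
```

For n = 5, the 2/3 split leaves 2.6 candidates in expectation versus 3.4 for 1/4, which is exactly the pool-shrinking pressure the uneven split trades against.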
> Are you primarily using electron-based apps, or true native macOS apps?
Maybe I’m lucky but I run macOS daily without any problems.
There’s an in-between abomination: Catalyst-based apps from Apple (quickly migrated from iOS to macOS). Reminders, Notes and others are downright unnavigable and unusable with a keyboard, and so, so terrible in their UX. It’s a shame that Apple hasn’t spent any effort on fixing those and making them true native macOS apps.
For the last several years, there has been nobody at Apple who has good taste and a deep and committed interest in UX.
Most Apple apps are somewhat bad nowadays. It largely defeats the marketing purpose of the "ecosystem", because the 3rd-party stuff doesn't necessarily integrate the "special sauce" (like sharing for passing stuff around). So if you end up just running 3rd-party apps that are web-app wrappers or custom UI implementations, it begs the question of why even use Apple hardware. Yes, it's top of the line, but it is also very expensive at any given level of performance.
They are just milking their media/dev niches at this point and mostly cater to the lowest common denominator with low expectations, for premium prices.
If you've gotta run Chrome, Microsoft Office, Google web apps and the like, it doesn't feel worth it. Meanwhile the indie app market is insane, with expensive subscriptions for utilities that are basically free elsewhere.
And I lowkey hate what iOS has become. Convoluted and unpredictable. Now ugly as well.
Because, if I understand them correctly, aren't they a wrapper around all the major LLMs (focused specifically on developer use cases)?