I used i3 for the longest time, and I'd say a Wayland-based alternative like sway or miracle is a better choice nowadays. Even KDE Plasma recently announced it is dropping X11 support [1], so going forward most apps will target Wayland first.
Migrating my i3 config to sway hardly took any effort. I was also able to get rid of a lot of Xorg-specific configuration from various X11 dotfiles and put it directly in the sway config (such as natural scrolling).
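For reference, the natural-scrolling bit that used to live in an Xorg conf snippet is just a few lines of sway input config now (roughly like this; see man 5 sway-input for the exact options):

    # Apply to any touchpad; a specific device identifier works too
    input "type:touchpad" {
        natural_scroll enabled
        tap enabled
    }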
I've been using Emacs 30 on my Android tablet for a few months now with a Bluetooth keyboard. Needless to say, you can't really leverage eglot, so it's basically a no-go for any meaningful software development. I've been using it for org-mode, and it is fantastic for that.
Not to criticize you - I also use eglot and it's great - but let me mention that people have been doing pretty meaningful software development for several decades now, and LSPs are, I don't know, 5 years old?
There's a saying in my language, "the appetite grows while you eat"...
I think it's a fair complaint. You're on a setup with bad ergonomics as it is (tablet + Bluetooth keyboard). Dealing with that and no LSP is rough. I'd be happy writing code on a desktop without an LSP, though I'd be happiest with both.
I did my share of coding on a Commodore 64 (have you seen that keyboard?) with a cassette tape as the only external storage, no debugger (just a very poor BASIC variant), and (of course) a mono CRT TV set as a monitor. No internet, of course, just a few books/magazines.
I think the C64 had a fine keyboard? It's mostly a standard layout and a lot chunkier than the small Bluetooth keyboards that tend to cause wrist issues. I also began coding in the CRT days, so idk why that would be a barrier; I guess you mean resolution? My issues are ergonomic, not functional.
The F-Droid build of Emacs for Android doesn't have a real Linux environment that you can install arbitrary binaries onto. You can switch to a Termux-ish proot environment and do X forwarding or TUI Emacs, but those are shenanigans.
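For anyone curious, the proot route looks roughly like this (using the proot-distro package inside Termux; Debian is just an example guest):

    # Inside Termux, set up and enter a proot guest distribution
    pkg install proot-distro
    proot-distro install debian
    proot-distro login debian
    # ...then apt install emacs inside the guest and run it as a TUI,
    # or point DISPLAY at a separate X server app for X forwarding.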
Not having gpg-agent is a huge deal-breaker for me. I feel gpg-agent doesn't get enough love. Not only can it do all the ssh-agent operations, it can also be used with gpgme-json [1] to do web authentication with your authentication ([A]) subkey. It's truly a shame that hardly any applications leverage the powerful cryptography afforded by GPG.
What do you mean? I use GPG with SSH (or SSH with GPG) all the time, and I need gpg-agent for that. GPG's agent replaces ssh-agent and serves SSH keys derived from your GPG key.
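For anyone who wants to set that up, it's roughly this (the keygrip is a placeholder; gpgconf gives you the socket path):

    # ~/.gnupg/gpg-agent.conf
    enable-ssh-support

    # In your shell profile: point SSH at gpg-agent's socket
    export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
    gpgconf --launch gpg-agent

    # Tell the agent which key(s) to serve over SSH, then check
    gpg --list-keys --with-keygrip          # find the [A] subkey's keygrip
    echo "<keygrip-of-the-A-subkey>" >> ~/.gnupg/sshcontrol
    ssh-add -L                              # should print the derived SSH public key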
Can you do this with Age? If not, then I am going to stick to GPG.
Oh well, let us just agree that comparing Age to GPG is silly, ergo "Switching from GPG to Age" is silly, unless it is "Switching from GPG to Age for file encryption".
Age doesn't do signing, key infrastructure, or email. Minisign/signify only sign. None are GPG replacements. They're partial feature subsets that are simpler because they do less.
So, to summarize these tools:
- Age: Only does file encryption, no signing, no key management infrastructure, no email integration
- Minisign/Signify: Only signing, no encryption
- GPG: Encryption, signing, key management, email integration, multiple recipients, subkeys, revocation certificates, web of trust (even if unused), smart card support, etc.
You cannot simply switch from GPG to Age unless all you are doing is file encryption. If that is the case, then sure, you can.
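To make the file-encryption overlap concrete, here's roughly what it looks like on the command line (recipient and file names are placeholders):

    # Age: encrypt to a recipient, decrypt with the matching identity file
    age -r age1examplerecipient... -o notes.txt.age notes.txt
    age -d -i key.txt -o notes.txt notes.txt.age

    # GPG: the same operation, except it can also sign in the same pass,
    # pull recipients from a keyring, use subkeys, smart cards, etc.
    gpg --encrypt --sign --recipient alice@example.org notes.txt
    gpg --decrypt --output notes.txt notes.txt.gpg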
One distinction is that compilers generally translate from a higher-level language to a lower-level language, whereas transpilers target two languages that are very close in abstraction level.
For example, a program that translates x86 assembly to RISC-V assembly would be considered a transpiler.
The article we are discussing has "Transpilers Target the Same Level of Abstraction" as "Lie #3", and it clearly explains why that is not true of the programs most commonly described as "transpilers". (Also, I've never heard anyone call a cross-assembler a "transpiler".)
I don't really agree with their argument, though. Pretty much all the features that Babel deals with are syntax sugar, in the sense that if they didn't exist, you could largely emulate them at runtime by writing a bit more code or using a library. The sugar adds a layer of abstraction, but it's a very thin layer, enough that most JavaScript developers could compile (or transpile) the sugar away in their head.
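As a rough illustration (hand-desugared, not Babel's exact output), optional chaining is the kind of sugar I mean:

    // Sugar:
    const name = user?.profile?.name;

    // Roughly what it desugars to, and what most JS developers can do in their head:
    const name2 =
      user == null
        ? undefined
        : user.profile == null
          ? undefined
          : user.profile.name;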
On the other hand, C to Assembly is not such a thin layer of abstraction. Even the parts that seem relatively simple can change massively as soon as an optimisation pass is involved. There is a very clear difference in abstraction layer going on here.
I'll give you that these definitions are fuzzy. Nim uses a source-to-source compiler, and the difference in abstraction between Nim and C certainly feels a lot smaller than the difference between C and Assembly. But the C that Nim generates is, as I understand it, very low-level, and behaves a lot closer to assembly, so maybe in practice the difference in abstraction is greater than it initially seems? I don't think there's a lot of value in trying to make a hard-and-fast set of rules here.
However, it's clear that there is a certain subset of compilers that aim to do source-to-source desugaring transformations, and that this subset of compilers has certain similarities and requirements that mean it makes sense to group them together in some way. And to do that, we have the term "transpiler".
Abstraction layers are close to the truth, but I think that's slightly off. It comes down to the fact that transpilers are considered source-to-source compilers, but one man's intermediate code is another man's source code. If you consider neither the input nor the output to be "source code", then you might not consider the tool a transpiler, for the same reasons that an assembler is rarely called a compiler even though assemblers can have compiler-like features: consider LLVM IR, for example. This is why a cross-assembler is not often referred to as a transpiler.

Of course, terminology is often tricky: the term "recompiler" is often used for this sort of thing, even though neither the input nor the output is generally considered "source code", probably because such tools are designed to construct a result as close as possible to what you would get if you could recompile the source code for another target. This contrasts fairly well with "decompiler": a recompiler may perform similar reconstructive analysis to a decompiler, but ultimately outputs more object code. Not that I am an authority on anything here, but I think these terms ultimately do make sense and reconcile with each other.
When people say "same level of abstraction", I think what they are expressing is that the input and output languages are of a similar level of expressiveness. That isn't always exact, and the example of compiling down constructs like async/await shows how this isn't always cut-and-dried. It doesn't imply that source-to-source translations are necessarily trivial, either: a transpiler that compiles Go code to Python would have to deal with non-trivial transformations even though Python is arguably a higher level of abstraction and expressiveness, not lower. The issue isn't necessarily the abstraction level or expressiveness; it's an impedance mismatch between the source language and the destination language.

It also doesn't mean the resulting code is readable or unreadable, only that the output isn't considered low-level enough to be bytecode or "object code". There is obviously some subjectivity here, but usually things fall far enough from the gray area that there isn't much need to worry about it. If you can decompile Java bytecode and .NET IL back to nearly full-fidelity source code, does that call into question whether those are "compilers", or whether the bytecode is really object code? I think in those cases it gets close, and more specific factors start to play into the semantics. To me this is nothing unusual for terminology and semantics: they get a lot more detailed as you zoom in, which becomes necessary when you get close to boundaries. That makes it easier to just apply a tautological definition in some cases: for Java and .NET, we can say their bytecode is object code because that's what it is already considered to be, because that's what the developers consider it to be. Not as satisfying, but a useful shortcut: if we are already willing to accept this in other contexts, there's not necessarily a good reason to question it now.
And to come full circle, most compilers are not considered transpilers, IMO, because their output is considered to be object code or intermediate code rather than source code. And again, the distinction is not exact, because the intermediate code is also Turing-complete, also has a human-readable representation, and people can and do write code in assembly. But Brainfuck is also Turing-complete, and that doesn't mean that Brainfuck and C are similarly expressive.
> Lie #3: Transpilers Target the Same Level of Abstraction
> This is pretty much the same as (2). The input and output languages have the syntax of JavaScript but the fact that compiling one feature requires a whole program transformation gives away the fact that these are not the same language
It is not really the same as (2); you can't cherry-pick the example of Babel and generalise it to every transpiler ever. There are several transpilers that translate from one high-level language to another high-level language, such as Kotlin to Swift, i.e., targeting the same level of abstraction.
I wonder what this person would say about macro expansion in Scheme; maybe that should also be considered a compiler per their definition.
BabelJS is the central example of "transpilers"; if BabelJS lacks some purported defining attribute of "transpilers", that definition is unsalvageable, even if there are other programs commonly called "transpilers" that do have that attribute.
I use KaTeX for my blog, and indeed KaTeX was faster than MathJax 2, but MathJax 3 (a complete rewrite) has significantly improved performance over the previous version and is now a bit faster than KaTeX in my experience.
It is amusing that, in my particular environment, KaTeX is slower than MathJax 3 in processing time but actually faster once font loading time is accounted for. Both load from your domain, so there should be no routing issue; the KaTeX fonts turned out to be substantially smaller than MathJax 3's, at least in this particular case. Is this intentional or just a lucky coincidence for KaTeX? (KaTeX is also ~70% smaller than MathJax 3 in their gzipped forms, so it might well be intentional!)
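If anyone wants to reproduce the comparison, a quick-and-dirty check in the browser console (assuming both libraries are already loaded on the page) is something like:

    // Time KaTeX's string rendering in isolation
    const t0 = performance.now();
    katex.renderToString("\\frac{\\sqrt{\\pi}}{2}", { throwOnError: false });
    console.log("KaTeX render:", (performance.now() - t0).toFixed(1), "ms");

    // Compare font payloads via the Resource Timing API
    performance.getEntriesByType("resource")
      .filter(r => /\.woff2?(\?|$)/.test(r.name))
      .forEach(r => console.log(r.name, r.transferSize, "bytes"));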
"Page complete" never stops with either library for me. It just keeps counting milliseconds. Chrome on Windows, no change when I disabled uBlock Origin Lite.
> This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem.
By Sequoia, are they talking about replacing GnuPG with https://sequoia-pgp.org/ for signature verification?
I really hope they don't replace the audited and battle-tested GnuPG parts with some newfangled project like that just because it is written in "memory-safe" Rust.
Sequoia-PGP is 8 years old at this point; its 1.0 happened half a decade ago.
Meanwhile, GnuPG is well regarded for its code maturity. But it is a C codebase with nearly no tests, no CI pipeline (!!), an architecture that is basically a state machine with side effects, and over 200 flags. In my experience, only people who haven't worked in the codebase speak positively of it.
It's rather that GnuPG is ill-regarded for its code immaturity tbh. You don't even need to read the code base, just try to use it in a script:
The exit code can't be trusted to tell you whether verification actually succeeded; you have to ignore it and parse the machine-readable output on the status fd to find the truth (sketch below).
It provides options to enforce various algorithmic constraints but they only work in some modes and are silently ignored in others.
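A minimal sketch of what "parse the status fd" ends up looking like in practice (file names are placeholders, and a real script should also check the key fingerprint on the VALIDSIG line):

    # Machine-readable status lines go to fd 1; the exit code is ignored
    status=$(gpg --status-fd 1 --batch --verify release.tar.gz.sig release.tar.gz 2>/dev/null)
    if printf '%s\n' "$status" | grep -q '^\[GNUPG:\] VALIDSIG '; then
        echo "good signature"
    else
        echo "verification failed" >&2
        exit 1
    fi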
> It is interesting though how this same conversation doesn't exist in the same way in other areas of computing like video game consoles
Yes, there needs to be a lot more uproar about these cases as well. One of the most appalling cases is that of macOS. To distribute your app (as a .dmg, for instance), you need to sign up for a paid Apple Developer account, sign the app with a Developer ID certificate, and then notarize it, EVEN if you don't intend to use their App Store.
You can self-sign without a developer account and self-distribute, and all it does is notify the user that the software is from the internet the first time they run it. They can still use the app. If it is completely unsigned, users may have to bypass Gatekeeper, but that is just a setting.
If you want to sign using a cert trusted by Apple and distribute on their infrastructure, you do need a paid account.
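For the self-signing case, an ad-hoc codesign identity is enough (app path is a placeholder):

    # Ad-hoc signing: "-" as the identity, no Developer ID or paid account involved
    codesign --force --deep --sign - "MyApp.app"
    codesign --verify --verbose "MyApp.app"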
This seems like a reasonable compromise, quite honestly. That is based on remembering the bad old days of just having to trust that the software you downloaded from some random shareware site hadn't been modified maliciously.
99% of users are not going to understand why they can't just double-click the app to run it. And the second they see macOS gaslight them into thinking self-signed applications are radioactive biohazards via scary warnings, they aren't going to take additional complicated steps to run the app they wanted to run in the first place.
Users will just assume the app is broken, a virus or that you're a hacker, all because of the way macOS treats apps from developers who didn't pay the Apple tax or submit the app to Apple's panopticon for approval.
Users should not have to know some cursed and arcane ritual to run the apps they want to run.
Wait, do you need to do that? I've never attempted distribution, but I've created multiple local apps with Electron and Tauri for myself, and they are just .app bundles in my Applications folder. Wouldn't it be as easy as sharing that file with anyone else if I wanted to distribute them?
No, macOS treats your machine's self-signed certificates in a special way so that running apps signed with them is transparent to you, but a nightmare to anyone you dare to distribute the apps to without Apple's approval.
They need to try to open it, visit Settings > Privacy & Security, scroll down quite a bit, hit Open Anyway, try to open it again, and confirm one last time.
(Might be quicker for some in Terminal if supported.)
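(The Terminal route is presumably clearing the quarantine attribute, something like the following, with a placeholder path; newer macOS versions may still prompt afterwards.)

    xattr -d com.apple.quarantine /Applications/SomeApp.app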
I think it used to be Right Click > Open, then confirm.
I remember using these for impedance matching back when I was in college. Basically, when you connect two transmission lines (like coax cables), you need to match their impedances so the signal does not bounce back. (I know this is a gross oversimplification, but yeah.)
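For anyone who wants the number behind "bounce back": at the junction, the reflection coefficient is

    \Gamma = \frac{Z_L - Z_0}{Z_L + Z_0}

where Z_0 is the line's characteristic impedance and Z_L is the impedance it sees at the junction. When Z_L = Z_0, \Gamma = 0 and nothing is reflected; any mismatch sends part of the signal back toward the source.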
Obviously, it's the voter's responsibility to cross-check whether whatever BS the LLM spat out is credible or not. I believe if the voter can be trusted with the vote, then they can also be trusted to make an informed decision.
[1]: https://itsfoss.com/news/kde-plasma-to-drop-x11-support/