Hacker News | adalacelove's comments

Most screenshots from these well-known guys are quite boring. Coincidence? I think if you want to be good at something you need focus.

They are just old school. If you learned to code before GUIs were mainstream, you don't care that much about exciting UIs.

Heck, Kernighan was one of the original developers of Unix. By 2015 he had already been coding for 40-50 years, longer than most people on Hacker News have been alive. The only constant from that era is the terminal, so no wonder most people in the post gravitate towards it.


Coincidence? No, these are people for whom the computer is a tool. My smartest and most productive colleague runs stock KDE and a more or less unconfigured Vim. He truly does not give a shit.

So you're telling me my juiced-up i3 with the anime girl and equalizer in the background doesn't make me the hacker wizard I thought I was?

Everything is OK. I love looking at desktops, but I've gotten too old to put in the effort. I think you are more of a bard than a wizard.

Stock Ubuntu LTS is the biggest flex.

The more 'h4ck3r' your screenshot with useless toys on Reddit's /r/unixporn, the less you actually know about computers.

Most i3 setups there are for show-off; cwm has better defaults, and switching between tags is far more manageable than fighting with tiles, where the window resolutions often end up useless or scramble your content.

Also, most fluxbox or *box users will have far better setups than i3 users, because they use their actual setups to do actual stuff instead of posting screenshots.


Hard disagree. The fact that they have customized their system to such a degree shows they do know how to use computers. I think you're trying to conflate that with other things like programming ability, which are orthogonal.

Wrong. Knowing how to customize a theme != knowing how to use a computer != knowing how computers work.

I can say I know computers and how they work pretty well, but these days I have much better things to do than learn the best way to theme my shell so that it matches my waybar, and to make both switch colour scheme when dark mode activates. I could learn if I wanted to, but I'm not a teenager anymore; I don't care. Incidentally, back when I had the time to concern myself with GTK themes and wallpapers and Compiz, my knowledge of computers was a tenth of what it is now.

It would be like saying a car decorator is the most expert mechanic.


This. Nowadays I just use Zukitre for GTK/Qt and the Tango icon theme; it suits TWM/CWM and any other minimal WM without tons of effort. A dull gray theme combines with everything, even with my cyan titlebars for TWM (they make a great contrast with red borders and wheat-yellow icons/menus). You know, I want to use my computers, and the titlebars stand out like crazy. Actually, I've just borrowed an old color config from a university.

The background?

       xsetroot -solid gray20
I never understood the trend of dark/bright modes; the gray themes from my childhood/early teens with W98SE (and used by Mac OS 7/8 too) are just neutral and quietly 'sit there'.

Maybe what we need is a screenshot of their brain thinking.

Boring is good.

You cannot uninstall Apple Music. That alone is alienating.

IMO, the benefits of an immutable OS install outweigh being able to uninstall/remove particular apps.

Apple makes excellent hardware (laptop, phone, mini...), to the point that I'm willing to pay more for it, but I would much rather be able to customize my software. And so I avoid their hardware.

Do the benefits of a closed-source OS outweigh being able to do whatever you want?

Indeed, you cannot uninstall Music.app. There is one toggle that hides all of Apple Music, and it hides it completely, to the point that an online link to the service will error out.


I assure you that toggle is still present to remove Apple Music in the latest version of macOS. It's unchecked on my Mac as I type this.

Everybody chooses a favorite depending on their domain.

A function executes, and some error happens:

- Return error value: try to handle the error ASAP. The closer to the error, the more detailed the information and the higher the probability of recovery. Explicit error-code handling throughout the code. Example: maybe you retry in one millisecond because the error is a very low-probability but possible event.

- Exception: managing the error requires a high-level overview of the program state. Example: no space left on device, inform the user. You gather the detailed information where the error happened; it is passed as-is, or augmented with more context, as it bubbles up the stack until someone decides to take action. Pros: separates error-handling and happy-path code, cleaner code. Cons: separates error-handling and happy-path code, unhandled errors.

Worst case scenario: you program in C. You don't have exceptions. You are forbidden to use setjmp because rules. A lot of errors are exposed directly to the programmer because this is a low-level language. You return error codes. Rules force you to handle every possible return code. Your code gets incorporated as an appendix to the Necronomicon.
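To make the two styles concrete, here is a small Python sketch (the file-writing scenario and all the names are mine, purely for illustration):

    import errno
    import time

    # Return-value style: handle the error as close to its source as possible.
    def write_with_retry(path: str, data: bytes, retries: int = 3) -> int:
        """Returns 0 on success, or an errno-style code the caller must check."""
        for _ in range(retries):
            try:
                with open(path, "wb") as f:
                    f.write(data)
                return 0
            except BlockingIOError:
                time.sleep(0.001)  # low-probability transient error: retry shortly
        return errno.EAGAIN

    # Exception style: let the error bubble up to code with a high-level overview.
    def save_report(path: str, data: bytes) -> None:
        with open(path, "wb") as f:
            f.write(data)  # OSError (e.g. "no space left on device") bubbles up

    try:
        save_report("/tmp/report.bin", b"...")
    except OSError as e:
        print(f"Could not save report: {e}")  # inform the user at the top level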


One reason for having an organization with multiple repositories in Julia is how unnecessary big packages are there. It is better to have smaller, more focused packages and combine them as necessary. Julia needs to improve some things, but I don't think I have found a more modular language.


People adapt to the circumstances. A lot of Python use is no longer about fast iteration in the REPL. Instead, we are shipping Python to execute on clusters in very long-running jobs, or inside servers. It's not only about having to start all over after hours of work; it's simply that concurrent and distributed execution environments are hostile to interactive programming. You can no longer afford to wait for an exception and launch the debugger post-mortem. And even if you do, it's not very useful.

And now my personal opinion: if we are going the static-typing way, I would simply prefer to use Scala or similar instead of Python with types. Unfortunately, in the same way that high-performance languages like C attract premature optimizers, static types attract premature "abstracters" (C++ attracts both). I also think that dynamic languages have the largest libraries for reasons of technical merit: being more "fluid" makes them easier to mix. In the long term, the ecosystem converges organically on certain interfaces between libraries.

And so here we are with the half-baked approach of gradual typing and `# type: ignore` everywhere.


Here we are because:

* Types are expensive and don't tend to pay off on spiky/experimental/MVP code, most of which gets thrown away.

* Types are incredibly valuable on hardened production code.

* Most good production code started out spiky, experimental, or as an MVP, and transitioned.

And so here we are with gradual typing, because "throwing away all the code and rewriting it to be 'perfect' in another language" has been known for years to be a shitty way to build products.

I'm mystified that more people here don't see that the value and cost of types is NOT binary ("they're good!" "they're bad!") but exists on a continuum contingent on the status of the app, and sometimes even of the individual feature.


> Types are expensive and don't tend to pay off on spiky/experimental/MVP code, most of which gets thrown away.

I've spent so much time writing typed code that I now find it harder to write POC code in dynamic languages, because I use types to help reason about how I want to architect something.

E.g. "this function should calculate x and return it": well, if you already know what you want the function to do, then you know what types you want. And if you don't know what types you want, then you haven't actually decided what that function should do ahead of building it.

Now you might say "the point of experimental code is to figure out what you want functions to do". But even if you're writing an MVP, you should know what each function should do by the time you've finished writing it. Because if you don't know how to build a function, then how do you even know that the runtime will execute it correctly?
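A hedged sketch of that point in Python (the function and its names are mine, just for illustration): writing the signature is the act of deciding what the function does.

    # If you can't fill in these types yet, the function isn't designed yet.
    def average_order_value(order_totals: list[float]) -> float:
        return sum(order_totals) / len(order_totals)

    print(average_order_value([10.0, 20.0, 30.0]))  # 20.0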


Python doesn't have "no types"; in fact it is strict about types. You just don't have to waste time reading and writing them early on.

While that's a boon during prototyping, a project may need more structural support as the design solidifies, the code grows, or a varied, growing team takes responsibility.

At some point those factors dominate, to the extent that "may need" approaches "must have."
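A tiny illustration of "strict but unannotated" (my example, not the parent's):

    # No annotations anywhere, yet the operation itself is type-checked at runtime:
    try:
        "1" + 1
    except TypeError as e:
        print(e)  # can only concatenate str (not "int") to str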


My point is that if you don't know what types you need, then you can't be trusted to write the function to begin with. So you don't actually save that much time in the end: typing out type names simply isn't the time-consuming part of prototyping.

But when it comes to refactoring, having type safety makes it very easy to use static analysis (typically the compiler) to check for type-related bugs during the refactor.

I've spent a fair number of years in a great many different PL paradigms, and I've honestly never found loosely typed languages any faster for prototyping.

That all said, I will say that a lot of this also comes down to what you’re used to. If you’re used to thinking about data structures then your mind will go straight there when prototyping. If you’re not used to strictly typed languages, then you’ll find it a distraction.


Right after hello world you need a list of arguments or a dictionary of numbers to names. Types.

Writing map = {} is a few times faster than map: dict[int, str] = {}. Now multiply by ten instances. Oh wait, I'm going to change that to a tuple of pairs instead.

It takes me about three times longer to write equivalent Rust than Python, and sometimes it’s worth it.


Rust is slower to prototype in than Python because Rust is a low-level language, not because it's strictly typed. So that's not really a fair comparison. For example, assembly doesn't have any types at all, and yet it is slower to prototype in than Rust.

Let's take Visual Basic 6, for example. That was very quick to prototype in, even with "Option Explicit" (basically forcing type declarations) enabled. Quicker, even, than Python.

TypeScript isn't any slower to prototype in than vanilla JavaScript (bar setting up the build pipeline; man, does the JavaScript ecosystem suck at DevEx!).

Writing map = {} only saves you a few keystrokes. And unless you're typing really slowly with one finger, like an 80-year-old using a keyboard for the first time, you'll find the real input bottleneck isn't how quickly you can type your data structures into code, but how quickly your brain can turn a product spec / Jira ticket into a mental abstraction.

> Oh wait, I’m going to change that to a tuple of pairs instead

And that’s exactly when you want the static analysis of a strict type system to jump in and say “hang on mate, you’ve forgotten to change these references too” ;)

Having worked on various code bases in a variety of different languages, the refactors that scare me the most aren't the ones in large code bases; they're the ones in Python or JavaScript, because I don't have a robust type system providing compile-time safety.

There’s an old adage that goes something like this: “don’t put off to runtime what can be done in compile time.”

As computers have gotten exponentially faster, we seem to have forgotten this rule, to our own detriment.
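A small illustration of the refactoring point (entirely my example; the checker output shown is roughly what mypy prints):

    def winner(scores: dict[int, str]) -> str:
        return scores[max(scores)]  # highest id wins; contrived, just a call site

    scores: dict[int, str] = {1: "alice", 2: "bob"}
    print(winner(scores))  # bob

    # Refactor the data to a list of (id, name) pairs...
    pairs: list[tuple[int, str]] = [(1, "alice"), (2, "bob")]

    # ...and a type checker flags the stale call site before anything runs:
    #   error: Argument 1 to "winner" has incompatible type
    #          "list[tuple[int, str]]"; expected "dict[int, str]"
    # winner(pairs)  # caught at check time; at runtime it would raise TypeError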


Rust has many high-level constructs available, as well as libraries ready to go, if you stick to "Python-like" things. Saving a "few keystrokes" is not what I described; I was specific: `: dict[int, str]` is hard to remember, write, and read, and there's lots of punctuation. Many defs are even harder to compose.

Cementing that in early on is a big premature optimization (i.e. waste) when it has a large likelihood of being deleted. Refactors are not large at this point, and changes are trivial to make.


I've found the transition point where types become useful starts even within a few hundred lines of code, and I've found types are not that restrictive, if at all, especially if the language started out typed. For the rare case where I need to discard types, that escape hatch is usually available, and needing it is a code smell that you're doing something wrong.

Even in a recent toy 1h Python interview question, having types would've saved me some issues and caught an error that wasn't obvious. It probably would've saved 10 minutes in the interview.


Yep, depends on your memory context capacity.

For me, I often don't feel any pain points when working below about 1 kloc (when doing JS); however, if a project is above 500 loc, it's often a tad painful to resume it months later, when I've started to forget why I used certain data structures that aren't directly visible. (Adding types at that point is usually the best choice, since it gives a refresher of the code at the same time as a soundness check.)


The transition to where type hints become valuable or even necessary isn't about how many lines of code you have; it's about how much you rely upon their correctness.

Type strictness also isn't binary. A program with lots of dicts that should be classes doesn't get much safer just because you wrote `: dict[str, dict]` everywhere.


> * Types are expensive and don't tend to pay off on spiky/experimental/MVP code, most of which gets thrown away.

This is what people say, but I don't think it's correct. What is correct is that, say, ten to twenty years ago, all the statically typed languages had other unacceptable drawbacks, and "types bad" became a shorthand for those issues.

I'm talking about C (a nonstarter for obvious reasons), C++ (a huge mess, footguns, very difficult, presumably requires a cmake guy), Java (very restrictive, slow iteration and startup, etc.). Compared to those, just using Python sounds decent.

Nowadays we have Go and Rust, both of which are pretty easy to iterate in (for different reasons).


> Nowadays we have Go and Rust, both of which are pretty easy to iterate in (for different reasons).

It's common for Rust to become very difficult to iterate in.

https://news.ycombinator.com/item?id=40172033


I think Java was the main one. C/C++ are (relatively) close to the metal, system-level languages with explicit memory management - and were tacitly accepted to be the "complicated" ones, with dynamic typing not really applicable at that level.

But Java was the high-level, GCed, application-development language, and more importantly, it was the one dominating many university CS curricula as an education language before Python took that role. (Yeah, I'm grossly oversimplifying; sincere apologies to the functional crowd! :) )

The height of the "static typing sucks!" craze was more like a "The Java type system sucks!" craze...


For me it was more the "Java can't easily process strings" craze that made it impractical to use for scripts or small-to-medium projects.

Not to mention boilerplate BS.

Recently, Java has improved a lot on these fronts. Too bad it’s twenty-five years late.


> * Types are expensive and don't tend to pay off on spiky/experimental/MVP code, most of which gets thrown away.

Press "X" to doubt. Types help _a_ _lot_ by providing autocomplete, inspections, and helping with finding errors while you're typing.

This significantly improves iteration speed, as you don't need to run the code to detect that you mistyped a variable somewhere.


PyCharm, pyflakes, et al. can do most of this without written types.

The more interesting questions, like "should I use itertools or collections?", autocomplete can't help with.


In some fields throwing away and rewriting is the standard, and it works, more or less. I'm thinking of scientific/engineering software: prototype in Python or Matlab, then convert to C or C++ for performance or deployment constraints. It happens frequently with compilers too. I think migrating languages is actually more successful than writing second versions.


The issue with moving the ship to wherever its passengers want it to be is that it makes it more difficult for new passengers to get on.

This is clearly seen with TypeScript and the movement for "just use JS".

Furthermore, with LLMs, it should be easier than ever to experiment in one language and use another language for production loads.


I don't think types are expensive for MVP code unless they're highly complicated (but why would you do that?). Primitives and interfaces are super easy to type, and worth the extra couple of seconds.


Software quality only pays off in the long term. In the short term, garbage is quick and gets the job done.

Also, in my experience, the long term for software arrives in a couple of weeks.


PHP is a great example of the convergence of interfaces. Now they have different “PSR” standards for all sorts of things. There is one for HTTP clients, formatting, cache interfaces, etc. As long as your library implements the spec, it will work with everything else and then library authors are free to experiment on the implementation and contribute huge changes to the entire ecosystem when they find a performance breakthrough.

Types seem like a "feature" of mature software. You don't need to use them all the time, but for people stuck on legacy systems, having the type system as a tool in their belt can help reduce business complexity and risk as the platform continues to age, because tooling can be built to assert and test code with fewer external dependencies.


Python is ubiquitous in ML; often you have no choice but to use it.


From TFA:

"There is no legal responsibility for a landlord to enforce the payment of taxes by their tenants, nor any suggestion that Aziz should be paying the bill."

This usually makes some sense, but in this case it is obviously being abused. I guess a police investigation would need to track the flow of money back to the landlord, or better legislation should be put in place. Banks, for example, cannot claim ignorance about their clients and are required to deny access to their services if something does not look right.



I'm a developer and:

- I hate ORMs; they are the source of a lot of obscure errors behind layers and layers of abstraction.

- I prefer analytical APIs for technical reasons, not just the language.

Reasons:

- I can compose queries, which in turn makes them easier to decompose

- It's easier to spot errors

- I avoid parsing SQL strings

- It's easier to interact with the rest of the code, both functions and objects

If I just need to make a single query, I gladly write SQL.
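As one concrete example of the analytical-API style (SQLAlchemy Core is my choice here, not something from the thread):

    from sqlalchemy import Column, Integer, MetaData, String, Table, select

    metadata = MetaData()
    users = Table(
        "users", metadata,
        Column("id", Integer, primary_key=True),
        Column("name", String),
        Column("age", Integer),
    )

    # Queries are ordinary Python values, so they compose and decompose:
    base = select(users.c.id, users.c.name)
    adults = base.where(users.c.age >= 18)  # refine the base query
    named = adults.order_by(users.c.name)   # keep layering
    print(named)  # renders the SQL; no string parsing involved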


Well, the problem in ORM is the O. Object-orientation is just a worse way to organise your data and logic than relational algebra.

It's just a shame that many languages don't support relational algebra well.

We had relations as a datatype and all the relevant operations over them (like join) in a project I was working on. It was great! Very useful for expressing business logic.
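A toy sketch of what "relations as a datatype" can look like (entirely my illustration, not the project mentioned above):

    # A relation is a list of rows (dicts); join is an ordinary function over them.
    def join(r, s, on):
        return [{**a, **b} for a in r for b in s if a[on] == b[on]]

    employees = [{"id": 1, "name": "ada"}, {"id": 2, "name": "lin"}]
    salaries = [{"id": 1, "amount": 90}, {"id": 2, "amount": 80}]

    print(join(employees, salaries, on="id"))
    # [{'id': 1, 'name': 'ada', 'amount': 90}, {'id': 2, 'name': 'lin', 'amount': 80}]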


The problem in ORM is the M: the mapping is always lossy and a leaky abstraction.


Yes. So if you have the same thing in both the database and in your program, you don't need any mapping.

And I suggest that having relations in both places is the way to go.


It's not enough; there is no way to represent some DB features (not relational, more implementation-oriented) or some special queries in the program with an ORM. No ORM I have seen comes close to 100% of what you can do with the DB. So the best way to get closest to the DB is still to use raw queries in the program, but of course that does not scale with more tables, with more DB engines, or with juniors taking over your codebase and not understanding it.

So to conclude: object-orientation is fine and relational is fine too; the issue is that the optimal way to translate between them does not scale.


I'm with the first part of your comment. But why do you suddenly conclude that object orientation is good?

If your language supports relations, there's no need to badly translate to objects. (And even if your language doesn't support everything your database does, there's still less of an impedance mismatch if you use an 'RRM' instead of an ORM.)


According to Wikipedia [1] we consume 9717 Mtoe, or equivalently about 4.07e20 J = 407 EJ (per year, although that is not explicitly stated, which I find annoying).

The Earth's moment of inertia is about I = 8e37 kg·m² [2].

The energy extracted by slowing the angular speed from ωa to ωb would be E = ½ I (ωa² - ωb²).

Approximately ωa = 2π/86400 rad/s and ωb = 2π/86401 rad/s (lengthening the day by one second). Energy extracted: 4.9e24 J = 4.9e12 TJ.

We would have energy for about 12,000 years.

If I double-check with Kagi's assistant with Claude 3.7 (I'm on my phone and could easily have made an error), it starts with my exact reasoning and figures but messes up the final numbers, giving a total of 40 billion years.
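A mechanical re-check of the arithmetic in Python (a sketch using the same inputs as above; 1 toe = 41.868 GJ is the standard conversion factor):

    import math

    I = 8e37                    # Earth's moment of inertia, kg*m^2 [2]
    wa = 2 * math.pi / 86400    # current angular speed, rad/s
    wb = 2 * math.pi / 86401    # after lengthening the day by one second

    energy = 0.5 * I * (wa**2 - wb**2)  # ~4.9e24 J
    consumption = 9717e6 * 41.868e9     # 9717 Mtoe/year in joules, ~4.07e20 J [1]

    print(f"{energy:.1e} J, enough for {energy / consumption:,.0f} years")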

[1] https://en.m.wikipedia.org/wiki/World_energy_supply_and_cons...

[2] https://scienceworld.wolfram.com/physics/MomentofInertiaEart...


It's because the quaternion is part of the state of the Kalman filter.
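A minimal sketch of what that means (the names and the gyro-bias term are illustrative; a real filter also propagates a covariance matrix):

    import numpy as np

    # State vector of a toy attitude filter: unit quaternion (w, x, y, z)
    # plus a gyro-bias estimate. The quaternion is estimated state, so the
    # prediction step integrates it from the measured angular rates.
    state = np.concatenate([np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(3)])

    def predict(state, gyro, dt):
        q, b = state[:4], state[4:]
        w = gyro - b  # bias-corrected angular rate
        # Quaternion kinematics: q_dot = 0.5 * Omega(w) @ q
        omega = np.array([
            [0.0,  -w[0], -w[1], -w[2]],
            [w[0],  0.0,   w[2], -w[1]],
            [w[1], -w[2],  0.0,   w[0]],
            [w[2],  w[1], -w[0],  0.0],
        ])
        q = q + 0.5 * dt * (omega @ q)
        q = q / np.linalg.norm(q)  # re-normalize to stay a unit quaternion
        return np.concatenate([q, b])

    state = predict(state, gyro=np.array([0.0, 0.0, 0.1]), dt=0.01)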


Not in any intrinsic way, it’s just a mildly better way of representing attitude if your state vector includes attitude.


