
It is true, but it's also not the whole story. The steep learning curve is flattened quite a bit by the available "starter pack" configs and the steady stream of fresh articles. So you can get a functional editor and then gradually bend it to your needs. Also, LLMs have turned out to be quite good at generating working elisp and helping out in general.


> lags behind in features of modern editors

I have been using emacs for around 7 years, but it never worked for me as a main editor; it just sucked too badly compared to the IDE-like features of other editors and actual IDEs. So I only used it for org-mode, making an attempt to use it for something else every couple of years.

I'm currently in the process of trying this again, and I have to say things feel very different this time. With native tree-sitter and LSP support, the IDE-like features are outsourced to where they should be done. It hasn't been perfect, but I've had issues of the same degree or worse with other editors. A proprietary IDE would still beat it in stability and features, but the experience is _crazy good_ for free software.

What I like the most is the hacker mentality it encourages. When I see something I don't like, I don't go "I wish they did it differently"; I ask "well, how do I change that?".

The only things that feel truly outdated are the single-threaded nature and the UI blocking while a long-running operation (like a package update) is happening. And maybe the lack of smooth scrolling (there is a package, but it makes the text jump).


To add onto this, I really don't think emacs has that big of an initial learning curve nowadays.

If you enable cua-mode and get the LSPs working, you get pretty much the same experience as any other big editor like VSCode or Zed, close to out of the box. The arrow keys, mouse, and cut-copy-paste do exactly what you'd expect. There are menus, toolbars, and scrollbars. Don't let the "emacs ricer" screenshots fool you; a lot of people disable those things for aesthetic reasons. Probably the kludgiest thing emacs still has is the default scrolling mode, which scrolls through a page and then bumps the whole view forward at once, like older editors. You can change this with a few lines in your config.

Alternatively, you can get a good out-of-the-box experience with an emacs distribution (like Doom Emacs) or one of the many minimal configs out there (I'm partial to [1]).

Lumping this in with something like vim/neovim is a bit silly, because the basic navigation commands and editing experience of emacs are mostly the same as in other editors. Sure, underneath it's all run by an Elisp VM and an event loop which maps keypresses to Elisp commands, but as a user you only need to dive in when you feel comfortable.

[1]: https://github.com/jamescherti/minimal-emacs.d


My morale is extremely low. But I have different circumstances: I am living through a war, with my future life prospects unknown. Software engineering, apart from being enjoyable, provided a sense of security. I felt that I could at least either relocate to some cheap country and work remotely, or attempt to relocate to an expensive country with good jobs.

With AI, the future seems so much worse for me. I feel that the productivity boost will not benefit me in any way (apart from some distant trickle-down dream). I expect outsourcing, and remote work in general, to be impacted the most. Maybe there will be some defensive measures to protect domestic specialists, but those wouldn't apply to me anyway unless I relocate (and probably acquire citizenship).

>Is your company hiring more/ have they stopped hiring software engineers

Stopped hiring completely and reduced workforce, but the reasons stated were financial, not AI.

>Is the management team putting more pressure to get more things done

With less workforce there is naturally more work to do. But I can't say there has been a change in pressure, and no one forces AI on you.


Sorry for writing something a bit tangential, I'm mostly replying to the heading not the content.

I keep seeing the same kind of point about how "not fun, depressing, worse" a <thing> has gotten these days. The most recent incarnation of that is how programming with AI feels worse than programming on your own.

I don't think the problem is an inability to find a way to have fun, the way you could previously. The problem is having fun while still getting paid for it.

To come back to web dev: you probably can make it fun again, given that you were able to have fun with it previously. But it will probably have to be done in your spare time, after work.


Exactly. AI makes everything less fun because, by design, it turns what was once fun into an industry.


Not sure about that, I've had great fun vibe coding like another commenter said, as I can simply write what I want in English and see a result immediately. Of course, I'd never use this for production, but for prototyping, it's nice. This is the opposite of industry, as you state.


I'm not talking about short-term gains like you having fun, but about long-term effects on the industry of programming. Of course, technology always provides some short-term fun, even as it pushes the activity to a more industrial level in the long run.


At the end of the day, the people who put in the effort get ahead. I don't worry about the short or long term at all, as long as one is competent. If fewer are competent due to vibe coding their entire career, all the better for me as a competent professional, as with lower supply comes higher demand.


Lol, meritocracy... As if...


There is a parallel to these Celtic imitations found primarily in modern Ukraine [1], attributed to the Chernyakhov culture [2]. The theory is that once trade with the Roman Empire ceased, the locals needed a bigger supply of coins and started minting their own.

There is a curious thing about this "branch"; I'm not sure if it's the same for the Celtic one. The last time I talked to people researching this, I was told that:

a. The finds are mostly unique; it's hard to find two copies of the same coin. Sometimes the obverse of one coin turns up on another, but the reverses don't match.

b. These coins are not cast; they are struck ("hammered"), which requires a die. However, not a single die has been found so far. A much easier way to make currency out of an existing coin would be to press it into clay, make a casting mold, and pour molten metal into it.

This of course is more of a curiosity/rumor level, I don't have any qualifications to back it up.

[1]: http://barbarous-imitations.narod.ru/ (apologies for the .ru website, but it's the best catalogue I know of)

[2]: https://en.wikipedia.org/wiki/Chernyakhov_culture


I feel like the religious explanation is a copout (the rest of the details are in the spiritual realm lol) and maybe there are practical ones.

Like maybe coins are rare enough that they're not completely fungible, there is a slight preference for being able to know which one is which?

Say, people want to have affordances to trace the provenance if that starts to matter.

It could also be a legitimate aesthetic preference for currency units to be unique rather than uniform.

Maybe uniformity is hard but why make the dies much bigger than they need to be and use different parts?

Striking is technologically downstream of casting, because you need to make harder tooling and perform extra steps to do that.


Maybe it's my learning limitations, but I find it hard to follow explanations like these. I had similar feelings about explanations of encapsulation: they would say I can hide information, without going into much detail. Why, and from whom? How is it hidden if I can _see it on my screen_?

Similarly here, I can't understand, for example, _who_ the owner is. Is it a stack frame? Why would a stack frame want to move ownership to its callee, when by the nature of LIFO the callee's stack frame will always be destroyed first, so there is no danger in hanging on to it until the callee returns? Is it for optimization, so that we can get rid of the object sooner? Could the owner be something other than a stack frame? Why can a mutable reference only be handed out once? If I'm only using a single thread, one function is guaranteed to finish before the other starts, so what is the harm in handing mutable references to both? Just slap my hands when I'm actually using multiple threads.

Of course, there are reasons for all of these things and they probably are not even that hard to understand. Somehow, every time I want to get into Rust I start chasing these things and give up a bit later.


> Why would a stack frame want to move ownership to its callee

Rust's system of ownership and borrowing effectively lets you hand out "permissions" for data access. The owner gets the maximum permissions, including the ability to hand out references, which grant lesser permissions.

In some cases these permissions are useful for performance, yes. The owner has the permission to eagerly destroy something to instantly free up memory. It also has the permission to "move out" data, which allows you to avoid making unnecessary copies.

But it's useful for other reasons too. For example, threads don't follow a stack discipline; a callee is not guaranteed to terminate before the caller returns, so passing ownership of data sent to another thread is important for correctness.

And naturally, the ability to pass ownership to higher stack frames (from callee to caller) is also necessary for correctness.

In practice, people write functions that need the least permissions necessary. It's overwhelmingly common for callees to take references rather than taking ownership, because what they're doing just doesn't require ownership.
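To make that concrete, here is a minimal sketch (the function names are made up for illustration) of the difference between borrowing and taking ownership:

    // Shared reference: read-only permission; the caller keeps ownership.
    fn print_len(s: &str) {
        println!("len = {}", s.len());
    }

    // Takes ownership: this function may move, mutate, or drop `s`.
    fn consume(s: String) {
        println!("consumed: {s}");
    } // `s` is dropped here, freeing its memory

    fn main() {
        let owned = String::from("hello");
        print_len(&owned); // borrow: `owned` is still usable afterwards
        consume(owned);    // move: ownership is transferred to the callee
        // println!("{owned}"); // error[E0382]: borrow of moved value: `owned`
    }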


I think your comment has received excellent replies. However, no one has tackled your actual question so far:

> _who_ is the owner. Is it a stack frame?

I don’t think it’s helpful to call the stack frame the owner in the sense of the borrow checker. If the owner were the stack frame, why would it need to move or borrow objects to itself? The fact that the following code doesn’t compile seems to support that:

    fn main() {
        let a: String = "Hello".to_owned();
        let b = a;
        println!("{}", a);  // error[E0382]: borrow of moved value: `a`
    }
User lucozade’s comment has pointed out that the memory where the object lives is actually the thing that is being owned. So that can’t be the owner either.

So if neither a) the stack frame nor b) the memory where the object lives can be called the owner in the Rust sense, then what is?

Could the owner be the variable to which the owned chunk of memory is bound at a given point in time? In my mental model, yes. That would be consistent with all borrow checker semantics as I have understood them so far.

Feel free to correct me if I’m not making sense.


I believe this answer is correct. Ownership exists at the language level, not the machine level. Thinking of a part of the stack or a piece of memory as owning something isn't right. A language entity, like a variable, is what owns an object in Rust. When the owning variable goes out of scope, the object's resources are released, including everything it owns.


I think it's funny how I had this kind of sort of "clear" understanding of Rust ownership from experience, and asking "why" repeatedly puts a few holes in the illusion of my understanding being clear. It's mostly familiarity of concepts from working with C++ and RAII and solving some ownership issues. It's kind of like when people ask you for the definition of a word, and you know what it means, but you also can't quite explain it.

I would say you're correct that ownership is something that only exists on the language level. Going back to the documentation: https://doc.rust-lang.org/book/ch04-01-what-is-ownership.htm...

The first part that gives a hint is this

>Rust uses a third approach: memory is managed through a system of ownership with a set of rules that the compiler checks.

This clearly means ownership is a concept in the Rust language. Defined by a set of rules checked by the compiler.

Later:

>First, let’s take a look at the ownership rules. Keep these rules in mind as we work through the examples that illustrate them:

>

>*Each value in Rust has an owner*.

>There can only be one owner at a time.

>*When the owner goes out of scope*, the value will be dropped.

So the owner can go out of scope and that leads to the value being dropped. At the same time each value has an owner.

So from this we gather: an owner can go out of scope, so an owner must be something that lives within a scope. A variable declaration, perhaps? Further on in the text this seems to be confirmed: a variable can be an owner.

>Rust takes a different path: the memory is automatically returned once the variable that owns it goes out of scope.

Ok, so variables can own values. And borrowed variables (references) are owned by the variables they borrow from, this much seems clear. We can recurse all the way down. What about up? Who owns the variables? I'm guessing the program or the scope, which in turn is owned by the program.

So I think variables own values directly, references are owned by the variables they borrow from. All variables are owned by the program and live as long as they're in scope (again something that only exists at program level).


> Ownership exists at the language level, not the machine level.

Right. That's the key here. "Move semantics" can let you move something from the stack to the heap, or the heap to the stack, provided that a lot of fussy rules are enforced. It's quite common to do this. You might create a struct on the stack, then push it onto a vector, to be appended at the end. Works fine. The data has to be copied, and the language takes care of that. It also takes care of preventing you from doing that if the struct can't safely be moved.
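A tiny sketch of that Vec example (the struct and values are just illustrative):

    struct Point {
        x: i32,
        y: i32,
    }

    fn main() {
        let mut points: Vec<Point> = Vec::new();
        let p = Point { x: 1, y: 2 }; // created on the stack
        points.push(p);               // moved into the Vec's heap buffer
        // println!("{}", p.x);       // error[E0382]: borrow of moved value: `p`
        println!("{}", points[0].x);  // the Vec now owns the data
    }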

C++ now has "move semantics", but for legacy reasons, enforcement is not strict enough to prevent moves which should not be allowed.


> Why can mutable reference be only handed out once?

Here's a single-threaded program which would exhibit dangling pointers if Rust allowed handing out multiple references (mutable or otherwise) to data that's being mutated:

    let mut v = Vec::new();
    v.push(42);
    
    // Address of first element: 0x6533c883fb10
    println!("{:p}", &v[0]);
    
    // Put something after v on the heap
    // so it can't be grown in-place
    let v2 = v.clone();
    
    v.push(43);
    v.push(44);
    v.push(45);
    // Exceed capacity and trigger reallocation
    v.push(46);
    
    // New address of first element: 0x6533c883fb50
    println!("{:p}", &v[0]);


The analogous program in pretty much any modern language under the sun has no problem with this, in spite of multiple references being casually allowed.

To have a safe reference to the cell of a vector, we need a "locative" object for that, which keeps track of v, and the offset 0 into v.


> The analogous program in pretty much any modern language under the sun has no problem with this, in spite of multiple references being casually allowed.

And then every time the underlying data moves, the program's runtime either needs to do a dynamic lookup of all pointers to that data and then iterate over all of them to point to the new location, or otherwise you need to introduce yet another layer of indirection (or even worse, you could use linked lists). Many languages exist in domains where they don't mind paying such a runtime cost, but Rust is trying to be as fast as possible while being as memory-safe as possible.

In other words, pick your poison:

1. Allow mutable data, but do not support direct interior references.

2. Allow interior references, but do not allow mutable data.

3. Allow mutable data, but only allow indirect/dynamically adjusted references.

4. Allow both mutable data and direct interior references, force the author to manually enforce memory-safety.

5. Allow both mutable data and direct interior references, use static analysis to ensure safety by only allowing references to be held when mutation cannot invalidate them.
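As a rough sketch of option 5 (Rust's approach): a reference may be held only while no mutation can invalidate it, so this version compiles because the borrow ends before the push:

    fn main() {
        let mut v = vec![42];

        {
            let first = &v[0];   // shared borrow of the first element
            println!("{first}"); // last use: the borrow ends here
        }

        v.push(43); // mutation allowed: no live borrows remain

        // Holding `first` across the push instead would be rejected with
        // error[E0502]: cannot borrow `v` as mutable because it is also
        // borrowed as immutable.
    }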


That’s a different implementation, and one you can do in Rust too.


> // Put something after v on the heap

> // so it can't be grown in-place

> let v2 = v.clone();

I doubt Rust guarantees that “Put something after v on the heap” behavior.

The whole idea of a heap is that you give up control over where allocations happen in exchange for an easy way to allocate, free and reuse memory.


It certainly doesn't guarantee it, this is just what's needed to induce a relocation in this particular instance. But this makes Rust's ownership tracking even more important, because it would be trivial for this to "accidentally work" in something like C++, only for it to explode as soon as any future change either perturbs the heap or pushes enough items to the vec that a relocation is suddenly triggered.


That’s correct.


> Why would a stack frame want to move ownership to its callee, when by the nature of LIFO the callee stack will always be destroyed first, so there is no danger in hanging to it until callee returns.

It definitely takes some getting used to, but there are absolutely times when you want to move ownership into a called function, and extending the value's lifetime beyond the call would be wrong.

An example would be if it represents something you can only do once, e.g. deleting a file. Once you've done it, you don't want to be able to do it again.
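A hedged sketch of that pattern (the type and method names here are made up): taking `self` by value means the operation consumes the handle, so it can't be performed twice.

    struct TempFile {
        path: std::path::PathBuf,
    }

    impl TempFile {
        // Consumes the TempFile: after `delete`, the caller has nothing
        // left to delete a second time.
        fn delete(self) -> std::io::Result<()> {
            std::fs::remove_file(&self.path)
        }
    }

    fn main() -> std::io::Result<()> {
        let tmp = TempFile { path: "scratch.txt".into() };
        std::fs::write(&tmp.path, b"data")?;
        tmp.delete()?;    // ownership moves into `delete`
        // tmp.delete()?; // error[E0382]: use of moved value: `tmp`
        Ok(())
    }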


> Could owner be something else than a stack frame?

Yes. There are lots of ways an object might be owned:

- a local variable on the stack

- a field of a struct or a tuple (which might itself be owned on the stack, or nested in yet another struct, or one of the other options below)

- a heap-allocating container, most commonly basic data structures like Vec or HashMap, but also including things like Box (std::unique_ptr in C++), Arc (std::shared_ptr), and channels

- a static variable -- note that in Rust these are always const-initialized and never destroyed

I'm sure there are others I'm not thinking of.
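Here's a quick sketch showing several of those owners side by side (the names are illustrative):

    struct Config {
        name: String, // owned by a struct field
    }

    fn main() {
        let local = String::from("on the stack");           // owned by a local variable
        let cfg = Config { name: String::from("a field") }; // moved into a field
        let boxed: Box<u64> = Box::new(7);                   // owned by a heap-allocating container
        let mut names: Vec<String> = Vec::new();
        names.push(local);                                   // ownership moves into the Vec
        println!("{} {} {}", cfg.name, boxed, names[0]);
    }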

> Why would a stack frame want to move ownership to its callee, when by the nature of LIFO the callee stack will always be destroyed first

Here are some example situations where you'd "pass by value" in Rust:

- You might be dealing with "Copy" types like integers and bools, where (just like in C or C++ or Go) values are easier to work with in a lot of common cases.

- You might be inserting something into a container that will own it. Maybe the callee gets a reference to that longer-lived container in one of its other arguments, or maybe the callee is a method on a struct type that includes a container.

- You might pass ownership to another thread. For example, the main() loop in my program could listen on a socket, and for each of the connections it gets, it might spawn a worker thread to own the connection and handle it. (Using async and "tasks" is pretty much the same from an ownership perspective.)

- You might be dealing with a type that uses ownership to represent something besides just memory. For example, owning a MutexGuard gives you the ability to unlock the Mutex by dropping the guard. Passing a MutexGuard by value tells the callee "I have taken this lock, but now you're responsible for releasing it." Sometimes people also use non-Copy enums to represent fancy state machines that you have to pass around by value, to guarantee whatever property they care about about the state transitions.
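The MutexGuard case in particular fits in a few lines; a minimal sketch with made-up function names:

    use std::sync::{Mutex, MutexGuard};

    fn finish_and_unlock(mut guard: MutexGuard<'_, Vec<i32>>) {
        guard.push(99);
        // The lock is released here, when `guard` is dropped at the end
        // of this function, not back in the caller.
    }

    fn main() {
        let data = Mutex::new(vec![1, 2, 3]);
        let guard = data.lock().unwrap(); // take the lock...
        finish_and_unlock(guard);         // ...and move the guard to the callee
        println!("{:?}", data.lock().unwrap()); // the lock is free again here
    }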


> Why would a stack frame want to move ownership to its callee

Happens all the time in modern programming:

    callee(foo_string + "abc")

The argument expression foo_string + "abc" constructs a new string. It is not captured in any variable here; it is passed to the callee. Only the callee knows about it.

This situation can expose bugs in a runtime's GC system. If the callee is something written in a low-level language that is responsible for indicating "nailed" (pinned) objects to the garbage collector, and it forgets to nail the argument object, the GC can prematurely collect it, because nothing else in the image knows about that object: only the callee. The bug won't surface in situations like callee(foo_string), where the caller still has a reference to foo_string (at least if that variable is live, i.e. has a next use).
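In Rust terms the same shape looks roughly like this (a hedged sketch; `callee` is hypothetical): the temporary built in the argument expression is never bound to a caller-side variable, and ownership moves straight into the callee.

    fn callee(s: String) {
        println!("{s}");
    } // the temporary String is dropped here, inside the callee

    fn main() {
        let foo_string = String::from("foo");
        callee(foo_string.clone() + "abc"); // `+` consumes its left operand,
                                            // so we clone to keep foo_string
        println!("{foo_string}");           // still usable by the caller
    }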


> _who_ is the owner. Is it a stack frame?

The owned memory may be on a stack frame or it may be heap memory. It could even be in the memory mapped binary.

> Why would a stack frame want to move ownership to its callee

Because it wants to hand full responsibility to some other part of the program. Let's say you have allocated some memory on the heap and handed a reference to a callee, and then the callee returned to you. Did it free the memory? Did it hand the reference to another thread? Did it hand the reference to a library whose code you have no access to? The answers to those questions determine whether you are safe to continue using the reference you have, including, but not limited to, whether you are safe to free the memory.

If you hand ownership to the callee, you simply don't care about any of that, because you can't use your reference to the object after the callee returns. And the compiler enforces that. Now the callee could, in theory, give you back ownership of the same memory, but if it does, you know it didn't destroy that data etc., otherwise it couldn't have given it back to you. And, again, the compiler is enforcing all of that.

> Why can mutable reference be only handed out once?

Let's say you have 2 references to arrays of some type T and you want to copy from one array to the other. Will it do what you expect? It probably will if they are distinct but what if they overlap? memcpy has this issue and "solves" it by making overlapped copies undefined. With a single mutable reference system, it's not possible to get that scenario because, if there were 2 overlapping references, you couldn't write to either of them. And if you could write to one, then the other has to be a reference (mutable or not) to some other object.

There are also optimisation opportunities if you know 2 objects are distinct. That's why C added the restrict keyword.
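For example, a minimal sketch: because a `&mut` borrow is guaranteed to be unique, a copy between two slices can be implemented naively, since the overlapping case can't be expressed in safe code (copy_from_slice separately requires equal lengths, which is a runtime check):

    fn copy_all(dst: &mut [u8], src: &[u8]) {
        // Safe to implement naively: `dst` and `src` cannot overlap,
        // because a mutable borrow of `dst` excludes any other borrow
        // of that memory.
        dst.copy_from_slice(src);
    }

    fn main() {
        let mut buf = [0u8; 8];
        // Two non-overlapping halves of one array, obtained safely:
        let (left, right) = buf.split_at_mut(4);
        right.copy_from_slice(&[1, 2, 3, 4]);
        copy_all(left, &[9, 9, 9, 9]);
        println!("{:?}", buf); // [9, 9, 9, 9, 1, 2, 3, 4]
    }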

> If I'm only using a single thread

If you're just knocking up small scripts or whatever then a lot of this is overkill. But if you're writing libraries, large applications, multi-dev systems etc then you may be single threaded but who's confirming that for every piece of the system at all times? People are generally really rubbish at that sort of long range thinking. That's where these more automated approaches shine.

> hide information...Why, from whom?

The main reason is that you want to expose a specific contract to the rest of the system. It may be, for example, that you have to maintain invariants eg double entry book-keeping or that the sides of a square are the same length. Alternatively, you may want to specify a high level algorithm eg matrix inversion, but want it to work for lots of varieties of matrix implementation eg sparse, square. In these cases, you want your consumer to be able to use your objects, with a standard interface, without them knowing, or caring, about the detail. In other words you're hiding the implementation detail behind the interface.
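A short sketch of that last point (the module and names are made up): the field is private, so the invariant can only be established and preserved through the exposed interface.

    mod shapes {
        pub struct Square {
            side: f64, // private: callers can't see or touch it directly
        }

        impl Square {
            pub fn new(side: f64) -> Option<Square> {
                // The invariant (a non-negative side, and all sides equal
                // by construction, since only one is stored) lives here.
                if side >= 0.0 { Some(Square { side }) } else { None }
            }

            pub fn area(&self) -> f64 {
                self.side * self.side
            }
        }
    }

    fn main() {
        let sq = shapes::Square::new(3.0).expect("valid side");
        println!("{}", sq.area());
        // sq.side = -1.0; // error[E0616]: field `side` of struct `Square` is private
    }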


I wonder if doing classical processing of the real-time data as a pre-phase, before feeding it into the NN, could be beneficial?


Yes, it’s part of the process of data augmentation, which is commonly used to avoid classifying on irrelevant aspects of the image like overall brightness or relative orientation.


The author gave pretty good reasoning for why it's a bad idea in the same section. However, for demonstration purposes I think they should have included their vision of how request-scoped data should be passed.

As I understand it, they propose passing the data explicitly, e.g. as a struct with fields for all possible request-scoped data.

I personally don't like context for value passing either, as it is easy to abuse in a way that makes it part of the API: the callee expects something from the caller, but there is no static check that makes sure it's actually passed. It's something like passing arguments in a dictionary instead of using parameters.

However, for "optional" data whose presence is not required for the behavior of the call, it should be fine. That sort of discipline has to be enforced on the human level, unfortunately.


> As I understand they propose to pass the data explicitly, like a struct with fields for all possible request-scoped data.

So basically context.Context, except it can't propagate through third party libraries?


If you use a type like `map[string]any` then yes, it's going to be the same as Context. However, you can make a struct with fields of exactly the types you want.

It won't propagate to the third-party libraries, yes. But then again, why don't they just provide an explicit way of passing values instead of hiding them in the context?


> why don't they just provide an explicit way of passing values instead of hiding them in the context?

Hiding them in a context is the explicit way of passing values through oblivious third-party libraries.

In some future version of Go, it would be nice to just have dynamic scoping. But this works now, and it’s a good pattern. The only real issue is the function-colouring one, and that’s solvable by simply requiring that every exported function take a context.


Precisely because you need to be able to pass it through third party libraries and into callbacks on the other side where you need to recover the values.


Yeah, most people talking here are unlikely to have worked on large-scale Go apps.

Managing a god-level context struct with every field that could ever be relevant, and documenting what each one means independently of where it's used, just doesn't scale.

Import cycles mean you’re forced into this if you want to share between all your packages, and it gets really hairy.


Can you elaborate on typing things out for didactic purposes, please?


It's a new iteration on the ancient practice of learning by copying. I've only ever seen people copy things out by hand when they wanted to memorize something, though. I imagine that with a keyboard the memory-enhancement effect of writing by hand is lost, but it's probably still more effective than just reading.


Sounds like good advice.

Unfortunately, my feelings tell me otherwise. What is it that we humans are better at? Is it the chore of managing people, sitting in meetings, and aligning stakeholders' interests? That feels more like a politician's job than an engineer's.


Humans are the ones with the money. They exchange it for goods and services.

So if you want money, you need to provide a good or service that other humans want. Humans have an inside advantage providing goods and services to humans.

