
I don't think OOP itself is bad. Bad abstractions are bad, no matter the language, and that is the main problem. Abstractions and domain models are created based on our understanding of a given problem as well as what is possible within a certain programming language/paradigm. Most of the time, we initially do not have a good understanding of the domain we are working in, and because we are conditioned to keep code "organized", we tend to fall back to established "patterns" (which OOP has an abundance of...), but these almost always end up being the wrong abstraction. By the time we figure this out though, it's already too late to change it.

A functional language such as Haskell tends to make us think a little more about what abstractions we use, because they are such an integral part of the language that you can hardly do anything without them. We can "converse" with the compiler to come to a better understanding of how our code should be structured, and you usually end up with a better domain model. However, this can also be very limiting. Lack of control over execution order and memory layout is a problem in my domain (video games). It's also slower to develop in because you can't take "shortcuts", which can be helpful for getting something up and running before you fully understand the problem domain.

I still use OOP daily (C++), but I never start out with abstractions when writing new code. I tend to create C-style structs and pure functions. Eventually, I will have a better understanding of how this new module should be organized, and will make an API that is RAII-compliant and whatnot. I think you can easily write clean code in OOP as long as the abstraction makes sense.
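Roughly, a new module of mine starts out looking something like this (a made-up particle example, hypothetical names) and only later gets wrapped in an owning, RAII-style type:

    #include <cstddef>
    #include <vector>

    // Stage 1: plain data plus free functions, no ownership or invariants yet.
    struct ParticleData {
        std::vector<float> x, y;    // positions
        std::vector<float> vx, vy;  // velocities
    };

    void integrate(ParticleData& p, float dt) {
        for (std::size_t i = 0; i < p.x.size(); ++i) {
            p.x[i] += p.vx[i] * dt;
            p.y[i] += p.vy[i] * dt;
        }
    }

    // Stage 2 (later, once the module's shape is clear): an RAII-style wrapper
    // that owns its storage and exposes a small API, with the same struct underneath.
    class ParticleSystem {
    public:
        explicit ParticleSystem(std::size_t capacity) {
            data_.x.reserve(capacity);  data_.y.reserve(capacity);
            data_.vx.reserve(capacity); data_.vy.reserve(capacity);
        }
        void update(float dt) { integrate(data_, dt); }
    private:
        ParticleData data_;
    };

The point is that the wrapper is cheap to add once the data and transformations have settled, and cheap to delete if the abstraction turns out to be wrong.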



Out of all the comments here this one resonated with me by far the most.

You're totally correct: the goal is to find the right abstractions. Bad abstractions keep getting in your way, either by leaking too much or by being based on invariants that turn out not to be all that invariant, whereas good abstractions never need to be touched again.

For me, the additional layer of "how hard is it to change the abstraction if it turns out to be wrong" is also quite important. In my environment (algorithmic R&D) you never quite know which invariant your next idea will break. And this is where OOP can really bite you - untangling code from a class hierarchy or twisting interfaces to accommodate new ideas is anywhere between infeasible and a complete mess. Composition over inheritance can save a lot of frustration here. But sometimes an interface/abstract base class plus a single implementation also provides a very clean way to separate the core model/algorithm from the fiddly details. My takeaway so far: just don't fall for deep hierarchies or large interfaces.
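A rough sketch of that second shape (hypothetical names, nothing from real code): a tiny interface isolates one fiddly detail, and the algorithm composes it rather than deriving from anything.

    #include <memory>
    #include <random>
    #include <utility>

    // Small interface that isolates one fiddly detail (here: how samples are
    // drawn) from the core algorithm. A single implementation is often enough.
    struct Sampler {
        virtual ~Sampler() = default;
        virtual double next() = 0;
    };

    struct UniformSampler final : Sampler {
        double next() override { return dist_(rng_); }
    private:
        std::mt19937 rng_{42};
        std::uniform_real_distribution<double> dist_{0.0, 1.0};
    };

    // The algorithm composes a Sampler instead of inheriting from anything,
    // so swapping the detail later doesn't disturb a hierarchy.
    class MonteCarloMean {
    public:
        explicit MonteCarloMean(std::unique_ptr<Sampler> s) : sampler_(std::move(s)) {}
        double estimate(int n) {
            double sum = 0.0;
            for (int i = 0; i < n; ++i) sum += sampler_->next();
            return sum / n;
        }
    private:
        std::unique_ptr<Sampler> sampler_;
    };

Replacing the sampler - or deleting the interface entirely - stays a local change, which is the property I care about when a new idea breaks an old invariant.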

Maybe we just need to get better at giving up on abstractions when they fail? I think that's very tough because they tend to still dictate how we think about problems (plus the usual cost of rewriting code). Your approach of delaying the abstraction choice sounds very interesting (somewhat akin to Extreme Programming?), I will try to keep that in mind for the future. In thinking about this I'm mostly afraid of code sort of remaining at this "makeshift"/provisional level and never actually settling into something more organized (due to lack of incentive/time to improve it). I think that can be kept under control, but do you have any suggestions on that front?


> Your approach of delaying the abstraction choice sounds very interesting (somewhat akin to Extreme Programming?), I will try to keep that in mind for the future. In thinking about this I'm mostly afraid of code sort of remaining at this "makeshift"/provisional level and never actually settling into something more organized (due to lack of incentive/time to improve it). I think that can be kept under control, but do you have any suggestions on that front?

I find that more often than not, code written in this manner is actually easier to maintain, because there are no layers and concepts to untangle - just the raw functionality and data transformations. You don't need to put yourself in the mindset of the person who wrote the code, who might not have had the right concepts in their head at the time. Instead, you just parse the code like a computer - this makes it easier to figure out what the code actually does rather than being distracted by the intent of the author.

The main purpose of abstraction is to intentionally hide implementation details so other people can use your code without needing a full understanding of them. However, when maintaining code, hiding details is the exact opposite of what you want - you want to fully understand what the code is actually doing. So internally, I forgo most abstractions. For public interfaces, I expose highly specific functions and datatypes rather than "generic" interfaces, and only the bare minimum; when uncertain, I favor duplicating a variant over generalizing. It's up to the consumer of those APIs (which is often myself!) to decide whether to build abstractions on top of them. In my experience, an abstraction is either obviously needed or, if you're unsure, unnecessary - so you can usually push the decision back to that point.
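Concretely (a made-up example, not from any particular codebase), a public surface in that style looks more like this than like a generic Serializer<T>:

    #include <cstdint>
    #include <string>
    #include <vector>

    // Deliberately specific public type and function; no generic "serializer" layer.
    struct PlayerSave {
        std::string name;
        std::int32_t level = 0;
    };

    // A sibling encode_settings() would be written separately (duplication over
    // premature generalization) until the callers obviously need something shared.
    std::vector<std::uint8_t> encode_player_save(const PlayerSave& save) {
        std::vector<std::uint8_t> out(save.name.begin(), save.name.end());
        out.push_back(0);  // name/level separator
        out.push_back(static_cast<std::uint8_t>(save.level & 0xFF));
        return out;
    }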



