The least complex software is the one never built. We often over-automate. Automation is brittle and inflexible, and easily ends up riddled with edge cases and other kinds of complexity.
Automation is great when it replaces a stable, well-working manual process.
The way to introduce automation is to first experiment with humans doing something manually until you have a great process. Then take the dumbest, most reliable part of that and turn it into a computer program that alerts a human if it encounters an edge case.
Try not to handle edge cases in the software; instead, remove them from the greater system in which they occur.
Automate by elimination, not by complexity.
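As a minimal sketch of that pattern in Python (the invoice-filing task, the directory names, and notify_human() are all made up for illustration): the script automates only the boring, reliable case and hands anything unexpected to a person instead of trying to handle it.

    # Sketch: automate the dumb, reliable part; escalate the rest.
    # The task, file layout, and notify_human() are hypothetical.
    from pathlib import Path

    INBOX = Path("inbox")   # where new invoices land
    FILED = Path("filed")   # where well-formed ones get filed

    def notify_human(path: Path, reason: str) -> None:
        # Stand-in for email/chat/ticketing; use whatever channel you have.
        print(f"NEEDS A HUMAN: {path} ({reason})")

    def file_invoice(path: Path) -> None:
        # Reliable case: names like "2024-03-acme.pdf" get filed by year.
        parts = path.stem.split("-")
        if len(parts) != 3 or not parts[0].isdigit():
            notify_human(path, "unexpected file name")
            return  # do not guess; a person decides
        year_dir = FILED / parts[0]
        year_dir.mkdir(parents=True, exist_ok=True)
        path.rename(year_dir / path.name)

    if __name__ == "__main__":
        for pdf in INBOX.glob("*.pdf"):
            file_invoice(pdf)

The escalation path is the point: the script never grows branches for the weird cases, it just makes them visible so they can be removed upstream.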
Good books in this vein would be anything with Taiichi Ohno in the title, or, perhaps better, as the author.
Conversely, automation creates a clear and well-defined workflow for completing a task. There is nothing to "automate" if there is not already a manual process for doing it. Sure, sometimes the automation you introduce can be an over-engineered solution that's more difficult to maintain (or that simply doesn't work) compared to a human-run manual process. To that point, I think a valid solution would be to make the process itself more automation-friendly, if possible given your circumstances.
Yes. This is what Taiichi Ohno would call "standard work": well-defined processes intended to be executed literally, and evolved with and by the people who perform the execution.
Standard work is a great source of things to automate, but automation is not a replacement for it.
I think there is an aspect worthy of analysis there.
He has clearly been reading up on The Toyota Way, and one aspect of it is that they seek to understand a process well before automating it.
I have found that a lot of complexity comes from mindlessly automating processes: you end up with system integrations and data conversions that add a lot of complexity and are often brittle too.
This is indeed very typical for many digital automation projects created by bureaucracies. As the head of a German airline said, "If you digitize a shitty process, then you’ll end up with a shitty digital process".
This is like someone asking how to build a simpler house and being told to go live in the woods.
If someone is trying to design a better plane, it doesn't make sense to tell them to just convince everyone asking for a better plane to stop traveling. It is nonsense and seems like you have unrelated ideas that you are trying to shoehorn into this question.
That's not how I see it. Imagine for a moment that we had an epidemic where every family just had to build a five-story mansion, complete with service buildings, to feel like they were keeping up with the Joneses.
That's what I see a lot of in software right now.
I'm merely suggesting that maybe a regular two-story family house is enough. Maybe even one story, depending on needs.
Note that only the first sentence of my comment said "don't automate". The other eight described a way to automate with less complexity when you have to.
I think this is the problem: you are imagining a question that isn't being asked, and answering it by telling someone to just not create the problem in the first place.
This person asked for books on managing complexity. You are saying 'just don't make software as complex, and do it by not making software at all'. That is two steps removed from being any sort of answer to this question.
I don't see it that way. When I have had trouble managing complexity, I have been happy for advice on how to reduce it going forward.
Step one in solving a problem is often to find a technique by which time is no longer stacked against you. Then you can attack the rest of the problem in peace.
> I have been happy for advice on how to reduce it going forward.
You didn't do that, your solution was 'don't write software'.
If someone asks for tips on organizing their kitchen and, without knowing anything more than that, you say 'make your kitchen smaller and don't use your kitchen', do you think that is reasonable advice?
Not writing as much software will result in reduced complexity going forward.
Sure, if someone built tens of industrial kitchens every month for regular families, and in the middle of nowhere at that (which is what I see a lot of in software), and then asked "how can I reduce my kitchen expenses?", I would suggest "how about, going forward, you make your kitchens smaller and don't build them at all where nobody uses them?"
No one asked if they "should write software they don't need".
You are answering a question that no one would ever ask, and you can't answer it because you don't have any information other than the actual question that was asked.
That doesn't mean anything. If someone asks how to cook and you say 'just don't eat lol', that's not an answer, it's just a self-indulgent response based on nothing.
Wow, people on the internet really will argue about anything.
@kqr, thank you for your insights. While it's not exactly relevant to my problem domain (self-driving cars - lots of essential complexity there!), it's still generally useful advice.
Software is expensive to write, but it's even more expensive to maintain. That's why my favorite projects are those that delete large amounts of now-useless code. Not writing it in the first place is an even better alternative.
My automation has worked for years without an issue. Flaky automation is meaningless.
> that alerts a human if it encounters an edge case.
No. If it's good, it should almost never alert anybody. When you have 300 automation routines, even if each one alerts only once a year, you will still get an alert almost every day. So each routine needs to alert far less often than that, only when the world falls apart or something...
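Back-of-the-envelope version of that point, using only the numbers above (300 routines, one alert per routine per year):

    routines = 300
    alerts_per_routine_per_year = 1

    print(routines * alerts_per_routine_per_year / 365)  # ~0.82 -> an alert most days

    # To keep the whole fleet under roughly one alert per month,
    # each routine may only alert about 12/300 = 0.04 times a year,
    # i.e. roughly once every 25 years.
    print(12 / routines)  # 0.04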
Yes, you can spend a lot of time and complexity on truly robust automation. But it's not always the move with the best ROI. Especially not in highly volatile businesses where your processes may need to change rapidly.
You didn't calculate the ROI right. If YOU are the one doing the interventions, then your time is constantly lost manually fixing failed scripts. Not to mention reputation loss, end-user dissatisfaction, etc.
Automation routines MUST be robust, must handle all the weird cases that happen frequently (at least once a year), and must always notify when they fail to do so. Then you should come back and see how to keep them from failing even then.
It's easy to spend upward of $10000 on really robust automation, when the same manual process would cost only $3000 over its usable lifetime, and the economical-but-less-robust automation costs $1000 over the same period.
The robust automation, in that case, has over 10× worse ROI. What's wrong with that calculation?
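To spell that out with the hypothetical figures above, comparing each option's lifetime cost against simply staying manual:

    # Hypothetical lifetime costs from the comment above.
    manual     = 3_000   # keep doing it by hand
    economical = 1_000   # simpler, less robust automation
    robust     = 10_000  # fully robust automation

    print(manual - economical)   #  2000: the cheap automation pays for itself
    print(manual - robust)       # -7000: the robust one costs more than never automating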
The thing about really robust automation is that for it to pay off, the process has to be static over a large number of executions. For many business needs, the process, or its inputs, changes every few executions, and you never get to reap the benefits of robust automation before it needs to be redone at great expense.
As for thinking that it's a dichotomy between "no automation" and "absolutely robust automation"... well, I think you're robbing yourself of a large chunk of the strategy space by refusing to see any middle ground but the two extremes.
Edit: also note that I'm not talking about "failed scripts" at any point. I'm talking about scripts that do exactly what they are supposed to, but they are performing a narrow, easily automated slice of the work. A human can chain such scripts together in the requisite sequence by spending very few minutes of their day.
> It's easy to spend upward of $10000 on really robust automation, when the same manual process would cost only $3000 over its usable lifetime.
A manual process is not comparable to automation, because the $3k human will make mistakes, as humans are not good robots. Also, your miserable $3k human can now go do normal work instead.
> The thing about really robust automation is that for it to pay off, the process has to be static over a large number of executions.
It doesn't have to be static, it just mustn't be random. Also, how often the process changes matters, and automation with scripts (which can be changed ad hoc) allows for quick flexibility when problems arise.
> As for thinking that it's a dichotomy between "no automation" and "absolutely robust automation"... well, I think you're robbing yourself of a large chunk of the strategy space by refusing to see any middle ground but the two extremes.
You are also robbing yourself of time to do other things which may lead to more progress, since you are fixing flaky automation all the time.
> A human can chain such scripts together in the requisite sequence by spending very few minutes of their day.
I LOLed. A minute for a single script? You must have missed that in an enterprise there are hundreds of scripts. Heck, I usually have 20-30 on a single project.
Humans do indeed deviate from the standard process. This causes mistakes, but it also prevents them.
It sounds like you think robust automation takes zero minutes to create, since you think of robust automation as always freeing up time. In my experience, robust automation is something that takes considerable time to create and maintain.
Maybe you know of some trick I don't. But since you keep writing about "failing scripts" and "flaky automation" despite my attempts to correct such misunderstandings, I'm starting to suspect you're interpreting my comments as what you want them to say for the sake of your argument, rather than what I'm trying to say.
I have never seen anyone want to keep a human where a machine could be put to use. No, humans do not prevent mistakes in highly unimaginative, repetitive work. Even when it happens, it's an outlier.
> It sounds like you think robust automation takes zero minutes to create, since you think of robust automation as always freeing up time. In my experience, robust automation is something that takes considerable time to create and maintain.
It does, and it gets MUCH better with experience. However, that time is finite, unlike the time spent on human corrections.
> Maybe you know of some trick I don't.
Probably - I know how to write robust and resilient automation scripts that over time converge to almost 0 failures.
You are making some really bold claims, and you really might be an expert, but I think you should open your mind to the slight possibility that others might also know what they are talking about. The OP specifically started with the premise that fast-changing businesses would spend all their time fixing their automation, and that it might not make sense in that situation. I can't comprehend how that doesn't make immediate sense.
Let's say I have a crawler that automates some data gathering. Its sources keep changing frequently; robust automation here is probably a research project, and simple automation is orders of magnitude more bang for the buck.
This page is a report from the PowerShell framework I developed, mostly in its first year of development (https://github.com/majkinetor/au), that checks ~250 web sites for updates to various software. Today it has 6 errors and usually never much more. At my own location I keep ~60 packages and I tackle errors maybe once a year. Stuff just works and you rarely have to look at it; otherwise I would be spending entire days on this, and I am not, while those packages have many millions of users.
Now I spend almost 0 time maintaining packages and I am one of the top choco package owners.
Check out the options used, some of which make it so robust:
> Its sources keep changing frequently; robust automation here is probably a research project, and simple automation is orders of magnitude more bang for the buck.
Even if the source changes frequently, it's better to automate. It's only not worth it when it keeps changing daily or more often than that. By automating you learn something new, so it pays off more in experience. Manually doing the same thing every day (even one that moves around) doesn't involve complex thinking and is just a waste of time.
> Even if the source changes frequently, it's better to automate. It's only not worth it when it keeps changing daily or more often than that.
Now I think we can get somewhere! Is this an admission that automation is not worth it when the processes or inputs change too often?
If so, then this frequency (which you have given as daily) depends entirely on the business needs in question.
Often, there's no business case to run an automated process daily.
Weekly or even monthly are very common intervals for processes in business. For a process that needs to run monthly, you only get twelve executions in a year. If the inputs change every six months, the automation gets at most six runs before it has to be reworked. Do you still think spending 60+ commits (as in your settings example) every six months is worth it, when there are cheaper ways to do it with limited human intervention?
> Often, there's no business case to run an automated process daily.
Almost 100% of my cases run daily, hourly, or even more often (5-, 10-, 20-, and 30-minute schedules are common). I even had one recently that executed millions of requests to some REST API daily, running every few seconds. I call those "app-supporting scripts", and I offload specific features of the main app to them.
Must be an architectural thing, I guess. I work as a principal architect and I design most of my services so that they rely heavily on automation support.
> Is this an admission that automation is not worth it when the processes or inputs change too often?
I don't work in a vacuum. For me there are no rules about anything; context is most important (patterns, best practices, etc. are for newbies). That case does lean toward the manual side at first thought, but it all depends on other factors.