One of the biggest benefits of theory is spotting inevitable pain points before wasting resources on longer scenic routes that just bring you back to the same crunch later.
Another is identifying when it's unwise to bet the farm on just one of several parallel strategies, because those strategies do not fully encompass each other after all.
Let's apply these concepts to that old hobby-horse of mine - which just so happens to be one of consumerland IT's most common crises - management of active malware.
We know that malware can embed itself in the core code set, hold control so that other tasks can't start, detect system changes, and take punitive action. That's enough to predict that formal "look-don't-touch" detection scanning will be safe, but that informal detection scanning and formal clean-up may not be, and informal clean-up is even less likely to be. By "formal", I mean "without running any code from the infected system".
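As a minimal sketch of what "formal" scanning can look like in practice, assume the suspect volume has been mounted read-only (ideally with access-time updates disabled) from a clean boot environment. The scanner below hashes every file against a known-bad set without executing anything from the scanned system; the digest set is a hypothetical stand-in for a real signature database.

```python
import hashlib
from pathlib import Path

# Hypothetical known-bad SHA-256 digests; a real scanner would load a
# proper signature database. This value is illustrative only.
KNOWN_BAD = {
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def scan_readonly(root: Path) -> list[Path]:
    """Formally scan a mounted filesystem: hash every file against the
    known-bad set without running any code from the scanned system."""
    hits = []
    for path in root.rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD:
                hits.append(path)
    return hits
```

Nothing here runs code from the scanned volume; the remaining footprint risk is the filesystem recording access times, which is why the read-only, no-atime mount matters.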
From that I conclude the only lasting SOP to detect malware safely is to do so formally, without leaving any detectable footprints in the system being scanned. I also conclude that one can't predict an always-safe SOP to clean active malware, so it's best to unlink the detection and cleanup phases of the operation so that off-system research on what has been detected can guide the cleanup process around any caveats that may apply.
Maintain or wipe?
This is one of several common bunfights that assume one of the two alternatives fully encompasses the other. With good enough maintenance, you'd never need to suffer the collateral damage of "just wipe, rebuild and restore". With good enough "backups", you'd never need to bother with malware identification or cleaning, nor suffer the risk of thinking everything's been cleaned when it has not.
One can point out that circumstances may force one approach or the other, and thus no matter how well you develop one strategy, you cannot afford to abandon the other. Or that adopting a "wipe, rebuild and restore" strategy does not obviate the need to identify malware, in case it is in the "data" you restored or in case it's using an entry point that will be as open on the rebuilt system as it was on the originally-infected system.
Two further points arise from the above when it comes to the thorny issue of backup theory. Firstly, we see that the pain point of distinguishing data from code is a nemesis that can't be avoided. Secondly, we see the classic backup conundrum of how to scope out the unwanted changes you want to avoid restoring, when it comes to the "rebuild" part of "just wipe, rebuild and restore 'data'".
When code was expected to be a durable item, it was meaningful to speak of rebuilding the code base from a cast-in-stone boilerplate that dates from the software's initial release and is definitely free of malware. Once you entertain the notion of "code of the day" patching, you cannot be sure your code base is new enough to be non-exploitable, and yet old enough not to contain malware that's been designed to stay hidden until payload time.
"Ship now, patch later" is another nemesis that won't go away - theory predicts that no matter how you design the system, you will always need bullet-proof code, just as no matter how you manage the system, you will always need to be able to safely identify malware. For example, how do you know your updates are not malware spoofing themselves as such? Whatever code applies that verification, has to be bullet-proof.
PS: Yes, I know how to spell "nemesis", singular.
You didn't seriously think you'd have only one, did you?