The recent post about undo/redo in the Map Editor attracted several comments along the lines of “there’s a way to make it more efficient, why don’t you do it that way?”. I’ll try to reply in detail below.
What is efficiency, and how do we judge whether something is efficient or not? There are two main approaches: empirical and measured. Empirical efficiency is what we believe should be efficient; it often depends on our somewhat limited knowledge. Measured efficiency is what we verify with a benchmark, comparing A and B in real-life conditions. A simple example is storing an RGB image in memory for processing: common sense says it would be better to store each RGB triplet in 24 bits, so that the image takes exactly the space it needs. Measurements say otherwise: each RGB triplet should take 32 bits, because that is how memory access is aligned in the CPU (they are called 32/64-bit for a reason), so reading chunks from memory is faster. However, there is another, third, approach – practical. It does not matter how long it takes to process an RGB image if the difference is below 100 milliseconds.
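To make the two layouts concrete, here is a small Python sketch (illustrative only – the names and buffers are mine, and Python’s interpreter overhead hides the actual alignment effect, which you would only see in a compiled language):

```python
# Two ways to lay out an RGB image in a flat byte buffer.
# Packed: 3 bytes per pixel, no padding -- minimal memory.
# Padded: 4 bytes per pixel, one unused byte -- each pixel
# starts on a 4-byte boundary, which word-oriented CPUs prefer.

WIDTH, HEIGHT = 256, 256

def packed_offset(x, y, width=WIDTH):
    # 3 bytes per pixel: the index needs a multiply by 3
    return (y * width + x) * 3

def padded_offset(x, y, width=WIDTH):
    # 4 bytes per pixel: a cheap shift, and aligned access
    return (y * width + x) << 2

packed = bytearray(WIDTH * HEIGHT * 3)  # 192 KiB exactly
padded = bytearray(WIDTH * HEIGHT * 4)  # 256 KiB, 64 KiB "wasted"

def get_rgb(buf, offset):
    # Reading a triplet works the same way in both layouts
    return buf[offset], buf[offset + 1], buf[offset + 2]
```

The padded layout trades 33% more memory for aligned access; whether that trade pays off is exactly what a benchmark, not intuition, should decide.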
When we are dealing with real-life tasks we have certain use-case scenarios. For example, say we design icon-processing software that deals with images up to 256 x 256 pixels. It does not matter how we store such an image in memory for processing, simply because the difference is only about 64 KiB (65,536 bytes) in size, or some 100 microseconds (a tenth of a millisecond) in time. Should we even spend time analyzing which approach is faster and by how much? Nope. Because any approach we choose is NOT inefficient.
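The size figure is easy to verify – a quick sketch of the arithmetic (the timing figure, by contrast, is an order-of-magnitude estimate, not something you can derive on paper):

```python
# Memory cost of 24-bit vs 32-bit pixels for a 256 x 256 icon.
width, height = 256, 256
pixels = width * height                 # 65,536 pixels

packed_size = pixels * 3                # 24-bit: 196,608 bytes
padded_size = pixels * 4                # 32-bit: 262,144 bytes
difference = padded_size - packed_size  # 65,536 bytes

print(difference // 1024, "KiB")        # prints "64 KiB"
```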
I don’t like catchy phrases, but this one is mandatory: “Premature optimization is the root of all evil (or at least most of it) in programming.” Before optimizing something we need to make sure it is in fact a bottleneck. A small example: is it better to speed up a 50 ms task by 200% or a 1000 ms task by 20%, if they are run in succession? Despite the bigger percentage, the first optimization shaves only 33 ms, while the second shaves a whole 200 ms at once – several times more than the entire first task!
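The arithmetic, spelled out (my interpretation: a 200% speedup means the task runs 3x as fast, while “by 20%” means cutting 20% off the runtime, which is how the original numbers work out):

```python
# First task: a 200% speedup makes it run 3x as fast,
# so 50 ms drops to 50/3 ms.
first_saved = 50 - 50 / 3     # about 33.3 ms saved

# Second task: shaving 20% off a 1000 ms runtime.
second_saved = 1000 * 0.20    # 200 ms saved

# The "smaller" optimization of the slower task wins by far.
assert second_saved > 5 * first_saved
```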
Some might argue about the future: what if icons become bigger and bigger (think uber-retina) and we will need to process 4096 x 4096 pixel images in our icon-processing software? Well, if we spend time worrying about each and every possible prospect, we will waste a whole lot of time polishing the product and adding features ahead of time, instead of releasing the product now, when it is up to date and needed by users. If we put the market aside, we are left with what is called over-engineering: an unnecessarily complicated solution to a simple problem, without a good reason.
The disadvantage of seemingly efficient solutions lies not only in the sheer amount of work required to build them, but also in the fact that the more complicated a system gets, the more time it takes to add a piece of functionality to it. In other words, a simple solution can be made in one day and hit the market the next. It may be slow, but it will work, it will be better than nothing, and it can be improved iteratively based on market demand. A complicated “efficient” solution will take months to engineer and weeks to debug; by the time it hits the market it will already be outnumbered by simple solutions that have adapted to market needs.
In the end I would like to return to the initial case – the undo/redo buffer in the Map Editor of the KaM Remake. It is not important how crudely it is made as long as it works, and it is not important how empirically inefficient it is as long as it is not causing noticeable lags. Having said all that, I hope I have convinced our readers that this time Simple is in fact the better engineered 😉