I remember when I was first learning the art and science of software engineering, a really big part of almost every technique was “reusability”. I suppose even then schedule overruns were fairly common (and given that Brooks’s The Mythical Man-Month was published in 1975, I suspect I’m right), and it was assumed that the biggest factor behind these overruns was “there’s so much code to write”. So if you’re an unimaginative problem-solving manager, and someone says to you, “we are behind schedule because of all this code we are writing”, then the obvious answer (immediately after “just type faster and spend longer typing”, which takes you down a completely different rabbit-hole of problems) becomes “can you reuse any code you’ve already written?”
When I learned object-oriented principles, the whole idea was to create objects that could then be reused over and over again. That’s the whole point of, say, a “fruit” class that can then be subclassed into a “banana” class or an “apple” class. You are reusing all that generic code shared by every fruit, or something like that. But one of the first lessons I learned was that this never really works out in real life.
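The textbook version of that promise looks something like this. This is a hypothetical sketch (the Fruit/Banana/Apple names are illustrative, not from any real library): generic behavior lives in the base class, and each subclass inherits it for free.

```python
# The classic OO reuse promise: shared behavior written once in a
# generic base class, inherited by every concrete fruit.

class Fruit:
    def __init__(self, name: str, weight_grams: float):
        self.name = name
        self.weight_grams = weight_grams

    def describe(self) -> str:
        # Generic code every subclass reuses without rewriting it.
        return f"{self.name} ({self.weight_grams} g)"


class Banana(Fruit):
    def __init__(self, weight_grams: float):
        super().__init__("banana", weight_grams)

    def peel(self) -> str:
        # Subclass-specific behavior layered on top of the shared base.
        return "peeled"


class Apple(Fruit):
    def __init__(self, weight_grams: float):
        super().__init__("apple", weight_grams)
```

On paper, `Banana` and `Apple` each get `describe` for free, and the library author imagines shipping `Fruit` to every team in the company.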
So the first thing I learned was that this magical idea that you could author some sort of library of fruit/banana/apple classes that could be reused across the enterprise organization – nay, the world! – didn’t work. First of all, the definition of a fruit is just too different between systems. “Fruit” for a cooking system is a completely different concept than “fruit” for a warehouse system, and more different still for farming, transportation, or genetic experimentation. “Fruit” classes are not portable across domains. I’ve even seen people try to do mere data modeling across domains, and it doesn’t work either. You end up with either something so generic that every single definition and implementation has to be overridden anyway, or this ginormous over-complex idea of a fruit that aggregates every single property anyone has ever thought of.
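To make the domain mismatch concrete, here is a hedged sketch (field names are invented for illustration) of what “fruit” actually means to two different systems. There is almost nothing for a shared base class to hold:

```python
# Two domain-specific "fruit" models. The fields barely overlap, which is
# why a single enterprise-wide Fruit class ends up either too generic to
# be useful or bloated with every domain's properties at once.
from dataclasses import dataclass


@dataclass
class KitchenFruit:
    name: str
    flavor_profile: str      # what a cook cares about
    prep_time_minutes: int


@dataclass
class WarehouseFruit:
    sku: str                 # what a warehouse cares about
    pallet_count: int
    storage_temp_c: float
```

The only shared concept is “this is a fruit”, which carries no reusable implementation at all.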
What did work, however, was at least being able to save some time on a specific project. Sure, a fruit library was not shareable between a cook and a farmer, but at least it was shareable between cooks. Being pragmatic and scoping your OO design to just your system usually resulted in success. Now, and this will cook your noodle: is this because you are reusing code, with generic implementations? Or is it because you’ve now created a shared architecture and definition, and a way for developers to cleanly communicate with one another? A thought for another day (although I’ve previously addressed it here: https://nuggets-knowledge.com/2018/10/04/what-does-architect-really-mean-in-the-it-world/).
What’s interesting is that I’ve seen this same issue replay itself in modern software engineering, namely in the idea of creating reusable architecture modules. When coupled with the inversion-of-control model, there’s this idea that generic modules can be authored and dropped into place for any application, thus saving a ton of time in development. Now in some cases, this idea has worked beautifully, namely UI design. This is exactly what a Widget or a UI Control is…it’s a generic module that can be reused over and over again across different webapps and saves a huge amount of development time. But in my career, I’ve now been at three different organizations where someone attempted to implement this same architectural Grand Vision across the problem space. And each time I saw the same thing: any module published for reuse was only suited to a single use case, so it was never reused by anyone else. Instead, there was now a giant library of modules, each used by only one application.
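The inversion-of-control idea itself can be sketched in a few lines. This is a minimal, hypothetical example (the `Notifier` protocol and class names are made up for illustration): the application depends only on an abstraction, and a “reusable” module is injected at wiring time.

```python
# Inversion of control via dependency injection: the service declares
# the abstraction it needs, and a concrete module is dropped into place
# from outside. A module like this stays reusable only as long as it
# makes no application-specific assumptions.
from typing import Protocol


class Notifier(Protocol):
    def send(self, message: str) -> None: ...


class RecordingNotifier:
    """A generic drop-in module: it just records messages."""
    def __init__(self) -> None:
        self.sent: list[str] = []

    def send(self, message: str) -> None:
        self.sent.append(message)


class OrderService:
    # Depends on the Notifier abstraction, not on any concrete module,
    # so any implementation can be injected.
    def __init__(self, notifier: Notifier) -> None:
        self.notifier = notifier

    def place_order(self, item: str) -> None:
        self.notifier.send(f"order placed: {item}")


notifier = RecordingNotifier()
service = OrderService(notifier)
service.place_order("bananas")
```

The trouble the organizations above ran into wasn’t this wiring pattern; it was that each published module quietly baked in one application’s assumptions, so no second application could ever inject it.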
I’m not saying that trying to save time with reusability is wrong. What I’m saying is that if you choose to try to “save time” by reusing implementations, you need to be thoughtful about the scoping of the reusability, and pragmatic about how the reusable modules can actually be reused.