So the other day I’m sitting around reading about geeky technical stuff (as I am wont to do) and come across an interesting synthesis of thoughts that can be combined in a fascinatingly gestalt-ish way.
I’m reading about the history of microservices and service-oriented architectures, and how the original thinking for them evolved from a desire to create emergent systems. The line of thinking was that the way to allow technology platforms and systems to truly innovate is to let them evolve naturally. In order to do this you have to figure out how to break the system down into its atomic, native parts, called primitives, and then allow those primitives to interact with each other in a loosely coupled manner so the system can evolve on its own.
In my own head I had already figured some of this out, but I was looking through the narrow lens of software development. I knew that breaking your service layer into disparate API-based services, and versioning and deploying them individually, was a good thing. Here’s why: let’s say you’ve got three APIs. If you let each one have its own CI/CD pipeline and DevOps deployment cycle, you’ll achieve much faster engineering velocity. If one API gains new functionality, why would you wait for the other two to be finished for a lock-step release? That one API’s new functionality would just sit there unused, waiting for the release. Let that API test and release, and get that functionality into the wild! Allow flexibility. Fail fast is the mantra here.
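To make the idea concrete, here’s a minimal sketch of independent versioning (the service names and version numbers are hypothetical, purely for illustration): one service gains functionality and ships, and the others keep their versions untouched.

```python
# A toy model of three independently versioned API services.
# Each service owns its own release cycle, so shipping one
# never forces a lock-step release of the others.

class Service:
    def __init__(self, name, version):
        self.name = name
        self.version = version
        self.endpoints = {}

    def add_endpoint(self, path, handler):
        # New functionality lands on this service only.
        self.endpoints[path] = handler

    def release(self, new_version):
        # Releasing bumps only this service's version.
        self.version = new_version
        return f"{self.name} v{self.version} deployed"

inventory = Service("inventory", "1.0.0")
billing = Service("billing", "1.0.0")
shipping = Service("shipping", "1.0.0")

# inventory gains an endpoint and ships immediately; billing and
# shipping are untouched and keep running at 1.0.0.
inventory.add_endpoint("/reserve", lambda sku: f"reserved {sku}")
print(inventory.release("1.1.0"))
print(billing.version, shipping.version)
```

The point of the toy is the asymmetry: `inventory` moves to 1.1.0 the moment it’s ready, while the other two stay put until they have a reason of their own to release.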
So I’m reading about this history of microservices and primitives, and it flipped on a little light bulb in my head: it’s about more than just software engineering lifecycles and velocity. It’s about allowing the system to evolve more fluidly and more rapidly. As the legendary pilot John Boyd said, “Speed of iteration beats quality of iteration.” It’s about allowing your technology capabilities to evolve more easily and more rapidly to fit ever-changing and unclear business needs.
Shortly after making this connection, I made another one in my head as well. A few years ago (maybe more than a few) I read the seminal Google white paper on MapReduce. The content of that paper so wholly impacted my way of thinking about computer science and software development that it forever changed the way I write code. In the enterprise business world, it’s not just about speed to MVP (minimum viable product). It’s about scalability. I can build the greatest algorithm in the world, but if I can’t scale it, it’s basically useless from a business perspective.
I learned this lesson harshly when trying to do big-data analysis on raw performance data from groups of Cisco and Brocade switches in a datacenter. I wrote algorithms that were really good at crunching the data, but running them at any sort of scale (hundreds of switches, hundreds of days, millions of data points) was impossible. I tried to throw more and more computing hardware at the problem, but I still hit constraints and bottlenecks. It wasn’t until I understood the concepts of atomic work units, immutable source data, and the ability to trivially scale outwards that I was finally able to crack the code of getting meaningful information out of all that raw data.
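Here’s a minimal map-reduce sketch in plain Python to show what those three concepts buy you. The record fields are hypothetical stand-ins for raw switch counters, not the actual data I was working with: the source tuples are immutable, each map call is an atomic unit of work, and because no map call depends on another, the map phase can be fanned out across processes or machines trivially.

```python
# Minimal map-reduce over immutable switch records.
# Each map call is independent (an atomic work unit), so in a real
# cluster the map phase runs in parallel on shards of the data.
from collections import defaultdict

# Immutable source records: (switch_id, day, error_count) tuples.
records = (
    ("sw-01", "2020-01-01", 3),
    ("sw-01", "2020-01-02", 7),
    ("sw-02", "2020-01-01", 0),
    ("sw-02", "2020-01-02", 5),
)

def map_record(record):
    # Atomic unit of work: emit one (key, value) pair per record.
    switch_id, _day, errors = record
    return (switch_id, errors)

def reduce_pairs(pairs):
    # Combine values per key; arrival order doesn't matter,
    # which is what makes the scale-out safe.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

totals = reduce_pairs(map(map_record, records))
print(totals)  # {'sw-01': 10, 'sw-02': 5}
```

Because the reduce step only cares about (key, value) pairs and not where they came from, adding more switches or more days just means more shards for the map phase, not a rewrite of the algorithm.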
So what’s the synthesis here? It’s applying these principles to engineering management. I frequently get frustrated when I step back and look at the monolith of engineering projects in front of me that aren’t making much visible progress, that seem slow to react to market shifts or customer needs, and that are constantly breaking each other with integration interdependencies. The trick here is to apply the same systems-primitive thinking to your projects.
Here’s an example: a project to release a new application server stack. The team working that project is doing the entire thing soup-to-nuts. They are basically learning how to install Windows Server, secure it, and prepare it for the application install. But this task is really a primitive! It’s a basic building block of what we do all across the enterprise routinely. Now granted, we have shared systems teams that collaborate, so there is already some synergy in place for automation and scripting reuse and whatnot. But the task of installing a Windows Server and securing it should not be in the work breakdown structure of the app release project! There should be another microservice-ish project to create secured Windows servers, and then, via loosely coupled business processes, hand that server image over to the app team. It’s a huge time-saver, scalability-enabler, and evolution-driver because a dedicated team can work on incrementally improving the Windows Server build process completely independently of the application development process. If a new Windows security vulnerability appears, it can quickly be fixed and inserted into the pipeline without impacting the application’s deployment rollout schedule.
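The loosely coupled handoff can be sketched as two pipelines sharing only an artifact registry. The names and version strings here are hypothetical, just to illustrate the shape: the platform team publishes hardened base images on its own cadence, and the app team’s deploys pick up whatever image is current, with no release coordination between them.

```python
# A toy model of the decoupled image handoff: the only contract
# between the two teams is the registry, not each other's schedules.

registry = {}  # image name -> ordered list of published versions

def publish_base_image(version):
    # Platform team's pipeline: ships security fixes independently.
    registry.setdefault("secured-windows-server", []).append(version)

def deploy_app(app_version):
    # App team's pipeline: consumes the latest published base image.
    base = registry["secured-windows-server"][-1]
    return f"app {app_version} on secured-windows-server {base}"

publish_base_image("2024.01")
publish_base_image("2024.02")   # a security patch ships on its own
print(deploy_app("3.4.0"))      # the app deploy picks up 2024.02
```

Notice that `publish_base_image` and `deploy_app` never call each other: the security patch landed without touching the app project’s schedule, which is exactly the decoupling argued for above.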
The lesson learned here is to take the time to identify those business primitives, and get your teams focused on those tasks and no others. It’s likely you already have a Windows systems team building an image, but take a look at your other project teams and see how many of them are actually consuming that server image instead of just building it themselves. By doing this, you can take the guiding principles of loosely coupled service architectures and apply them to your business processes as well.
Postscript: So there’s an interesting counterpoint to this. An older mantra that I have leaned on before is, “Find the dependencies and eliminate them.” The thinking is that you can increase your own velocity by eliminating reliance on others. I’m starting to wonder if that idea is wrong in some cases. It hinges on the assumption that your external dependency is slow and a potential impediment. But if it’s part of the same organization, then maybe you should focus on how your teams can help each other rather than building your own silo. Food for thought.