When building a project spec from scratch, one of the items that often gets left off the list is deployment. The spec misses it, estimates miss it, and many project teams don't even discuss it until it's time to do it.
A well-thought-out deployment process must be one of the primary checklist items in project planning.
The question, though, is what kind of deployment process? Options range in scale, repeatability, complexity, and reliability.
The usefulness of a scripted deployment pipeline is well-established. Sure, Joe Coder could connect to a server and run all the steps manually from a checklist, but that kind of operation is error-prone because of the human factor. It is not automated, it does not repeat well, and deployments typically take longer to complete, assuming they are even completed correctly.
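To make that concrete, here is a minimal sketch of what "the checklist as a script" can look like. Everything in it is a placeholder invented for illustration (the host, paths, service name, and health-check URL); the point is only that each manual step becomes one repeatable command that runs in a fixed order and stops at the first failure.

```python
#!/usr/bin/env python3
"""Minimal scripted deployment: the manual checklist, captured as code.

Sketch only -- the host, paths, service name, and health-check URL
below are placeholders, not values from any real project.
"""
import subprocess
import sys

HOST = "deploy@app-server.example.com"  # placeholder deployment target
APP_DIR = "/opt/myapp"                  # placeholder install path
SERVICE = "myapp.service"               # placeholder systemd unit


def run(*cmd: str) -> None:
    """Echo a command, run it, and abort the deployment on any failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def main() -> None:
    run("./build.sh")                                           # 1. build the release artifact
    run("rsync", "-az", "dist/", f"{HOST}:{APP_DIR}/")          # 2. copy it to the server
    run("ssh", HOST, f"sudo systemctl restart {SERVICE}")       # 3. restart the service
    run("ssh", HOST, "curl -fsS http://localhost:8080/health")  # 4. smoke test


if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as err:
        sys.exit(f"Deployment failed: {err}")
```

Even a script this small beats the checklist: it never skips a step, never runs steps out of order, and anyone on the team can run it.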
So we want a pipeline, but how complex does it need to be? And does the pipeline need to reach full maturity right away?
Those are good questions, and the answers depend on the project goals.
Complexity
Often, when starting a custom software development project from scratch, we want quick iterations and rapid deployments. The goal at that stage of the project is to have a tight feedback loop between the development team and the project stakeholders.
In contrast, I've worked with teams that treat full continuous deployment (CD) as a baseline piece of the process: stand up Kubernetes right away, prepare for that scale immediately, and set up the pipeline very early in the project.
Is that the best place to spend developer effort early in a project? Or is it better to have a simple, established deployment pipeline that just requires a developer to start a script and then scale up to CD later on?
During the part of the project lifecycle that benefits from quick iterations and quick deployments, waiting on CD pipelines can stifle a fast response to issues and bugs. Fully containerized deployments, in my experience, can add real delay just waiting for container builds to complete.
Scale Limitations
As the solution grows and takes shape, we have to consider scale limitations. Having a developer fire off an Ansible (or similar) script to deploy will get the solution very far down the road to completion, but we become limited in our deployment targets.
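For illustration, that developer-initiated deploy might be nothing more than a thin wrapper like the sketch below. The playbook name, inventory layout, and environment names are assumptions, not a prescription, but they show the ceiling: the script only knows how to deploy to the handful of targets listed in its inventories.

```python
#!/usr/bin/env python3
"""A one-command deploy a developer can fire off by hand.

Sketch only -- the playbook name, inventory layout, and environment
names are assumptions for illustration.
"""
import subprocess
import sys


def deploy(environment: str) -> int:
    """Run the (hypothetical) deploy playbook against one known inventory."""
    cmd = [
        "ansible-playbook",
        "-i", f"inventories/{environment}.ini",  # assumed inventory layout
        "deploy.yml",                             # assumed playbook name
    ]
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    # The script only knows about the inventories checked into the repo --
    # that short list of files is the limit of its deployment targets.
    environment = sys.argv[1] if len(sys.argv) > 1 else "staging"
    sys.exit(deploy(environment))
```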
If the solution scales up to the point that it needs a fully containerized deployment on Kubernetes or something similar, that's one of the signs that the platform is "growing up" and will require development resources dedicated to that process.
Early in platform development, the developers have very little context for what may eventually be required:
- What will the constraints be?
- What kinds of tasks need to run periodically?
- What will be the actual scale that needs to be supported?

These are all questions that point toward optimizations that may eventually be needed.
We do not want to prematurely optimize when those circumstances are unknown. Allow the usage scale to determine when it is best to make the switch to a more distributed architecture.