Chris Riley

Chris Riley is a bad coder turned DevOps analyst. His goal is to break the enterprise barrier to modern development. He can be found on Twitter @HoardingInfo.

Posts by Chris Riley

Docker Logs vs. VM Logs: The Essential Differences

AWS Security vs. Azure Security

Local Optimization - The Divide Between DevOps and IT

Do Containers Become the DevOps Pipeline?

The DevOps pipeline is a wonderful thing, isn’t it? It streamlines the develop-and-release process to the point where you can (at least in theory, and often enough in practice) pour fresh code in one end, and get a bright, shiny new release out of the other. What more could you want? That’s a trick question, of course. In the contemporary world of software development, nothing is the be-all, end-all, definitive way of getting anything done. Any process, any methodology, no matter how much it offers, is really an interim step; a stopover on the way to something more useful or more relevant to the needs of the moment (or of the moment-after-next). And that includes the pipeline.

Consider for a moment what the software release pipeline is, and what it isn’t. It is an automated, script-directed implementation of the post-coding phases of the code-test-release cycle. It uses a set of scriptable tools to do the work, so that the process does not require hands-on human intervention or supervision. It is not any of the specific tools, scripting systems, methodologies, architectures, or management philosophies which may be involved in specific versions of the pipeline.

Elements of the present-day release pipeline have been around for some time. Fully automated integrate-and-release systems were not uncommon (even at smaller software companies) by the mid-90s. The details may have been different (DOS batch files, output to floppy disks), and many of the current tools such as those for automated testing and virtualization were not yet available, but the release scripts and the tasks that they managed were at times complex and sophisticated. Within the limited scope that was then available, such scripts functioned as segments of a still-incomplete pipeline.

Today’s pipeline has expanded to engulf the build and test phases, and it incorporates functions which at times have a profound effect on the nature of the release process itself, such as virtualization. Virtualization and containerization fundamentally alter the relationship between software (and along with it, the release process) and the environment. By wrapping the application in a mini-environment that is completely tailored to its requirements, virtualization eliminates the need to tailor the software to the environment. It also removes the need to provide supporting files to act as buffers or bridges between it and the environment. As long as the virtualized system itself is sufficiently portable, multi-platform release shrinks from being a major (and complex) issue to a near-triviality.

Containerization carries its own (and equally clear) logic. If virtualization makes the pipeline work more smoothly, then stripping the virtual environment down to only those essentials required by the software will streamline the process even more, by providing a lightweight, fully-tailored container for the application. It is virtualization without the extra baggage and added weight.

Once you start stripping nonessentials out of the virtual environment, the question naturally arises — what does a container really need to be? Is it better to look at it as a stripped-down virtual box, or something more like a form-fitting functional skin? The more that a container resembles a skin, the more that it can be regarded as a standardized layer insulating the application from (and integrating it into) the environment. At some point, it will simply become the software’s outer skin, rather than something wrapped around it or added to it.
Containerization in the Pipeline

What does this mean for the pipeline? For one thing, containerization itself tends to push development toward the microservices model; with a fully containerized pipeline, individual components and services become modularized to the point where they can be viewed as discrete components for purposes such as debugging or analyzing a program’s architecture. The model shifts all the way over from the “software as tangle of interlocking code” end of the spectrum to the “discrete modules with easily identifiable points of contact” end. Integration-and-test becomes largely a matter of testing the relationships and interactions of containerized modules.

In fact, if all of the code moving through the pipeline is containerized, then management of the pipeline naturally becomes management of the containers. If the container mediates the application’s interaction with its environment, there is very little point in having the scripts that control the pipeline directly address those interactions on the level of the application’s code. Functional testing of the code can focus on things such as expected outputs, with minimal concern about environmental factors. If what’s in the container does what it’s supposed to do when it’s supposed to do it, that’s all you need to know.

At some point, two things happen to the pipeline. First, the scripts that constitute the actual control system for the pipeline can be replaced by a single script that controls the movement and interactions of the containers. Since containerization effectively eliminates multiplatform problems and special cases, this change alone may simplify the pipeline by several orders of magnitude. Second, the tools used in the pipeline (for integration or functional testing, for example) will become more focused on the containers, and on the applications as they operate from within the containers. When this happens, the containers become, if not the pipeline itself, then at least the driving factor that determines the nature of the pipeline.

A pipeline of this sort will not need to take care of many of the issues handled by more traditional pipelines, since the containers themselves will handle them. To the degree that containers do take over functions previously handled by pipeline scripts (such as adaptation for specific platforms), they will then become the pipeline, while the pipeline becomes a means of orchestrating the containers.

None of this should really be surprising. The pipeline was originally developed to automatically manage release in a world without containerization, where the often tricky relationship between an application and the multiple platforms on which it had to run was a major concern. Containerization, in turn, was developed in response to the opportunities that were made possible by the pipeline. It’s only natural that containers should then remake the pipeline in their own image, so that the distinction between the two begins to fade.
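To make the idea of "a single script that controls the movement and interactions of the containers" concrete, here is a minimal sketch in Python. It assumes the Docker CLI is installed and on the PATH; the image tag, registry, and in-container test entry point (pytest) are hypothetical placeholders rather than a prescription for any particular stack.

```python
"""A minimal sketch of a container-driven pipeline script.

Assumes the Docker CLI is available; the image name and test
entry point are hypothetical placeholders.
"""
import subprocess
import sys

IMAGE = "registry.example.local/myapp:latest"  # hypothetical image tag


def run(cmd):
    """Run a shell command, echoing it, and fail the pipeline on error."""
    print("+ " + " ".join(cmd))
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)


# 1. Build the container image. The image definition describes the
#    stripped-down environment, so this script never deals with
#    platform-specific details.
run(["docker", "build", "-t", IMAGE, "."])

# 2. Functional testing focuses on the container: run the packaged test
#    entry point and check only that the container does what it is
#    supposed to do.
run(["docker", "run", "--rm", IMAGE, "pytest", "-q"])

# 3. If what is in the container behaves as expected, release it.
run(["docker", "push", IMAGE])

print("Pipeline complete: image built, tested, and pushed.")
```

In a pipeline shaped like this, supporting a new platform or environment is a change to the image definition, not to the pipeline script, which is the sense in which the containers start to become the pipeline.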

Change Management in a Change-Dominated World

DevOps isn’t just about change — it’s about continuous, automated change. It’s about ongoing stakeholder input and shifting requirements; about rapid response and fluid priorities. In such a change-dominated world, how can the concept of change management mean anything? But maybe that’s the wrong question. Maybe a better question would be this: Can a change-dominated world even exist without some kind of built-in change management?

Change management is always an attempt to impose orderly processes on disorder. That, at least, doesn’t change. What does change is the nature and the scope of the disorder, and the nature and the scope of the processes that must be imposed on it. This is what makes the DevOps world look so different, and appear to be so alien to any kind of recognizable change management.

Traditional change management, after all, seems inseparable from waterfall and other traditional development methodologies. You determine which changes will be part of a project, you schedule them, and there they are on a Gantt chart, each one following its predecessor in proper order. Your job is as much to keep out ad-hoc chaos as it is to manage the changes in the project.

And in many ways, Agile change management is a more fluid and responsive version of traditional change management, scaled down from project-level to iteration-level, with a shifting stack of priorities replacing the Gantt chart. Change management’s role is to determine if and when there is a reason why a task should move higher or lower in the priority stack, but not to freeze priorities (as would have happened in the initial stages of a waterfall project). Agile change management is priority management as much as it is change management — but it still serves as a barrier against the disorder of ad-hoc decision-making.

In Agile, the actual processes involved in managing changes and priorities are still in human hands and are based on human decisions. DevOps moves many of those management processes out of human hands and places them under automated control. Is it still possible to manage changes or even maintain control over priorities in an environment where much of the on-the-ground decision-making is automated?

Consider what automation actually is in DevOps — it’s the transfer of human management policies, decision-making, and functional processes to an automatically operating computer-based system. You move the responsibilities that can be implemented in an algorithm over to the automated system, leaving the DevOps team free to deal with the items that need actual, hands-on human attention.

This immediately suggests what naturally tends to happen with change management in DevOps. It splits into two forks, each of which is important to the overall DevOps effort. One fork consists of change management as implemented in the automated continuous release system, while the other fork consists of human-directed change management of the somewhat more traditional kind. Each of these requires first-rate change management expertise on an ongoing basis.

It isn’t hard to see why an automated continuous release system that incorporates change management features would require the involvement of human change management experts during its initial design and implementation phases. Since the release system is supposed to incorporate human expertise, it naturally needs expert input at some point during its design.
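As a rough picture of what ends up inside the automated fork, the sketch below shows how one slice of change-management policy might be encoded as an algorithm. It is a minimal illustration in Python; the fields, scores, and thresholds are hypothetical and do not reflect any particular change-management tool.

```python
"""A minimal sketch of the automated fork of DevOps change management:
a policy gate that approves routine changes automatically and routes
everything else to a human change manager. Fields and thresholds are
hypothetical illustrations.
"""
from dataclasses import dataclass


@dataclass
class ChangeRequest:
    identifier: str
    risk_score: float             # 0.0 (trivial) to 1.0 (high risk)
    touches_production_data: bool
    has_passing_tests: bool


def evaluate(change: ChangeRequest) -> str:
    """Encode the part of change-management policy that fits an algorithm."""
    if not change.has_passing_tests:
        return "rejected: failing tests"
    if change.risk_score < 0.3 and not change.touches_production_data:
        return "auto-approved"                 # the automated fork handles it
    return "escalated to change manager"       # the human fork takes over


if __name__ == "__main__":
    queue = [
        ChangeRequest("CR-101", 0.1, False, True),
        ChangeRequest("CR-102", 0.7, True, True),
    ]
    for cr in queue:
        print(cr.identifier, "->", evaluate(cr))
```

Everything the gate cannot decide is handed back to the human fork, which is where the expert input discussed next comes in.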
Input from experienced change managers (particularly those with a good understanding of the system being developed) can be extremely important during the early design phases of an automated continuous release system; you are in effect building their knowledge into the structure of the system.

But DevOps continuous release is by its very nature likely to be a continually changing process itself, which means that the automation software that directs it is going to be in a continual state of change. This continual flux will include the expertise that is embodied in the system, which means that its frequent revision and redesign will require input from human change management experts.

And not all management duties can be automated. After human managers have been relieved of all of the responsibilities that can be automated, they are left with the ones that for one reason or another do not lend themselves well to automation — in essence, anything that can’t be easily turned into an algorithm. This is likely to include at least some (and possibly many) of the kinds of decisions that fall under the heading of change management. These unautomated responsibilities will require someone (or several people) to take the role of change manager.

And DevOps change management generally does not take its cue from waterfall in the first place. It is more likely to be a lineal descendant of Agile change management, with its emphasis on managing a flexible stack of priorities during the course of an iteration, rather than a static list of requirements that must be included in the project. This kind of priority-balancing requires more human involvement than does waterfall’s static list, which means that Agile-style change management is likely to result in a greater degree of unautomated change management than one would find with waterfall.

This shouldn’t be surprising. As the more repetitive, time-consuming, and generally uninteresting tasks in any system are automated, more time is left for complex and demanding tasks involving analysis and decision-making. This in turn makes it easier to implement methodologies which might not be practical in a less automated environment. In other words, human-based change management will now focus on managing shifting priorities and stakeholder demands, not because it has to, but because it can.

So what place does change management have in a change-dominated world? It transforms itself from being a relatively static discipline imposed on an inherently slow process (waterfall development) to an intrinsic (and dynamic) part of the change-driven environment itself. DevOps change management manages change from within the machinery of the system itself, while at the same time allowing greater latitude for human guidance of the flow of change in response to the shifting requirements imposed by that change-driven environment. To manage change in a change-dominated world, one becomes the change.