by Greg Cox
Part one of a three-part series
There is an article, quoted many times over the years, that originally appeared in The Wall Street Journal in 2011, written by Marc Andreessen and titled “Why Software Is Eating the World.” The idea is that companies are differentiating themselves and making fortunes, to a large extent, based on the software they have created. This may seem obvious now, given how many massively successful companies rely heavily on software development. There are a number of reasons why software companies have made such progress and are so successful: leveraging open source, the availability of equity funding, the rapid adoption of technology in people’s daily lives, the ability to iterate very quickly, and the sharing of software for the entire industry to benefit from. Companies that don’t rely heavily on software continue to be very successful and to adapt to customer demands, but the rate and scale of their iteration is very different from that of software development. So how does software iteration happen? How are some companies able to absorb so much change while making significant progress towards their development goals? How do developers work in these kinds of environments, and what do most, if not all, of them have in common? I am not talking about a programming language that allows modules or packages to interoperate, or one that allows for faster development. Nor am I talking about Agile or Scrum, although those are a common theme among successful software development companies.
Consider some successful open source projects, made up of hundreds or thousands of developers working in a global community, that make significant progress using what is available in that open community, in many cases without the contributors ever talking directly to each other. What do successful open source projects and most, if not all, modern successful companies that differentiate themselves with software have in common? The answer is Version Control Systems (VCS) and the systems and workflows built around version control: Git, Mercurial, SVN, and CVS; GitHub, Bitbucket, GitLab, and many others; and CI/CD.
I know this isn’t flashy new technology, but for a lot of infrastructure engineers who work in IT departments and are not working, or haven’t worked, as developers, this may be a new way of working with these tools. You may have used some of them, or just know of them, but many people don’t understand how they are used as part of a daily workflow to make massive gains. If you haven’t learned to use these tools in ways that deliver the collaboration, organisation, and continuous improvement benefits they provide, this will be something new. If you already work this way with a version control system, are familiar with continuous integration systems, or are in an organisation that has embraced Agile/Scrum, this may simply be how you work. Either way, I want to show how to shift from constantly reacting and trying to keep up, to pushing the environment where you need it to go: predictably, fast, and with less risk.
Most infrastructure engineers and enterprise IT engineers have known about version control for as long as it has been around. They may have used CVS or SVN as a backup system long ago, or still do, but not extensively. If you are a networking person, you may have set up RANCID for network configs, or used GitHub and Bitbucket heavily to download the projects hosted there. What you may not know is that the daily workflow, and the tools that come with and interact with the version control system, are the huge differentiator when used correctly. What’s important here is that fundamental workflow: it is the key difference in how groups work and the advantage they gain.
There are a number of practices that are part of, or related to, DevOps: Infrastructure as Code, 12-Factor Apps, Continuous Integration, and Continuous Deployment. A fundamental building block for all of these practices, one that is not new and doesn’t get much attention, is version control. I believe a lot of people don’t realise the workflow benefits that come with using a version control system, or how those workflows interact with other tools (CI/CD). Many people think a version control system is just a backup system and move on, but version control is the basis that allows everything else to really work. In fact, it can be argued that its most beneficial parts have little to do with revisions. Strictly speaking, a version control system is not a requirement for practices like DevOps (there are no concrete rules). But it is not disputed that there is a correlation between the use of version control systems and the success of organisations that follow the practices mentioned above.
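To make the workflow concrete, here is a minimal sketch of the branch-and-merge pattern these tools are built around, run against a throwaway local repository. The file name, branch name, and commit messages are illustrative only; in a real team the merge step would happen through a reviewed merge/pull request on a platform like GitLab or GitHub, typically triggering CI.

```shell
# Minimal sketch of a feature-branch workflow in a throwaway repo.
# File and branch names are illustrative only.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"   # local identity for this demo repo
git config user.name  "Demo User"
main=$(git symbolic-ref --short HEAD)      # default branch name varies (main/master)

echo "web01" > hosts.txt                   # some piece of infrastructure config
git add hosts.txt
git commit -qm "Add initial host inventory"

git checkout -qb add-web02                 # work happens on a short-lived branch
echo "web02" >> hosts.txt
git commit -qam "Add web02 to inventory"

git checkout -q "$main"                    # merge back into the main line
git merge -q add-web02                     # in a team, this is a reviewed merge request
count=$(git rev-list --count HEAD)         # history now contains both commits
echo "$count commits on $main"
```

The value here comes less from the individual commands than from the pattern: every change is a small, reviewable unit with its own history, and hosting platforms layer code review and CI triggers on top of exactly this branch-and-merge cycle.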
The next post in this three-part series will cover building the environment: GitLab CE on an AWS instance using CloudFormation, plus GitLab CI (GitLab’s continuous integration software for automated building, testing, and deploying). In the last post, I will walk through an example workflow using those systems. With this, I hope to lower the barrier to entry for these tools and workflows, even if you have little or no experience with them today.