The world of containers can be confusing. What do you really need to get started using containers today? Starting small can lead to big results.
Containers are the new focus of any conversation surrounding DevOps. They are the composable, executable and portable unit of work our industry has been seeking for years, and everyone is trying to figure out how to integrate them into solutions yesterday.
Unfortunately, the starting point is not always clear as the world of containers is growing faster than almost any technology in recent memory. The ecosystem is young, vast and rapidly changing. There are overwhelming choices on how to implement networking, storage, deployment, lifecycle management and monitoring. So, where to begin?
Keep it simple, start small
Focus on taking a small application, or a component of a larger application, and add a Dockerfile to it. Get comfortable building and rebuilding an image from scratch, and move traditional configuration and package management closer to the code as it is developed. If the native Dockerfile instructions are not enough, choose a tool like Ansible to ensure consistent image builds.
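As a starting point, a Dockerfile for a small service might look like the sketch below. The base image, file names and port are illustrative assumptions for a Python application, not prescriptions; substitute whatever fits the component being containerized.

```dockerfile
# Illustrative sketch: containerizing a small Python service.
# Base image, file names and port are assumptions for this example.
FROM python:3.12-slim

WORKDIR /app

# Copy and install dependencies first, so this layer is cached
# between rebuilds of the application code
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

With a Dockerfile in place, rebuilding is a single `docker build -t myapp .` away, and the resulting image can be run with `docker run -p 8000:8000 myapp` on any host with a modern Linux kernel.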
Once complete, you will have an immutable, shippable image that can be run on any system with a modern Linux kernel. As the container moves from the developer, to test, to QA, to production, it will not change. There is no drift in underlying OS dependencies, and no reason to re-implement dependency configuration and management at every step of the deployment process. The time to move code from the developer to production decreases, and the same tools for testing and deployment can be used along the way.
The smallest change can be the biggest step to keep technology moving forward. There is no reason to jump directly to reengineering entire environments and deployment strategies. Keep current development and testing processes constant and move development to containers. That incremental change alone can speed time to delivery and in turn shorten continuous delivery cycles.
Eventually, the containers you build will lead the way to more complex orchestration platforms such as Mesos (and its datacenter operating system, DC/OS), Kubernetes and Rancher, and to implementing service discovery and monitoring with tools such as Sysdig and Weaveworks.
It all starts with the first container, which can be created, tested and used in production today.