    PODCAST

    What are cloud containers, and how should businesses use and secure them?

    August 24, 2022 | 13 minutes

     

    Overview

    Container adoption has been increasing for years, and the trend is expected to continue. Gartner forecasts that 70% of businesses will run containerized applications by 2023, and 90% will do so by 2026.

    Yet, while organizations constantly throw around the term containerization, many don’t fully understand the concept.

    On this episode of IT Availability Now, host Servaas Verbiest and guest Erik Krogstad, Senior National Cloud Architect at Sungard AS, explore this popular topic, breaking down what containers are and how businesses should use and secure them. Listen to this full episode to learn:

    • The benefits of containers 

    • Why organizations use containers, and what’s driving recent adoption

    • What containers can be used for

    • Container security best practices

    • How compliance requirements influence container usage

    As Director of Product Field Strategy at Sungard AS, Servaas Verbiest assists businesses and organizations in realizing the full potential of cloud computing by thinking strategically, deploying rapidly, and acting as an ambassador for the cloud ecosystem. While at Sungard AS, Servaas has worked with more than 1,000 unique clients across multiple industries on complex application deployments, re-platforming, public cloud integrations, private cloud deployments, application lifecycle, and hybrid cloud model development.

    Erik Krogstad is the Senior National Cloud Architect at Sungard AS and serves as lead for the company’s Cloud Center of Excellence. With more than 20 years in the IT industry, Erik has extensive experience in designing and transforming IT infrastructures and architectures to meet the business and IT needs of Fortune 1000 companies.

    Transcript

    SERVAAS VERBIEST (SV): Welcome to IT Availability Now, the show that tells stories of business resilience from the people who keep the digital world available. 

    I'm your host, Servaas Verbiest and today, I'm joined by Erik Krogstad, Senior National Cloud Architect at Sungard Availability Services, and we're going to be discussing cloud containers. 

    Thanks for joining us on the show today, Erik. 

    ERIK KROGSTAD (EK): It's always a pleasure to be here. 

    (SV): And it's always a pleasure to have you. 

    So, container adoption is something that we hear frequently and frankly, I don't see any signs of it slowing down. Gartner has made it a point to emphasize that 70% of organizations are going to be running containerized applications by 2023. And that number is only going to increase. In a majority of the deployments that I discuss - and I know that you help me on some of them - customers bring up containerization at least once. Yet, while the concept of containerization gets thrown around a lot, many organizations really don't understand what containerization is. 

    Why don't we start by setting the baseline and giving everybody an understanding of what it means to really containerize something?

    (EK): Sure. It's kind of like a step back process. What is a container really? What does it do? 

    So take hardware for instance, let's go with a physical server. You have an operating system, parts, things that make it a monolith. Lots of software installed. And then, you know, virtualization came around and you could either take that out or share resources. So multiple operating systems can live inside that box now. And then you have things like VFM - virtualized file machines - that took it down even more. And then Docker created things called containers and containers are basically the application and a group of resources from within an operating system. 

    So if you want an application to run, you don't necessarily need every piece of software or library or thing that makes up that virtual machine. Now you can just take what you need and go. Ultimately just the most stripped down version of what needs to be run.
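
    To make that concrete, here is a minimal sketch of the kind of stripped-down workload Erik describes: a stateless Python service built on nothing but the standard library, the sort of thing that packs cleanly into a container image. The service name and port are illustrative, not from the episode.

        import json
        from http.server import BaseHTTPRequestHandler, HTTPServer

        # A deliberately minimal, stateless service: no database, no local files,
        # nothing beyond the Python standard library. This is the kind of workload
        # that fits naturally into a small container image.
        class HealthHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                body = json.dumps({"status": "ok", "service": "demo"}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            # Port 8080 is an arbitrary choice for this example.
            HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()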

    (SV): No, that makes sense. And I really appreciate you walking us through the explanation of what a container is. 

    Now that we've set that expectation of really defining what a container is, what are some of the reasons that you believe an organization would want to choose to leverage containerization? How do you feel that those things might be driving increased adoption?

    (EK): One of the main benefits when virtualization started was the idea of smaller attack surfaces. Fewer vulnerabilities: you can trim that fat away and really just use what you need. You get that in a security sense from the container, but you can also create a cluster of containers. You have more flexible resource sharing between them. They talk in the same network protocols, and you have a CNI interface. Scalability is the biggest - you can have thousands of containers running on one host. So you can scale up, scale down, you can move them, and it runs that service on much cheaper hardware. And then really, you get faster deployments. You can create new instances - we can just push a button and have them deployed. And you can deploy them anywhere, and it's easier to maintain your applications that way because you don't have a memory leak from the operating system ruining the applications. Everything's more streamlined.
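
    As a rough illustration of that push-button scaling, the sketch below uses the Docker SDK for Python to start or stop identical copies of an image. The image name, container-name prefix, and replica count are made up for the example, the naming scheme is simplistic, and it assumes a local Docker daemon; orchestrators like Kubernetes automate this far more robustly.

        import docker  # Docker SDK for Python (pip install docker)

        # Scale a simple stateless service up or down by starting or stopping
        # identical containers built from the same image.
        def scale(image: str, prefix: str, replicas: int) -> None:
            client = docker.from_env()
            running = [c for c in client.containers.list() if c.name.startswith(prefix)]

            # Scale up: start more copies of the same image.
            for i in range(len(running), replicas):
                client.containers.run(image, detach=True, name=f"{prefix}-{i}")

            # Scale down: stop and remove the extras.
            for c in running[replicas:]:
                c.stop()
                c.remove()

        if __name__ == "__main__":
            scale("nginx:alpine", "demo-web", replicas=3)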

    (SV): And, you know, I can say with benefits like that, it's no surprise that organizations want to incorporate them. But, as we know, sometimes an organization will look at those benefits and they’ll immediately say, “hey, I not only want to containerize applications, but I want to try to containerize everything.” It usually puts us in a situation where we start to have that conversation about what it makes sense to containerize and what is frankly just a poor idea to try to put into a container.

    So for the sake of helping people navigate through what could be some dicey waters, what really establishes what can be containerized and what should be avoided at all costs?

    (EK): Well, for what can be containerized, the sky's the limit. So it's really what should be containerized?

    Let's just put it out there now. You can containerize a database, but you should never containerize a database, because now you're not able to persist that data - the data is running in a memory cache with nothing solid behind it. So, really, you want things to be stateless. What I mean by stateless is you have your application, you have an API. An API doesn't change; it just takes data, formats it, understands the header, understands the destination, and passes it along, and it’s seamless.

    Containers should be the same way. You should be able to create an application. That application consists of, let's say, certain Python libraries, some Python scripts, and a connection to a back-end database. Perfect example of what should be containerized. Any application that’s not going to sit there and hold data. You know, you can have caching services or even a memcached server in front of it. That could be your caching. But the actual application that talks to the user, handles the traffic, handles the pulling of data - that's perfect for a container.
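
    Here is a small, self-contained sketch of that stateless, cache-aside pattern. The cache and database clients are in-memory fakes standing in for external services such as memcached and a back-end database; the names and data are invented for the example, and nothing about the pattern depends on them.

        from typing import Dict, Optional

        # Placeholder stand-ins for services that live OUTSIDE the container.
        # In a real deployment these would be network clients; here they are
        # in-memory fakes so the sketch runs on its own.
        class FakeCache:
            def __init__(self) -> None:
                self._data: Dict[str, str] = {}
            def get(self, key: str) -> Optional[str]:
                return self._data.get(key)
            def set(self, key: str, value: str) -> None:
                self._data[key] = value

        class FakeDatabase:
            def query_user(self, user_id: str) -> str:
                return f"record-for-{user_id}"

        def get_user(user_id: str, cache: FakeCache, db: FakeDatabase) -> str:
            # Cache-aside: check the external cache, fall back to the database,
            # write the result back. No state stays inside this process, which
            # is what makes it a good fit for a container.
            key = f"user:{user_id}"
            cached = cache.get(key)
            if cached is not None:
                return cached
            record = db.query_user(user_id)
            cache.set(key, record)
            return record

        if __name__ == "__main__":
            cache, db = FakeCache(), FakeDatabase()
            print(get_user("42", cache, db))  # miss: goes to the database
            print(get_user("42", cache, db))  # hit: served from the cache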

    (SV): It's good that we set that standard because it's a tough conversation for sure to kind of rely on that old movie adage: just because we can doesn't mean that we should, frankly.

    (EK): Yeah, that’s the thing. “I want to containerize my SAP environment.” Well, let's talk about that. You probably shouldn't be doing that. 

    (SV): That's the thing, right? The way these are structured, you do get fantastic resource allocation and utilization because, to your point, you're only drawing on the things that that application needs. You don't need to maintain multiple operating systems or other baseline components that are tied to running critical infrastructure. But at some point, there comes a time where that game of musical chairs gets a little tough, because things that are persistent, like a database that’s the heart and brain of any critical application - or any application, for that matter - need those resources to function appropriately.

    (EK): Absolutely. And even SAP, like I mentioned - it stores a lot of data in memory, which means you have to have memory-optimized systems to run that software, and that would not be good for a container.

    (SV): And I'm sure their service agreements would love to know that you're running that containerized environment, right? At the end of the day, that's the other thing you’ve got to consider with these applications. You have service agreements - and you can tell me if I'm off base here, but I feel pretty certain that I’m not - there are service conditions tied to how you're going to get support if that application has a problem based on certain hosting methodologies and containerization can take you out of compliance. 

    (EK): Absolutely. Absolutely. And that brings up a great point about compliance and containers. Obviously, you have companies where you have to stay in compliance with things like you just mentioned - a hosting environment. But you also have things like NIST and certain security protocols that you must maintain as well, and when dealing with containers, they usually live in a repository. You have artifacts, which are old scripts or old versions of your standard that you've updated, and these things should all be regularly scanned. There is that security aspect - sometimes there's vulnerabilities, sometimes a repository that you use for your software has been compromised. And it's important that you keep scanning these environments to maintain your security and compliance baseline.
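
    One hedged example of how that scanning can be wired into a pipeline: the sketch below shells out to Trivy, one of several image scanners, and blocks when high or critical findings turn up. The image names are illustrative, and the exact flags should be checked against the scanner version you run.

        import subprocess
        import sys

        # Images to gate before they are allowed into the cluster (example names).
        IMAGES = [
            "registry.example.com/app-api:1.4.2",
            "registry.example.com/app-worker:1.4.2",
        ]

        def scan(image: str) -> bool:
            # --exit-code 1 makes the scanner fail the command when findings of
            # the listed severities are present, which lets CI block the build.
            result = subprocess.run(
                ["trivy", "image", "--exit-code", "1", "--severity", "HIGH,CRITICAL", image]
            )
            return result.returncode == 0

        if __name__ == "__main__":
            failed = [img for img in IMAGES if not scan(img)]
            if failed:
                print(f"Blocked by vulnerability scan: {failed}")
                sys.exit(1)
            print("All images passed the vulnerability scan")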

    (SV): And that makes sense, right? Because security is one of the - I’d say - foundational pillars to how you're going to run an infrastructure in general. So, I'd imagine that you probably have to consider what impact containerization is going to have on process, on tooling, and things like that. So, outside of just doing those continuous scans, do you have any other best practices or key things that organizations need to take into consideration when they launch containers that would impact their security posture? 

    (EK): Sure. So when you have containers, you have a group of containers, and that’s usually called a cluster. So within that cluster, you need to be managing what ports are open to it. If another container’s been added, that can create a security problem - a piece of code gets put into your CI/CD pipeline in your repo, and somebody puts a malicious container in there, because those exist as well. And it could open up an API back door to somewhere else where you don't want it to go. So understanding and managing your CI/CD pipeline, and managing who has access to those repos, is of the utmost importance. And from within there, there are tools like Prometheus that can check what's being cached and what ports are open. Aqua is another really great security tool that you can use just to manage your clusters and manage where they're going, because you could have your cluster running in AWS, you could have it locally in a private cloud, you could have it in Azure, and it can be deployed everywhere because it's very movable and operable. But with that comes a security problem. Why is the DNS not recording the correct things? Did this get moved without any authorization? Was it supposed to be there? So putting things into your container cluster to manage and monitor the overall health - things like that - is of the utmost importance.
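
    As a toy illustration of the "know what ports are open" point, the following sketch probes a host against an allow-list and flags anything unexpected. The host address and allowed ports are invented for the example; a real cluster would rely on network policies and monitoring tools like the ones Erik mentions rather than an ad-hoc probe.

        import socket
        from typing import Set

        # Ports we expect to be listening on this host (example values).
        ALLOWED_PORTS = {443, 8080}

        def open_ports(host: str, candidates: range, timeout: float = 0.5) -> Set[int]:
            found = set()
            for port in candidates:
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                    s.settimeout(timeout)
                    # connect_ex returns 0 when the TCP connection succeeds,
                    # i.e. something is listening on that port.
                    if s.connect_ex((host, port)) == 0:
                        found.add(port)
            return found

        if __name__ == "__main__":
            listening = open_ports("10.0.0.12", range(1, 1025))
            unexpected = listening - ALLOWED_PORTS
            if unexpected:
                print(f"Unexpected open ports: {sorted(unexpected)}")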

    (SV): No, I couldn't agree more. Really, when you think about all the benefits this technology has, you can't forget that there are things that are going to have to come with it like changing your security posture and really thinking about how you operate and deploying in a thoughtful and deliberate way. 

    You know, Erik, I really appreciate you taking the time to join us today and highlight the fact that people need to understand what a container is, why you would actually use it, when it makes sense to use it - because while it's great technology, you shouldn't use it just anywhere - and how you shouldn’t lose focus on securing the environment: how a container can impact things like posture and tooling, and how you need to look at security in totality.

    So thank you for joining us on the show today.

    (EK): Pleasure’s all mine. And remember, secure your build, secure the infrastructure and then secure the workloads. 

    (SV): Couldn't agree more. 

    Erik Krogstad is the Senior National Cloud Architect at Sungard Availability Services. 

    You can find the show notes for this episode at SungardAS.com/ITAvailabilityNow. Please subscribe to the show on your podcast platform of choice to get new episodes as soon as they're available. IT Availability Now is a production of Sungard Availability Services.

    I'm your host, Servaas Verbiest, and until next time, stay available.
