The Business Case for Container Technology
When you see the word “containers,” you probably immediately think of Docker. But containers are not a new concept. They date back to Linux-VServer, initially released in 2001. Then in 2008, containers took a step forward with the release of the Linux Containers project (LXC). Docker came along in 2013 and made things easier by putting a friendlier command-line interface on top of LXC. But you didn’t come here for a history lesson. Even though “container” seems to be a buzzword right now, containers are not something new; they’ve been around for about sixteen years. And Docker isn’t the only container technology, either. There are other projects in the ecosystem, and rkt (another container runtime) publishes a good comparison on its site.
I also won’t give you the vendor speech about containers; I’m not a salesman. But let me tell you something: if you want portability, efficiency, and security for your applications, you should consider using containers. With containers, you can deploy the same code across all environments, without any modifications. Make your database endpoint configurable, for example, and running the app somewhere new is just a matter of supplying the right value for that environment. That environment can be your computer, the cloud, a virtual machine, bare metal, or a mix of everything.
Before we dive in, you should know that containers are not a silver bullet. Using them requires a change in culture, not just in code. (In fact, you might not even need to change the code.) But make sure you’re clear on all the benefits and requirements before adopting the container approach; don’t do it just because everybody else is doing it. All that being said, let’s look at the business case for containers. This should give you a clear idea of where and why you’d use them.
Portability as First-Class Citizen
One of the first things you’ll hear about containers is that they work not only on your machine but on everyone else’s machines too. You can easily recover from a disaster and move your environment somewhere else: from on-prem to the cloud, or from one cloud provider to another. You avoid vendor lock-in. Your application will continue working, guaranteed…or at least it will as long as you make sure all external dependencies, such as databases, are in place too.
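To make that concrete, here’s a minimal sketch of what moving the same application between providers can look like with the Docker CLI. The image name (myapp), the registry hostnames, and the account ID are placeholders, not real endpoints; the point is that the exact same image, with its dependencies baked in, travels between environments unchanged.

```
# Build the image once, on any machine that runs Docker.
docker build -t myapp:1.0 .

# Push it to your current registry (placeholder hostname).
docker tag myapp:1.0 registry.example.com/team/myapp:1.0
docker push registry.example.com/team/myapp:1.0

# Moving to another provider is just retagging and pushing again;
# the image contents never change.
docker tag myapp:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0

# Wherever it's pulled, it runs the same way.
docker run -d --name myapp myapp:1.0
```

In practice you’d also authenticate to each registry, but the workflow stays about that small.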
You can also reduce risk through repetition. By the time you’re ready to deploy to production, you’ve already proven that the code works by testing the same container in familiar environments. It doesn’t matter if you don’t have automation in place; you’re just moving the container from one place to another. How is this achieved, you might wonder? Since the application is isolated in its own environment, you can make use of environment variables. Making it behave differently is then just a matter of how you run the container and which values you pass in.
You can pass a list of values to your container, like calling a parameterized function in code. Or you can pass just one value to define the environment and have the app grab the rest of the values it needs from another location. The compiled code inside the container remains the same regardless, and that’s how you gain consistency. You can also increase the level of trust because your containers are isolated from one another, and especially from the host. Gone are the days when you needed to reboot a server because your app was having problems. Now, if you’re having issues with your app, you just kill the bad container and create a fresh one in seconds.
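Here’s a hedged sketch of both approaches with plain docker run flags. The image name (myapp:1.0) and the variable names (DB_HOST, DB_PORT, APP_ENV) are made up for illustration; the real ones depend on what your application reads at startup.

```
# Option 1: pass every value explicitly, like arguments to a function.
docker run -d --name myapp-staging \
  -e DB_HOST=staging-db.internal \
  -e DB_PORT=5432 \
  myapp:1.0

# Option 2: pass a single value naming the environment, and let the app
# fetch the rest from somewhere else (a config service, a mounted file, etc.).
docker run -d --name myapp-prod \
  -e APP_ENV=production \
  myapp:1.0
```

Either way, the image itself is identical in staging and production; only the values you pass at run time differ.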
Be More Responsive to Change
How much time do you need to take your application code from a developer machine to a production server? How many validations in different environments do you need to perform before saying, “It’s ready to go?” And how many people are involved in that process? How many manual interventions need to happen? If the answer to these questions is “a lot,” start tracking how much time you actually invest in moving your code around, as opposed to developing new features. In the end, if you’re not an infrastructure company, you shouldn’t need to bother with these activities. Make your life easier.
With containers, you focus on what’s important: providing value to customers. That means less time spent debugging problems and more time spent delivering useful features. When you pack your application code in a container with all its dependencies, you can iterate faster. That’s because you know the code will work in all environments. And you know the build won’t suddenly fail right before going to production, since you build once and deploy that same artifact many times. You’ll get to the point where you’re no longer adding external dependencies or environment-specific configuration, and you’ll make more changes in code than anywhere else. See? Fewer and fewer infrastructure headaches.
One of the nice things about packing the application code and its dependencies together is that you can use versioning. Instead of modifying an existing version, you create a new one. Your container image becomes immutable: you move forward to a new version rather than patching the old one in place. But what if something’s wrong with the new release? If you’re practicing continuous integration, you’ll stop everything, make the correction, and deploy again. In the meantime, you can deploy the previous image version to avoid any interruptions; you just have to be able to do this fast. Another option is to turn the offending feature off through configuration (a.k.a. feature flags) and deploy again.
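What that roll-forward and roll-back flow can look like with plain Docker commands is sketched below; myapp, the registry hostname, and the version tags are placeholders, and in a real setup your orchestrator or CI/CD pipeline would do the swap for you.

```
# Each release gets its own immutable tag; 1.4 is never rebuilt in place.
docker build -t registry.example.com/team/myapp:1.5 .
docker push registry.example.com/team/myapp:1.5

# Release 1.5 misbehaves? Stop it and put 1.4 back while you fix things.
docker rm -f myapp
docker run -d --name myapp registry.example.com/team/myapp:1.4
```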
It Can Be Adapted to Legacy Systems
Maybe you’re thinking that when you adopt containers, you’ll have to make big changes. Or maybe you think you’ll have to adopt a microservices architecture. Save those worries for later because they’re not valid in this case. Don’t believe me? I don’t blame you—I bet you’re thinking of that Windows service you’ve been running forever. Or maybe you’re thinking of that old app that you don’t even dare to look at, let alone touch.
If that Windows case sounds familiar, there’s a PowerShell module called Image2Docker that ports your application into a Docker image without requiring any changes to it. Is your issue not a Windows one? Don’t worry, Docker likes challenges. They have a program called “Modernize Traditional Applications” (MTA) that won’t force you to modify the code or make other application changes. And the folks at Docker guarantee they’ll help you in less than five days; if they need more time, it means something has to be added to the platform itself.
Make Better Use of Infrastructure
All enterprises like to reduce cost, whether by cutting waste like idle resources or by making sure servers aren’t overprovisioned. The cloud gives you a set of instance types with predefined amounts of memory, CPU, and networking, which makes it easy to overprovision. On-premises, the chances of idle resources are even higher, because you have to plan capacity ahead of time. With containers, if you need more capacity, you just spin up more containers on the same host; you don’t have to worry about provisioning more infrastructure, only about having enough resources available. A good, simple example of this is the AWS service ECS. In ECS, you can configure auto-scaling policies around the cluster’s reservation metrics. It doesn’t really matter how many resources you’re using but how much you’ve reserved. Resource usage then serves as the signal to add more containers, whereas reserved capacity is the signal to add more hosts.
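As a hedged sketch of how that split works on ECS: the cluster publishes CPUReservation and MemoryReservation metrics to CloudWatch, so an alarm on reservation can drive the host-level Auto Scaling group while service-level utilization drives the container count. The cluster name, thresholds, and scaling-policy ARN below are placeholders.

```
# Add a host once the cluster has reserved more than 75% of its CPU.
aws cloudwatch put-metric-alarm \
  --alarm-name ecs-cpu-reservation-high \
  --namespace AWS/ECS \
  --metric-name CPUReservation \
  --dimensions Name=ClusterName,Value=my-cluster \
  --statistic Average \
  --period 60 \
  --evaluation-periods 3 \
  --threshold 75 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions <placeholder-scaling-policy-arn>
```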
Maybe you won’t see big savings at the beginning, but you’ll see a real benefit when you need to scale. Instead of adding big servers, you can run small servers and add just the blocks you need, little by little. Without containers, on the other hand, you might have to buy another server of the same size (on-prem or in the cloud) because that’s the capacity your app currently needs, and you won’t be able to use all of its resources, leading to waste.
You might be asking yourself, “Aren’t we making things more complex?” Maybe. But consider on-prem environments: are VMs getting recycled when the app isn’t behaving well? No. At least not automatically. In the cloud, you have self-recovery built in, but on-premises, you need to manage that yourself. If you’re using vSphere, for example, you can rest assured that if something goes wrong with the host, the VM will be moved quickly to another one without any downtime. That’s cool. But what if the host is OK? What if the VM is OK? What if the problem is inside the application because it’s consuming too many resources? In vSphere, you do have the ability to auto-scale, but it’s not something that comes built in, sadly.
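Containers, by contrast, give you a basic form of application-level self-recovery out of the box. A minimal sketch, assuming a hypothetical myapp image:

```
# If the app's process dies, Docker restarts the container automatically
# (up to five attempts here); nobody has to log in and recycle a VM.
docker run -d --name myapp --restart=on-failure:5 myapp:1.0
```

Orchestrators such as Kubernetes or Swarm take this further by rescheduling containers onto healthy hosts.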
Increase Security for Your Applications
People sometimes talk negatively about container security. After all, if you have access to the host, you can inspect and log in to any container easily. But as the industry learns more about containers, that perception is rightly starting to change. This doesn’t mean you no longer need to worry about security in your applications (SQL injection, for example). It means that if an attacker does manage to gain access to a container, you can limit the damage that can be done. That’s because containers can run with only what they need: you don’t have to give a container every OS capability unless you specifically need it.
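For instance, here’s a hedged sketch of trimming a container down to only what it needs, using standard docker run flags; the image name and the specific capability are illustrative, and which capabilities your app really needs is up to you.

```
# Drop every Linux capability, add back only the one the app needs,
# run as a non-root user, and make the container's filesystem read-only.
docker run -d --name myapp \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --user 1000:1000 \
  --read-only \
  myapp:1.0
```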
For this reason, Docker recently launched a program that certifies images. If an image is certified, it means it’s been tested, it follows best practices, it doesn’t include common vulnerabilities, and it’s supported by both Docker and the provider. So stick to certified images, or at least don’t use other people’s images unless you can run a security scan on them, and you should be good on that front.
If for some reason an attacker gains access to the container, the attack surface is limited. That’s because, in Docker, containers run unprivileged by default. Why is this important? Well, if that weren’t true, the attacker could gain root privileges and infect the host or other containers. With the defaults in place, that’s much less of a worry.
Containers are also isolated thanks to Linux namespaces, and you can reserve a certain amount of capacity (memory and CPU) for each container thanks to cgroups. This gives you a truly isolated environment: even if one container uses up all of its allotted resources, it won’t affect the rest of the containers and, more significantly, it won’t affect the host. (If a container tries to exceed the memory limit you’ve given it, the kernel’s out-of-memory killer terminates it.) The next time you see something bad happening to a container, you can just kill it and create another one, again in seconds. Want to know more? A container security cheat sheet is a great place to start.
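A quick sketch of those cgroup-backed limits through the Docker CLI; the limits and the image name are arbitrary examples.

```
# Cap this container at 512 MB of RAM and one CPU core. If it blows past
# the memory limit, only this container is killed, never the host.
docker run -d --name myapp --memory=512m --cpus=1 myapp:1.0

# Something looks wrong? Replace it in seconds.
docker rm -f myapp
docker run -d --name myapp --memory=512m --cpus=1 myapp:1.0
```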
Container Adoption Is Increasing Rapidly
The adoption of containers is finally starting to grow, and more enterprises are signing on for the adventure. You can check out a great study from Datadog to get a better idea of how Docker adoption is trending. What’s interesting from that report is that large companies are the ones leading the adoption. More and more tools are being released, and they’re being released frequently. Container vendors are also providing enterprise editions for companies that care about the core features: portability, efficiency, and security. Docker offers an enterprise edition, CoreOS has an enterprise-ready Kubernetes distribution called Tectonic, Red Hat offers OpenShift, there’s Rancher, and HashiCorp has Nomad. The list keeps growing, and I’ve named just a few.
So if you thought that containers were the new kids on the block, think twice. Maybe it’s time to modernize your application by using containers because containers are not the future: they are the present.