Working with Windows Containers and Docker: The Basics

When you begin to work with containers, you will notice many similarities between a container and a virtual machine, but in fact these are two quite different concepts. Containers are going to change the way that we do Windows-based development work in the coming year, and they already underpin much of the DevOps work of speeding the delivery process. Nicolas Prigent explains how to use the Windows Containers feature.

Introduction

Windows containers will revolutionize virtualization and the DevOps process.

With Windows Server 2016, Microsoft introduces the new feature called Windows Containers. Organizations that upgrade their servers to this new operating system will then be able to use containers right through from development to the production environment.

Robert Sheldon wrote a great article about Windows containers on Simple Talk: https://www.simple-talk.com/cloud/platform-as-a-service/windows-containers-and-docker/. We will not dig deep once again into the concept of containers, but I will explain in this series how to create, run, convert and manage your Windows Containers.

Windows Containers Fundamentals

Before starting with the practical side of Windows Containers, I ought to quickly cover the basics about this new feature.

Containers wrap software in a complete file system that contains everything it needs to run: code, runtime, system tools and system libraries. This guarantees that it will always run the same, regardless of the environment in which it is running. To achieve this goal, Windows uses namespace isolation, resource control, and process-isolation technologies to restrict the files, network ports, and running processes that each container can access, so that applications running in containers can’t interact with or see applications running in the host OS or in other containers.

Virtual Machines Vs Containers

A virtual machine is standalone and has its own operating system, its own applications and its own resources (memory, CPU and so on). The following schema shows three VMs hosted on the same physical host. Each virtual machine uses its own OS, libraries, etc. As a consequence, they occupy significant amounts of memory.

VMs architecture

Quite often, developers need to test applications with different versions very quickly. They must then ask the IT Ops team to deploy one or more machines (virtual or physical): a time-consuming process. VMs also consume considerable resources such as memory and storage space. That’s the reason why containers are amazingly useful for the DevOps process:

Containers architecture

Containers, in contrast, do not each contain a full operating system, so they take up fewer resources than virtual machines on the physical host. Containers simply share the host operating system, including the kernel and libraries, so they don’t need to boot a full OS.

In summary, the benefits of Windows containers are that:

  • When you deploy a container in a production environment, the rollback process is very simple: you just modify the deployment script and redeploy the container image. Compare that with the rollback process for virtual machines, where you must rebuild the entire machine (or revert to a previous snapshot/backup).
  • Startup time for a Windows container is faster than a VM.
  • The small footprint benefits cloud-based scenarios.

Finally, remember the container philosophy: “one service per container”.

Windows Server Containers Vs Hyper-V Containers

Microsoft includes two different types of container. The first type is based on the Windows Server Core image and is called a Windows Server Container. The second one is called a Hyper-V Container and is based on the Windows Nano Server image. Hyper-V Containers expand on the isolation that is provided by Windows Server Containers by running each container in a highly-optimized virtual machine, so that they provide fully secure isolation. The kernel of the container host is not shared with other Hyper-V Containers. If all the code running on a host is trusted, then the isolation provided by Windows Server Containers is likely to be adequate. But if we don’t trust the code, then Hyper-V Containers provide the same level of isolation as virtual machines, but with many of the benefits of standard containers.

Please note that Hyper-V containers are managed only by Docker, while Hyper-V virtual machines are managed by traditional tools such as Hyper-V Manager. In practice, booting a Hyper-V container takes longer than booting a Windows Server Container, but both are much faster than a VM with a full OS (even Nano Server).

Docker

In October 2014, Microsoft Corp and Docker announced a strategic partnership to bring the agility, portability, and security benefits of the Docker platform to Windows Server.

Windows Server 2016 Containers, powered by Docker Engine

It is essential to understand that Windows Server 2016 can run only Windows containers in Docker format, not Linux containers. Why? Because Linux containers require the Linux APIs of the host kernel, and Windows Server Containers require the Windows APIs of a host Windows kernel.

However, the processes for managing Linux and Windows containers are strictly identical. The following schema describes the Docker platform:

Docker Platform

Here is a summary of Windows Containers jargon with their meaning:

  • Container Host: Physical or Virtual computer system configured with the Windows Container feature.
  • Container Image: A container image contains the base operating system, application, and all the application dependencies that are needed to quickly deploy a container.
  • Container OS Image: The container OS image is the operating system environment.
  • Container Registry: Container images are stored in a container registry, and can be downloaded on demand. It is a place where container images are published. A registry can be remote or on-premises.
  • Docker Engine: It is the core of the Docker platform. It is a lightweight container runtime that builds and runs your container.
  • Dockerfile: Dockerfiles are used by developers to build and automate the creation of container images. With a Dockerfile, the Docker daemon can automatically build a container image.

Docker provides a central repository called Docker Hub (https://hub.docker.com/), the public containerized-application registry that Docker maintains. Container Images can be published directly on this repository to be shared with the Docker community. There are already many images hosted on the Docker Hub. For example:

  • SQL
  • WordPress
  • IIS

You can run a private repository on-premises. Microsoft has its own public and official repository available via this URL: https://hub.docker.com/u/microsoft/

Docker Hub

Windows Containers in practice

Before deploying Windows Containers, you must prepare your environment with some prerequisites. To do that, you can use a physical or virtual machine, it’s up to you. In my case, I use a VM with the following characteristics:

  • A system running Windows Server 2016 (or Windows 10). It is the most important prerequisite. I advise you to work with the Datacenter edition because of licensing (more information at the end of the article). You can choose to use Windows Server Core for your container host instead of the version of Windows that includes the full UI.
  • Administrator permissions on the container host
  • Minimum free drive space to store images and deployment scripts
  • Your server must be up-to-date

OK, let’s start by installing the Windows Containers feature on the container host. To perform this task, run the following PowerShell command:
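A minimal sketch of that command, assuming an elevated PowerShell session (on Windows 10 the equivalent is Enable-WindowsOptionalFeature -Online -FeatureName Containers):

```powershell
# Install the Containers feature on Windows Server 2016
Install-WindowsFeature -Name Containers
```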

You must restart to apply changes via the Restart-Computer cmdlet:
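For example:

```powershell
# Reboot the container host to complete the feature installation
Restart-Computer -Force
```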

Then check that the new feature is enabled:
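One way to verify it:

```powershell
# The Install State column should read "Installed"
Get-WindowsFeature -Name Containers
```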

Because Windows containers are intimately linked to Docker, you must install Docker Engine on the container host. To achieve this goal, you have two possibilities.

The first one is to deploy Docker from the PSGallery repository:
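A sketch of that approach, using the DockerMsftProvider module that Microsoft publishes on the PowerShell Gallery:

```powershell
# Install the Docker provider module, then the Docker package itself
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
Install-Package -Name docker -ProviderName DockerMsftProvider
```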

If you need more details about managing packages with PowerShell, I refer you to this article: https://www.simple-talk.com/sysadmin/powershell/managing-packages-using-windows-powershell/

Docker for Windows Server 2016 requires the update KB3176936. You can download it from the Microsoft Update Catalog and then install it manually:

http://www.catalog.update.microsoft.com/search.aspx?q=kb3176936

Or you can perform this task using the sconfig utility, choosing option 6:

Windows will download and install updates:

The second way is to install the development Docker version, in order to have the latest features. You must download Docker directly from the official website. Use the Invoke-WebRequest cmdlet:
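For example (the URL below is the one used for development builds at the time of writing and may have changed since):

```powershell
# Download the development build of Docker to a temporary location
Invoke-WebRequest "https://master.dockerproject.org/windows/amd64/docker.zip" `
    -OutFile "$env:TEMP\docker.zip" -UseBasicParsing
```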

Next, extract the archive via the Expand-Archive cmdlet:
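For example, assuming the archive was downloaded to the temporary path used above:

```powershell
# Extract the Docker binaries to C:\Program Files\Docker
Expand-Archive -Path "$env:TEMP\docker.zip" -DestinationPath $env:ProgramFiles
```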

Now, you can create an environment variable:
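A sketch, assuming the binaries were extracted to C:\Program Files\Docker:

```powershell
# Add the Docker directory to the machine-level PATH
[Environment]::SetEnvironmentVariable("Path",
    $env:Path + ";$env:ProgramFiles\Docker",
    [EnvironmentVariableTarget]::Machine)
```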

To finish, install Docker as a Windows service. So run the following command to register dockerd.exe:
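For example:

```powershell
# Register the Docker daemon as a Windows service
& "$env:ProgramFiles\Docker\dockerd.exe" --register-service
```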

Once installed, the service can be started:
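For example:

```powershell
Start-Service -Name Docker
```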

To display Docker information, run the following:
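For example:

```powershell
# Shows the Docker version, storage driver, and the number of
# containers and images on this host
docker info
```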

OK, the container host is now up and running, so we can deploy our first Windows container! For the first example, we will deploy an IIS container. Run the following command:
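A sketch of such a command (the container name "iisdemo" is just an illustrative choice):

```powershell
# Run the Microsoft IIS image in the background,
# publishing container port 80 on host port 8080
docker run -d -p 8080:80 --name iisdemo microsoft/iis
```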

Below is the syntax of the Docker command:
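In general terms, the command follows this pattern:

```powershell
docker run [OPTIONS] IMAGE [COMMAND]
# -d          run the container in the background (detached)
# -p 8080:80  publish container port 80 on host port 8080
# --name      assign a friendly name to the container
```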

Containers use PAT (Port Address Translation). It means that you must expose container ports through the container host. In my example, Docker binds container port 80 to container-host port 8080. Then, when I open the IIS website located inside the container, I use the public port 8080.

The “name” parameter adds a friendly name to the container. It is not mandatory, but it can be useful for managing your containers later.

Finally, you must specify the container image name. Here, I choose the IIS image provided by Microsoft.

When I run the command, Windows checks whether the image is available locally (on the container host). If not, Docker retrieves the image from the Docker Hub.

When it’s done, your Windows container is running. My container host has the IP address 192.168.0.132, so my IIS website is available at http://192.168.0.132:8080

IIS Container

To Go Further

Image2Docker

Let’s say you have a server that has been lovingly hand-crafted and that you want to containerize. You can use the “Image2Docker” PowerShell module, available on GitHub at https://github.com/docker/communitytools-image2docker-win, which ports existing Windows application workloads from virtual machines to Docker images. With it, you can easily convert Windows services such as IIS websites, DNS, DHCP, … to Windows containers.
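A sketch of how the module can be used, assuming a virtual disk at the hypothetical path C:\VMs\iis-vm.vhdx that hosts an IIS workload:

```powershell
# Install and load the module from the PowerShell Gallery
Install-Module -Name Image2Docker
Import-Module -Name Image2Docker

# Inspect the disk image and generate a Dockerfile for the IIS artifact
ConvertTo-Dockerfile -ImagePath C:\VMs\iis-vm.vhdx -Artifact IIS `
    -OutputPath C:\i2d\iis
```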

Licensing

The official website contains relatively little information about licensing, but according to the Windows Server 2016 Licensing Datasheet, Standard Edition provides rights for up to two Hyper-V containers when all physical cores in the server are licensed, and both editions allow unlimited Windows Server containers.

Conclusion

Windows containers are already changing the way organizations build systems and deliver services. Containers are increasingly important for ensuring that developers and Ops don’t spend too much time deploying applications.

Container technology is not new in the Linux world, but for Microsoft it is a revolution. It is important to find the time, urgently, to understand the ins and outs of implementing containers in your organization.

We have seen, in this article, how to deploy our first Windows container. You will soon notice that Containers are wonderful for Developers and Administrators because containerization allows great flexibility of use and will simplify your deployments. There are still many things to be said about Containers, so in the next articles, I will explain:

  • The Docker commands to get started with Docker,
  • How to find and download container images,
  • How to use Hyper-V Containers,
  • How to create your own Container image,
  • How to convert Windows services to run in a Windows Container.

I hope that this article has helped you to increase your knowledge about Windows Containers.
