Use Docker Compose and Service Discovery on Windows to scale-out your multi-service container application


Article by Kallie Bracken and Jason Messer

The containers revolution popularized by Docker has come to Windows so that developers on Windows 10 (Anniversary Edition) or IT Pros using Windows Server 2016 can rapidly build, test, and deploy Windows “containerized” applications!

Based on community feedback, we have made several improvements to the Windows containers networking stack to enable multi-container, multi-service application scenarios. Support for Service Discovery and the ability to create (or re-use existing) networks are at the center of the improvements that were made to bring the efficiency of Docker Compose to Windows. Docker Compose enables developers to instantly build, deploy and scale-out their “containerized” applications running in Windows containers with just a few simple commands. Developers define their application using a ‘Compose file’ to specify the services, corresponding container images, and networking infrastructure required to run their application. Service Discovery itself is a key requirement to scale-out multi-service applications using DNS-based load-balancing and we are proud to announce support for Service Discovery in the most recent versions of Windows 10 and Windows Server 2016.

Take your next step in mastering development with Windows Containers, and keep letting us know what great capabilities you would like to see next!


When it comes to using Docker to manage Windows containers, with just a little background it’s easy to get simple container instances up and running. Once you’ve covered the basics, the next step is to build your own custom container images using Dockerfiles to install features, applications and other configuration layers on top of the Windows base container images. From there, the next step is to get your hands dirty building multi-tier applications, composed of multiple services running in multiple container instances. It’s here—in the modularization and scaling-out of your application—that Docker Compose comes in; Compose is the perfect tool for streamlining the specification and deployment of multi-tier, multi-container applications. Docker Compose registers each container instance by service name through the Docker engine thereby allowing containers to ‘discover’ each other by name when sending intra-application network traffic. Application services can also be scaled-out to multiple container instances using Compose. Network traffic destined to a multi-container service is then round-robin’d using DNS load-balancing across all container instances implementing that service.

This post walks through the process of creating and deploying a multi-tier blog application using Docker Compose (Compose file and application shown in Figure 1).


Figure 1: The Compose File used to create the blog application, including its BlogEngine.NET front-end (the ‘web’ service) and SQL Server back-end (the ‘db’ service).

Note: Docker Compose can be used to scale-out applications on a single host which is the scope of this post. To scale-out your ‘containerized’ application across multiple hosts, the application should be deployed on a multi-node cluster using a tool such as Docker Swarm. Look for multi-host networking support in Docker Swarm on Windows in the near future.

The first tier of the application is an ASP.NET web app, BlogEngine.NET, and the back-end tier is a database built on SQL Server Express 2014. The database is created to manage and store blog posts from different users which are subsequently displayed through the Blog Engine app.

New to Docker or Windows Containers?

This post assumes familiarity with the basics of Docker, Windows containers and ‘containerized’ ASP.NET applications. Here are some good places to start if you need to brush up on your knowledge:

Setup

System Prerequisites

Before you walk through the steps described in this post, check that your environment meets the following requirements and has the most recent versions of Docker and Windows updates installed:

  • Windows 10 Anniversary Edition (Professional or Enterprise) or Windows Server 2016
    Windows Containers requires your system to have critical updates installed. Check your OS version by running winver.exe, and ensure you have installed the latest KB 3192366 and/or Windows 10 updates.
  • The latest version of Docker-Compose (available with Docker-for-Windows) must be installed on your system.

NOTE: The current version of Docker Compose on Windows requires that the Docker daemon be configured to listen on a TCP socket for new connections. A Pull Request (PR) to fix this issue is in review and will be merged soon. For now, please ensure that you do the following:

Please configure the Docker Engine by adding a “hosts” key to the daemon.json file (example shown below) following the instructions here. Be sure to restart the Docker service after making this change.

{
…
"hosts":["tcp://0.0.0.0:2375", “npipe:////./pipe/win_engine"]
…
}

When running docker-compose, you will either need to explicitly reference the host port by adding the option "-H tcp://localhost:2375" to the command (e.g. docker-compose -H "tcp://localhost:2375" up), or set your DOCKER_HOST environment variable to always use this port (e.g. $env:DOCKER_HOST = "tcp://localhost:2375").
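
For convenience, you can set the environment variable once in the PowerShell session you will use for the rest of this walkthrough (the path in the prompt is just the working directory used in this post):

PS C:\build> $env:DOCKER_HOST = "tcp://localhost:2375"
PS C:\build> docker ps

If docker ps returns without an error, the engine is reachable over the TCP endpoint.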

Blog Application Source with Compose and Dockerfiles

This blog application is based on the Blog Engine ASP.NET web app available publicly here: http://www.dnbe.net/docs/.  To follow this post and build the described application, a complete set of files is available on GitHub. Download the Blog Application files from GitHub and extract them to a location on your machine, e.g. the ‘C:\build’ directory.

The blog application directory includes:

  • A ‘web’ folder that contains the Dockerfile and resources that you’ll need to build the image for the blog application’s ASP.NET front-end.
  • A ‘db’ folder that contains the Dockerfile and resources that you’ll need to build the blog application’s SQL database back-end.
  • A ‘docker-compose.yml’ file that you will use to build and run the application using Docker Compose.

The top-level of the blog application source folder is the main working directory for the directions in this post. Open an elevated PowerShell session and navigate there now – e.g.

PS C:\> cd c:\build\

The Blog Application Container Images

Database Back-End Tier: The ‘db’ Service

The database back-end Dockerfile is located in the ‘db’ sub-folder of the blog application source files and can be referenced here: The Blog Database Dockerfile. The main function of this Dockerfile is to run two scripts over the Windows Server Core base OS image to define a new database as well as the tables required by the BlogEngine.NET application.

The SQL scripts referenced by the Dockerfile to construct the blog database are included in the ‘db’ folder, and copied from host to container when the container image is created so that they can be run on the container.

BlogEngine.NET Front-End

The BlogEngine.NET Dockerfile is in the ‘web’ sub-folder of the blog application source files.

This Dockerfile refers to a PowerShell script (buildapp.ps1) that does the majority of the work required to configure the web service image. The buildapp.ps1 PowerShell Script obtains the BlogEngine.NET project files using a download link from Codeplex, configures the blog application using the default IIS site, grants full permission over the BlogEngine.NET project files (something that is required by the application) and executes the commands necessary to build an IIS web application from the BlogEngine.NET project files.

After running the script to obtain and configure the BlogEngine.NET web application, the Dockerfile finishes by copying the Web.config file included in the ‘web’ sub-folder to the container, to overwrite the file that was downloaded from Codeplex. The config file provided has been altered to point the ‘web’ service to the ‘db’ back-end service.

Streamlining with Docker Compose

When dealing with only one or two independent containers, it is simple to use the ‘docker run’ command to create and start a container image. However, as soon as an application begins to gain complexity, perhaps by including several inter-dependent services or by deploying multiple instances of any one service, the notion of configuring and running that app “manually” becomes impractical. To simplify the definition and deployment of an application, we can use Docker Compose.

A Compose file is used to define our “containerized” application using two services—a ‘web’ service and a ‘db’ service.  The blog application’s Compose File (available here for reference) defines the ‘web’ service which runs the BlogEngine.NET web front-end tier of the application and the ‘db’ service which runs the SQL Server 2014 Express back-end database tier. The compose file also handles network configuration for the blog application (with both application-level and service-level granularity).

Something to note in the blog application Compose file, is that the ‘expose’ option is used in place of the ‘ports’ option for the ‘db’ service. The ‘ports’ option is analogous to using the ‘-p’ argument in a ‘docker run’ command, and specifies HOST:CONTAINER port mapping for a service. However, this ‘ports’ option specifies a specific container host port to use for the service thereby limiting the service to only one container instance since multiple instances can’t re-use the same host port. The ‘expose’ option, on the other hand, can be used to define the internal container port with a dynamic, external port selected automatically by Docker through the Windows Host Networking Service – HNS. This allows for the creation of multiple container instances to run a single service; where the ‘ports’ option requires that every container instance for a service be mapped as specified, the ‘expose’ option allows Docker Compose to handle port mapping as required for scaled-out scenarios.

The ‘networks’ key in the Compose file specifies the network to which the application services will be connected. In this case, we define the default network for all services to use as external, meaning a network will not be created by Docker Compose. The ‘nat’ network referenced is the default NAT network created by the Docker Engine when Docker is originally installed.
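
Figure 1 shows the actual Compose file from the project; as a rough sketch only (the build paths, the SQL port exposed by the ‘db’ service and the exact ‘web’ port mapping are illustrative assumptions here, so use the file from the GitHub repo when following along), its general shape is:

version: '2'
services:
  web:
    build: ./web
    ports:
      - "80:80"
  db:
    build: ./db
    expose:
      - "1433"
networks:
  default:
    external:
      name: nat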

‘docker-compose build’

In this step, Docker Compose is used to build the blog application. The Compose file references the Dockerfiles for the ‘web’ and ‘db’ services and uses them to build the container image for each service.

From an elevated PowerShell session, navigate to the top level of the Blog Application directory. For example,

cd C:\build\

Now use Docker Compose to build the blog application:

docker-compose build

‘docker-compose up’

Now use Docker Compose to run the blog application:

docker-compose up

This will cause a container instance to be run for each application service. Execute the following command to confirm that the blog application is now up and running.

docker-compose ps

You can access the blog application through a browser on your local machine, as described below.

Define Multiple, Custom NAT Networks

In previous Windows Server 2016 technical previews, Windows was limited to a single NAT network per container host. While this is still technically the case, it is possible to define custom NAT networks by segmenting the default NAT network’s large, internal prefix into multiple subnets.

For instance, if the default NAT internal prefix was 172.31.211.0/20, a custom NAT network could be carved out from this prefix. The ‘networks’ section in the Compose file could be replaced with the following:

networks:
  default:
    driver: nat
    ipam:
      driver: default
      config:
      - subnet: 172.31.212.0/24

This would create a user-defined NAT network with a user-defined IP subnet prefix (in this case, 172.31.212.0/24). The ipam option is used to specify this custom IPAM configuration.

Note: Ensure that any custom nat network defined is a subset of the larger nat internal prefix previously created. To obtain your host nat network’s internal prefix, run ‘docker network inspect nat’.
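
For example, a Go-template format string pulls just the subnet out of the inspect output (the value returned will vary from host to host):

PS C:\> docker network inspect nat -f "{{range .IPAM.Config}}{{.Subnet}}{{end}}"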

View the Blog Application

Now that the containers for the ‘web’ and ‘db’ services are running, the blog application can be accessed from the local container host using the internal container IP and port (80). Use the command docker inspect <web container instance> to determine this internal IP address.
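
For example, assuming the ‘web’ container is attached to the default ‘nat’ network, a format string returns the address directly:

PS C:\> docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" <web container instance>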

To access the application, open an internet browser on the container host and navigate to the container’s IP address with “/BlogEngine/” appended. For instance, you might enter: http://172.16.12.216/BlogEngine

To access the application from an external host that is connected to the container host’s network, you must use the Container Host IP address and mapped port of the web container. The mapped port of the web container endpoint is displayed from docker-compose ps or docker ps commands. For instance, you might enter: http://10.123.174.107:3658/BlogEngine

The blog application may take a moment to load, but soon your browser should present the following page.

[Screenshot: the blog application’s home page]

Taking Advantage of Service Discovery

Built in to Docker is Service Discovery, which offers two key benefits: service registration and service name to IP (DNS) mapping. Service Discovery is especially valuable in the context of scaled-out applications, as it allows multi-container services to be discovered and referenced in the same way as single container services; with Service Discovery, intra-application communication is simple and concise—any service can be referenced by name, regardless of the number of container instances that are being used to run that service.

Service registration is the piece of Service Discovery that makes it possible for containers/services on a given network to discover each other by name. As a result of service registration, every application service is registered with a set of internal IP addresses for the container endpoints that are running that service. With this mapping, DNS resolution in the Docker Engine responds to any application endpoint seeking to communicate with a given service by sending a randomly ordered list of the container IP addresses associated with that service. The DNS client in the requesting container then chooses one of these IPs for container-container communication. This is referred to as DNS load-balancing.

Through DNS mapping Docker abstracts away the added complexity of managing multiple container endpoints; because of this piece of Service Discovery a single service can be treated as an atomic entity, no matter how many container instances it has running behind the scenes.

Note: For further context on Service Discovery, visit this Docker resource. However, note that Windows does not support the “--link” option.

Scale-Out with ‘docker-compose scale’


While the service registration benefit of Service Discovery is leveraged by an application even when one container instance is running for each application service, a scaled-out scenario is required for the benefit of DNS load-balancing to truly take effect.

To run a scaled-out version of the blog application, use the following command (either in place of ‘docker-compose up’ or even after the compose application is up and running). This command will run the blog application with one container instance for the ‘web’ service and three container instances for the ‘db’ service.

docker-compose scale web=1 db=3

Recall that the docker-compose.yml file provided with the blog application project files does not allow for scaling multiple instances of the ‘web’ service. To scale the web service, the ‘ports’ option for the web service must be replaced with the ‘expose’ option. However, without a load-balancer in front of the web service, a user would need to reference individual container endpoint IPs and mapped ports for external access into the web front-end of this application. An improvement to this application would be to use volume mapping so that all ‘db’ container instances reference the same SQL database files. Stay tuned for a follow-on post on these topics.

Service Discovery in action

In this step, Service Discovery will be demonstrated through a simple interaction between the ‘web’ and ‘db’ application services. The idea here is to ping different instances of the ‘db’ service to see that Service Discovery allows it to be accessed as a single service, regardless of how many container instances are implementing the service.

Before you begin: Run the blog application using the ‘docker-compose scale’ instruction described above.

Return to your PowerShell session, and run the following command to ping the ‘db’ back-end service from your web service. Notice the IP address from which you receive a reply.

docker run blogengine ping db

Now run the ping command again, and notice whether or not you receive a reply from a different IP address (i.e. a different ‘db’ container instance).*

docker run blogengine ping db

The image below demonstrates the behavior you should see. After pinging 2-3 times, you should receive replies from at least two different ‘db’ container instances:

[Screenshot: PowerShell output showing ping replies from two different ‘db’ container IP addresses]

* There is a chance that Docker will return the set of IPs making up the ‘db’ service in the same order as your first request. In this case, you may not see a different IP address. Repeat the ping command until you receive a reply from a new instance.

Technical Note: Service Discovery implemented in Windows

On Linux, the Docker daemon starts a new thread in each container namespace to catch service name resolution requests. These requests are sent to the Docker engine which implements a DNS resolver and responds back to the thread in the container with the IP address/es of the container instance/s which correspond to the service name.

In Windows, service discovery is implemented differently due to the need to support both Windows Server Containers (shared Windows kernel) and Hyper-V Containers (isolated Windows kernel). Instead of starting a new thread in each container, the primary DNS server for the Container endpoint’s IP interface is set to the default gateway of the (NAT) network. A request to resolve the service name will be sent to the default gateway IP where it is caught by the Windows Host Networking Service (HNS) in the container host. The HNS service then sends the request to the Docker engine which replies with the IP address/es of the container instance/s for the service. HNS then returns the DNS response to the container.


Linux Integration Services Download 4.1.2-2 hotfix


We’ve just published a hotfix release of the Linux Integration Services download, version 4.1.2-2.

This release addresses two critical issues:

“Do not lose pending heartbeat vmbus packets” (for versions 5.x, 6.x, 7.x)
Hyper-V hosts can be configured to send “heartbeat” packets to guests to see if they are active, and to reboot them when they do not respond. These heartbeat packets can queue up while a guest is paused, expecting a response when the guest is re-activated, for example when a guest is moved by live migration. This fix corrects a problem where some of these packets could be dropped, leading the host to reboot an otherwise healthy guest.

“Exclude UDP ports in RSS hashing” (for version 6.x, 7.x)
While improving network performance by taking advantage of host-supported offloads we had introduced a problem with UDP workloads on Azure. This change fixes excessive UDP packet loss in this scenario.

Linux Integration Services 4.1.2-2 can be downloaded here.

Allowing an additional host to run a VM with virtual TPM


Recently a colleague got a new PC and asked me how he could migrate his existing virtual machines to his new system.  Because he had enabled a virtual Trusted Platform Module (TPM) on these VMs, he wasn’t sure how to proceed. This is also a common scenario when moving VMs to a guarded fabric.

TPMs are an established and standardized technology which can be used for different purposes around system trustworthiness and identity. For example, they can be used to ensure the OS’s boot loader and boot configuration have not been tampered with before unsealing a BitLocker encrypted disk, or to provide a strong system identity based on hardware. Virtual TPMs bring these great capabilities to virtual machines running on Windows 10 1511 and Windows Server 2016 hosts or newer.

To protect the virtual TPM’s state, it is stored encrypted. This means some keys must be updated so the VM can run on the destination system. The overall process involves two basic steps before moving the VM to the new host:

  1. Importing the destination system’s guardian information on the source host
  2. Updating the virtual machine’s key protector

Importing the destination system’s guardian

First, the guardian information for the destination system or fabric must be exported. If you plan to authorize a guarded fabric, please make sure the destination hosts are properly configured with the Host Guardian Service information. Also, note that if you run this on a host in a guarded fabric, each host that is part of this guarded fabric will be able to run the virtual machine once the key protector is updated. If in doubt, ask your administrator.

The following script snippet can be used to export guardian information from a destination host by simply running it on this host.

If the destination host is part of a guarded fabric, the Host Guardian Service’s data is written to the file. Otherwise, a local guardian with the default name is created (if it does not already exist) and then exported.
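
As a minimal sketch of such a snippet for the local (non-guarded-fabric) case (it assumes the HgsClient PowerShell module, uses “UntrustedGuardian”, the default name for a local guardian, and writes to a placeholder path):

# Run on the destination host in an elevated PowerShell session
$guardian = Get-HgsGuardian -Name "UntrustedGuardian" -ErrorAction SilentlyContinue
if (-not $guardian) {
    # Create a local guardian (with self-signed certificates) if one does not exist yet
    $guardian = New-HgsGuardian -Name "UntrustedGuardian" -GenerateCertificates
}
# Write the guardian metadata to a file that can be copied to the source host
Export-HgsGuardian -InputObject $guardian -Path "C:\temp\DestinationGuardian.xml"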

On the source host, run this command in an administrative PowerShell to import the guardian information which was previously exported.
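
A sketch of that import might look like this (the guardian name is arbitrary, and -AllowUntrustedRoot is needed when the exported guardian is backed by self-signed certificates, as a default local guardian is):

# Run on the source host in an elevated PowerShell session
Import-HgsGuardian -Path "C:\temp\DestinationGuardian.xml" -Name "DestinationHost" -AllowUntrustedRoot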

Updating the virtual machine’s key protector

With the destination system’s guardian information present on the source system, each virtual machine’s key protector can now be updated to include the new guardian.

For this step, the assumption is that the source system is running in local mode and the right guardian information is present. If you are running on Windows 10 and can start your VM with a virtual TPM, this should be the case.

The script loops through all VMs with an enabled vTPM and adds the guardian for the destination system exported above.
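
A sketch of that loop, assuming the guardian was imported with the name used above and using cmdlets from the Hyper-V and HgsClient modules:

# Run on the source host in an elevated PowerShell session
$newGuardian = Get-HgsGuardian -Name "DestinationHost"
Get-VM | Where-Object { (Get-VMSecurity -VM $_).TpmEnabled } | ForEach-Object {
    # Read the VM's current key protector and grant the destination guardian access to it
    $kp = ConvertTo-HgsKeyProtector -Bytes (Get-VMKeyProtector -VM $_)
    $updatedKp = Grant-HgsKeyProtectorAccess -KeyProtector $kp -Guardian $newGuardian
    Set-VMKeyProtector -VM $_ -KeyProtector $updatedKp.RawData
}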

Finishing up

Finally, the virtual machines can be exported on the source host and imported on the destination host. You should be good to start the VMs.

Hope this helps,

Lars

Linux Integration Services Download 4.1.3


We’ve just published an update for the Linux Integration Services download. This release includes a series of upstream updates and adds compatibility with Red Hat Enterprise Linux, CentOS, and Oracle Linux RHCK 7.3.

The LIS 4.1.3 download is available from the Microsoft Download Center.

The LIS download is not required for Linux distributions that have built-in LIS, as described in “Which Linux Integration Services should I use in my Linux VMs?”

Linux Integration Services is an open source project that is part of the Linux Kernel, and we welcome public involvement with the LIS download on GitHub.

Cool new things for Hyper-V on Windows 10


Insider build 15002 is now available for Fast Ring Windows Insiders. In it, you’ll find a few improvements in Hyper-V for Windows 10 users:

  • A new virtual machine Quick Create experience (work in progress).
  • More aggressive memory allocation for starting virtual machines.  This is especially useful for anyone using emulators in Visual Studio or static memory virtual machines.

Check it out and send feedback!

Virtual machine Quick Create


Hyper-V Manager has a new single-page wizard that makes it faster and easier to create virtual machines.  You can access it through a new “Quick Create…” button (1).

Quick Create focuses on getting the guest operating system up and running.  It automatically creates the virtual hardware necessary to run the guest operating system (2), including a virtual switch!  Since many desktop users see internet access in the virtual machine as essential, we added the option to create an external switch (3) directly to the new virtual machine experience.

Quick Create is still under active development – try it out and please leave feedback!

Changes in memory allocation

Starting in build 15002, we changed how Hyper-V on Windows 10 allocates memory for starting virtual machines.

In the past, when you started a virtual machine, Hyper-V allocated memory very conservatively.  As an example, we maintained reserved memory for the Hyper-V host (root memory reserve), so even if Task Manager showed 2 GB of free memory, Hyper-V wouldn’t use it for virtual machines.  Hyper-V also wouldn’t ask applications to release unused memory (trim).  Conservative memory allocation makes sense in a hosting environment where not many applications run on the Hyper-V host and the ones that do are high priority – it doesn’t make much sense for Windows 10 and desktop virtualization.

In Windows 10, you’re probably running several applications (web browsers, text editors, chat clients, etc) and most of them will reserve more memory than they’re actively using.  With these changes, Hyper-V starts allocating memory in small chunks (to give the operating system a chance to trim memory from other applications) and will use all available memory (no root reserve).  Which isn’t to say you’ll never run out of memory but now the amount of memory shown in task manager accurately reflects the amount available for starting virtual machines.

Note:  For people using Hyper-V with device emulators in Visual Studio – the emulator does have overhead so you will need at least 200MB more RAM available than the emulator you’re starting suggests (i.e. a 512MB emulator actually needs closer to 700MB available to start successfully).

I’ll post a follow up blog going into more nitty gritty details on this later.

Have fun making virtual machines!

Cheers,
Sarah

A closer look at VM Quick Create



Author: Andy Atkinson

In the last Insiders build, we introduced Quick Create to quickly create virtual machines with less configuration (see blog).


We’re trying a few things to make it easier to set up a virtual machine, such as combining installation options into a single field for all supported file types, and adding a control to enable Windows Secure Boot more easily.


Quick Create can also help set up your network. If there’s no available switch, you’ll see a button to set up an “automatic network”, which will automatically configure an external switch for the virtual machine and connect it to the network.

To simplify the number of settings, we had to pick some good default settings for the virtual machine, which are currently:

  • Generation: 2
  • StartupRAM: 1024 MB
  • DynamicRAM: Enabled
  • Virtual Processors: 1

After the virtual machine is created, you will see the confirmation page with quick access to edit settings or to connect.


Are there other controls you want in Quick Create? Are we picking good defaults?

This is still a work in progress, so let us know what you think!

– Andy

Introducing the Host Compute Service (HCS)


Summary

This post introduces a low level container management API in Hyper-V called the Host Compute Service (HCS).  It tells the story behind its creation, and links to a few open source projects that make it easier to use.

Motivation and Creation

Building a great management API for Docker was important for Windows Server Containers.  There’s a ton of really cool low-level technical work that went into enabling containers on Windows, and we needed to make sure they were easy to use.  This seems very simple, but figuring out the right approach was surprisingly tricky.

Our first thought was to extend our existing management technologies (e.g. WMI, PowerShell) to containers.  After investigating, we concluded that they weren’t optimal for Docker, and started looking at other options.

Next, we considered mirroring the way Linux exposes containerization primitives (e.g. control groups, namespaces, etc.).  Under this model, we could have exposed each underlying feature independently, and asked Docker to call into them individually.  However, there were a few questions about that approach that caused us to consider alternatives:

  1. The low level APIs were evolving (and improving) rapidly.  Docker (and others) wanted those improvements, but also needed a stable API to build upon.  Could we stabilize the underlying features fast enough to meet our release goals?
  2. The low level APIs were interesting and useful because they made containers possible.  Would anyone actually want to call them independently?

After a bit of thinking, we decided to go with a third option.  We created a new management service called the Host Compute Service (HCS), which acts as a layer of abstraction above the low level functionality.  The HCS was a stable API Docker could build upon, and it was also easier to use.  Making a Windows Server Container with the HCS is just a single API call.  Making a Hyper-V Container instead just means adding a flag when calling into the API.  Figuring out how those calls translate into actual low-level implementation is something the Hyper-V team has already figured out.

[Diagrams: container management architecture on Linux and on Windows]

Getting Started with the HCS

If you think this is nifty, and would like to play around with the HCS, here’s some information to help you get started.  Instead of calling our C API directly, I recommend using one of the friendly wrappers we’ve built around the HCS.  These wrappers make it easy to call the HCS from higher level languages, and are released open source on GitHub.  They’re also super handy if you want to figure out how to use the C API.  We’ve released two wrappers thus far.  One is written in Go (and used by Docker), and the other is written in C#.

You can find the wrappers here:

If you want to use the HCS (either directly or via a wrapper), or you want to make a Rust/Haskell/InsertYourLanguage wrapper around the HCS, please drop a comment below.  I’d love to chat.

For a deeper look at this topic, I recommend taking a look at John Stark’s DockerCon presentation: https://www.youtube.com/watch?v=85nCF5S8Qok

John Slack
Program Manager
Hyper-V Team

No more “out of memory” errors for Windows Phone emulators in Windows 10 (unless you’re really out of memory)


For those of you who run emulators in Visual Studio, you may be familiar with an annoying error:

[Screenshot: the emulator “out of memory” error dialog]

It periodically pops up even when task manager reports enough available memory – this is especially true for machines with less than 8GB RAM.  Most of the time, it’s because there genuinely isn’t enough memory available but sometimes it’s because of Hyper-V’s root memory reserve (discussed in KB2911380).

This blog will tell you what the root memory reserve is, why it exists, and why you shouldn’t need it on Windows 10 starting in build 15002 (original announcement here).  I also wrote a mini script to clear the registry key that controls root memory reserve if you think it may be set on your system.

So, what is the root memory reserve and why is it there?

Root memory reserve is the memory Hyper-V sets aside to make sure there will always be enough available for the host to run well.

We change Hyper-V host memory management periodically based on feedback and new technology (things like dynamic memory and changes in clustering).  The root memory reserve is only one piece of that equation and even calculating that piece has several factors.  Modifying it is not supported but there is still a registry key available for times when the default isn’t appropriate for one reason or another.

KB2962295 basically describes measuring, monitoring, and modifying the root reserve.

KB2911380 tells you how to manually set it.

And now I’m here to tell you to remove it!

Why you don’t need root memory reserve any more.

We stopped using a root memory reserve in favor of other memory management tools in Windows 10.  The things that make it necessary are unique to server environments (clustering, service level agreements…).

However, while the default memory management settings on Windows Server now differ from those of Hyper-V on Windows 10, Hyper-V will still respect a root reserve if one is set on Windows 10, and you won’t see any of the memory management changes we made.  Which is why now is the time to clear that custom root memory reserve.
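
The mini script mentioned above boils down to deleting the MemoryReserve value described in KB2911380; here is a sketch (run it in an elevated PowerShell session, and reboot afterwards so the virtualization stack picks up the change):

$path = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization"
if (Get-ItemProperty -Path $path -Name MemoryReserve -ErrorAction SilentlyContinue) {
    # Remove the custom root memory reserve so the Windows 10 defaults apply
    Remove-ItemProperty -Path $path -Name MemoryReserve
    "MemoryReserve cleared."
} else {
    "No custom MemoryReserve value is set."
}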

 

Cheers,
Sarah


Introducing VMConnect dynamic resize


Starting in the latest Insider’s build, you can resize the display for a session in Virtual Machine Connection just by dragging the corner of the window.


When you connect to a VM, you’ll still see the normal options which determine the size of the window and the resolution to pass to the virtual machine:

[Screenshot: the standard VMConnect display configuration options]

Once you log in, you can see that the guest OS is using the specified resolution, in this case 1366 x 768.


Now, if we resize the window, the resolution in the guest OS is automatically adjusted. Neat!


Additionally, the system DPI settings are passed to the VM. If I change my scaling factor on the host, the VM display will scale as well.

There are 2 requirements for dynamic resizing to work:

  • You must be running in Enhanced session mode
  • You must be fully logged in to the guest OS (it won’t work on the lockscreen)

 

This remains a work in progress, so we would love to hear your thoughts.

-Andy

 

 

 

 

Live Migration via Constrained Delegation with Kerberos in Windows Server 2016


Introduction

Many Hyper-V customers have run into new challenges when trying to use constrained delegation with Kerberos to Live Migrate VMs in Windows Server 2016.  When attempting to migrate, they would see errors with messages like “no credentials are available in the security package,” or “the Virtual Machine Management Service failed to authenticate the connection for a Virtual Machine migration at the source host: no suitable credentials available.”  After investigating, we have determined the root cause of the issue and have updated guidance for how to configure constrained delegation.

Fixing This Issue

Resolving this issue is a simple configuration change in Active Directory when setting up constrained delegation.  In our documentation, when you reach the fifth instruction in Step 1, select “use any authentication protocol” instead of “use Kerberos only.”  The other instructions have not changed.

[Screenshot: constrained delegation properties with “Use any authentication protocol” selected]

Root Cause

Warning: the next two sections go a bit deep into the internal workings of Hyper-V.

The root cause of this issue is an under-the-hood change in Hyper-V remoting.  Between Windows Server 2012 R2 and Windows Server 2016, we shifted from using the Hyper-V WMI Provider v1 over DCOM to the Hyper-V WMI Provider v2 over WinRM.  This is a good thing: it unifies Hyper-V remoting with other Windows remoting tools (e.g. PowerShell Remoting).  This change matters for constrained delegation because:

  1. WinRM runs as NETWORK SERVICE, while the Virtual Machine Management Service (VMMS) runs as SYSTEM.
  2. The way WinRM does inbound authentication stores the nice, forwardable Kerberos ticket in a location that is unavailable to NETWORK SERVICE.

The net result is that WinRM cannot access the forwardable Kerberos ticket, and the Live Migration fails on Windows Server 2016.  After exploring possible solutions, the best (and fastest) option here is to enable “protocol transition” by changing the constrained delegation configuration as above.

How does this impact security?

You may think this approach is less secure, but in practice, the impact is debatable.

When Kerberos Constrained Delegation (KCD) is configured to “use Kerberos only,” the system performing delegation must possess a Kerberos service ticket from the delegated user as evidence that it is acting on behalf of that user.  By switching KCD to “use any authentication protocol”, that requirement is relaxed such that a service ticket acquired via Kerberos S4U logon is acceptable.  This means that the delegating service is able to delegate an account without direct involvement of the account owner.  While enabling the use of any protocol — often referred to as “protocol transition” — is nominally less secure for this reason, the difference is marginal due to the fact that the disabling of protocol transition provides no security promise.  Single-sign-on authentication between systems sharing a domain network is simply too ubiquitous to treat an inbound service ticket as proof of anything.  With or without protocol transition, the only secure way to limit the accounts that the service is permitted to delegate is to mark those accounts with the “account is sensitive and cannot be delegated” bit.

Documentation

We’re working on modifying our documentation to reflect this change.

John Slack
Hyper-V Team PM

Editing VMConnect session settings


When you connect to a VM with Virtual Machine Connection in enhanced session mode, you’re prompted to choose some settings for display and local resources.


The main thing that changes between sessions is usually display configuration. But since you can now resize after connecting starting in the latest Insider build, you might not want to see this page each time you connect. You can select  “Save my settings for future connections to this virtual machine” and you won’t see this page for future sessions. 


 

However, you might want to occasionally configure local resources like audio and devices, so there are 2 easy ways to get back to these settings:

  1. In Hyper-V Manager, you will see an option to “Edit Session Settings…” for any VM for which you have saved settings.

  2. Open VMConnect from the command line or PowerShell, and specify the /edit flag to open the session settings (see the example below).

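For example, a hypothetical invocation for a VM named “MyVM” on the local host would be:

PS C:\> vmconnect.exe localhost "MyVM" /edit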

Cheers,
Andy

Fun fact: Quick Create handles emoji in virtual machine names and splices them into simple Unicode



I was playing with Windows 10’s on-screen keyboard and discovered the emoticons section.  Specifically, I found this awesome set of cat emojis.


WindowsKitty definitely needed to be a VM Name.  It even has a laptop!  Luckily, it turns out, the Quick Create option we added recently handles emoji beautifully.


Not only does the VM name look great in Quick Create with the crazy Windows 10 emoji, Windows also splices them into simpler Unicode representations for Hyper-V Manager and the file system.  I was really enjoying seeing what the simplified Unicode would be – in this case, cat + computer.


Which begs the question, how do emoji VM names look in PowerShell?


Unfortunately, not so good – maybe someday.

In conclusion, if you don’t need PowerShell scripting (or love referencing VMs via GUID) maybe emoji names are for you.  It makes me smile, at least.

For further reading, check out this blog post about how Windows 10 rethinks how we treat emoji.

Have fun!

Sarah

Overlay Network Driver with Support for Docker Swarm Mode Now Available to Windows Insiders on Windows 10


Windows 10 Insiders can now take advantage of overlay networking and Docker swarm mode  to manage containerized applications in both single-host and clustering scenarios.

Containers are a rapidly growing technology, and as they evolve so must the technologies that support them as members of a broader collection of compute, storage and networking infrastructure components. For networking, in particular, this means continually striving to achieve better connectivity, higher reliability and easier management for container networking. Less than six months ago, Microsoft released Windows 10 Anniversary Edition and Windows Server 2016, and even as our first versions of Windows with container support were being celebrated we were already hard at work on new container features, including several container networking features.

Our last Windows release showcased Docker Compose and service discovery—two key features for single-host container deployment and networking scenarios. Now, we’re expanding the reach of Windows container networking to multi-host (clustering) scenarios with the addition of a native overlay network driver and support for Docker swarm mode, available today to Windows Insiders as part of the upcoming Windows 10, Creators Update.

Docker swarm mode is Docker’s native orchestration tool, designed to simplify the experience of declaring, managing and scaling container services. The Windows overlay network driver (which uses VXLAN and virtual overlay networking technology) makes it possible to connect container endpoints running on separate hosts to the same, isolated network. Together, swarm mode and overlay enable easy management and complete scalability of your containerized applications, allowing you to leverage the full power of your infrastructure hosts.

What is “swarm mode”?

Swarm mode is a Docker feature that provides built in container orchestration capabilities, including native clustering of Docker hosts and scheduling of container workloads. A group of Docker hosts form a “swarm” cluster when their Docker engines are running together in “swarm mode.”

A swarm is composed of two types of container hosts: manager nodes, and worker nodes. Every swarm is initialized via a manager node, and all Docker CLI commands for controlling and monitoring a swarm must be executed from one of its manager nodes. Manager nodes can be thought of as “keepers” of the Swarm state—together, they form a consensus group that maintains awareness of the state of services running on the swarm, and it’s their job to ensure that the swarm’s actual state always matches its intended state, as defined by the developer or admin.

Note: Any given swarm can have multiple manager nodes, but it must always have at least one.

Worker nodes are orchestrated by Docker swarm via manager nodes. To join a swarm, a worker node must use a “join token” that was generated by the manager node when the swarm was initialized. Worker nodes simply receive and execute tasks from manager nodes, and so they require (and possess) no awareness of the swarm state.
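
Initializing a swarm and joining workers follows the standard Docker CLI workflow; the manager address below is only an example:

C:\> docker swarm init --advertise-addr 10.0.0.1

Then, on each worker node, use the join token printed by the command above:

C:\> docker swarm join --token <WORKER-JOIN-TOKEN> 10.0.0.1:2377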


Figure 1: A four-node swarm cluster running two container services on isolated overlay networks.

Figure 1 offers a simple visualization of a four-node cluster running in swarm mode, leveraging the overlay network driver. In this swarm, Host A is the manager node and Hosts B-D are worker nodes. Together, these manager and worker nodes are running two Docker services which are backed by a total of ten container instances, or “replicas.” The yellow in this figure distinguishes the first service, Service 1; the containers for Service 1 are connected by an overlay network. Similarly, the blue in this figure represents the second service, Service 2; the containers for Service 2 are also attached by an overlay network.

Note: In this case, the two Docker services happen to be connected by separate/isolated overlay networks. It is also possible, however, for multiple container services to be attached to the same overlay network.
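
Creating an overlay network and attaching a scaled service to it takes two commands on a manager node (the network, service and image names here are illustrative):

C:\> docker network create --driver overlay overlay1
C:\> docker service create --name service1 --network overlay1 --replicas 3 <your web image>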

Windows Network Stack Implementation

Under the covers, Swarm and overlay are enabled by enhancements to the Host Network Service (HNS) and Windows libnetwork plugin for the Docker engine, which leverage the Azure Virtual Filtering Platform (VFP) forwarding extension in the Hyper-V Virtual Switch. Figure 2 shows how these components work together on a given Windows container host, to enable overlay and swarm mode functionality.

Figure 2: Key components involved in enabling swarm mode and overlay networking on Windows container hosts.

The HNS overlay network driver plugin and VFP forwarding extension

Overlay networking was enabled with the addition of an overlay network driver plugin to the HNS service, which creates encapsulation rules using the VFP forwarding extension in the Hyper-V Virtual Switch; the HNS overlay plugin communicates with the VFP forwarding extension to perform the VXLAN encapsulation required to enable overlay networking functionality.

On Windows, the Azure Virtual Filtering Platform (VFP) is a software defined networking (SDN) element, installed as a programmable Hyper-V Virtual Switch forwarding extension. It is a shared component with the Azure platform, and was added to Windows 10 with Windows 10 Anniversary Edition. It is designed as a high performance, rule-flow based engine, to specify per-endpoint rules for forwarding, transforming, or blocking network traffic. The VFP extension has been used for implementing the l2bridge and l2tunnel Windows container networking modes and is now also used to implement the overlay networking mode. As we continue to expand container networking capabilities on Windows, we plan to further leverage the VFP extension to enable more fine-grained policy.

Enhancements to the Windows libnetwork plugin

Overlay networking support was the main hurdle that needed to be overcome to achieve Docker swarm mode support on Windows. Aside from that, additions also needed to be made to the Windows libnetwork Plugin—the plugin to the Docker engine that enables container networking functionality on Windows by facilitating communication between the Docker engine and the HNS service.

Load balancing: Windows routing mesh coming soon

Currently, Windows supports DNS Round-Robin load balancing between services. The routing mesh for Windows Docker hosts is not yet supported, but will be coming soon. Users seeking an alternative load balancing strategy today can set up an external load balancer (e.g. NGINX) and use Swarm’s publish-port mode to expose container host ports over which to load balance.
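
For example, publish-port mode can be requested per service at creation time (the service name, image name and ports here are illustrative):

C:\> docker service create --name web --publish mode=host,target=80,published=8080 <your web image>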

Boost your DevOps cycle and manage containers across Windows hosts by leveraging Docker swarm mode today

Together, Docker Swarm and support for overlay container networks enable multi-host scenarios and rapid scalability of your Windows containerized applications and services. This new support, combined with service discovery and the rest of the capabilities that you are used to leveraging in single-host configurations, makes for a clean and straight-forward experience developing containerized apps on Windows for multi-host environments.

To get started with Docker Swarm and overlay networking on Windows, start here .

The Datacenter and Cloud Networking team worked alongside our partners internally and at Docker to bring overlay networking mode and Docker swarm mode support to Windows. Again, this is an exciting milestone in our ongoing work to bring better container networking support to Windows users. We’re constantly seeking more ways to improve your experience working with containers on Windows, and it’s only with your feedback that we can best decide what to do next to enable you and your DevOps teams.

We encourage you to share your experiences, questions and feedback with us, to help us learn more about what you’re doing with container networking on Windows today, and to understand what you’d like to achieve in the future. Visit our Contact Page to learn more about the forums that you can use to be in touch with us.

How to give us feedback


We love hearing from you.  So what’s the best way to give us feedback?

The best way to report an issue or give a quick suggestion is the Feedback Hub on Windows 10 (Windows key + F to open it quickly). The feedback hub lets the product team see all of your feedback in one place, and allows other users to upvote and provide further comments. It’s also tightly integrated with our bug tracking and engineering processes, so that we can keep an eye on what users are saying and use this data to help prioritize fixes and feature requests, and so that you can follow up and see what we’re doing about it.

In the latest build, we have reintroduced the Hyper-V feedback category.

After typing your feedback, selecting “Show category suggestions” should help you find the Hyper-V category under Apps and Games. It looks like a couple people have already discovered the new category:

 

[Screenshot: Hyper-V category feedback items in the Feedback Hub]

When you put your feedback in the Hyper-V category, we are also able to collect relevant event logs to help diagnose issues. To provide more information about a problem that you can reproduce, hit “begin monitoring”, reproduce the issue, and then “stop monitoring”. This allows us to collect relevant diagnostic information to help reproduce and fix the problem.


We also love to hear from you in our forums if there are any issues you are running into. This is a good place to get direct help from the product group as well as community members. Hyper-V Forums


That’s all for now. Looking forward to seeing your feedback!

Cheers,
Andy

Linux Integration Services 4.1.3-2


Linux Integration Services has been updated to version 4.1.3-2 and is available from https://www.microsoft.com/en-us/download/details.aspx?id=51612

This is a minor update to correct the RPMs for a kernel ABI change in Red Hat Enterprise Linux, CentOS, and Oracle Linux’s Red Hat Compatible Kernel version 7.3. Version 3.10.0-514.10.2.el7 of the kernel was sufficiently different for symbol conflicts to break the LIS kernel modules and create a situation where a VM would not start correctly. This version of the modules is compatible with the new kernel.


Use NGINX to load balance across your Docker Swarm cluster

$
0
0

A practical walkthrough, in six steps

This basic example demonstrates NGINX and swarm mode in action, to provide the foundation for you to apply these concepts to your own configurations.

This document walks through several steps for setting up a containerized NGINX server and using it to load balance traffic across a swarm cluster. For clarity, these steps are designed as an end-to-end tutorial for setting up a three node cluster and running two docker services on that cluster; by completing this exercise, you will become familiar with the general workflow required to use swarm mode and to load balance across Windows Container endpoints using an NGINX load balancer.

The basic setup

This exercise requires three container hosts–two of which will be joined to form a two-node swarm cluster, and one which will be used to host a containerized NGINX load balancer. In order to demonstrate the load balancer in action, two docker services will be deployed to the swarm cluster, and the NGINX server will be configured to load balance across the container instances that define those services. The services will both be web services, hosting simple content that can be viewed via web browser. With this setup, the load balancer will be easy to see in action, as traffic is routed between the two services each time the web browser view displaying their content is refreshed.

The figure below provides a visualization of this three-node setup. Two of the nodes, the “Swarm Manager” node and the “Swarm Worker” node together form a two-node swarm mode cluster, running two Docker web services, “S1” and “S2”. A third node (the “NGINX Host” in the figure) is used to host a containerized NGINX load balancer, and the load balancer is configured to route traffic across the container endpoints for the two container services. This figure includes example IP addresses and port numbers for the two swarm hosts and for each of the six container endpoints running on the hosts.


System requirements

Three* or more computer systems running Windows 10 Creators Update (available today for members of the Windows Insiders program), set up as container hosts (see the topic, Windows Containers on Windows 10, for more details on how to get started with Docker containers on Windows 10).

Additionally, each host system should be configured with the following:

  • The microsoft/windowsservercore container image
  • Docker Engine v1.13.0 or later
  • Open ports: Swarm mode requires that the following ports be available on each host.
    • TCP port 2377 for cluster management communications
    • TCP and UDP port 7946 for communication among nodes
    • TCP and UDP port 4789 for overlay network traffic

*Note on using two nodes rather than three:
These instructions can be completed using just two nodes. However, currently there is a known bug on Windows which prevents containers from accessing their hosts using localhost or even the host’s external IP address (for more background on this, see Caveats and Gotchas below). This means that in order to access docker services via their exposed ports on the swarm hosts, the NGINX load balancer must not reside on the same host as any of the service container instances.
Put another way, if you use only two nodes to complete this exercise, one of them will need to be dedicated to hosting the NGINX load balancer, leaving the other to be used as a swarm container host (i.e. you will have a single-host swarm cluster and a host dedicated to your containerized NGINX load balancer).

Step 1: Build an NGINX container image

In this step, we’ll build the container image required for your containerized NGINX load balancer. Later we will run this image on the host that you have designated as your NGINX container host.

Note: To avoid having to transfer your container image later, complete the instructions in this section on the container host that you intend to use for your NGINX load balancer.

NGINX is available for download from nginx.org. An NGINX container image can be built using a simple Dockerfile that installs NGINX onto a Windows base container image and configures the container to run as an NGINX executable. For the purpose of this exercise, I’ve made a Dockerfile downloadable from my personal GitHub repo; access the NGINX Dockerfile here, then save it to some location (e.g. C:\temp\nginx) on your NGINX container host machine. From that location, build the image using the following command:

C:\temp\nginx> docker build -t nginx .

Now the image should appear with the rest of the docker images on your system (you can check this using the docker images command).

(Optional) Confirm that your NGINX image is ready

First, run the container:

C:\temp> docker run -it -p 80:80 nginx

Next, open a new command window and use the docker ps command to see that the container is running. Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

For example, your container’s IP address may be 172.17.176.155, as in the example output shown below.

[Screenshot: ipconfig output showing the container’s IP address]

Next, open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that NGINX is successfully running in your container.


 

Step 2: Build images for two containerized IIS Web services

In this step, we’ll build container images for two simple IIS-based web applications. Later, we’ll use these images to create two docker services.

Note: Complete the instructions in this section on one of the container hosts that you intend to use as a swarm host.

Build a generic IIS Web Server image

On my personal GitHub repo, I have made a Dockerfile available for creating an IIS Web server image. The Dockerfile simply enables the Internet Information Services (IIS) Web server role within a microsoft/windowsservercore container. Download the Dockerfile from here, and save it to some location (e.g. C:\temp\iis) on one of the host machines that you plan to use as a swarm node. From that location, build the image using the following command:

 C:\temp\iis> docker build -t iis-web .

(Optional) Confirm that your IIS Web server image is ready

First, run the container:

 C:\temp> docker run -it -p 80:80 iis-web

Next, use the docker ps command to see that the container is running. Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

Now open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that the IIS Web server role is successfully running in your container.

iisconfirmation

Build two custom IIS Web server images

In this step, we’ll replace the IIS landing/confirmation page that we saw above with custom HTML pages: two different pages, corresponding to two different web container images. In a later step, we’ll use our NGINX container to load balance across instances of these two images. Because the pages are different, we will easily see the load balancing in action as it shifts between the content served by container instances of the two images.

First, on your host machine, create a simple file called index_1.html. In the file, type any text. For example, your index_1.html file might look like this:

index1

Now create a second file, index_2.html. Again, type any text into the file. For example, your index_2.html file might look like this:

index2
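If you don’t have content handy, one-liners like the following work fine. The markup shown here is just an example; the only thing that matters is that the two pages are visibly different:

<!-- index_1.html -->
<html><body><h1>Hello from image web_1</h1></body></html>

<!-- index_2.html -->
<html><body><h1>Hello from image web_2</h1></body></html>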

Now we’ll use these HTML documents to make two custom web service images.

If the iis-web container instance that you just built is not still running, run a new one. Then get the ID of the running container using:

C:\temp> docker ps

Now, copy your index_1.html file from your host onto the IIS container instance that is running, using the following command:

C:\temp> docker cp index_1.html <CONTAINERID>:C:\inetpub\wwwroot\index.html

Next, stop and commit the container in its current state. This will create a container image for the first web service. Let’s call this first image, “web_1.”

C:\> docker stop <CONTAINERID>
C:\> docker commit <CONTAINERID> web_1

Now, start the container again and repeat the previous steps to create a second web service image, this time using your index_2.html file. Do this using the following commands:

C:\> docker start <CONTAINERID>
C:\> docker cp index_2.html <CONTAINERID>:C:\inetpub\wwwroot\index.html
C:\> docker stop <CONTAINERID>
C:\> docker commit <CONTAINERID> web_2

You have now created images for two unique web services; if you view the Docker images on your host by running docker images, you should see that you have two new container images—“web_1” and “web_2”.

Put the IIS container images on all of your swarm hosts

To complete this exercise you will need the custom web container images that you just created to be on all of the host machines that you intend to use as swarm nodes. There are two ways for you to get the images onto additional machines:

  • Option 1: Repeat the steps above to build the “web_1” and “web_2” containers on your second host.
  • Option 2 [recommended]: Push the images to your repository on Docker Hub, then pull them onto additional hosts (example commands are shown below).
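If you choose Option 2, the workflow looks roughly like this (<DOCKERID> is a placeholder for your Docker Hub account name, and the repository names are only examples):

# On the host where the images were built
C:\> docker login
C:\> docker tag web_1 <DOCKERID>/web_1
C:\> docker tag web_2 <DOCKERID>/web_2
C:\> docker push <DOCKERID>/web_1
C:\> docker push <DOCKERID>/web_2

# On each additional swarm host
C:\> docker pull <DOCKERID>/web_1
C:\> docker pull <DOCKERID>/web_2
# Re-tag locally so the image names match the ones used later in this exercise
C:\> docker tag <DOCKERID>/web_1 web_1
C:\> docker tag <DOCKERID>/web_2 web_2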

A note on Docker Hub:
Using Docker Hub is a convenient way to leverage the lightweight nature of containers across all of your machines, and to share your images with others. Visit the following Docker resources to get started with pushing/pulling images with Docker Hub:
Create a Docker Hub account and repository
Tag, push and pull your image

 

Step 3: Join your hosts to a swarm

As a result of the previous steps, one of your host machines should have the nginx container image, and the rest of your hosts should have the Web server images, “web_1” and “web_2”. In this step, we’ll join the latter hosts to a swarm cluster.

Note: The containerized NGINX load balancer cannot run on the same host as any of the container endpoints for which it is performing load balancing; the host with your nginx container image must be reserved for load balancing only. For more background on this, see Caveats and Gotchas below.

First, run the following command from any machine that you intend to use as a swarm host. The machine that you use to execute this command will become a manager node for your swarm cluster.

  • Replace <HOSTIPADDRESS> with the public IP address of your host machine
C:\temp> docker swarm init --advertise-addr=<HOSTIPADDRESS> --listen-addr <HOSTIPADDRESS>:2377

Now run the following command from each of the other host machines that you intend to use as swarm nodes, joining them to the swarm as worker nodes.

  • Replace <MANAGERIPADDRESS> with the public IP address of the manager node (i.e. the value of <HOSTIPADDRESS> that you used to initialize the swarm)
  • Replace <WORKERJOINTOKEN> with the worker join-token provided as output by the docker swarm init command (you can also obtain the join-token by running docker swarm join-token worker from the manager host)
C:\temp> docker swarm join --token <WORKERJOINTOKEN> <MANAGERIPADDRESS>:2377

Your nodes are now configured to form a swarm cluster! You can see the status of the nodes by running the following command from your manager node:

C:\temp> docker node ls

Step 4: Deploy services to your swarm

Note: Before moving on, stop and remove any NGINX or IIS containers running on your hosts. This will help avoid port conflicts when you define services. To do this, simply run the following commands for each container, replacing <CONTAINERID> with the ID of the container you are stopping/removing:

C:\temp> docker stop <CONTAINERID>
C:\temp> docker rm <CONTAINERID>

Next, we’re going to use the “web_1” and “web_2” container images that we created in previous steps of this exercise to deploy two container services to our swarm cluster. (The powershell sleep command appended to each service definition below simply keeps a long-running foreground process in each container so that the service tasks stay up.)

To create the services, run the following commands from your swarm manager node:

C:\ > docker service create --name=s1 --publish mode=host,target=80 --endpoint-mode dnsrr web_1 powershell -command {echo sleep; sleep 360000;}
C:\ > docker service create --name=s2 --publish mode=host,target=80 --endpoint-mode dnsrr web_2 powershell -command {echo sleep; sleep 360000;}

You should now have two services running, s1 and s2. You can view their status by running the following command from your swarm manager node:

C:\ > docker service ls

Additionally, you can view information on the container instances that define a specific service with the following commands (where <SERVICENAME> is the name of the service you are inspecting, for example s1 or s2):

# List all services
C:\ > docker service ls
# List info for a specific service
C:\ > docker service ps <SERVICENAME>

(Optional) Scale your services

The commands in the previous step will deploy one container instance/replica for each service, s1 and s2. To scale the services to be backed by multiple replicas, run the following command:

C:\ > docker service scale <SERVICENAME>=<REPLICAS>
# e.g. docker service scale s1=3

Step 5: Configure your NGINX load balancer

Now that services are running on your swarm, you can configure the NGINX load balancer to distribute traffic across the container instances for those services.

Of course, generally load balancers are used to balance traffic across instances of a single service, not multiple services. For the purpose of clarity, this example uses two services so that the function of the load balancer can be easily seen; because the two services are serving different HTML content, we’ll clearly see how the load balancer is distributing requests between them.

The nginx.conf file

First, the nginx.conf file for your load balancer must be configured with the IP addresses and service ports of your swarm nodes and services. An example nginx.conf file was included with the NGINX download that was used to create your nginx container image in Step 1. For the purpose of this exercise, I copied and adapted the example file provided by NGINX and used it to create a simple template for you to adapt with your specific node/container information.

Download the nginx.conf file template that I prepared for this exercise from my personal GitHub repo, and save it onto your NGINX container host machine. In this step, we’ll adapt the template file and use it to replace the default nginx.conf file that was originally downloaded onto your NGINX container image.

You will need to adjust the file by adding the information for your hosts and container instances. The template nginx.conf file provided contains the following section:

upstream appcluster {
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
 }

To adapt the file for your configuration, you will need to adjust the <HOSTIP>:<HOSTPORT> entries in the template config file. You will have an entry for each container endpoint that defines your web services. For any given container endpoint, the value of <HOSTIP> will be the IP address of the container host upon which that container is running. The value of <HOSTPORT> will be the port on the container host upon which the container endpoint has been published.

When the services, s1 and s2, were defined in the previous step of this exercise, the --publish mode=host,target=80 parameter was included. This parameter specified that the container instances for the services should be exposed via published ports on the container hosts. More specifically, by including --publish mode=host,target=80 in the service definitions, each service was configured to be exposed on port 80 of each of its container endpoints, as well as a set of automatically defined ports on the swarm hosts (i.e. one port for each container running on a given host).

First, identify the host IPs and published ports for your container endpoints

Before you can adjust your nginx.conf file, you must obtain the required information for the container endpoints that define your services. To do this, run the following commands (again, run these from your swarm manager node):

C:\ > docker service ps s1
C:\ > docker service ps s2

The above commands will return details on every container instance running for each of your services, across all of your swarm hosts.

  • One column of the output, the “ports” column, includes port information for each host of the form *:<HOSTPORT>->80/tcp. The values of <HOSTPORT> will be different for each container instance, as each container is published on its own host port.
  • Another column, the “node” column, will tell you which machine the container is running on. This is how you will identify the host IP information for each endpoint.

You now have the port information and node for each container endpoint. Next, use that information to populate the upstream field of your nginx.conf file; for each endpoint, add a server line to the upstream field of the file, replacing <HOSTIP> with the IP address of the node on which the container is running (if you don’t have this, run ipconfig on each swarm host machine to obtain it), and <HOSTPORT> with the corresponding host port.

For example, if you have two swarm hosts (IP addresses 172.17.0.10 and 172.17.0.11), each running three containers, your list of servers will end up looking something like this:

upstream appcluster {
     server 172.17.0.10:21858;
     server 172.17.0.11:64199;
     server 172.17.0.10:15463;
     server 172.17.0.11:56049;
     server 172.17.0.11:35953;
     server 172.17.0.10:47364;
}
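Keep in mind that the upstream block by itself does nothing; elsewhere in nginx.conf, a server block forwards incoming requests to the upstream group. A minimal version of the overall file looks something like the sketch below (your downloaded template may differ in detail):

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    upstream appcluster {
        # the server entries you filled in above
    }

    server {
        listen 80;

        location / {
            # Requests are distributed across the upstream servers, round-robin by default
            proxy_pass http://appcluster;
        }
    }
}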

Once you have changed your nginx.conf file, save it. Next, we’ll copy it from your host to the NGINX container image itself.

Replace the default nginx.conf file with your adjusted file

If your nginx container is not already running on its host, run it now:

C:\temp> docker run -it -p 80:80 nginx

Next, open a new command prompt window and use the docker ps command to see that the container is running. Note its ID; it is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

With the container running, use the following command to replace the default nginx.conf file with the file that you just configured (run the following command from the directory in which you saved your adjusted version of the nginx.conf on the host machine):

C:\temp> docker cp nginx.conf <CONTAINERID>:C:\nginx\nginx-1.10.3\conf

Now use the following command to reload the NGINX server running within your container:

C:\temp> docker exec <CONTAINERID> nginx.exe -s reload

Step 6: See your load balancer in action

Your load balancer should now be fully configured to distribute traffic across the various instances of your swarm services. To see it in action, open a browser and

  • If accessing from the NGINX host machine: Type the IP address of the nginx container running on the machine into the browser address bar. (This is the IP address you obtained earlier with the docker exec <CONTAINERID> ipconfig command.)
  • If accessing from another host machine (with network access to the NGINX host machine): Type the IP address of the NGINX host machine into the browser address bar.

Once you’ve typed the applicable address into the browser address bar, press enter and wait for the web page to load. Once it loads, you should see one of the HTML pages that you created in step 2.

Now press refresh on the page. You may need to refresh more than once, but after just a few times you should see the other HTML page that you created in step 2.

If you continue refreshing, you will see the two different HTML pages that you used to define the services, web_1 and web_2, being accessed in a round-robin pattern (round-robin is the default load-balancing strategy for NGINX, but there are others). The animated image below demonstrates the behavior that you should see.
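If you’d rather watch the round-robin behavior from a command line than a browser, a quick PowerShell loop also works. Run it from a machine with network access to the NGINX host; <NGINXHOST> is a placeholder for the NGINX host’s IP address (or the nginx container’s IP address, if you run it on the NGINX host itself):

PS C:\> 1..6 | ForEach-Object { (Invoke-WebRequest -UseBasicParsing http://<NGINXHOST>/).Content }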

As a reminder, below is the full configuration with all three nodes. When you’re refreshing your web page view, you’re repeatedly accessing the NGINX node, which is distributing your GET request to the container endpoints running on the swarm nodes. Each time you resend the request, the load balancer has the opportunity to route you to a different endpoint, resulting in your being served a different web page, depending on whether your request was routed to an S1 or S2 endpoint.

configuration_full

Caveats and gotchas

Q: Is there a way to publish a single port for my service, so that I can load balance across each of my services rather than each of the individual endpoints for my services?

Unfortunately, publishing a single port for a service is not yet supported on Windows. This capability is provided by swarm mode’s routing mesh, which allows you to publish a port for a service so that the service is accessible to external resources via that port on every swarm node.

Routing mesh for swarm mode on Windows is not yet supported, but will be coming soon.
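For comparison, on platforms where routing mesh is available, a service can be published on one fixed port across every swarm node with a command along these lines (shown for reference only; it will not work on the Windows builds described in this post):

C:\> docker service create --name=s1 --publish 8080:80 web_1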

 

Q: Why can’t I run my containerized load balancer on one of my swarm nodes?

Currently, there is a known bug on Windows which prevents containers from accessing their hosts using localhost or even the host’s external IP address. This means containers cannot access ports exposed on their own host; they can only access exposed ports on other hosts.

In the context of this exercise, this means that the NGINX load balancer must be running on its own host, and never on the same host as any services that it needs to access via exposed ports. Put another way, for the containerized NGINX load balancer to balance across the two web services defined in this exercise, s1 and s2, it cannot be running on a swarm node—if it were running on a swarm node, it would be unable to access any containers on that node via host exposed ports.

Of course, an additional caveat here is that containers do not need to be accessed via host exposed ports. It is also possible to access containers directly, using the container IP and published port. If that approach were used for this exercise instead, the NGINX load balancer would need to be configured to:

  • Access containers that share its host by their container IP and port
  • Access containers that do not share its host by their host’s IP and exposed port

There is no problem with configuring the load balancer in this way, other than the added complexity that it introduces compared to simply putting the load balancer on its own machine, so that containers can be uniformly accessed via their host’s IPs and exposed ports.

A closer look at VM Quick Create



Author: Andy Atkinson

In the last Insiders build, we introduced Quick Create to quickly create virtual machines with less configuration (see blog).

image

We’re trying a few things to make it easier to set up a virtual machine, such as combining the installation options into a single field that accepts all supported file types, and adding a control to enable Windows Secure Boot more easily.

image

Quick Create can also help set up your network. If there’s no available switch, you’ll see a button to set up an “automatic network”, which will automatically configure an external switch for the virtual machine and connect it to the network.

To keep the number of settings small, we had to pick good defaults for the virtual machine. Currently they are (a rough PowerShell equivalent is shown after the list):

  • Generation: 2
  • StartupRAM: 1024 MB
  • DynamicRAM: Enabled
  • Virtual Processors: 1
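For reference, those defaults map roughly to the following Hyper-V PowerShell commands. This is purely an illustration of the settings, not what Quick Create actually runs internally, and “QuickCreateVM” is a placeholder name:

New-VM -Name "QuickCreateVM" -Generation 2 -MemoryStartupBytes 1GB
Set-VMMemory -VMName "QuickCreateVM" -DynamicMemoryEnabled $true
Set-VMProcessor -VMName "QuickCreateVM" -Count 1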

After the virtual machine is created, you will see the confirmation page with quick access to edit settings or to connect.

image

Are there other controls you want in Quick Create? Are we picking good defaults?

This is still a work in progress, so let us know what you think!

– Andy


What’s new in Hyper-V for the Windows 10 Creators Update?


Microsoft just released the Windows 10 Creators Update.  Which means Hyper-V improvements!

New and improved features in Creators Update:

  • Quick Create
  • Checkpoint and Save for nested Hyper-V
  • Dynamic resize for VM Connect
  • Zoom for VM Connect
  • Networking improvements (NAT)
  • Developer-centric memory management

Keep reading for more details.  Also, if you want to try new Hyper-V things as we build them, become a Windows Insider.

Faster VM creation with Quick Create

clip_image001

Hyper-V Manager has a new option for quickly and easily creating virtual machines, aptly named “Quick Create”.  Introduced in build 15002, Quick Create focuses on getting the guest operating system up and running as quickly as possible — including creating and connecting to a virtual switch.

When we first released Quick Create, there were a number of issues mostly centered on our default virtual machine settings (read more).  In response to your feedback, we have updated the Quick Create defaults.

Creators Update Quick Create defaults:

  • Generation: 2
  • Memory: 2048 MB to start, Dynamic Memory enabled
  • Virtual Processors: 4
  • VHD: dynamic resize up to 100GB

Checkpoint and save work on nested Hyper-V host

Last year we added the ability to run Hyper-V inside of Hyper-V (a.k.a. nested virtualization).  This has been a very popular feature, but it initially came with a number of limitations.  We have continued to work on the performance, compatibility and feature integration of nested virtualization.

In the Creators Update for Windows 10, you can now take checkpoints and saved states on virtual machines that are acting as nested Hyper-V hosts.
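As a reminder, a virtual machine must have nested virtualization enabled before it can act as a Hyper-V host. From the physical host, with the virtual machine turned off, that is a single command (<VMName> is a placeholder):

Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true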

Dynamic resize for Enhanced Session Mode VMs

dynamic_resize

The picture says it all.  If you are using Hyper-V’s Enhanced Session Mode, you can dynamically resize your virtual machine.  Right now, this is only available to virtual machines that support Hyper-V’s Enhanced Session mode.  That includes:

  • Windows Client: Windows 8.1, Windows 10 and later
  • Windows Server: Windows Server 2012 R2, Windows Server 2016 and later

Read blog announcement.

Zoom for VM Connect

Is your virtual machine impossible to read?  Alternatively, do you suffer from scaling issues in legacy applications?

VMConnect now has the option to adjust Zoom Level under the View Menu.

image

Multiple NAT networks and IP pinning

NAT networking is vital to both Docker and Visual Studio’s UWP device emulators.  When we released Windows Containers, developers discovered a number of networking differences between containers on Linux and containers on Windows.  Additionally, introducing another common developer tool that uses NAT networking presented new challenges for our networking stack.

In the Creators Update, there are two significant improvements to NAT:

  1. Developers can now create multiple NAT networks (internal prefixes) on a single host.
    That means VMs, containers, emulators, etc. can all take advantage of NAT functionality from a single host (see the example after this list).
  2. Developers are also able to build and test their applications with industry-standard tooling directly from the container host using an overlay network driver (provided by the Virtual Filtering Platform (VFP) Hyper-V switch extension) as well as having direct access to the container using the Host IP and exposed port.
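For example, an additional NAT network for containers can now be created directly through Docker; the network name and prefix below are placeholders:

C:\> docker network create -d nat --subnet=172.20.80.0/24 my-second-nat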

Improved memory management

Until recently, Hyper-V has allocated memory very conservatively.  While that is the right behavior for Windows Server, UWP developers faced out-of-memory errors when starting device emulators from Visual Studio (read more).

In the Creators Update, Hyper-V gives the operating system a chance to trim memory from other applications and uses all available memory.  You may still run out of memory, but now the amount of memory shown in task manager accurately reflects the amount available for starting virtual machines.

Introduced in build 15002.

As always, please send us feedback!

Once more, because I can’t emphasize this enough, become a Windows Insider – almost everything here has benefited from your early feedback.

Cheers,
Sarah

Introducing the Host Compute Service (HCS)


Summary

This post introduces a low level container management API in Hyper-V called the Host Compute Service (HCS).  It tells the story behind its creation, and links to a few open source projects that make it easier to use.

Motivation and Creation

Building a great management API for Docker was important for Windows Server Containers.  There’s a ton of really cool low-level technical work that went into enabling containers on Windows, and we needed to make sure they were easy to use.  This seems very simple, but figuring out the right approach was surprisingly tricky.

Our first thought was to extend our existing management technologies (e.g. WMI, PowerShell) to containers.  After investigating, we concluded that they weren’t optimal for Docker, and started looking at other options.

Next, we considered mirroring the way Linux exposes containerization primitives (e.g. control groups, namespaces, etc.).  Under this model, we could have exposed each underlying feature independently, and asked Docker to call into them individually.  However, there were a few questions about that approach that caused us to consider alternatives:

  1. The low level APIs were evolving (and improving) rapidly.  Docker (and others) wanted those improvements, but also needed a stable API to build upon.  Could we stabilize the underlying features fast enough to meet our release goals?
  2. The low level APIs were interesting and useful because they made containers possible.  Would anyone actually want to call them independently?

After a bit of thinking, we decided to go with a third option.  We created a new management service called the Host Compute Service (HCS), which acts as a layer of abstraction above the low level functionality.  The HCS was a stable API Docker could build upon, and it was also easier to use.  Making a Windows Server Container with the HCS is just a single API call.  Making a Hyper-V Container instead just means adding a flag when calling into the API.  Figuring out how those calls translate into actual low-level implementation is something the Hyper-V team has already figured out.

linux-arch windows-arch

Getting Started with the HCS

If you think this is nifty, and would like to play around with the HCS, here’s some information to help you get started.  Instead of calling our C API directly, I recommend using one of the friendly wrappers we’ve built around the HCS.  These wrappers make it easy to call the HCS from higher level languages, and are released open source on GitHub.  They’re also super handy if you want to figure out how to use the C API.  We’ve released two wrappers thus far.  One is written in Go (and used by Docker), and the other is written in C#.

You can find the wrappers here:

If you want to use the HCS (either directly or via a wrapper), or you want to make a Rust/Haskell/InsertYourLanguage wrapper around the HCS, please drop a comment below.  I’d love to chat.

For a deeper look at this topic, I recommend taking a look at John Stark’s DockerCon presentation: https://www.youtube.com/watch?v=85nCF5S8Qok

John Slack
Program Manager
Hyper-V Team
