
Windows Server 2016 Adds Native Overlay Network Driver, enabling mixed Linux + Windows Docker Swarm Mode Clusters


Based on customer and partner feedback, we are happy to announce that the Windows networking team has released a native overlay network driver for Windows Server 2016, enabling admins to create a Docker Swarm cluster spanning multiple Windows Server and Linux container hosts without worrying about configuring the underlying network fabric. Windows Server containers and containers with Hyper-V isolation, powered by Docker, are available natively in Windows Server 2016 and enable developers and IT admins to work together in building and deploying modern, cloud-native applications as well as lifting and shifting workloads from a virtual machine (VM) into a container. Previously, an admin was limited to scaling out these containers on a single Windows Docker host. With Docker Swarm and the overlay driver, your containerized workloads can now communicate seamlessly across hosts and scale fluidly, on demand.

How did we do it? The Docker engines, running in Swarm mode, are able to scale-out services by launching multiple container instances across all nodes in a cluster. When one of the “master” Swarm mode nodes schedules a container instance to run on a particular host, the Docker engine on that host will call the Windows Host Networking Service (HNS) to create the container endpoint and attach it to the overlay networks referenced by that particular service. HNS will then program this policy into the Virtual Filtering Platform (VFP) Hyper-V switch extension where it is enforced by creating network overlays using VXLAN encapsulation.
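For example (a minimal sketch; the network, service, and image names below are placeholders), once the hosts are joined in swarm mode you can create an overlay network on a manager node and attach a service to it:

C:\> docker network create --driver overlay myOverlayNetwork
C:\> docker service create --name my-web-service --network myOverlayNetwork --replicas 3 <CONTAINER-IMAGE>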

The flexibility and agility enjoyed by applications already being managed by Docker Swarm is one thing, but what about the up-front work of getting those applications developed, tested, and deployed? Customers can re-use their Docker Compose file from their development environment to deploy and scale out a multi-service/tier application across the cluster using docker stack deploy command syntax. It’s easy to leverage the power of running both Linux and Windows services in a single application, by deploying individual services on the OS for which they are optimized. Simply use constraints and labels to specify the OS for a Docker Service, and Docker Swarm will take care of scheduling tasks for that service to be run only on the correct host OS. In addition, customers can use Docker Datacenter (via Docker Enterprise Edition Standard) to provide integrated container management and security from development to production.
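For example (a sketch using node labels; the label, service, and image names are placeholders), you can label each node with its OS and then constrain each service to the matching nodes:

C:\> docker node update --label-add os=windows <WINDOWS-NODE-NAME>
C:\> docker node update --label-add os=linux <LINUX-NODE-NAME>
C:\> docker service create --name win-frontend --constraint "node.labels.os==windows" <WINDOWS-IMAGE>
C:\> docker service create --name linux-backend --constraint "node.labels.os==linux" <LINUX-IMAGE>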

Ready to get your hands on Docker Swarm and Docker Datacenter with Windows Server 2016? This feature has already been validated by beta customers by successfully deploying workloads using swarm mode and Docker Datacenter (via Docker Enterprise Edition Standard), and we are now excited to release it to all Windows Server customers through Windows Update KB4015217. This feature is also available in the Windows 10 Creator’s Edition (with Docker Community Edition) so that developers can have a consistent experience developing apps on both Windows client and server.

To learn more about Docker Swarm on Windows, start here (https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/swarm-mode). To learn more about Docker Datacenter, start with Docker’s documentation on Docker Enterprise Edition (https://www.docker.com/enterprise-edition).

Feature requests? Bugs? General feedback? We would love to hear from you! Please email us with feedback at sdn_feedback@microsoft.com.


Use NGINX to load balance across your Docker Swarm cluster


A practical walkthrough, in six steps

This basic example demonstrates NGINX and swarm mode in action, to provide the foundation for you to apply these concepts to your own configurations.

This document walks through several steps for setting up a containerized NGINX server and using it to load balance traffic across a swarm cluster. For clarity, these steps are designed as an end-to-end tutorial for setting up a three node cluster and running two docker services on that cluster; by completing this exercise, you will become familiar with the general workflow required to use swarm mode and to load balance across Windows Container endpoints using an NGINX load balancer.

The basic setup

This exercise requires three container hosts–two of which will be joined to form a two-node swarm cluster, and one which will be used to host a containerized NGINX load balancer. In order to demonstrate the load balancer in action, two docker services will be deployed to the swarm cluster, and the NGINX server will be configured to load balance across the container instances that define those services. The services will both be web services, hosting simple content that can be viewed via web browser. With this setup, the load balancer will be easy to see in action, as traffic is routed between the two services each time the web browser view displaying their content is refreshed.

The figure below provides a visualization of this three-node setup. Two of the nodes, the “Swarm Manager” node and the “Swarm Worker” node together form a two-node swarm mode cluster, running two Docker web services, “S1” and “S2”. A third node (the “NGINX Host” in the figure) is used to host a containerized NGINX load balancer, and the load balancer is configured to route traffic across the container endpoints for the two container services. This figure includes example IP addresses and port numbers for the two swarm hosts and for each of the six container endpoints running on the hosts.

[Figure: the three-node configuration described above]

System requirements

Three* or more computer systems running either Windows 10 Creators Update or Windows Server 2016 with all of the latest updates*, set up as container hosts (see the topics Windows Containers on Windows 10 or Windows Containers on Windows Server for more details on how to get started with Docker containers on Windows 10).

*Note: Docker Swarm on Windows Server 2016 requires KB4015217

Additionally, each host system should be configured with the following:

  • The microsoft/windowsservercore container image
  • Docker Engine v1.13.0 or later
  • Open ports: Swarm mode requires that the following ports be available on each host (a firewall example follows this list).
    • TCP port 2377 for cluster management communications
    • TCP and UDP port 7946 for communication among nodes
    • TCP and UDP port 4789 for overlay network traffic
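For example, a minimal PowerShell sketch (assuming Windows Firewall is in use; the rule names are arbitrary) to open these ports on a host:

New-NetFirewallRule -DisplayName "Swarm management (TCP 2377)" -Direction Inbound -Protocol TCP -LocalPort 2377 -Action Allow
New-NetFirewallRule -DisplayName "Swarm node communication (TCP 7946)" -Direction Inbound -Protocol TCP -LocalPort 7946 -Action Allow
New-NetFirewallRule -DisplayName "Swarm node communication (UDP 7946)" -Direction Inbound -Protocol UDP -LocalPort 7946 -Action Allow
New-NetFirewallRule -DisplayName "Overlay network traffic (TCP 4789)" -Direction Inbound -Protocol TCP -LocalPort 4789 -Action Allow
New-NetFirewallRule -DisplayName "Overlay network traffic (UDP 4789)" -Direction Inbound -Protocol UDP -LocalPort 4789 -Action Allow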

*Note on using two nodes rather than three:
These instructions can be completed using just two nodes. However, currently there is a known bug on Windows which prevents containers from accessing their hosts using localhost or even the host’s external IP address (for more background on this, see Caveats and Gotchas below). This means that in order to access docker services via their exposed ports on the swarm hosts, the NGINX load balancer must not reside on the same host as any of the service container instances.
Put another way, if you use only two nodes to complete this exercise, one of them will need to be dedicated to hosting the NGINX load balancer, leaving the other to be used as a swarm container host (i.e. you will have a single-host swarm cluster, plus a separate host dedicated to your containerized NGINX load balancer).

Step 1: Build an NGINX container image

In this step, we’ll build the container image required for your containerized NGINX load balancer. Later we will run this image on the host that you have designated as your NGINX container host.

Note: To avoid having to transfer your container image later, complete the instructions in this section on the container host that you intend to use for your NGINX load balancer.

NGINX is available for download from nginx.org. An NGINX container image can be built using a simple Dockerfile that installs NGINX onto a Windows base container image and configures the container to run as an NGINX executable. The content of such a Dockerfile is shown below.

FROM microsoft/windowsservercore
RUN powershell Invoke-WebRequest http://nginx.org/download/nginx-1.10.3.zip -UseBasicParsing -OutFile c:\\nginx.zip
RUN powershell Expand-Archive c:\\nginx.zip -Dest c:\\nginx
WORKDIR c:\\nginx\\nginx-1.10.3
ENTRYPOINT powershell .\\nginx.exe

Create a Dockerfile from the content provided above, and save it to some location (e.g. C:\temp\nginx) on your NGINX container host machine. From that location, build the image using the following command:

C:\temp\nginx> docker build -t nginx .

Now the image should appear with the rest of the docker images on your system (check using the docker images command).

(Optional) Confirm that your NGINX image is ready

First, run the container:

C:\temp> docker run -it -p 80:80 nginx

Next, open a new command prompt window and use the docker ps command to see that the container is running. Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

For example, your container’s IP address may be 172.17.176.155, as in the example output shown below.

[Screenshot: ipconfig output from the running NGINX container]

Next, open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that NGINX is successfully running in your container.

[Screenshot: the NGINX welcome page in the browser]

 

Step 2: Build images for two containerized IIS Web services

In this step, we’ll build container images for two simple IIS-based web applications. Later, we’ll use these images to create two docker services.

Note: Complete the instructions in this section on one of the container hosts that you intend to use as a swarm host.

Build a generic IIS Web Server image

Below are the contents of a simple Dockerfile that can be used to create an IIS Web server image. The Dockerfile simply enables the Internet Information Services (IIS) Web server role within a microsoft/windowsservercore container.

FROM microsoft/windowsservercore
RUN dism.exe /online /enable-feature /all /featurename:iis-webserver /NoRestart

Create a Dockerfile from the content provided above, and save it to some location (e.g. C:\temp\iis) on one of the host machines that you plan to use as a swarm node. From that location, build the image using the following command:

 C:\temp\iis> docker build -t iis-web .

(Optional) Confirm that your IIS Web server image is ready

First, run the container:

 C:\temp> docker run -it -p 80:80 iis-web

Next, use the docker ps command to see that the container is running. Note its ID. The ID of your container is the value of <CONTAINERID> in the next command.

Get the container’s IP address:

C:\temp> docker exec <CONTAINERID> ipconfig

Now open a browser on your container host and put your container’s IP address in the address bar. You should see a confirmation page, indicating that the IIS Web server role is successfully running in your container.

[Screenshot: the IIS welcome page in the browser]

Build two custom IIS Web server images

In this step, we’ll be replacing the IIS landing/confirmation page that we saw above with custom HTML pages–two different images, corresponding to two different web container images. In a later step, we’ll be using our NGINX container to load balance across instances of these two images. Because the images will be different, we will easily see the load balancing in action as it shifts between the content being served by the containers we’ll define in this step.

First, on your host machine, create a simple file called index_1.html. In the file, type any text. For example, your index_1.html file might look like this:

[Screenshot: example index_1.html content]

Now create a second file, index_2.html. Again, type any text in the file. For example, your index_2.html file might look like this:

[Screenshot: example index_2.html content]

Now we’ll use these HTML documents to make two custom web service images.

If the iis-web container instance that you just built is not still running, run a new one. Then get the ID of the running container using:

C:\temp> docker ps

Now, copy your index_1.html file from your host onto the IIS container instance that is running, using the following command:

C:\temp> docker cp index_1.html <CONTAINERID>:C:\inetpub\wwwroot\index.html

Next, stop and commit the container in its current state. This will create a container image for the first web service. Let’s call this first image, “web_1.”

C:\> docker stop <CONTAINERID>
C:\> docker commit <CONTAINERID> web_1

Now, start the container again and repeat the previous steps to create a second web service image, this time using your index_2.html file. Do this using the following commands:

C:\> docker start <CONTAINERID>
C:\> docker cp index_2.html <CONTAINERID>:C:\inetpub\wwwroot\index.html
C:\> docker stop <CONTAINERID>
C:\> docker commit <CONTAINERID> web_2

You have now created images for two unique web services; if you view the Docker images on your host by running docker images, you should see that you have two new container images—“web_1” and “web_2”.

Put the IIS container images on all of your swarm hosts

To complete this exercise you will need the custom web container images that you just created to be on all of the host machines that you intend to use as swarm nodes. There are two ways for you to get the images onto additional machines:

Option 1: Repeat the steps above to build the “web_1” and “web_2” images on your second host.
Option 2 [recommended]: Push the images to your repository on Docker Hub, then pull them onto additional hosts (example commands below).

Using Docker Hub is a convenient way to leverage the lightweight nature of containers across all of your machines, and to share your images with others. Visit the following Docker resources to get started with pushing/pulling images with Docker Hub:
Create a Docker Hub account and repository
Tag, push and pull your image
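A sketch of the typical commands (your Docker Hub user name is a placeholder; the final tag step simply keeps the short image names used later in this exercise):

C:\temp> docker login
C:\temp> docker tag web_1 <DOCKERHUBUSER>/web_1
C:\temp> docker push <DOCKERHUBUSER>/web_1
C:\temp> docker tag web_2 <DOCKERHUBUSER>/web_2
C:\temp> docker push <DOCKERHUBUSER>/web_2

Then, on each additional swarm host:

C:\temp> docker pull <DOCKERHUBUSER>/web_1
C:\temp> docker tag <DOCKERHUBUSER>/web_1 web_1
C:\temp> docker pull <DOCKERHUBUSER>/web_2
C:\temp> docker tag <DOCKERHUBUSER>/web_2 web_2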

Step 3: Join your hosts to a swarm

As a result of the previous steps, one of your host machines should have the nginx container image, and the rest of your hosts should have the Web server images, “web_1” and “web_2”. In this step, we’ll join the latter hosts to a swarm cluster.

Note: The containerized NGINX load balancer cannot run on the same host as any container endpoint for which it is performing load balancing; the host with your nginx container image must be reserved for load balancing only. For more background on this, see Caveats and Gotchas below.

First, run the following command from any machine that you intend to use as a swarm host. The machine that you use to execute this command will become a manager node for your swarm cluster.

  • Replace <HOSTIPADDRESS> with the public IP address of your host machine
C:\temp> docker swarm init --advertise-addr=<HOSTIPADDRESS> --listen-addr <HOSTIPADDRESS>:2377

Now run the following command from each of the other host machines that you intend to use as swarm nodes, joining them to the swarm as worker nodes.

  • Replace <MANAGERIPADDRESS> with the public IP address of your host machine (i.e. the value of <HOSTIPADDRESS> that you used to initialize the swarm from the manager node)
  • Replace <WORKERJOINTOKEN> with the worker join-token provided as output by the docker swarm init command (you can also obtain the join-token by running docker swarm join-token worker from the manager host)
C:\temp> docker swarm join --token <WORKERJOINTOKEN> <MANAGERIPADDRESS>:2377

Your nodes are now configured to form a swarm cluster! You can see the status of the nodes by running the following command from your manager node:

C:\temp> docker node ls

Step 4: Deploy services to your swarm

Note: Before moving on, stop and remove any NGINX or IIS containers running on your hosts. This will help avoid port conflicts when you define services. To do this, simply run the following commands for each container, replacing <CONTAINERID> with the ID of the container you are stopping/removing:

C:\temp> docker stop <CONTAINERID>
C:\temp> docker rm <CONTAINERID>

Next, we’re going to use the “web_1” and “web_2” container images that we created in previous steps of this exercise to deploy two container services to our swarm cluster.

To create the services, run the following commands from your swarm manager node:

C:\ > docker service create --name=s1 --publish mode=host,target=80 --endpoint-mode dnsrr web_1 powershell -command {echo sleep; sleep 360000;}
C:\ > docker service create --name=s2 --publish mode=host,target=80 --endpoint-mode dnsrr web_2 powershell -command {echo sleep; sleep 360000;}

You should now have two services running, s1 and s2. You can view their status by running the following command from your swarm manager node:

C:\ > docker service ls

Additionally, you can view information on the container instances that define a specific service with the following commands, where <SERVICENAME> is replaced with the name of the service you are inspecting (for example, s1 or s2):

# List all services
C:\ > docker service ls
# List info for a specific service
C:\ > docker service ps <SERVICENAME>

(Optional) Scale your services

The commands in the previous step will deploy one container instance/replica for each service, s1 and s2. To scale the services to be backed by multiple replicas, run the following command:

C:\ > docker service scale <SERVICENAME>=<REPLICAS>
# e.g. docker service scale s1=3

Step 5: Configure your NGINX load balancer

Now that services are running on your swarm, you can configure the NGINX load balancer to distribute traffic across the container instances for those services.

Of course, generally load balancers are used to balance traffic across instances of a single service, not multiple services. For the purpose of clarity, this example uses two services so that the function of the load balancer can be easily seen; because the two services are serving different HTML content, we’ll clearly see how the load balancer is distributing requests between them.

The nginx.conf file

First, the nginx.conf file for your load balancer must be configured with the IP addresses and service ports of your swarm nodes and services. The NGINX download from step 1 (the one used to build your NGINX container image) includes an example nginx.conf file. For the purpose of this exercise, a version of that file was copied and adapted into a simple template for you to populate with your specific node/container information. Get the template file here [TODO: ADD LINK] and save it onto your NGINX container host machine. In this step, we’ll adapt the template file and use it to replace the default nginx.conf file that was originally downloaded onto your NGINX container image.

You will need to adjust the file by adding the information for your hosts and container instances. The template nginx.conf file provided contains the following section:

upstream appcluster {
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
     server <HOSTIP>:<HOSTPORT>;
}

To adapt the file for your configuration, you will need to adjust the <HOSTIP>:<HOSTPORT> entries in the config file. You will have an entry for each container endpoint that defines your web services. For any given container endpoint, the value of <HOSTIP> will be the IP address of the container host upon which that container is running. The value of <HOSTPORT> will be the port on the container host upon which the container endpoint has been published.

When the services, s1 and s2, were defined in the previous step of this exercise, the --publish mode=host,target=80 parameter was included. This parameter specifies that the container instances for the services should be exposed via published ports on the container hosts. More specifically, by including --publish mode=host,target=80 in the service definitions, each service was configured to be exposed on port 80 of each of its container endpoints, as well as on a set of automatically assigned ports on the swarm hosts (i.e. one port for each container running on a given host).

First, identify the host IPs and published ports for your container endpoints

Before you can adjust your nginx.conf file, you must obtain the required information for the container endpoints that define your services. To do this, run the following commands (again, run these from your swarm manager node):

C:\ > docker service ps s1
C:\ > docker service ps s2

The above commands will return details on every container instance running for each of your services, across all of your swarm hosts.

  • One column of the output, the “ports” column, includes port information for each host of the form *:<HOSTPORT>->80/tcp. The values of <HOSTPORT> will be different for each container instance, as each container is published on its own host port.
  • Another column, the “node” column, will tell you which machine the container is running on. This is how you will identify the host IP information for each endpoint.

You now have the node and published port for each container endpoint. Next, use that information to populate the upstream field of your nginx.conf file: for each endpoint, add a server entry, replacing <HOSTIP> with the IP address of the node on which that endpoint is running (if you don’t have this, run ipconfig on each host machine to obtain it) and <HOSTPORT> with the corresponding published host port.

For example, if you have two swarm hosts (IP addresses 172.17.0.10 and 172.17.0.11), each running three containers, your list of servers will end up looking something like this:

upstream appcluster {
     server 172.17.0.10:21858;
     server 172.17.0.11:64199;
     server 172.17.0.10:15463;
     server 172.17.0.11:56049;
     server 172.17.0.11:35953;
     server 172.17.0.10:47364;
}
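For reference, this upstream group is what the rest of the template points web traffic at. Assuming the template follows NGINX’s standard reverse-proxy pattern, the accompanying server block looks something like this:

server {
    listen 80;
    location / {
        proxy_pass http://appcluster;
    }
}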

Once you have changed your nginx.conf file, save it. Next, we’ll copy it from your host to the NGINX container image itself.

Replace the default nginx.conf file with your adjusted file

If your nginx container is not already running on its host, run it now:

C:\temp> docker run -it -p 80:80 nginx

Get the ID of the container using:

C:\temp> docker ps

With the container running, use the following command to replace the default nginx.conf file with the file that you just configured (run the following command from the directory in which you saved your adjusted version of the nginx.conf on the host machine):

C:\temp> docker cp nginx.conf <CONTAINERID>:C:\nginx\nginx-1.10.3\conf
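Optionally, before reloading, you can ask NGINX to validate the new configuration without applying it (the -t switch checks configuration syntax):

C:\temp> docker exec <CONTAINERID> nginx.exe -t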

Now use the following command to reload the NGINX server running within your container:

C:\temp> docker exec <CONTAINERID> nginx.exe -s reload

Step 6: See your load balancer in action

Your load balancer should now be fully configured to distribute traffic across the various instances of your swarm services. To see it in action, open a browser and

  • If accessing from the NGINX host machine: Type the IP address of the nginx container running on the machine into the browser address bar (this is the address you obtained earlier by running docker exec <CONTAINERID> ipconfig).
  • If accessing from another host machine (with network access to the NGINX host machine): Type the IP address of the NGINX host machine into the browser address bar.

Once you’ve typed the applicable address into the browser address bar, press enter and wait for the web page to load. Once it loads, you should see one of the HTML pages that you created in step 2.

Now press refresh on the page. You may need to refresh more than once, but after just a few times you should see the other HTML page that you created in step 2.

If you continue refreshing, you will see the two different HTML pages that you used to define the services, web_1 and web_2, being accessed in a round-robin pattern (round-robin is the default load balancing strategy for NGINX, but there are others). The animated image below demonstrates the behavior that you should see.

As a reminder, below is the full configuration with all three nodes. When you’re refreshing your web page view, you’re repeatedly accessing the NGINX node, which is distributing your GET request to the container endpoints running on the swarm nodes. Each time you resend the request, the load balancer has the opportunity to route you to a different endpoint, resulting in your being served a different web page, depending on whether or not your request was routed to an S1 or S2 endpoint.

[Figure: the full three-node configuration]

Caveats and gotchas

Is there a way to publish a single port for my service, so that I can load balance across just a few endpoints rather than all of my container instances?

Unfortunately, we do not yet support publishing a single port for a service on Windows. That capability is provided by swarm mode’s routing mesh—a feature that allows you to publish a port for a service so that the service is accessible via that port on every swarm node.

Routing mesh for swarm mode on Windows is not yet supported, but will be coming soon.

Why can’t I run my containerized load balancer on one of my swarm nodes?

Currently, there is a known bug on Windows which prevents containers from accessing their hosts using localhost or even the host’s external IP address. This means containers cannot access their host’s exposed ports—they can only access exposed ports on other hosts.

In the context of this exercise, this means that the NGINX load balancer must be running on its own host, and never on the same host as any services that it needs to access via exposed ports. Put another way, for the containerized NGINX load balancer to balance across the two web services defined in this exercise, s1 and s2, it cannot be running on a swarm node—if it were running on a swarm node, it would be unable to access any containers on that node via host exposed ports.

Of course, an additional caveat here is that containers do not need to be accessed via host exposed ports. It is also possible to access containers directly, using the container IP and published port. If this instead were done for this exercise, the NGINX load balancer would need to be configured to access:

  • containers that share its host by their container IP and port
  • containers that do not share its host by their host’s IP and exposed port

There is no problem with configuring the load balancer in this way, other than the added complexity that it introduces compared to simply putting the load balancer on its own machine, so that containers can be uniformly accessed via their hosts.

Making it easier to revert


Sometimes when things go wrong in my environment, I don’t want to have to clean it all up — I just want to go back in time to when everything was working. But remembering to maintain good recovery points isn’t easy.

Now we’re making it so that you can always roll back your virtual machine to a recent good state if you need to. Starting in the latest Windows Insider build, you can now always revert a virtual machine back to the state it started in.

In Virtual Machine Connection, just click the Revert button to undo any changes made inside the virtual machine since it last started.

[Screenshot: the Revert button in Virtual Machine Connection]

Under the hood, we’re using checkpoints; when you start a virtual machine that doesn’t have any checkpoints, we create one for you so that you can easily roll back to it if something goes wrong, then we clean it up once the virtual machine shuts down cleanly.

New virtual machines will be created with “Use automatic checkpoints” enabled by default, but you will have to enable it yourself to use it for existing VMs. The option is off by default on Windows Server.  This option can be found in Settings -> Checkpoints -> “Use automatic checkpoints”

[Screenshot: the “Use automatic checkpoints” option under Settings -> Checkpoints]
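If you prefer PowerShell, here is a minimal sketch for enabling the setting on an existing VM (the VM name is a placeholder, and the parameter is only available on builds that include this feature):

# Enable automatic checkpoints on an existing VM
Set-VM -Name "MyExistingVM" -AutomaticCheckpointsEnabled $true

# Confirm the setting
Get-VM -Name "MyExistingVM" | Select-Object Name, AutomaticCheckpointsEnabled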

Note: the checkpoint will only be taken automatically when the VM starts if it doesn’t have other existing checkpoints.

Hopefully this will come in handy next time you need to undo something in your VM. If you are in the Windows Insider Program, please give it a try and let us know what you think.

Cheers,
Andy

Vagrant and Hyper-V — Tips and Tricks


Learning to Use Vagrant on Windows 10

A few months ago, I went to DockerCon as a Microsoft representative. While I was there, I had the chance to ask developers about their favorite tools.

The most common tool mentioned (outside of Docker itself) was Vagrant. This was interesting — I was familiar with Vagrant, but I’d never actually used it. I decided that needed to change. Over the past week or two, I took some time to try it out. I got everything working eventually, but I definitely ran into some issues on the way.

My pain is your gain — here are my tips and tricks for getting started with Vagrant on Windows 10 and Hyper-V.

NOTE: This is a supplement for Vagrant’s “Getting Started” guide, not a replacement.

Tip 0: Install Hyper-V

For those new to Hyper-V, make sure you’ve got Hyper-V running on your machine. Our official docs list the exact steps and requirements.

Tip 1: Set Up Networking Correctly

Vagrant doesn’t know how to set up networking on Hyper-V right now (unlike other providers), so it’s up to you to get things working the way you like them.

There are a few NAT networks already created on Windows 10 (depending on your specific build).  Layered_ICS should work (but is under active development), while Layered_NAT doesn’t have DHCP.  If you’re a Windows Insider, you can try Layered_ICS.  If that doesn’t work, the safest option is to create an external switch via Hyper-V Manager.  This is the approach I took. If you go this route, a friendly reminder that the external switch is tied to a specific network adapter. So if you make it for WiFi, it won’t work when you hook up the Ethernet, and vice versa.

You can also do this with PowerShell:
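Here is a sketch of the PowerShell equivalent (substitute the adapter name reported on your machine; the switch name is arbitrary):

# List network adapters to find the one to bind the switch to
Get-NetAdapter

# Create an external switch bound to that adapter, keeping host connectivity
New-VMSwitch -Name "VagrantExternal" -NetAdapterName "Wi-Fi" -AllowManagementOS $true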

[Screenshot: adding an external switch in Hyper-V Manager]

Tip 2: Use the Hyper-V Provider

Unfortunately, the Getting Started guide uses VirtualBox, and you can’t run other virtualization solutions alongside Hyper-V. You need to change the “provider” Vagrant uses at a few different points.

When you install your first box, add --provider:

vagrant box add hashicorp/precise64 --provider hyperv

And when you boot your first Vagrant environment, again, add --provider. Note: you might run into the error mentioned in Trick 4, so skip to there if you see something like “mount error(112): Host is down”.

vagrant up --provider hyperv

Tip 3: Add the basics to your Vagrantfile

Adding the provider flag is a pain to do every single time you run vagrant up. Fortunately, you can set up your Vagrantfile to automate things for you. After running vagrant init, modify your vagrant file with the following:

Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.provider "hyperv"
  config.vm.network "public_network"
end

One additional trick here: vagrant init will create a file that will appear to be full of commented out items. However, there is one line not commented out:

[Screenshot: the generated Vagrantfile, showing the single uncommented line]

Make sure you delete that line! Otherwise, you’ll end up with an error like this:

Bringing machine 'default' up with 'hyperv' provider...
==> default: Verifying Hyper-V is enabled...
==> default: Box 'base' could not be found. Attempting to find and install...
    default: Box Provider: hyperv
    default: Box Version: >= 0
==> default: Box file was not detected as metadata. Adding it directly...
==> default: Adding box 'base' (v0) for provider: hyperv
    default: Downloading: base
    default:
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.

Trick 4: Shared folders use SMBv1 for hashicorp/precise64

For the image used in the “Getting Started” guide (hashicorp/precise64), Vagrant tries to use SMBv1 for shared folders. However, if you’re like me and have SMBv1 disabled, this will fail:

Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:

mount -t cifs -o uid=1000,gid=1000,sec=ntlm,credentials=/etc/smb_creds_e70609f244a9ad09df0e760d1859e431 //10.124.157.30/e70609f244a9ad09df0e760d1859e431 /vagrant

The error output from the last command was:

mount error(112): Host is down
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

You can check if SMBv1 is enabled with this PowerShell Cmdlet:

Get-SmbServerConfiguration
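To narrow the output to just the relevant property:

Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol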

If you can live without synced folders, here’s the line to add to the vagrantfile to disable the default synced folder.

config.vm.synced_folder ".", "/vagrant", disabled: true

If you can’t, you can try installing cifs-utils in the VM and re-provision. You could also try another synced folder method. For example, rsync works with Cygwin or MinGW. Disclaimer: I personally didn’t try either of these methods.

Tip 5: Enable Nifty Hyper-V Features

Hyper-V has some useful features that improve the Vagrant experience. For example, a pretty substantial portion of the time spent running vagrant up is spent cloning the virtual hard drive. A faster way is to use differencing disks with Hyper-V. You can also turn on virtualization extensions, which allow nested virtualization within the VM (i.e. Docker with Hyper-V containers). Here are the lines to add to your Vagrantfile to add these features:

config.vm.provider "hyperv" do |h|
  h.enable_virtualization_extensions = true
  h.differencing_disk = true
end

There are many more customization options that can be added here (e.g. VMName, CPU/memory settings, integration services). You can find the details in the Hyper-V provider documentation.

Tip 6: Filter for Hyper-V compatible boxes on Vagrant Cloud

You can find more boxes to use in the Vagrant Cloud (formerly called Atlas). They let you filter by provider, so it’s easy to find all of the Hyper-V compatible boxes.

Tip 7: Default to the Hyper-V Provider

While adding the default provider to your Vagrantfile is useful, it means you need to remember to do it with each new Vagrantfile you create. If you don’t, Vagrant will try to download VirtualBox the first time you vagrant up your new box. Again, VirtualBox doesn’t work alongside Hyper-V, so this is a problem.

PS C:\vagrant> vagrant up
==>  Provider 'virtualbox' not found. We'll automatically install it now...
     The installation process will start below. Human interaction may be
     required at some points. If you're uncomfortable with automatically
     installing this provider, you can safely Ctrl-C this process and install
     it manually.
==>  Downloading VirtualBox 5.0.10...
     This may not be the latest version of VirtualBox, but it is a version
     that is known to work well. Over time, we'll update the version that
     is installed.

You can set your default provider on a user level by using the VAGRANT_DEFAULT_PROVIDER environmental variable. For more options (and details), this is the relevant page of Vagrant’s documentation.

Here’s how I set the user-level environment variable in PowerShell:

[Environment]::SetEnvironmentVariable("VAGRANT_DEFAULT_PROVIDER", "hyperv", "User")

Again, you can also set the default provider in the Vagrantfile (see Tip 3), which will prevent this issue on a per-project basis. You can also just add --provider hyperv when running vagrant up. The choice is yours.

Wrapping Up

Those are my tips and tricks for getting started with Vagrant on Hyper-V. If there are any you think I missed, or anything you think I got wrong, let me know in the comments.

Here’s the complete version of my simple starting Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.provider "hyperv"
  config.vm.network "public_network"
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.provider "hyperv" do |h|
    h.enable_virtualization_extensions = true
    h.differencing_disk = true
  end
end

Copying Files into a Hyper-V VM with Vagrant


A couple of weeks ago, I published a blog with tips and tricks for getting started with Vagrant on Hyper-V. My fifth tip was to “Enable Nifty Hyper-V Features,” where I briefly mentioned stuff like differencing disks and virtualization extensions.

While those are useful, I realized later that I should have added one more feature to my list of examples: the “guest_service_interface” field in “vm_integration_services.” It’s hard to know what that means just from the name, so I usually call it “the thing that lets me copy files into a VM.”

Disclaimer: this is not a replacement for Vagrant’s synced folders. Those are super convenient, and should really be your default solution for sharing files. This method is more useful in one-off situations.

Enabling Copy-VMFile

Enabling this functionality requires a simple change to your Vagrantfile. You need to set “guest_service_interface” to true within the “vm_integration_services” configuration hash. Here’s what my Vagrantfile looks like for CentOS 7:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provider "hyperv"
  config.vm.network "public_network"
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.provider "hyperv" do |h|
    h.enable_virtualization_extensions = true
    h.differencing_disk = true
    h.vm_integration_services = {
      guest_service_interface: true  #<---------- this line enables Copy-VMFile
  }
  end
end

You can check that it’s enabled by running Get-VMIntegrationService in PowerShell on the host machine:

PS C:\vagrant_selfhost\centos>  Get-VMIntegrationService -VMName "centos-7-1-1.x86_64"

VMName              Name                    Enabled PrimaryStatusDescription SecondaryStatusDescription
------              ----                    ------- ------------------------ --------------------------
centos-7-1-1.x86_64 Guest Service Interface True    OK
centos-7-1-1.x86_64 Heartbeat               True    OK
centos-7-1-1.x86_64 Key-Value Pair Exchange True    OK                       The protocol version of...
centos-7-1-1.x86_64 Shutdown                True    OK
centos-7-1-1.x86_64 Time Synchronization    True    OK                       The protocol version of...
centos-7-1-1.x86_64 VSS                     True    OK                       The protocol version of...

Note: not all integration services work on all guest operating systems. For example, this functionality will not work on the “Precise” Ubuntu image that’s used in Vagrant’s “Getting Started” guide. The full compatibility list for various Windows and Linux distributions can be found here. Just click on your chosen distribution and check for “File copy from host to guest.”

Using Copy-VMFile

Once you’ve got a VM set up correctly, copying files to and from arbitrary locations is as simple as running Copy-VMFile in PowerShell.

Here’s a sample test I used to verify it was working on my CentOS VM:

Copy-VMFile -Name 'centos-7-1-1.x86_64' -SourcePath '.\Foo.txt' -DestinationPath '/tmp' -FileSource Host

Full details can be found in the official documentation. Unfortunately, you can’t yet use it to copy files from your VM to your host. If you’re running a Windows guest, you can use Copy-Item with PowerShell Direct to make that work; see this document for more details.
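For a Windows guest, a minimal sketch of that approach (the VM name and paths are placeholders) looks like this:

# Open a PowerShell Direct session into the Windows guest
$session = New-PSSession -VMName "MyWindowsVM" -Credential (Get-Credential)

# Copy a file from the guest back to the host
Copy-Item -FromSession $session -Path "C:\logs\app.log" -Destination "C:\temp\app.log"

Remove-PSSession $session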

How Does It Work?

The way this works is by running Hyper-V integration services within the guest operating system. Full details can be found in the official documentation. The short version is that integration services are Windows Services (on Windows) or Daemons (on Linux) that allow the guest operating system to communicate with the host. In this particular instance, the integration service allows us to copy files to the VM over the VM Bus (no network required!).

Conclusion

Hope you find this helpful — let me know if there’s anything you think I missed.

John Slack
Program Manager
Hyper-V Team

Hyper-V virtual machine gallery and networking improvements


In January, we added Quick Create to Hyper-V manager in Windows 10.  Quick Create is a single-page wizard for fast, easy, virtual machine creation.

Starting in the latest fast-track Windows Insider builds (16237+) we’re expanding on that idea in two ways.  Quick Create now includes:

  1. A virtual machine gallery with downloadable, pre-configured, virtual machines.
  2. A default virtual switch to allow virtual machines to share the host’s internet connection using NAT.

[Screenshot: the Quick Create experience in Hyper-V Manager, with the numbered elements referenced below]

To launch Quick Create, open Hyper-V Manager and click on the “Quick Create…” button (1).

From there you can either create a virtual machine from one of the pre-built images available from Microsoft (2) or use a local installation source.  Once you’ve selected an image or chosen installation media, you’re done!  The virtual machine comes with a default name and a pre-made network connection using NAT (3) which can be modified in the “more options” menu.

Click “Create Virtual Machine” and you’re ready to go – granted, downloading the virtual machine will take a while.

Details about the Default Switch

The switch, named “Default Switch” or “Layered_ICS”, allows virtual machines to share the host’s network connection.  Without getting too deep into networking (saving that for a different post), this switch has a few unique attributes compared to other Hyper-V switches:

  1. Virtual machines connected to it will have access to the host’s network whether you’re connected to Wi-Fi, a dock, or Ethernet.
  2. It’s available as soon as you enable Hyper-V – you won’t lose internet setting it up.
  3. You can’t delete it.
  4. It has the same name and device ID (GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444) on all Windows 10 hosts, so virtual machines on recent builds can assume the same switch is present on every Windows 10 Hyper-V host (see the PowerShell example after this list).
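For example, you can list the switch and connect an existing virtual machine to it from PowerShell (the VM name is a placeholder; check the switch name reported on your build, since it may appear as “Layered_ICS”):

Get-VMSwitch | Select-Object Name, Id

Connect-VMNetworkAdapter -VMName "MyVM" -SwitchName "Default Switch"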

I’m really excited by the work we are doing in this area.  These improvements make Hyper-V a better tool for people running virtual machines on a laptop.  They don’t, however, replace existing Hyper-V tools.  If you need to define specific virtual machine settings, New-VM or the new virtual machine wizard are the right tools.  For people with custom networks or complicated virtual network needs, continue using Virtual Switch Manager.

Also keep in mind that all of this is a work in progress.  There are rough edges for the default switch right now and there aren’t many images in the gallery.  Please give us feedback!  Your feedback helps us.  Let us know what images you would like to see and share issues by commenting on this blog or submitting feedback through Feedback Hub.

Cheers,
Sarah

Delivering Safer Apps with Windows Server 2016 and Docker Enterprise Edition


Windows Server 2016 and Docker Enterprise Edition are revolutionizing the way Windows developers can create, deploy, and manage their applications on-premises and in the cloud. Microsoft and Docker are committed to providing secure containerization technologies and enabling developers to implement security best practices in their applications. This blog post highlights some of the security features in Docker Enterprise Edition and Windows Server 2016 designed to help you deliver safer applications.

For more information on Docker and Windows Server 2016 Container security, check out the full whitepaper on Docker’s site.

Introduction

Today, many organizations are turning to Docker Enterprise Edition (EE) and Windows Server 2016 to deploy IT applications consistently and efficiently using containers. Container technologies can play a pivotal role in ensuring the applications being deployed in your enterprise are safe — free of malware, up-to-date with security patches, and known to come from a trustworthy source. Docker EE and Windows each play a hand in helping you develop and deploy safer applications according to the following three characteristics:

  1. Usable Security: Secure defaults with tooling that is native to both developers and operators.
  2. Trusted Delivery: Everything needed to run an application is delivered safely and guaranteed not to be tampered with.
  3. Infrastructure Independent: Application and security configurations are portable and can move between developer workstations, testing environments, and production deployments regardless of whether those environments are running in Azure or your own datacenter.

Usable Security

Resource Isolation

Windows Server 2016 ships with support for Windows Server Containers, which are powered by Docker Enterprise Edition. Docker EE for Windows Server is the result of a joint engineering effort between Microsoft and Docker. When you run a Windows Server Container, key system resources are sandboxed for each container and isolated from the host operating system. This means the container does not see the resources available on the host machine, and any changes made within the container will not affect the host or other containers. Some of the resources that are isolated include:

  • File system
  • Registry
  • Certificate stores
  • Namespace (privileged API access, system services, task scheduler, etc.)
  • Local users and groups

Additionally, you can limit a Windows Server Container’s use of the CPU, memory, disk usage, and disk throughput to protect the performance of other applications and containers running on the same host.

Hyper-V Isolation

For even greater isolation, Windows Server Containers can be deployed using Hyper-V isolation. In this configuration, the container runs inside a specially optimized Hyper-V virtual machine with a completely isolated Windows kernel instance. Docker EE handles creating, managing, and deleting the VM for you. Better yet, the same Docker container images can be used for both process isolated and Hyper-V isolated containers, and both types of containers can run side by side on the same host.

Application Secrets

Starting with Docker EE 17.06, support for delivering secrets to Windows Server Containers at runtime is now available. Secrets are simply blobs of data that may contain sensitive information best left out of a container image. Common examples of secrets are SSL/TLS certificates, connection strings, and passwords.

Developers and security operators use and manage secrets in the exact same way — by registering them on manager nodes (in an encrypted store), granting applicable services access to obtain the secrets, and instructing Docker to provide the secret to the container at deployment time. Each environment can use unique secrets without having to change the container image. The container can just read the secrets at runtime from the file system and use them for their intended purposes.
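As a minimal sketch (the secret value, secret name, service name, and image are placeholders), a secret is registered on a manager node and then granted to a service when it is created:

C:\> echo "placeholder-secret-value" | docker secret create db_password -
C:\> docker service create --name my-app --secret db_password <WINDOWS-IMAGE>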

Trusted Delivery

Image Signing and Verification

Knowing that the software running in your environment is authentic and came from a trusted source is critical to protecting your information assets. With Docker Content Trust, which is built into Docker EE, container images are cryptographically signed to record the contents present in the image at the time of signing. Later, when a host pulls the image down, it will validate the signature of the downloaded image and compare it to the expected signature from the metadata. If the two do not match, Docker EE will not deploy the image since it is likely that someone tampered with the image.

Image Scanning and Antimalware

Beyond checking if an image has been modified, it’s important to ensure the image doesn’t contain malware or libraries with known vulnerabilities. When images are stored in Docker Trusted Registry, Docker Security Scanning can analyze images to identify libraries and components in use that have known vulnerabilities in the Common Vulnerabilities and Exposures (CVE) database.

Further, when the image is pulled on a Windows Server 2016 host with Windows Defender enabled, the image will automatically be scanned for malware to prevent malicious software from being distributed through container images.

Windows Updates

Working alongside Docker Security Scanning, Microsoft Windows Update can ensure that your Windows Server operating system is up to date. Microsoft publishes two pre-built Windows Server base images to Docker Hub: microsoft/nanoserver and microsoft/windowsservercore. These images are updated the same day as new Windows security updates are released. When you use the “latest” tag to pull these images, you can rest assured that you’re working with the most up to date version of Windows Server. This makes it easy to integrate updates into your continuous integration and deployment workflow.

Infrastructure Independent

Active Directory Service Accounts

Windows workloads often rely on Active Directory for authentication of users to the application and authentication between the application itself and other resources like Microsoft SQL Server. Windows Server Containers can be configured to use a Group Managed Service Account when communicating over the network to provide a native authentication experience with your existing Active Directory infrastructure. You can select a different service account (even belonging to a different AD domain) for each environment where you deploy the container, without ever having to update the container image.

Docker Role Based Access Control

Docker Enterprise Edition allows administrators to apply fine-grained role based access control to a variety of Docker primitives, including volumes, nodes, networks, and containers. IT operators can grant users predefined permission roles to collections of Docker resources. Docker EE also provides the ability to create custom permission roles, providing IT operators tremendous flexibility in how they define access control policies in their environment.

Conclusion

With Docker Enterprise Edition and Windows Server 2016, you can develop, deploy, and manage your applications more safely using the variety of built-in security features designed with developers and operators in mind. To read more about the security features available when running Windows Server Containers with Docker Enterprise Edition, check out the full whitepaper and learn more about using Docker Enterprise Edition in Azure.

Docker’s routing mesh available with Windows Server version 1709


The Windows Core Networking team, along with our friends at Docker, are thrilled to announce that support for Docker’s ingress routing mesh will be supported with Windows Server version 1709.

Ingress routing mesh is part of swarm mode–Docker’s built-in orchestration solution for containers. Swarm mode first became available on Windows early this year, along with support for the Windows overlay network driver. With swarm mode, users have the ability to create container services and deploy them to a cluster of container hosts. With this, of course, also comes the ability to define published ports for services, so that the apps that those services are running can be accessed by endpoints outside of the swarm cluster (for example, a user might want to access a containerized web service via web browser from their laptop or phone).

To place routing mesh in context, it’s useful to understand that Docker currently provides two options for publishing services with swarm mode–routing mesh and host mode service publishing:*

  • Host mode is an approach to service publishing that’s optimal for production environments, where system administrators value maximum performance and full control over their container network configuration. With host mode, each container of a service is published directly to the host where it is running.
  • Routing mesh is an approach to service publishing that’s optimized for the developer experience, or for production cases where a simple configuration experience is valued above performance or control over how incoming requests are routed to the specific replicas/containers for a service. With ingress routing mesh, the containers for a published service can all be accessed through a single “swarm port”–one port, published on every swarm host (even hosts where no container for the service is currently running!).

While our support for routing mesh is new with Windows Server version 1709, host mode service publishing has been supported since swarm mode was originally made available on Windows. 

*For more information, on how host mode and routing mesh work, visit Docker’s documentation on routing mesh and publishing services with swarm mode.

So, what does it take to use routing mesh on Windows? Routing mesh is Docker’s default service publishing option. It has always been the default behavior on Linux, and now it’s also supported as the default on Windows! This means that all you need to do to use routing mesh is create your services using the --publish flag to the docker service create command, as described in Docker’s documentation.

For example, assume you have a basic web service, defined by a container image called web-frontend. If you wanted each container to listen on port 80 and the service to be published on port 8080 of all of your swarm nodes, you’d create the service with a command like this:

C:\> docker service create --name web --replicas 3 --publish 8080:80 web-frontend

In this case, the web app, running on a pre-configured swarm cluster along with a db backend service, might look like the app depicted below. As shown, because of routing mesh, clients outside of the swarm cluster (in this example, web browsers) are able to access the web service via its published port, 8080. In fact, each client can access the web service via its published port on any swarm host; no matter which host receives an original incoming request, that host will use routing mesh to route the request to a web container instance that can ultimately service that request.

Once again, we at Microsoft and our partners at Docker are proud to make ingress mode available to you on Windows. Try it out on Windows Server version 1709 using Docker EE Preview*, and let us know what you think! We appreciate your engagement and support in making features like routing mesh possible, and we encourage you to continue reaching out with feedback. Please provide your questions/comments/feature requests by posting issues to the Docker for Windows GitHub repo or by emailing the Windows Core Networking team directly, at sdn_feedback@microsoft.com.

*Note: Ingress mode on Windows currently has the following system requirements:

 


Container Images are now out for Windows Server version 1709!


With the release of Windows Server version 1709 also come Windows Server Core and Nano Server base OS container images.

It is important to note that while older versions of the base OS container images will work on a newer host (with Hyper-V isolation), the opposite is not true. Container images based on Windows Server version 1709 will not work on a host using Windows Server 2016.  Read more about the different versions of Windows Server.

We’ve also made some changes to our tagging scheme so you can more easily specify which version of the container images you want to use.  From now on, the “latest” tag will follow the releases of the current LTSC product, Windows Server 2016. If you want to keep up with the latest patches for Windows Server 2016, you can use:

“microsoft/nanoserver”
or
“microsoft/windowsservercore”

in your dockerfiles to get the most up-to-date version of the Windows Server 2016 base OS images. You can also continue using specific versions of the Windows Server 2016 base OS container images by using the tags specifying the build, like so:

“microsoft/nanoserver:10.0.14393.1770”
or
“microsoft/windowsservercore:10.0.14393.1770”.

If you would like to use base OS container images based on Windows Server version 1709, you will have to specify that with the tag. In order to get the most up-to-date base OS container images of Windows Server version 1709, you can use the tags:

“microsoft/nanoserver:1709”
or
“microsoft/windowsservercore:1709”

And if you would like a specific version of these base OS container images, you can specify the KB number that you need on the tag, like this:

“microsoft/nanoserver:1709_KB4043961”
or
“microsoft/windowsservercore:1709_KB4043961”.
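For example, to pull these images explicitly:

C:\> docker pull microsoft/windowsservercore:1709
C:\> docker pull microsoft/nanoserver:1709_KB4043961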

We hope that this tagging scheme will ensure that you always choose the image that you want and need for your environment. Please let us know in the comments if you have any feedback for us.

Note: We currently do not intend to use the build numbers to specify Windows Server version 1709 container images. We will only be using the KB schema specified above for the tagging of these images. Let us know if you have feedback about this as well.

Regards,
Ender

A great way to collect logs for troubleshooting


Did you ever have to troubleshoot issues within a Hyper-V cluster or standalone environment and found yourself switching between different event logs? Or did you repro something just to find out not all of the important Windows event channels had been activated?

To make it easier to collect the right set of event logs into a single evtx file to help with troubleshooting we have published a HyperVLogs PowerShell module on GitHub.

In this blog post I am sharing with you how to get the module and how to gather event logs using the functions provided.

Step 1: Download and import the PowerShell module

First of all, you need to download the PowerShell module and import it.
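A minimal sketch of this step, assuming the module lives in the hyperv-tools folder of the Virtualization-Documentation repository (adjust the clone URL and path to wherever the HyperVLogs module is actually published):

# Clone the repository that contains the HyperVLogs module (URL and folder layout are assumptions)
git clone https://github.com/MicrosoftDocs/Virtualization-Documentation.git
# Import the module into the current PowerShell session (adjust the path to your clone location)
Import-Module .\Virtualization-Documentation\hyperv-tools\HyperVLogs\HyperVLogs.psd1
# Verify which functions are now available
Get-Command -Module HyperVLogs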

Step 2: Reproduce the issue and capture logs

Now, you can use the functions provided as part of the module to collect logs for different situations.
For example, to investigate an issue on a single node, you can collect events with the following steps:
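Rather than repeating the exact function names here, list them from the module itself; the overall flow is a sketch along these lines:

# List the collection functions the HyperVLogs module provides
Get-Command -Module HyperVLogs

# Typical flow for a single node (function names vary, check the module's help):
#   1. Enable or verify the relevant Hyper-V event channels
#   2. Reproduce the issue
#   3. Save the events from those channels since a given timestamp into a single .evtx file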

Using this module and its functions made it a lot easier for me to collect the right event data to help with investigations. Any feedback or suggestions are highly welcome.

Cheers,
Lars

Available to Windows 10 Insiders Today: Access to published container ports via “localhost”/127.0.0.1

Create your custom Quick Create VM gallery


Have you ever wondered whether it is possible to add your own custom images to the list of available VMs for Quick Create?

The answer is: Yes, you can!

Since quite a few people have been asking us, this post will give you a quick example to get started and add your own custom image while we’re working on the official documentation. The following two steps will be described in this blog post:

  1. Create JSON document describing your image
  2. Add this JSON document to the list of galleries to include

Step 1: Create JSON document describing your image

The first thing you will need is a JSON document which describes the image you want to have showing up in quick create. The following snippet is a sample JSON document which you can adapt to your own needs. We will publish more documentation on this including a JSON schema to run validation as soon as it is ready.
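While we finish up the schema, here is a rough idea of what such a document can look like; every name, URI, and hash below is just a placeholder, and the final set of required fields may differ:

{
  "images": [
    {
      "name": "Contoso developer VM",
      "version": "1.0.0",
      "locale": "en-US",
      "publisher": "Contoso",
      "description": [ "Customized Contoso developer image." ],
      "disk": {
        "uri": "https://contoso.example.com/images/contoso-dev.vhdx",
        "hash": "sha256:<SHA256 of the .vhdx file>"
      },
      "logo":      { "uri": "https://contoso.example.com/images/contoso_logo.png",   "hash": "sha256:<SHA256 of the .png>" },
      "symbol":    { "uri": "https://contoso.example.com/images/contoso_symbol.png", "hash": "sha256:<SHA256 of the .png>" },
      "thumbnail": { "uri": "https://contoso.example.com/images/contoso_thumb.png",  "hash": "sha256:<SHA256 of the .png>" }
    }
  ]
}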

To calculate the SHA256 hashes for the linked files you can use different tools. Since it is already available on Windows 10 machines, I like to use a quick PowerShell call: Get-FileHash -Path .\contoso_logo.png -Algorithm SHA256
The values for logo, symbol, and thumbnail are optional, so if there are no images at hand, you can just remove these values from the JSON document.

Step 2: Add this JSON document to the list of galleries to include

To have your custom gallery image show up on a Windows 10 client, you need to set the GalleryLocations registry value under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization.
There are multiple ways to achieve this; you can adapt the following PowerShell snippet to set the value:
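One possible approach, which keeps whatever locations are already configured and appends your own gallery (the URI below is a placeholder; a local file path works as well):

$regPath   = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization'
$myGallery = 'https://contoso.example.com/vmgallery/contoso-gallery.json'   # placeholder for your JSON document

# Read the existing GalleryLocations value (if any) and append the custom gallery
$existing = (Get-ItemProperty -Path $regPath -Name GalleryLocations -ErrorAction SilentlyContinue).GalleryLocations
New-ItemProperty -Path $regPath -Name GalleryLocations -PropertyType MultiString `
    -Value (@($existing) + $myGallery | Where-Object { $_ }) -Force | Out-Null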

If you don’t want to include the official Windows 10 developer evaluation images, just remove the fwlink from the GalleryLocations value.

Have fun creating your own VM galleries and stay tuned for our official documentation. We’re looking forward to seeing what you create!

Lars

What’s new in Hyper-V for Windows 10 Fall Creators Update?


Windows 10 Fall Creators Update has arrived!  While we’ve been blogging about new features as they appear in Windows Insider builds, many of you have asked for a consolidated list of Hyper-V updates and improvements since Creators Update in April.

Summary:

  • Quick Create includes a gallery (and you can add your own images)
  • Hyper-V has a Default Switch for easy networking
  • It’s easy to revert virtual machines to their start state
  • Host battery state is visible in virtual machines
  • Virtual machines are easier to share

    Quick Create virtual machine gallery

    The virtual machine gallery in Quick Create makes it easy to find virtual machine images in one convenient location.


    You can also add your own virtual machine images to the Quick Create gallery.  Building a custom gallery takes some time but, once built, makes creating virtual machines easy and consistent.

    This blog post walks through adding custom images to the gallery.

    For images that aren’t in the gallery, select “Local Installation Source” to create a virtual machine from an .iso or vhd located somewhere in your file system.

    Keep in mind, while Quick Create and the virtual machine gallery are convenient, they are not a replacement for the New Virtual Machine wizard in Hyper-V manager.  For more complicated virtual machine configuration, use that.

    Default Switch

    The switch named “Default Switch” allows virtual machines to share the host’s network connection using NAT (Network Address Translation).  This switch has a few unique attributes:

    1. Virtual machines connected to it will have access to the host’s network whether you’re connected to Wi-Fi, a dock, or Ethernet. It will also work when the host is using a VPN or proxy.
    2. It’s available as soon as you enable Hyper-V – you won’t lose internet connectivity while setting it up.
    3. You can’t delete or rename it.
    4. It has the same name and device ID on all Windows 10 Fall Creators Update Hyper-V hosts.
      Name: Default Switch
      ID: c08cb7b8-9b3c-408e-8e30-5e16a3aeb444

    Yes, the Default Switch automatically assigns an IP address to the virtual machine and provides DHCP and DNS services.

    I’m really excited to have an always-available network connection for virtual machines on Hyper-V.  The Default Switch offers the best networking experience for virtual machines on a laptop.  If you need highly customized networking, however, continue using Virtual Switch Manager.

    Revert! (automatic checkpoints)

    This is my personal favorite feature from Fall Creators Update.

    For a little bit of background, I mostly use virtual machines to build/run demos and to sandbox simple experiments.  At least once a month, I accidentally mess up my virtual machine.  Sometimes I remember to make a checkpoint and I can roll back to a good state.  Most of the time I don’t.  Before automatic checkpoints, I’d have to choose between rebuilding my virtual machine or manually undoing my mistake.

    Starting in Fall Creators Update, Hyper-V creates a checkpoint when you start virtual machines.  Say you’re learning about Linux and accidentally run `rm -rf /*`, or you update your guest and discover a breaking change; now you can simply revert back to the state the virtual machine was in when it started.


    Automatic checkpoints are enabled by default on Windows 10 and disabled by default on Windows Server.  They are not useful for everyone.  If you have automation in place, or if you are worried about the overhead of creating a checkpoint at every start, you can disable automatic checkpoints with PowerShell (Set-VM -Name VMwithAutomation -AutomaticCheckpointsEnabled $false) or in VM settings under “Checkpoints”.
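For example, to toggle the setting from PowerShell (using the VM name from the example above):

# Turn automatic checkpoints off for a single VM...
Set-VM -Name 'VMwithAutomation' -AutomaticCheckpointsEnabled $false
# ...and turn them back on later if you change your mind
Set-VM -Name 'VMwithAutomation' -AutomaticCheckpointsEnabled $true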

    Here’s a link to the original announcement with more information.

    Battery pass-through

    Virtual machines in Fall Creators Update are aware of the host’s battery state.

    This is nice for a few reasons:

    1. You can see how much battery life you have left in a full-screen virtual machine.
    2. The guest operating system knows the battery state and can optimize for low power situations.

    Easier virtual machine sharing

    Sharing your Hyper-V virtual machines is easier with the new “Share” button. Share packages and compresses your virtual machine so you can move it to another Hyper-V host right from Virtual Machine Connection.


    Share creates a “.vmcz” file with your virtual hard drive (vhd/vhdx) and any state the virtual machine will need to run.  “Share” will not include checkpoints. If you would like to also export your checkpoints, you can use the “Export” tool, or the “Export-VM” PowerShell cmdlet.


    Once you’ve moved your virtual machine to another computer with Hyper-V, double click the “.vmcz” file and the virtual machine will import automatically.

    —-

    That’s the list!  As always, please send us feedback via FeedbackHub.

    Curious what we’re building next?  Become a Windows Insider – almost everything here has benefited from your early feedback.

    Cheers,
    Sarah

    WSL Interoperability with Docker


    We frequently get asked about running docker from within the Windows Subsystem for Linux (WSL). We don’t support running the docker daemon directly in WSL. But what you can do is call in to the daemon running under Windows from WSL. What does this let you do? You can create dockerfiles, build them, and run them in the daemon—Windows or Linux, depending on which runtime you have selected—all from the comfort of WSL. 

    Overview 

    The architectural design of docker is split into three components: a client, a REST API, and a server (the daemon). At a high level: 

    • Client: interacts with the REST API. The primary purpose of this piece is to allow a user to interface with the daemon. 
    • REST API: Acts as the interface between the client and server, allowing a flow of communication. 
    • Daemon: Responsible for actually managing the containers—starting, stopping, etc. The daemon listens for API requests from docker clients. 

    The daemon has very close ties to the kernel. Today in Windows, when you’re running Windows Server containers, a daemon process runs in Windows. When you switch to Linux Container mode, the daemon actually runs inside a VM called the Moby Linux VM. With the upcoming release of Docker, you’ll be able to run Windows Server containers and Linux container side-by-side, and the daemon will always run as a Windows process. 

    The client, however, doesn’t have to sit in the same place as the daemon. For example, you could have a local docker client on your dev machine communicating with Docker up in Azure. This allows us to have a client in WSL talking to the daemon running on the host. 

    What‘s the Proposal? 

    This method is made available because of a tool built by John Starks (@gigastarks), a dev lead on Hyper-V, called npiperelay. Getting communication up and running between WSL and the daemon isn’t new; there have been several great blog posts (this blog by Nick Janetakis comes to mind) which recommend going the TCP route by opening a port without TLS (like below): 
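Roughly, that approach looks like this (shown only for comparison; it is not what we’ll be doing here):

# In Docker for Windows settings, check "Expose daemon on tcp://localhost:2375 without TLS",
# then point the docker client inside WSL at that TCP endpoint:
export DOCKER_HOST=tcp://localhost:2375
docker info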

    While I would consider the port 2375 method to be more robust than the tutorial we’re about to walk through, you do expose your system to potential attack vectors for malicious code. We don’t like exposing attack vectors 🙂 

    What about opening another port for docker to listen on and protecting it with TLS? Well, Docker for Windows doesn’t support the requirements needed to make this happen.  So this brings us back to npiperelay. 

    Note: the tool we are about to use works best with insider builds–it can be a little buggy on ver. 1709. Your mileage may vary.

    Installing Go

    We’re going to build the relay from within WSL. If you do not have WSL installed, then you’ll need to download it from the Microsoft Store. Once you have WSL running, we need to download Go. To do this:

    #Make sure we have the latest package lists
    sudo apt-get update
    #Download Go. You should change the version if there's a newer one. Check at: https://golang.org/dl/
    sudo wget https://storage.googleapis.com/golang/go1.9.2.linux-amd64.tar.gz

    Now we need to unzip Go and add the binary to our PATH:

    #unzip Go
    sudo tar -C /usr/local -xzf go1.9.2.linux-amd64.tar.gz
    #Put it in the path
    export PATH=$PATH:/usr/local/go/bin

    Building the Relay

    With Go now installed, we can build the relay. In the command below, make sure to replace <your_user_name> with your Windows username:

    go get -d github.com/jstarks/npiperelay
    GOOS=windows go build -o /mnt/c/Users/<your_user_name>/go/bin/npiperelay.exe github.com/jstarks/npiperelay

    We’ve now built the relay for Windows, but we want it callable from within WSL. To do this, we make a symlink. Make sure to replace <your_user_name> with your Windows username:

    sudo ln -s /mnt/c/Users/<your_user_name>/go/bin/npiperelay.exe /usr/local/bin/npiperelay.exe

    We’ll be using socat to help enable the relay. Install socat, a tool that allows for bidirectional flow of data between two points (more on this later). Grab this package:

    sudo apt install socat

    We need to install the docker client on WSL. To do this:

    sudo apt install docker.io

    Last Steps

    With socat installed and the executable built, we just need to string a few things together. We’re going to make a shell script to activate the functionality for us. We’re going to place this in the home directory of the user. To do this:

    #make the file
    touch ~/docker-relay
    #add execution privileges
    chmod +x ~/docker-relay

    Open the file we’ve created with your favorite text editor (like vim). Paste this into the file:

    #!/bin/sh
    exec socat UNIX-LISTEN:/var/run/docker.sock,fork,group=docker,umask=007 EXEC:"npiperelay.exe -ep -s //./pipe/docker_engine",nofork

    Save the file and close it. The docker-relay script configures the Docker pipe to allow access by the docker group. To run as an ordinary user (without having to attach ‘sudo’ to every docker command), add your WSL user to the docker group. In Ubuntu:

    sudo adduser ${USER} docker

    Test it Out!

    Open a new WSL shell to ensure your group membership is reset. Launch the relay in the background:

    sudo ~/docker-relay &

    Now, run a docker command to test the waters. You should be greeted by the same output as if you ran the command from Windows (and note you don’t need ‘sudo’ prefixed to the command, either!)
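Any simple command will do; for example:

# Should print the same engine information you'd see from a Windows prompt
docker info
docker images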

    Volume Mounting

    If you’re wondering how volume mounting works with npiperelay, you’ll need to use the Windows path when you specify your volume. See the comparison below:

    #this is CORRECT
    docker run -v C:/Users/crwilhit.REDMOND/tmp/ microsoft/nanoserver cmd.exe

    #this is INCORRECT
    docker run -v /mnt/c/Users/crwilhit.REDMOND/tmp/ microsoft/nanoserver cmd.exe

    How Does it Work?

    There’s a fundamental problem with getting the docker client running under WSL to communicate with Docker for Windows: the WSL client understands IPC via unix sockets, whereas Docker for Windows understands IPC via named pipes. This is where socat and npiperelay.exe come in to play–as the mediators between these two forms of disjoint IPC. Socat understands how to communicate via unix sockets and npiperelay understands how to communicate via named pipes. Socat and npiperelay both understand how to communicate via stdio, hence they can talk to each other.

    Conclusion

    Congratulations, you can now talk to Docker for Windows via WSL. With the recent addition of background processes in WSL, you can close out of WSL, open it later, and the relay we’ve built will continue to run. However, if you kill the socat process or do a hard reboot of your system, you’ll need to make sure you launch the relay in the background again when you first launch WSL.

    You can use the npiperelay tool for other things as well. Check out the GitHub repo to learn more. Try it out and let us know how this works out for you.

    Migrating local VM owner certificates for VMs with vTPM


    Whenever I want to replace or reinstall a system which is used to run virtual machines with a virtual trusted platform module (vTPM), I’ve been facing a challenge: for hosts that are not part of a guarded fabric, the new system needs to be authorized to run the VM.
    Some time ago, I wrote a blog post focused on running VMs with a vTPM on additional hosts, but the approach highlighted there does not solve everything when the original host is decommissioned. The VMs can be started on the new host, but without the original owner certificates, you cannot change the list of allowed guardians anymore.

    This blog post shows a way to export the information needed from the source host and import it on a destination host. Please note that this technique only works for local mode and not for a host that is part of a guarded fabric. You can check whether your host runs in local mode by running Get-HgsClientConfiguration. The property Mode should list Local as a value.

    Exporting the default owner from the source host

    The following script exports the necessary information of the default owner (“UntrustedGuardian”) on a host that is configured using local mode. When running the script on the source host, two certificates are exported: a signing certificate and an encryption certificate.
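The script itself is not reproduced here, but its core looks roughly like the following sketch (the certificate store path, object property names, and output file names are assumptions; adjust them to your environment):

# Look up the default guardian and choose a password to protect the exported PFX files
$guardian = Get-HgsGuardian -Name 'UntrustedGuardian'
$password = Read-Host -Prompt 'Password for the exported certificates' -AsSecureString

# In local mode the guardian's certificates live in the local machine store (store name is an assumption)
$store      = 'Cert:\LocalMachine\Shielded VM Local Certificates'
$signing    = Get-Item (Join-Path $store $guardian.SigningCertificate.Thumbprint)
$encryption = Get-Item (Join-Path $store $guardian.EncryptionCertificate.Thumbprint)

Export-PfxCertificate -Cert $signing    -Password $password -FilePath .\UntrustedGuardian-Signing.pfx
Export-PfxCertificate -Cert $encryption -Password $password -FilePath .\UntrustedGuardian-Encryption.pfx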

    Importing the UntrustedGuardian on the new host

    On the destination host, the following snippet creates a new guardian using the certificates that have been exported in the previous step.
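In essence, this boils down to a New-HgsGuardian call pointing at the two exported PFX files; a sketch, assuming the export step above (verify the parameter names against the HgsClient module on your host):

# Password that was used when exporting the PFX files
$password = Read-Host -Prompt 'Password of the exported certificates' -AsSecureString

# Recreate the default guardian from the exported signing and encryption certificates
New-HgsGuardian -Name 'UntrustedGuardian' `
    -SigningCertificate .\UntrustedGuardian-Signing.pfx -SigningCertificatePassword $password `
    -EncryptionCertificate .\UntrustedGuardian-Encryption.pfx -EncryptionCertificatePassword $password `
    -AllowExpired -AllowUntrustedRoot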

    Please note that importing the “UntrustedGuardian” on the new host has to be done before creating new VMs with a vTPM on this host — otherwise a new guardian with the same name will already be present and the creation with the PowerShell snippet above will fail.

    With these two steps, you should be able to migrate all the necessary bits to keep your VMs with vTPM running in your dev/test environment. This approach can also be used to back up your owner certificates, depending on how these certificates have been created.


    Tar and Curl Come to Windows!


    Beginning in Insider Build 17063, we’re introducing two command-line tools to the Windows toolchain: curl and bsdtar. It’s been a long time coming, I know. We’d like to give credit to the folks who’ve created and maintain bsdtar and curl—awesome open-source tools used by millions of humans every day. Let’s take a look at two impactful ways these tools will make developing on Windows an even better experience.

    1. Developers! Developers! Developers!

    Tar and curl are staples in a developer’s toolbox; beginning today, you’ll find these tools are available from the command-line for all SKUs of Windows. And yes, they’re the same tools you’ve come to know and love! If you’re unfamiliar with these tools, here’s an overview of what they do:

    • Tar: A command line tool that allows a user to extract files and create archives. Outside of PowerShell or the installation of third party software, there was no way to extract a file from cmd.exe. We’re correcting this behavior 🙂 The implementation we’re shipping in Windows uses libarchive.
    • Curl: Another command line tool that allows for transferring of files to and from servers (so you can, say, now download a file from the internet).

    Now not only will you be able to perform file transfers from the command line, you’ll also be able to extract files in formats beyond .zip (like .tar.gz, for example). PowerShell does already offer similar functionality (it has a curl alias and its own file extraction utilities), but we recognize that there might be instances where PowerShell is not readily available or the user wants to stay in cmd.

    2. The Containers Experience

    Now that we’re shipping these tools inbox, you no longer need to worry about using a separate container image as the builder when targeting nanoserver-based containers. Instead, we can invoke the tools like so:
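For instance, a dockerfile targeting Nano Server can now download and unpack a package directly; the image tag and download URL below are illustrative only (the inbox tools require an Insider-based image):

FROM microsoft/nanoserver-insider
RUN curl.exe -L -o node.zip https://nodejs.org/dist/v8.9.3/node-v8.9.3-win-x64.zip
RUN tar.exe -xf node.zip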

    Background

    We offer two base images for our containers: windowsservercore and nanoserver. The servercore image is the larger of the two and has support for such things as the full .NET Framework. On the opposite end of the spectrum is nanoserver, which is built to be lightweight with as minimal a memory footprint as possible. It’s capable of running .NET Core but, in keeping with the minimalism, we’ve tried to slim down the image size as much as possible. We threw out all components we felt were not mission-critical for the container image.

    PowerShell was one of the components that was put on the chopping block for our nanoserver image. PowerShell is a whopping 56 MB (given that the total size of the nanoserver image is 200 MB…that’s quite the savings!). But the consequence of removing PowerShell was that there was no way to pull down a package and unzip it from within the container.

    If you’re familiar with writing dockerfiles, you’ll know that it’s common practice to pull in all the packages (node, mongo, etc.) you need and install them. Instead, users targeting nanoserver had to rely on a separate image with PowerShell as the “builder” image to construct their final image. This is clearly not the experience we want our users to have when targeting nanoserver—they’d end up having to download the much larger servercore image.

    This is all resolved with the addition of curl and tar. You can call these tools from servercore images as well.

     

    We want your Feedback!

    Are there other developer tools you would like to see added to the command line? Drop a comment below with your thoughts! In the meantime, go grab Insider Build 17063 and get busy curl’ing and tar’ing to your heart’s desire.

    A smaller Windows Server Core Container with better Application Compatibility


    In Windows Server Insider Preview Build 17074 released on Tuesday Jan 16, 2018, there are some exciting improvements to Windows Server containers that we’d like to share with you.  We’d love for you to test out the build, especially the Windows Server Core container image, and give us feedback!

    Windows Server Core Container Base Image Size Reduced to 1.58GB!

    You told us that the size of the Server Core container image affects your deployment times, takes too long to pull down and takes up too much space on your laptops and servers alike.  In our first Semi-Annual Channel release, Windows Server, version 1709, we made some great progress reducing the size by 60% and your excitement was noted.  We’ve continued to actively look for additional space savings while balancing application compatibility. It’s not easy but we are committed.

    There are two main directions we looked at:

    1)      Architecture optimization to reduce duplicate payloads

    We are always looking for ways to optimize our architecture. In Windows Server, version 1709, along with the substantial reduction in the Server Core container image, we also made substantial reductions in the Nano Server container image (dropping it below 100MB). In doing that work we identified that some of the same architecture could be leveraged for the Server Core container. In partnership with other teams in Windows, we were able to implement changes in our build process to take advantage of those improvements. The great part about this work is that you should not notice any differences in application compatibility or experience, other than a nice reduction in size and some performance improvements.

    2)      Removing unused optional components

    We looked at all the various roles, features and optional components available in Server Core and broke them down into a few buckets in terms of usage: frequently used in containers, rarely used in containers, those that we don’t believe are being used, and those that are not supported in containers. We leveraged several data sources to help categorize this list. First, those of you that have telemetry enabled: thank you! That anonymized data is invaluable to these exercises. Second was publicly available dockerfiles/images, and of course feedback from GitHub issues and forums. Third, roles and features that are not supported in containers at all were an easy call to remove. Lastly, we also removed roles and features where we saw no evidence of customers using them. We could do more in this space in the future, but we really need your feedback (telemetry is also very much appreciated) to help guide what can be removed or separated.

    So, here are the numbers on Windows Server Core container size if you are curious:

    • 1.58GB, download size, 30% reduction from Windows Server, version 1709
    • 3.61GB, on disk size, 20% reduction from Windows Server, version 1709

    MSMQ now installs in a Windows Server Core container

    MSMQ has been one of the top asks we heard from you, and it ranks very high on Windows Server User Voice here. In this release, we partnered with our Kernel team and made the change, which was not trivial. We are happy to announce that it now installs and passed our in-house application compatibility tests. Woohoo!

    However, there are many different use cases and ways customers have used MSMQ. So please do try it out and let us know if it indeed works for you.

    A Few Other Key App Compatibility Bug Fixes:

    • We fixed the issue reported on GitHub that services running in containers do not receive shutdown notification.

    https://github.com/moby/moby/issues/25982

    • We fixed this issue reported on GitHub and User Voice related to BitLocker and FDVDenyWriteAccess policy: Users were not able to run basic Docker commands like Docker Pull.

    https://github.com/Microsoft/Virtualization-Documentation/issues/530

    https://github.com/Microsoft/Virtualization-Documentation/issues/355

    https://windowsserver.uservoice.com/forums/304624-containers/suggestions/18544312-fix-docker-load-pull-build-issue-when-bitlocker-is

    • We fixed a few issues reported on GitHub related to mounting directories between hosts and containers.

    https://github.com/moby/moby/issues/30556

    https://github.com/git-for-windows/git/issues/1007

    We are so excited and proud of what we have done so far to listen to your voice, continuously optimize Server Core container size and performance, and fix top application compatibility issues to make your Windows container experience better and meet your business needs. We love hearing how you are using Windows containers, and we know there are still plenty of opportunities ahead of us to make them even faster and better. Fun journey ahead of us!

    Thank you.

    Weijuan

    Looking at the Hyper-V Event Log (January 2018 edition)


    Hyper-V has changed over the last few years and so has our event log structure. With that in mind, here is an update of Ben’s original post in 2009 (“Looking at the Hyper-V Event Log”).

    This post gives a short overview on the different Windows event log channels that Hyper-V uses. It can be used as a reference to better understand which event channels might be relevant for different purposes.

    As a general guidance you should start with the Hyper-V-VMMS and Hyper-V-Worker event channels when analyzing a failure. For migration-related events it makes sense to look at the event logs both on the source and destination node.

    Windows Event Viewer showing the Hyper-V-VMMS Admin log

    Below are the current event log channels for Hyper-V. Using “Event Viewer” you can find them under “Applications and Services Logs”, “Microsoft”, “Windows”.
    If you would like to collect events from these channels and consolidate them into a single file, we’ve published a HyperVLogs PowerShell module to help.

    Event channel category and description
    Hyper-V-Compute Events from the Host Compute Service (HCS) are collected here. The HCS is a low-level management API.
    Hyper-V-Config This section is for anything that relates to virtual machine configuration files. If you have a missing or corrupt virtual machine configuration file – there will be entries here that tell you all about it.
    Hyper-V-Guest-Drivers Look at this section if you are experiencing issues with VM integration components.
    Hyper-V-High-Availability Hyper-V clustering-related events are collected in this section.
    Hyper-V-Hypervisor This section is used for hypervisor specific events. You will usually only need to look here if the hypervisor fails to start – then you can get detailed information here.
    Hyper-V-StorageVSP Events from the Storage Virtualization Service Provider. Typically you would look at these when you want to debug low-level storage operations for a virtual machine.
    Hyper-V-VID These are events from the Virtualization Infrastructure Driver. Look here if you experience issues with memory assignment, e.g. dynamic memory, or changing static memory while the VM is running.
    Hyper-V-VMMS Events from the virtual machine management service can be found here. When VMs are not starting properly, or VM migrations fail, this would be a good source to start investigating.
    Hyper-V-VmSwitch These channels contain events from the virtual network switches.
    Hyper-V-Worker This section contains events from the worker process that is used for the actual running of the virtual machine. You will see events related to startup and shutdown of the VM here.
    Hyper-V-Shared-VHDX Events specific to virtual hard disks that can be shared between several virtual machines. If you are using shared VHDs this event channel can provide more detail in case of a failure.
    Hyper-V-VMSP The VM security process (VMSP) is used to provide secured virtual devices like the virtual TPM module to the VM.
    Hyper-V-VfpExt Events from the Virtual Filtering Platform (VFP), which is part of the Software Defined Networking stack.
    VHDMP Events from operations on virtual hard disk files (e.g. creation, merging) go here.

    Please note: some of these only contain analytic/debug logs that need to be enabled separately and not all channels exist on Windows client. To enable the analytic/debug logs, you can use the HyperVLogs PowerShell module.
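If you prefer the command line over Event Viewer, Get-WinEvent can read these channels directly; for example, to look at the most recent VMMS admin events:

Get-WinEvent -LogName 'Microsoft-Windows-Hyper-V-VMMS-Admin' -MaxEvents 20 |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize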

    Alles Gute,

    Lars

    Sneak Peek: Taking a Spin with Enhanced Linux VMs


    Whether you’re a developer or an IT admin, virtual machines are familiar tools that allow users to run entirely separate operating system instances on a host. And despite being a separate OS, we feel there’s a great importance in having a VM experience that feels tightly integrated with the host. We invested in making the Windows client VM experience first-class, and users really liked it. Our users asked us to go further: they wanted that same first-class experience on Linux VMs as well.

    As we thought about how we could deliver a better-quality experience–one that achieved closer parity with Windows clients–we found an opportunity to collaborate with the open source folks at XRDP, who have implemented Microsoft’s RDP protocol on Linux.

    We’re partnering with Canonical on the upcoming Ubuntu 18.04 release to make this experience a reality, and we’re working to provide a solution that works out of the box. Hyper-V’s Quick Create VM gallery  is the perfect vehicle to deliver such an experience. With only 3 mouse clicks, users will be able to get an Ubuntu VM running that offers clipboard functionality, drive redirection, and much more.

    But you don’t have to wait until the release of Ubuntu 18.04 to try out the improved Linux VM experience. Read on to learn how you can get a sneak peek!

    Disclaimer: This feature is under development. This tutorial outlines steps to have an enhanced Ubuntu experience in 16.04. Our TARGET experience will be with 18.04. There may be some bugs you discover in 16.04–and that’s okay! We want to gather this data so we can make the 18.04 experience great. 

    A Call for Testing

    We’ve chosen Canonical’s next LTS release, Bionic Beaver, to be the focal point of our investments. In the lead up to the official release of 18.04, we’d like to begin getting feedback on how satisfied users are with the general experience. The experience we’re working towards in Ubuntu 18.04 can be set up in Ubuntu 16.04 (with a few extra steps). We will walk through how to set up an Ubuntu 16.04 VM running in Hyper-V with Enhanced Session Mode.

    In the future, you can expect to be able to find an Ubuntu 18.04 image sitting in the Hyper-V Quick Create gallery 😊

    NOTE: In order to participate in this tutorial, you need to be on Insider Builds, running at minimum Insider Build No. 17063

    Tutorial

    Grab the Ubuntu 16.04 ISO from Canonical’s website, found at releases.ubuntu.com. Provision the VM as you normally would and step through the installation process. We created a set of scripts to perform all the heavy lifting to set up your environment appropriately. Once your VM is fully operational, we’ll be executing the following commands inside of it.

    #Get the scripts from GitHub
    $ sudo apt-get update
    $ sudo apt install git
    $ git clone https://github.com/jterry75/xrdp-init.git ~/xrdp-init
    $ cd ~/xrdp-init/ubuntu/16.04/

    #Make the scripts executable and run them...
    $ sudo chmod +x install.sh
    $ sudo chmod +x config-user.sh
    $ sudo ./install.sh

    Install.sh will need to be run twice in order for the script to execute fully (it must perform a reboot mid-script). That is, once your VM reboots, you’ll need to change directory into the location of the script and run it again. Once you’ve finished running the install.sh script, you’ll need to run config-user.sh:

    $ sudo ./config-user.sh

    After you’ve run your scripts, shut down your VM. On your host machine in a powershell prompt, execute this command:

    Set-VM -VMName <your_vm_name>  -EnhancedSessionTransportType HvSocket

    Now, when you boot your VM, you will be greeted with an option to connect and adjust your display size. This is an indication that you’re running in enhanced session mode. Click “Connect” and you’re all set.

    What are the Benefits?

    These are the features that you get with the new enhanced session mode:

    • Better mouse experience
    • Integrated clipboard
    • Window Resizing
    • Drive Redirection

    We encourage you to log any issues you discover to GitHub. This will also give you an idea of already identified issues.

    How does this work?

    The technology behind this mode is actually the same as how we achieve an enhanced session mode in Windows. It relies on the RDP protocol, implemented on Linux by the open source folks at XRDP, over Hyper-V sockets to light up all the great features that give the VM an integrated feel. Hyper-V sockets, or hv_sock, supply a byte-stream based communication mechanism between the host partition and the guest VM. Think of it as similar to TCP, except it’s going over an optimized transport layer called VMBus. We contributed changes which would allow XRDP to utilize hv_sock.

    The scripts we executed did the following:

    • Installs the “Linux-azure” kernel to the VM. This carries the hv_sock bits that we need.
    • Downloads the XRDP source code and compiles it with the hv_sock feature turned on (the published XRDP package in 16.04 doesn’t have this set, so we must compile from source).
    • Builds and installs xorgxrdp.
    • Configures the user session for RDP
    • Launches the XRDP service

    As we mentioned earlier, the steps described above are for Ubuntu 16.04, which will look a little different from 18.04. In fact, with Ubuntu 18.04 shipping with the 4.15 Linux kernel (which already carries the hv_sock bits), we won’t need to apply the linux-azure kernel. The version of XRDP that ships in 18.04 is already compiled with the hv_sock feature turned on, so there’s no more need to build xrdp/xorgxrdp from source—a simple “apt install” will bring in all the feature goodness!

    If you’re not flighting insider builds, you can look forward to having this enhanced VM experience via the VM gallery when Ubuntu 18.04 is released at the end of April. Leave a comment below on your experience or tweet me with your thoughts!

    Cheers,

    Craig Wilhite (@CraigWilhite)

    Hyper-V symbols for debugging


    Having access to debugging symbols can be very handy, for example when you are

    • A partner building solutions leveraging Hyper-V,
    • Trying to debug a specific issue, or
    • Searching for bugs to participate in the Microsoft Hyper-V Bounty Program.

    Starting with symbols for Windows Server 2016 with an installed April 2018 cumulative update, we are now providing access to most Hyper-V-related symbols through the public symbol servers. Here are some of the symbols that are available right now:

    SYMCHK: vmbuspipe.dll [10.0.14393.2007 ] PASSED - PDB: vmbuspipe.pdb DBG:
    SYMCHK: vmbuspiper.dll [10.0.14393.2007 ] PASSED - PDB: vmbuspiper.pdb DBG:
    SYMCHK: vmbusvdev.dll [10.0.14393.2007 ] PASSED - PDB: vmbusvdev.pdb DBG:
    SYMCHK: vmchipset.dll [10.0.14393.2007 ] PASSED - PDB: VmChipset.pdb DBG:
    SYMCHK: vmcompute.dll [10.0.14393.2214 ] PASSED - PDB: vmcompute.pdb DBG:
    SYMCHK: vmcompute.exe [10.0.14393.2214 ] PASSED - PDB: vmcompute.pdb DBG:
    SYMCHK: vmconnect.exe [10.0.14393.0 ] PASSED - PDB: vmconnect.pdb DBG:
    SYMCHK: vmdebug.dll [10.0.14393.2097 ] PASSED - PDB: vmdebug.pdb DBG:
    SYMCHK: vmdynmem.dll [10.0.14393.2007 ] PASSED - PDB: vmdynmem.pdb DBG:
    SYMCHK: vmemulateddevices.dll [10.0.14393.2007 ] PASSED - PDB: VmEmulatedDevices.pdb DBG:
    SYMCHK: VmEmulatedNic.dll [10.0.14393.2007 ] PASSED - PDB: VmEmulatedNic.pdb DBG:
    SYMCHK: VmEmulatedStorage.dll [10.0.14393.2214 ] PASSED - PDB: VmEmulatedStorage.pdb DBG:
    SYMCHK: vmicrdv.dll [10.0.14393.2007 ] PASSED - PDB: vmicrdv.pdb DBG:
    SYMCHK: vmictimeprovider.dll [10.0.14393.2007 ] PASSED - PDB: vmictimeprovider.pdb DBG:
    SYMCHK: vmicvdev.dll [10.0.14393.2214 ] PASSED - PDB: vmicvdev.pdb DBG:
    SYMCHK: vmms.exe [10.0.14393.2214 ] PASSED - PDB: vmms.pdb DBG:
    SYMCHK: vmrdvcore.dll [10.0.14393.2214 ] PASSED - PDB: vmrdvcore.pdb DBG:
    SYMCHK: vmserial.dll [10.0.14393.2007 ] PASSED - PDB: vmserial.pdb DBG:
    SYMCHK: vmsif.dll [10.0.14393.2214 ] PASSED - PDB: vmsif.pdb DBG:
    SYMCHK: vmsifproxystub.dll [10.0.14393.82 ] PASSED - PDB: vmsifproxystub.pdb DBG:
    SYMCHK: vmsmb.dll [10.0.14393.2007 ] PASSED - PDB: vmsmb.pdb DBG:
    SYMCHK: vmsp.exe [10.0.14393.2214 ] PASSED - PDB: vmsp.pdb DBG:
    SYMCHK: vmsynthfcvdev.dll [10.0.14393.2007 ] PASSED - PDB: VmSynthFcVdev.pdb DBG:
    SYMCHK: VmSynthNic.dll [10.0.14393.2007 ] PASSED - PDB: VmSynthNic.pdb DBG:
    SYMCHK: vmsynthstor.dll [10.0.14393.2007 ] PASSED - PDB: VmSynthStor.pdb DBG:
    SYMCHK: vmtpm.dll [10.0.14393.2007 ] PASSED - PDB: vmtpm.pdb DBG:
    SYMCHK: vmuidevices.dll [10.0.14393.2007 ] PASSED - PDB: VmUiDevices.pdb DBG:
    SYMCHK: vmusrv.dll [10.0.14393.2007 ] PASSED - PDB: vmusrv.pdb DBG:
    SYMCHK: vmwp.exe [10.0.14393.2214 ] PASSED - PDB: vmwp.pdb DBG:
    SYMCHK: vmwpctrl.dll [10.0.14393.2007 ] PASSED - PDB: vmwpctrl.pdb DBG:
    SYMCHK: hvhostsvc.dll [10.0.14393.2007 ] PASSED - PDB: hvhostsvc.pdb DBG:
    SYMCHK: vpcivsp.sys [10.0.14393.2214 ] PASSED - PDB: vpcivsp.pdb DBG:
    SYMCHK: vhdmp.sys [10.0.14393.2097 ] PASSED - PDB: vhdmp.pdb DBG:
    SYMCHK: vmprox.dll [10.0.14393.2007 ] PASSED - PDB: vmprox.pdb DBG:
    SYMCHK: vid.dll [10.0.14393.2007 ] PASSED - PDB: vid.pdb DBG:
    SYMCHK: Vid.sys [10.0.14393.2007 ] PASSED - PDB: Vid.pdb DBG:
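To consume the symbols, point your debugger or symchk at the Microsoft public symbol server; for example, from an elevated command prompt (the local cache path is illustrative):

    REM Set the symbol path for the debugging tools (cache symbols locally under C:\symbols)
    setx _NT_SYMBOL_PATH srv*C:\symbols*https://msdl.microsoft.com/download/symbols
    REM Verify that symbols for the VM worker process resolve
    symchk C:\Windows\System32\vmwp.exe /s srv*C:\symbols*https://msdl.microsoft.com/download/symbols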

    There is a limited set of virtualization-related symbols that are currently not available: storvsp.pdb, vhdparser.pdb, passthroughparser.pdb, hvax64.pdb, hvix64.pdb, and hvloader.pdb.

    If you have a scenario where you need access to any of these symbols, please let us know in the comments below or through the Feedback Hub app. Please include some detail on the specific scenario which you are looking at. With newer releases, we are evaluating whether we can make even more symbols available.

    Alles Gute,
    Lars

    [update 2018-04-26]: Symbols for vid.sys, vid.dll, and vmprox.dll are now available as well — the post has been updated to include them.
