Getting Started with Nano Server

By: Charbel Nemnom

Overview of this book

Nano Server allows developers and operations teams to work closely together and use containers that package applications so that the entire platform works as one. The aim of Nano Server is to help applications run the way they are intended to. It can be used to run and deploy infrastructure (acting as a compute host, storage host, container, or VM guest operating system) without consuming significant resources. Although Nano Server isn't intended to replace Windows Server 2016 or 2012 R2, it will be an attractive choice for developers and IT teams. Want to improve your ability to deploy a new VM and install and deploy container apps within minutes? You have come to the right place! The objective of this book is to get you started with Nano Server successfully. The journey is quite exciting, since we are introducing you to a cutting-edge technology that will revolutionize today's datacenters. We'll cover everything from basic to advanced topics. You'll discover a lot of added value from using Nano Server, such as running hundreds of VMs on a single host thanks to its small footprint, which could be a big plus for you and your company. After reading this book, you will have the necessary skills to start your journey effectively using Nano Server.

The journey to Nano Server


Now, let's go back and tell the story from the beginning, starting with the Windows NT 3.1 days. Windows Server began its life as Windows NT, and what Microsoft did at that time was take the client operating system and install everything on top of it. All the roles and features were in the box: you could just deploy what you wanted and you were up and running. In fact, Mark Russinovich (CTO of Microsoft Azure) claimed that he discovered the registry key that would allow you to convert your client OS into a server. That approach continued through Windows Server 2003, when Microsoft started to separate out some of the roles and features.

Server Core

The big change occurred in Windows Server 2008, when Microsoft introduced Windows Server Core as an installation option. This was really the first step toward deploying less on your servers, having less to patch and reboot, and leaving out components that you don't necessarily need on your servers. What I mean by installation option is that when you first start installing the operating system, you can choose between a Server Core installation and a Server with Desktop Experience installation.

Once you deploy Server Core or Server with Desktop Experience, you can start adding the roles and features that you want to run on top, shown as the small boxes at the top of Figure 1.

For Windows Server 2008 and Windows Server 2008 R2, the choice between Server Core and a full installation had to be made at installation time and couldn't be changed without reinstalling the OS:

Figure 1: Windows NT - Windows Server 2012 R2 journey (image source: Microsoft)
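For example, on Windows Server 2012 and later you can layer a role on top of either installation option with the ServerManager cmdlets (on Windows Server 2008 R2 the equivalent cmdlet is Add-WindowsFeature). This is a minimal sketch, and the Hyper-V role is only an illustrative choice:

    # List the roles and features that are currently installed on this server
    Get-WindowsFeature | Where-Object Installed

    # Add the Hyper-V role plus its management tools, then reboot to finish the installation
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart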

However, with Windows Server 2012 and 2012 R2, Microsoft offered the installation options in a way that lets you start by deploying Server Core and then add a package to move up to Full Server, or install a Full Server and then remove the Server Graphical Shell and the Graphical Management Tools and Infrastructure to convert back down to Server Core, as shown in Figure 2. In other words, the graphical shell and the management infrastructure are features that can be added and removed at any time, requiring only a reboot, which makes it easy to switch between Server Core and Full Server with a GUI. Microsoft also introduced the Minimal Server Interface, where you uninstall Internet Explorer and Explorer.exe and keep just the Microsoft Management Console (MMC) and Server Manager, which results in less patching. The Minimal Server Interface has fewer benefits than Server Core, but it does provide a nice middle ground versus Server with Desktop Experience:

Figure 2: Removing the graphical management tools in Windows Server 2012 R2
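As an illustration, the conversion shown in Figure 2 maps to two optional features that can be removed or re-added with PowerShell on Windows Server 2012/2012 R2. This is a minimal sketch; run it from an elevated session and expect a reboot each time:

    # Convert Full Server down to Server Core by removing both GUI features
    Uninstall-WindowsFeature -Name Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

    # Remove only the shell to land on the Minimal Server Interface (MMC and Server Manager remain)
    Uninstall-WindowsFeature -Name Server-Gui-Shell -Restart

    # Convert Server Core back up to Full Server; add -Source if the feature payload was removed from the side-by-side store
    Install-WindowsFeature -Name Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart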

Cloud journey

Now let's move to the cloud journey with Microsoft Azure. A large server installation has a lot of things installed, requires patching, and needs reboots, which interrupt service delivery. Azure doesn't use live migration and doesn't use failover clustering, so when Microsoft has to take down a host in an Azure data center, the virtual machines on it have to be taken down and restarted as well. With a large number of servers and large OS resource consumption, this generates a lot of cost of goods sold (COGS): the direct costs attributable to the production of the services sold by Microsoft Azure. During provisioning, large host images also compete for network resources. As mentioned in the Business impact section earlier in this chapter, deploying all those hosts and then re-imaging all of them when a new patch comes out requires a lot of network bandwidth. Many service providers (not only Microsoft Azure) are over-provisioning their networks so that they have enough capacity for live migration or for re-provisioning servers.

Back in October 2014, Microsoft released the first version of their cloud-in-a-box solution, called Cloud Platform System (CPS), which runs on top of Windows Server Core, System Center, and Windows Azure Pack. Building a CPS system takes a lot of time: installing all that software is slow, and patching impacts the network allocation. Since a CPS system is an on-premises solution, it does use live migration for the virtual machines. A fully loaded four-rack CPS configuration supports up to 8,000 virtual machines, so if each VM is configured with 2 GB of RAM, you need to live migrate 16 TB over the network. We can conclude that you need enough capacity to handle that traffic instead of using it for the business itself. That's not to say that CPS isn't optimized for live migration: it uses live migration over the Server Message Block (SMB) protocol to offload the network traffic directly to Remote Direct Memory Access (RDMA) NICs, which is really fast. However, it still takes time to migrate 16 TB of data, and, as mentioned earlier, server reboots result in service disruption. A reboot of a compute (Hyper-V) host in CPS takes around 2 minutes, and a storage host takes around 5 minutes to complete.

From both Azure and building up the CPS solution, Microsoft determined that they needed a server configuration that is optimized for the cloud and that also benefits all their customers, whether you are deploying a cloud configuration in your data center, using Windows Server as your virtualization platform, or leveraging a public cloud that runs on top of Windows Server.

The next step in the journey is Nano Server, a new headless, 64-bit-only deployment option for Windows Server, as you can see in Figure 3. It's a little different from Windows Server 2012 R2 in Figure 1. Nano Server follows the Server Core pattern as a separate installation option: you install Nano Server, and then there is a subset of roles and features that you can add on top. The installation options in Windows Server 2016 are Nano Server, Server Core, and Server with Desktop Experience. Microsoft made a significant change in Windows Server 2016: you can no longer move between the different installation options as you could in Windows Server 2012 R2, because of some of the changes they had to make in order to implement Nano Server and Server with Desktop Experience:

Figure 3: Nano Server journey (image source: Microsoft)

Nano Server is a deep refactoring of Windows Server, initially focused on the CloudOS infrastructure. With Nano Server, you can deploy Hyper-V hosts as a compute platform. You can deploy Scale-Out File Server storage nodes and clusters, so you can build clustered storage servers or clustered Hyper-V hosts and live migrate across nodes. The Nano Server team is continuously working on supporting born-in-the-cloud applications, that is, applications written with cloud patterns that allow them to run on top of Nano Server. Nano Server can be installed on your physical machines or as a guest virtual machine, and it also serves as the base OS for Hyper-V containers. Please refer to Chapter 8, Running Windows Server Containers and Hyper-V Containers on Nano Server, for more details about Windows Server containers and Hyper-V containers running on top of Nano Server.

Nano Server is a separate installation option. It's a self-contained operating system that has everything you need. The major difference between Nano Server and Server Core is that none of the roles or features are available in the image, unlike what we get in Server Core and Full Server. With Windows Server, the side-by-side store means that when you add or install additional roles and features, you are never prompted for the media, because the required binary data already exists on your hard disk within the OS. In Nano Server, however, all the infrastructure roles (Hyper-V, storage, clustering, DNS, IIS, and so on) live in a series of separate packages, so you have to add them to the image. This way, your base Nano Server image always stays very small. As you start adding roles and features to Nano Server, each role becomes an additional package; the Hyper-V role, for example, only requires the Nano Server base OS, so the image stays small and tight. If you add another role that requires a 500 MB file, that is another 500 MB added to the Nano Server image as a separate package. Nano Server has full driver support, so any driver that works for Windows Server 2016 will work with Nano Server as well.
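To make the package model concrete, here is a minimal sketch that builds a Nano Server guest VHDX with the Hyper-V (compute) and clustering packages, using the NanoServerImageGenerator module shipped on the Windows Server 2016 media. The drive letter, paths, and computer name are illustrative assumptions:

    # Import the image generator module from the Windows Server 2016 media (assumed to be mounted as D:\)
    Import-Module D:\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psd1

    # Build a small guest image; each role switch (-Compute, -Clustering) adds one extra package on top of the base OS
    New-NanoServerImage -Edition Standard -DeploymentType Guest `
        -MediaPath D:\ -BasePath C:\NanoBase -TargetPath C:\Nano\Nano01.vhdx `
        -ComputerName Nano01 -Compute -Clustering `
        -AdministratorPassword (Read-Host -AsSecureString 'Administrator password')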

As of the first release of Nano Server with Windows Server 2016, these are the key roles and features supported on Nano Server; a sketch of adding a couple of them as packages follows the list:

  • Hyper-V, clustering, storage, DNS, IIS, DCB, PowerShell DSC, shielded VMs, Windows Defender, and software inventory logging
  • Core CLR, ASP.NET 5, and PaaSv2
  • Windows Server containers and Hyper-V containers
  • System Center Virtual Machine Manager (SCVMM) and System Center Operations Manager (SCOM)
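As a rough illustration of how these roles map to packages, the following sketch adds the IIS and PowerShell DSC packages to the image created earlier, using Edit-NanoServerImage from the same NanoServerImageGenerator module. The exact package names can vary between builds, so treat them as assumptions:

    # Add more role packages to an existing offline Nano Server image
    Edit-NanoServerImage -BasePath C:\NanoBase -TargetPath C:\Nano\Nano01.vhdx `
        -Packages 'Microsoft-NanoServer-IIS-Package', 'Microsoft-NanoServer-DSC-Package'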

Nano Server - management

Without a local GUI, it's not easy to carry out the daily management and maintenance of Nano Server on the box itself. However, all the existing graphical tools, such as Hyper-V Manager, Failover Cluster Manager, Server Manager, Registry Editor, File Explorer, Disk Management, Device Manager, server configuration, Computer Management, and Local Users and Groups, can be used to manage Nano Server remotely.

The Nano Server deployment option of Windows comes with full PowerShell remoting support. The purpose of the core PowerShell engine is to manage Nano Server instances at scale. PowerShell remoting includes WMI, Windows Server cmdlets (network, storage, Hyper-V, and so on), PowerShell Desired State Configuration (DSC), remote file transfer, remote script authoring, and debugging. PowerShell relies on the .NET Framework, but as you may have noticed, Nano Server is a small and tiny OS and only has the Core Common Language Runtime (Core CLR). The Core CLR is a tiny subset of the .NET Framework, so the PowerShell team refactored PowerShell to run on Core CLR, which was a huge effort. The good news is that PowerShell users will probably not miss any important features: it has full language compatibility and supports PowerShell remoting, so you can use the most popular remoting commands, such as Invoke-Command, New-PSSession, Enter-PSSession, and so on.
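For instance, here is a minimal sketch of opening an interactive remote session to a Nano Server from a management machine; the IP address is an assumption, and the TrustedHosts step is only needed when the two machines are not in the same Active Directory domain:

    # Allow this client to talk to the Nano Server over WinRM (workgroup scenario only)
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value '192.168.1.50' -Force

    # Open an interactive remote session with the built-in administrator account; you will be prompted for the password
    Enter-PSSession -ComputerName 192.168.1.50 -Credential '192.168.1.50\Administrator'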

PowerShell Core is available in every image of Nano Server; it's not an optional package. Each Nano Server image contains, by default, the Core CLR, which takes up 45 MB of space; PowerShell itself takes about 8 MB of space, and there are 2 MB for the two built-in modules. Remoting is turned on by default, so a Nano Server installation will always be ready to be remoted into and managed remotely.
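Because remoting is on by default, a quick way to confirm what is running inside the image is to invoke a command remotely. This sketch reuses the address from the previous example and simply reads the PowerShell edition to show that it is the Core CLR-based engine:

    # Run a script block on the Nano Server and return the result to the management machine
    Invoke-Command -ComputerName 192.168.1.50 -Credential '192.168.1.50\Administrator' -ScriptBlock {
        # PSEdition reports 'Core' because Nano Server ships the refactored, Core CLR-based PowerShell
        $PSVersionTable.PSEdition
    }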