Managing Storage Spaces Direct (S2D) on Nano Server 2016 TP4 from VMM 2016 TP4
In a previous post I demonstrated how to deploy a hyper-converged Hyper-V cluster running Nano Server 2016 TP4. In this post I'll show you how to manage this storage provider from VMM 2016 TP4 in order to provide file shares and logical units to Hyper-V hosts.
I'll assume that you have already deployed your cluster using S2D and configured a Scale-Out File Server (SOFS) role. If that is not the case, you can use the script I provided in my previous post to automate the cluster deployment and the Scale-Out File Server role configuration.
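Before going further, you can quickly check that the cluster and the SOFS role are healthy. A minimal sketch, run from a management machine with the Failover Clustering RSAT tools installed, using the lab names from this post:

```powershell
# Verify the four Nano nodes are up
Get-ClusterNode -Cluster NANOCL

# Verify the Scale-Out File Server role exists and is Online
Get-ClusterGroup -Cluster NANOCL |
    Where-Object Name -eq "SOFSS2D"
```

If either command fails, fix the cluster first; VMM discovery below depends on both being healthy.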
Cluster Name: NANOCL
SOFS Name: SOFSS2D
Node Names: NANO1, NANO2, NANO3, NANO4
Each node has 5 data disks of 40 GB attached. Since S2D is enabled, you can see that the cluster lists my Nano servers as enclosures.
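You can see the same thing from PowerShell. A sketch, assuming WinRM access to the cluster from your management box:

```powershell
Invoke-Command -ComputerName NANOCL -ScriptBlock {
    # With S2D enabled, each Nano node is surfaced as a storage enclosure
    Get-StorageEnclosure

    # The 5 x 40 GB data disks per node, eligible for pooling
    Get-PhysicalDisk -CanPool $true |
        Select-Object FriendlyName, Size, MediaType
}
```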
Now let’s add this cluster to VMM storage management.
Specify the name of the cluster (not the SOFS!)
VMM will discover the SOFS automatically.
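The same step can be done from the VMM PowerShell module instead of the console. A sketch, where the Run As account name "NanoAdmin" is an assumption from my lab (use whatever account has admin rights on the cluster nodes):

```powershell
Import-Module virtualmachinemanager

# Assumption: a Run As account named "NanoAdmin" already exists in VMM
$runAs = Get-SCRunAsAccount -Name "NanoAdmin"

# Register the cluster itself (not the SOFS!) as a Windows-native WMI provider;
# VMM then discovers the SOFSS2D role behind it automatically
Add-SCStorageProvider -AddWindowsNativeWmiProvider `
    -Name "NANOCL" -ComputerName "NANOCL" -RunAsAccount $runAs

# The discovered SOFS should now show up
Get-SCStorageFileServer | Where-Object Name -like "SOFSS2D*"
```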
Create a new classification, which we will use to identify this set of storage when consuming it from hosts.
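Or from PowerShell; the classification name "S2D_NANO" is just an example:

```powershell
# Tag this set of storage so hosts can request it by classification later
New-SCStorageClassification -Name "S2D_NANO" `
    -Description "Storage Spaces Direct on the NANOCL cluster"
```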
When the process is over, the storage array should appear in VMM.
Now let's play with our disks and create a pool.
Provide a name and make sure to select the classification we created previously.
Now select all the disks you want to assign to this pool. I leave one disk per host unassigned on purpose; it allows me to hot-add a disk in case of a disk failure at the host level.
Leave the interleave at its default value.
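The pool-creation wizard steps above roughly map to the following VMM cmdlets. This is a sketch: the array display name and pool name are assumptions, and the disk selection simply takes the first 16 poolable disks (4 per node, leaving one spare per host as described above).

```powershell
# Assumption: VMM names the discovered array after the cluster; adjust as needed
$array = Get-SCStorageArray | Where-Object Name -like "*NANOCL*"
$class = Get-SCStorageClassification -Name "S2D_NANO"

# Pick 16 of the 20 poolable disks, leaving one spare per node
$disks = Get-SCStoragePhysicalDisk -StorageArray $array |
    Where-Object CanPool -eq $true |
    Select-Object -First 16

New-SCStoragePool -Name "S2DPool" -StorageArray $array `
    -StoragePhysicalDisk $disks -StorageClassification $class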
Now, if you look in the Failover Cluster Manager console, you should see the new pool.
Let's now create some shares on top of this pool. Go back to VMM, in the Fabric workspace.
VMM will actually create a virtual disk on the cluster, turn it into a CSV and assign it to the SOFS role to host the share.
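A hedged equivalent in PowerShell; the share name "VMShare1" and the 100 GB size are examples:

```powershell
$sofs = Get-SCStorageFileServer | Where-Object Name -like "SOFSS2D*"
$pool = Get-SCStoragePool -Name "S2DPool"

# Behind the scenes VMM creates a virtual disk on the pool, converts it
# to a CSV, and exposes it as a continuously available SMB share on the SOFS
New-SCStorageFileShare -Name "VMShare1" `
    -StorageFileServer $sofs -StoragePool $pool -SizeMB 102400
```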
We should now be able to access the share from the network.
It should also appear in VMM under storage management.
At this point, this share can easily be assigned to any host in VMM and then be used to host highly available VMs over SMB3.
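Assigning the share to a host can also be scripted. A sketch, where the host name "HYPERV1" is an assumption for one of your Hyper-V hosts managed by VMM:

```powershell
$share  = Get-SCStorageFileShare -Name "VMShare1"
$vmHost = Get-SCVMHost -ComputerName "HYPERV1"   # assumption: a managed host

# Grants the host's computer account access and registers \\SOFSS2D\VMShare1
# as a VM placement path on that host
Register-SCStorageFileShare -StorageFileShare $share -VMHost $vmHost
```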
Let's do the same, but with a logical unit (LUN) this time.
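From PowerShell, carving a LUN out of the same pool looks roughly like this; the LUN name and 50 GB size are examples:

```powershell
$pool = Get-SCStoragePool -Name "S2DPool"

# Create a 50 GB logical unit from the S2D-backed pool
New-SCStorageLogicalUnit -Name "LUN01" -StoragePool $pool -DiskSizeMB 51200
```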
We have created a file share and a logical unit from VMM by consuming a storage pool created on top of an S2D solution. The LUN will mostly be used in a hyper-converged scenario, where the cluster is also a Hyper-V cluster; loopback access to an SOFS share is still not supported, so CSV is the way to go there. Shares will be used in a converged scenario, where Hyper-V is not installed on the storage cluster.