Further to my original post, Creating and Using Mount Points in SQL Server, this post contains updated content using Windows Storage Spaces. Storage Spaces is a storage feature introduced in Windows Server 2012; it replaces the old third-party Disk Management snap-in with Microsoft's own disk management tooling, although the original snap-in is still present even in Windows Server 2022. While the storage can easily be managed via the Server Manager GUI, the intended management interface for the disk objects is PowerShell, and we'll look at the cmdlets in detail as we work through an example.
You should already have checked whether your planned SQL Server instance will support mount points, as not all versions provide this. With that in mind, let's briefly look at this now.
Validating SQL Server Compatibility
Check whether your chosen version of SQL Server supports mount points. Not all editions of SQL Server support their use; therefore, before configuring mount points and attempting to install SQL Server, refer to the following Microsoft KB article for further details:
http://support.microsoft.com/kb/819546
In brief, SQL Server 2005 and later fully support mount points for both stand-alone and clustered environments.
What are Mount Points?
Anybody who has ever used a Unix system will have come across mount points before; they are common in that operating system. Anything from a disk partition to a CD drive or floppy drive requires mounting before use, and these file systems are usually mounted under folders beneath the root partition.
Within the Windows operating system we have the ability to perform similar actions with multiple Windows disks. We have 26 drive letters initially available for assignment; subtract the reserved set of drives (A, B, C) and you are commonly left with 23. This doesn't leave much scope, especially if you plan to use many data locations per instance. You may have multiple SQL Server instances and require multiple disk sets to spread I/O. For each instance you may originally plan to use separate assignments for:
- SQL Server data
- SQL Server logs
- SQL Server backups
- SQL Server tempdb
That's 4 drive letters for a single instance. Mount points enable you to use a single drive letter for all of the chosen locations while still implementing a separate physical drive for each individual data location. How is this possible?
Mount points are simply file system locations (in the form of folders) under which a physical disk partition may be mounted (referenced). You start with a root reference known as the root disk. The root disk can, and in fact should, be as small as possible. This will discourage users from creating data on the root disk and make administration easier. You must also bear in mind that all I/O destined for the mount point occurs on the mounted physical disk and not the root disk.
We'll go through actually creating mount points later in the article; for now, let's look at a little theory. We have a new SQL instance required, and we need to separate the SQL Server files onto separate disks. Our basic example is as follows:
Root disk 1 M:
Multiple mount points are then employed in the following configuration for our single instance; remember, each of these mount points is actually a separate physical drive in the Server Manager File and Storage Services view.
M:\SQLData
M:\SQLLog
M:\SQLTempdb
M:\SQLBak
The instance may be installed using the paths below. To assign permissions, we apply the ACLs at the MSSQL folder level on each mount point, and they then propagate down to the child folders/files. Applying permissions at the root level (M:) or at the mount point folders themselves would not affect the mount points' child folder/file structures.
M:\SQLData\MSSQL\Data
M:\SQLLog\MSSQL\Log
M:\SQLTempdb\MSSQL\TempDB
M:\SQLBak\MSSQL\Bak
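As a sketch of applying those ACLs at the MSSQL folder level, you could use icacls from PowerShell. The service account name below is a hypothetical placeholder, and the folders are assumed to already exist:

```powershell
# Grant the SQL Server service account full control at the MSSQL folder
# on each mount point; (OI)(CI) propagates the ACE to child folders/files.
# "CONTOSO\sqlsvc" is a hypothetical account - substitute your own.
$folders = "M:\SQLData\MSSQL", "M:\SQLLog\MSSQL",
           "M:\SQLTempdb\MSSQL", "M:\SQLBak\MSSQL"
foreach ($folder in $folders) {
    icacls $folder /grant "CONTOSO\sqlsvc:(OI)(CI)F"
}
```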
Going further, in another example you could employ multiple root disks and multiple instances of SQL Server in the following manner:
Root disk 1 D:
Root disk 2 E:
Root disk 3 M:
You would have multiple mount points; each instance would have 4 separate physical disks mounted under:
Instance 1 D:\INST1Data, D:\INST1Log, D:\INST1Tempdb, D:\INST1Bak
Instance 2 E:\INST2Data, E:\INST2Log, E:\INST2Tempdb, E:\INST2Bak
Instance 3 M:\INST3Data, M:\INST3Log, M:\INST3Tempdb, M:\INST3Bak
The SQL Server instances would be created along the following paths:
D:\INST1Data\MSSQL\Data, D:\INST1Log\MSSQL\Log, D:\INST1Tempdb\MSSQL\TempDB, D:\INST1Bak\MSSQL\Backups
E:\INST2Data\MSSQL\Data, E:\INST2Log\MSSQL\Log, E:\INST2Tempdb\MSSQL\TempDB, E:\INST2Bak\MSSQL\Backups
M:\INST3Data\MSSQL\Data, M:\INST3Log\MSSQL\Log, M:\INST3Tempdb\MSSQL\TempDB, M:\INST3Bak\MSSQL\Backups
It's easy to see how management becomes more complex with multiple root disks and mounted volumes. It can't be stressed enough that documenting your setup as you go is more valuable than anything else (you do have an HLD and an LLD provided by the architect, don't you?). Concise documentation gives you, and anyone who follows you, a reference point for your system setup and can make troubleshooting easier too.
With mount points, it's important to remember the following pointers:
- Do not install software to the root of the mount point. Rather, create and install into a folder under the mount point to allow the correct propagation of permissions.
- Ideally, the root drive should not have any data created upon it; it is merely the root tree for the new volumes.
- Permissions applied to the root drive file system and the mount point root folders do not propagate to the mount point file systems; they are controlled separately.
- Mount points are not supported for all SQL Server 2000 configurations (in case you had a burning desire to use them). This is detailed in the relevant section above.
Windows requires online disks with formatted partitions before you are able to utilise mount points, so let's look at those next, as they form the basis for mounting any partition/volume.
Choosing Partition Types
When initialising and before formatting your disks within Windows disk management, you should first decide on the disk type and partition type to be used.
In brief, the 2 disk types possible are Basic and Dynamic. Basic disks are the default disk type used by the Windows operating system. Dynamic disks are rarely used within Windows and are really only useful for creating spanned volumes or software RAID systems. In reality, the Windows disk you are formatting likely already has an underlying RAID system (such as a local controller), or it is a disk presented from a SAN or other network storage device. For this reason, you should generally leave your disks as the Basic type, especially for cluster support.
As well as the disk type, there are also 2 different partition styles that may be used. The 2 types available are MBR (master boot record) and GPT (GUID partition table).
- MBR partition types only allow a maximum of 4 partitions per disk (3 primary and 1 extended).
- GPT partition types support very large disks and a maximum of 128 partitions, and are generally much more efficient and resistant to corruption.
However, if your disk is under 500GB and has only 1 partition, I would recommend leaving it as MBR.
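As a sketch, bringing a new disk online and initialising it with your chosen partition style might look like this in PowerShell (disk number 1 is an assumption for illustration; check Get-Disk output on your own server first):

```powershell
# List any disks that have not yet been initialised
Get-Disk | Where-Object PartitionStyle -eq 'RAW'

# Bring disk 1 online, clear the read-only flag and initialise it as GPT
Set-Disk -Number 1 -IsOffline $false
Set-Disk -Number 1 -IsReadOnly $false
Initialize-Disk -Number 1 -PartitionStyle GPT
```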
The following Microsoft KB article provides further details on Basic and Dynamic disks and also MBR and GPT partition types. You should digest this before creating disk partitions:
http://msdn.microsoft.com/en-us/library/aa363785(VS.85).aspx
Windows Storage Spaces
Storage Spaces is a feature first introduced in Windows Server 2012. Storage Spaces takes groups of disks and uses a software RAID implementation to create highly redundant disk structures; it can be used for stand-alone Windows servers or for clustered implementations.
For more info check this Microsoft link.
To recap, the items you will require to create your mount points are:
- A root disk (or disks) with a formatted partition and a drive letter assigned within Windows
- A defined set of mount point folders to be created upon the root disk.
- Disk devices available for use within Windows Storage Spaces.
Creating Mount Points: A Demonstration
We will now work through a short demonstration showing how to create mount points within Windows. For the demonstration we need a small root disk, the smaller the better (mine is 500MB, but it could, and should, be as small as a few MB).
Note: Remember, the root disk is merely a file system placeholder for the mounted drives; it will only contain empty folders, which are then used to mount the NTFS volumes. The temptation for users to dump files on the root drive is high, and something I've seen a lot over the years.
The first requirement is to create a storage pool; this is a group of disks that will act as one array. The disks I have presented initially are shown below:
The following cmdlet is used to create the new disk pool. I have supplied default parameters for the provisioning type and resiliency setting, which will be inherited by each virtual disk and volume:
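As an illustrative sketch (the pool name "SQLPool" and the default choices of Fixed provisioning and Mirror resiliency are assumptions for this demo), the call looks something like:

```powershell
# Gather the physical disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool; the provisioning and resiliency defaults set here
# are inherited by each virtual disk and volume created from the pool
New-StoragePool -FriendlyName "SQLPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName "Windows Storage*").FriendlyName `
    -PhysicalDisks $disks `
    -ProvisioningTypeDefault Fixed `
    -ResiliencySettingNameDefault Mirror
```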
To partition and format a volume on the root disk I use a single PowerShell cmdlet. You could use the individual cmdlets to achieve each step, but the following completes the operations in a single request:
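A sketch of that single call, assuming the pool name from the previous step, the F: drive letter, and a 500MB root disk:

```powershell
# Create, partition and format the small root volume in one request;
# the friendly name and drive letter are choices for this demo
New-Volume -StoragePoolFriendlyName "SQLPool" `
    -FriendlyName "RootDisk" `
    -Size 500MB `
    -FileSystem NTFS `
    -DriveLetter F
```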
On the root drive (F:) I have created the following folders, these folders currently display the typical Windows folder icon.
The next step is to create each new volume and specify an access path on the new root disk. This still uses the New-Volume cmdlet, but with slightly different parameters. The cmdlet shown below is executed once for each of the 4 new volumes (the FriendlyName and AccessPath change each time):
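A hedged sketch for the first of the 4 volumes; the 100GB size is an assumption, and you would repeat the call for SQLLog, SQLTempdb and SQLBak with their own names and paths:

```powershell
# Create the data volume and mount it under a folder on the root drive
# instead of assigning a drive letter
New-Volume -StoragePoolFriendlyName "SQLPool" `
    -FriendlyName "SQLData" `
    -Size 100GB `
    -FileSystem NTFS `
    -AccessPath "F:\SQLData"
```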
Once you have created the 4 volumes and mounted them to the specific folders, you will see the following in Server Manager. Your mount points are now ready to use for a stand-alone SQL Server installation.
Once the folders have mount points the icons change as shown below.
Mount Points for Clustered Instances
For clustered scenarios, the storage pools are managed by Storage Spaces Direct. This feature is enabled via PowerShell once the Windows Server Failover Cluster (WSFC) has been created and the storage has been presented to the nodes.
S2D, the common abbreviation for Storage Spaces Direct, claims all available and suitable disks on each node and creates a storage pool inside the WSFC. What does this mean? Let's look at a scenario for a 3 node cluster:
Each of the nodes above has 3 x 1TB disks attached to a SCSI adapter; they're local disks on the local controllers. When enabling S2D, you will be asked to confirm that all disks are to be claimed and added to the storage pool name that you supply.
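Enabling S2D is a single cmdlet run on one of the cluster nodes; the pool name below is an assumption for this example:

```powershell
# Claims all eligible local disks on every node and creates a single
# storage pool inside the WSFC; you will be prompted to confirm
Enable-ClusterStorageSpacesDirect -PoolFriendlyName "S2DPool"
```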
Once the operation is complete, one large disk pool is created across all of the disks. To create the volumes and assign the paths, let's work through the example below:
First, create the root drive. In the Server Manager storage blade, select Storage Pools and select the storage pool you created. Click Tasks under Virtual Disks and select New Virtual Disk:
Click OK to create the virtual disk on the storage pool.
Supply a virtual disk name and click Next.
Specify the virtual disk size (this is the root drive, so keep it as small as possible) and click Next.
Review the selections and click Create.
Once the virtual disk has been created you can close the wizard. If the check box to create a volume is left checked, the New Volume wizard will start when the virtual disk wizard dialog closes.
The New Volume wizard starts and first requires the new disk target. Select the new virtual disk as your target and click Next.
Confirm the size of the new volume and click Next.
Assign your drive letter and click Next.
Specify any volume settings and a label; for the root disk, the filesystem and allocation unit size can be left at their defaults. Click Next.
Review the selections and click Create.
Once the new volume wizard completes successfully, close the dialog and review what you have on the Server Manager storage blade.
The steps to create the mount points are the same, with the exception of the Assign Drive Letter or Folder step. Below are the pop-out steps in this section of the wizard.
Select the option for "The following folder" and open the folder browser. Within the "Select folder" dialog you are able to create, and then select/assign, a folder for your new volume rather than a drive letter.
Once you have created the folder, select the appropriate folder and click Next.
On the volume settings dialog, select your filesystem and allocation unit size, along with your label, and click Next.
Proceed with the rest of the wizard as you did for the root drive. That's really all that's required. Check Server Manager for the newly created virtual disks and volumes and, finally, open Failover Cluster Manager to view the new clustered disk objects. Install SQL Server using the relevant section of my Stairway to HA guide at this link.
This should give you more of an understanding of mount points and how they are set up. They operate just like any other folder, with the exception that permission grants are handled by each individual mount point and not by the root directory. Granting permissions at the root drive and pushing child inheritance will not affect the mount points. Still, they are essentially NTFS folders, and they are an excellent way to bypass the Windows 26-drive-letter limitation.
As always, test this in an offline environment, virtual machines make a great test lab and are extremely easy to set up. Have fun and of course, post back if you have any troubles.