SQL_Pez

This is the SQL Server Blog for Perry Whittle

Database File Growths & File Shrinking in SQL Server

February 21, 2025 15:57 by author TopDog

Welcome to this post on file growths and shrinking in SQL Server. In this post I would like to discuss file growths, sizes
and shrinking, and how they impact daily use.

First the elephant in the room

File Shrinking

Not much to say here but don't, just don't!
It's very well documented that shrinking files, whether data or log, is a bad practice and totally unnecessary.
Typically, what I have seen over the years in my career is an administrator or DBA (junior or cross-platform) seeing
free space in a file and shrinking the file because the free space is viewed as an unnecessary use of storage.

We all know that for data files, shrinking will introduce heavy index fragmentation, which is not ideal, especially
when your maintenance jobs spend a lot of time keeping those indexes healthy. When it comes to log files, I've seen
scenarios where administrators have set up daily jobs to shrink the log (it will only grow again!).

If proper capacity planning has been performed, you will naturally have a set amount of free space within your files
to prevent constant growth placing undue stress on the storage subsystems. The files should be right-sized, grown
out to your expected three-year size, to provide sufficient storage for the incoming data.

For the transaction log, testing will help you find the sweet-spot size required to prevent constant growth
and an unnecessary increase in the number of virtual log files (VLFs). If you know the space will be used, why remove it?
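As a quick health check, the VLF count per database can be pulled from the DMVs. A minimal sketch using Invoke-Sqlcmd from the SqlServer PowerShell module (the instance name is a placeholder, and sys.dm_db_log_info requires SQL Server 2016 SP2 or later):

# Count VLFs per online database via sys.dm_db_log_info (SQL Server 2016 SP2+)
# -ServerInstance is a placeholder - point it at your own instance
Invoke-Sqlcmd -ServerInstance "MYSERVER\INST1" -Query @"
SELECT d.name AS database_name, COUNT(*) AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
WHERE d.state = 0
GROUP BY d.name
ORDER BY vlf_count DESC;
"@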
This leads us nicely onto file growths and sizes.

File Sizes and Growths

Where large transaction logs are encountered, it's important to realise that the transaction log has grown to that size
as that is the space it requires at busy periods to handle the transaction rate between log backups. To help maintain a
more controlled log size consider increasing the log backup frequency during busy times.

For example, you may have your log backups running every 20 minutes throughout the day, but if you know that from
11pm to 1am you have a large daily import, it would make sense to add a new schedule that takes log backups every
5 minutes or so during that time window.
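The command behind that extra schedule is nothing special, it's simply the same log backup run more often. A rough sketch, where the instance, database and backup path are all placeholders for the example:

# The log backup the more frequent schedule would run - names/paths are placeholders
Invoke-Sqlcmd -ServerInstance "MYSERVER\INST1" -Query @"
BACKUP LOG [SalesDB]
TO DISK = N'B:\SQLBackups\LOG_MYSERVER_SalesDB_20250221_230500.trn'
WITH COMPRESSION, CHECKSUM;
"@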

Your mileage will vary and you should test to find the sweet spot, but the flexibility is there if required.
There is no need to continually shrink log files only for them to grow rapidly again when an import job or
index maintenance job launches. Furthermore, your storage admins will likely thank you for reducing the stress on the
storage subsystem and may even be impressed that you have taken steps to ensure that the storage system suffers
no severe consequences.

What's the way forward?

A positive start can be achieved by right sizing the storage and the database files by understanding the possible sizes
required for each. When implementing any application deployment, engaging with the vendor to obtain expected database
size information is the best route available. Microsoft also provide a database sizing tool to help with size estimations.

The Microsoft sizing calculations can be found at the following link:

Microsoft Database Size Calculations

For in-house developed applications, the sizing calculations are even more valuable and provide an opportunity to validate
the database design by spotting columns that may benefit from a smaller or different data type than the one proposed.

As an example, if a column is spotted as AddressLine1 and its data type is defined as varchar(MAX), there's a valid
reason to challenge this data type usage and propose something a little more appropriate.

Be mindful of the database file sizes, track usage over time, and where autogrow is used, set realistic
file growth increments to prevent regular unwanted growth events.
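As an illustration, a fixed-size growth increment can be set per file. The sketch below uses placeholder database and file names, with sizes picked purely for the example:

# Fixed MB increments avoid percentage growths that balloon as files get larger
Invoke-Sqlcmd -ServerInstance "MYSERVER\INST1" -Query @"
ALTER DATABASE [SalesDB] MODIFY FILE (NAME = N'SalesDB_data', FILEGROWTH = 512MB);
ALTER DATABASE [SalesDB] MODIFY FILE (NAME = N'SalesDB_log',  FILEGROWTH = 256MB);
"@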



Azure Lifecycle Management Policies and Backup Blob Management

September 26, 2024 10:12 by author TopDog

In this post I want to cover the use of Azure Lifecycle Management Policies and the purging of SQL Server backup blobs in storage account containers. Anyone now using BackupToURL for their SQL Server instances will likely have an Azure storage account steadily accruing a large number of backup files.

I will cover BackupToURL in another post, but for now I want to focus on the management of the accrued backup files. If, when you set up your backup strategy, you did nothing more than configure the instance and create the jobs, then files will be accruing at a fair rate. The delete option in some backup scripts does not reliably manage this for you either; have you checked your storage account usage recently to see just how much data is stored there?

This is where the Lifecycle Policy option comes into play. You can find this feature on the Azure Storage Account blade, as shown below:

 

 

The typical view for a storage account with containers is shown below:

 

 



Within the container I have folders for each logical backup type, and these are easily configurable using everyone's favourite backup scripts. The folders are virtual, existing only as blob name prefixes rather than real directories, so deleting all files from a folder will make the folder disappear from the container. The structure I have is shown below:



I have separate folders created for each backup type of FULL, DIFF and LOG. Looking at the FULL backup folder, I have a couple of test files as shown below:

 

The policy used to purge these folders is detailed below. You can create the policy in either the list view or the code view.

{
  "rules": [
    {
      "enabled": true,
      "name": "rm-diff-log-sqlpw01",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": {
              "daysAfterModificationGreaterThan": 2
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "mycontainer01/LOG/LOG",
            "mycontainer01/DIFF/DIFF"
          ]
        }
      }
    },
    {
      "enabled": true,
      "name": "rm-full-sqlpw01",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": {
              "daysAfterModificationGreaterThan": 4
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "mycontainer01/FULL/FULL"
          ]
        }
      }
    }
  ]
}
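If you prefer to script the policy rather than paste the JSON, the Az.Storage module can build equivalent rules. A sketch, assuming placeholder resource group and storage account names (check the cmdlets against the current Az.Storage documentation for your module version):

# Build and apply the same two lifecycle rules - rg/account names are placeholders
$action1 = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction Delete -DaysAfterModificationGreaterThan 2
$filter1 = New-AzStorageAccountManagementPolicyFilter -BlobType blockBlob -PrefixMatch "mycontainer01/LOG/LOG","mycontainer01/DIFF/DIFF"
$rule1   = New-AzStorageAccountManagementPolicyRule -Name "rm-diff-log-sqlpw01" -Action $action1 -Filter $filter1

$action2 = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction Delete -DaysAfterModificationGreaterThan 4
$filter2 = New-AzStorageAccountManagementPolicyFilter -BlobType blockBlob -PrefixMatch "mycontainer01/FULL/FULL"
$rule2   = New-AzStorageAccountManagementPolicyRule -Name "rm-full-sqlpw01" -Action $action2 -Filter $filter2

Set-AzStorageAccountManagementPolicy -ResourceGroupName "rg-sql" -StorageAccountName "mystorageacct01" -Rule $rule1,$rule2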

 

The policy performs a prefix match to find files to action. The best way I found to ensure the policy purged the files was to prefix each backup file name with the backup type, rather than using a naming convention where the backup type sits in the middle of the string. This is easily configurable in any backup script; a typical naming convention I use is:

backuptype_servername_dbname_yyyymmdd_hhmmss.bak
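Generating that name in a backup script is straightforward. A minimal sketch, where the instance, database, container URL and the pre-created credential on the instance are all assumptions:

# Build a prefix-first file name and back up to URL
# Assumes a SAS credential for the container URL already exists on the instance
$stamp = Get-Date -Format "yyyyMMdd_HHmmss"
$file  = "FULL_MYSERVER_SalesDB_$stamp.bak"
Invoke-Sqlcmd -ServerInstance "MYSERVER\INST1" -Query @"
BACKUP DATABASE [SalesDB]
TO URL = N'https://mystorageacct01.blob.core.windows.net/mycontainer01/FULL/$file'
WITH COMPRESSION, CHECKSUM;
"@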

 

Since you will usually retain full backups for longer than the point-in-time (PIT) backups, I have a longer retention for full backups in the lifecycle policies. Typically, full backups may be retained for 6 or even 12 months or more, whereas PIT recovery would not usually be required after 35 days or so; this is the logic applied to Azure SQL Database PITR periods. You can also move older backup files to cooler storage tiers to make further cost savings while still retaining the data for longer.
For your environment you should test to find the solution that fits best; offloading database backups to the cloud offers high redundancy and flexibility. For backups stored in a locally redundant (LRS) storage account, three copies of the data are kept within a single data centre; spreading copies across availability zones requires zone-redundant (ZRS) storage instead. There will be some work required to ensure that your environment and network configuration support BackupToURL; more can be found on this topic at the following Microsoft links: 
 

SQL Server BackupToURL 

Troubleshooting BackupToURL



Creating and Using Storage Spaces Mount Points in SQL Server

July 1, 2024 10:33 by author TopDog

Further to my original post Creating and Using Mount Points in SQL Server, this post contains updated content using Windows Storage Spaces. Storage Spaces is a storage feature introduced in Windows Server 2012; the old Disk Management snap-in is superseded by Microsoft's newer Server Manager storage tooling, although the original snap-in is still present even in Windows Server 2022. Although the storage can be easily managed via the Server Manager GUI, the intended management interface for the disk objects is PowerShell, and we'll look at the cmdlets in detail as we go through an example.

You should have already checked whether your planned SQL Server instance will support mount points, as not all versions provide this. With that in mind, let's briefly look at this now.

Validating SQL Server Compatibility

Check whether or not your chosen version of SQL Server supports mount points. Not all editions of SQL Server support their use; therefore, before configuring mount points and attempting to install SQL Server, you must refer to the following Microsoft KB article for further details: 

http://support.microsoft.com/kb/819546

In brief, versions of SQL Server from 2005 onwards fully support mount points for standalone and clustered environments.

What are Mount Points?

Anybody who has ever used a Unix system will have come across mount points before; they are fundamental to that family of operating systems. Anything from a disk partition to a CD drive or floppy drive requires mounting before use, and these file systems are usually mounted under a folder on the root partition.

Within the Windows operating system we have the ability to perform similar actions with multiple Windows disks. We have 26 drive letters initially available for assignment; subtract the reserved set of drives (A, B, C) and you are commonly left with 23. This doesn't leave much scope, especially if you plan to use many data locations per instance. You may have multiple SQL Server instances and require multiple disk sets to offload I/O; for each instance you may originally plan to use separate assignments for

  • SQL Server data
  • SQL Server logs
  • SQL Server backups
  • SQL Server tempdb

That's 4 drive letters for a single instance. Mount points enable you to use a single drive letter for all of the chosen locations but still implement separate physical drives for each of the individual data locations. How is this possible?


Mount points are simply file system locations (in the form of folders) under which a physical disk partition may be mounted\referenced. You start with a root reference known as the root disk. The root disk can be, and in fact should be, as small as possible. This avoids multiple users creating data on the root disk and makes administration easier. You must also bear in mind that all I/O destined for the mount point occurs on the mounted physical disk and not the root disk.

We'll go through actually creating mount points later in the article; for now, let's look at a little theory. We have a new SQL Server instance required and we need to separate the SQL Server files onto separate disks. Our basic example is as follows:

Root disk 1 M:

 

Multiple mount points are then employed in the following configuration for our single instance; remember, each of these mount points is actually a separate physical drive in the Server Manager File and Storage view.

M:\SQLData

M:\SQLLog

M:\SQLTempdb

M:\SQLBak

 

The instance may be installed using the paths below. To assign permissions we apply the ACLs at the MSSQL folder level on each mount point and they then propagate down to the child folders\files; a sketch of the grant follows the path list. Applying permissions at the root level M: or at the mount point folders would not affect the mount point child folder\file structures.

M:\SQLData\MSSQL\Data

M:\SQLLog\MSSQL\Log

M:\SQLTempdb\MSSQL\TempDB

M:\SQLBak\MSSQL\Bak
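A sketch of the grant itself, run once per mount point. The service SID shown applies to a default instance and is an assumption; substitute your own instance's service account:

# Grant the engine service full control at the MSSQL folder on each mount point
# NT SERVICE\MSSQLSERVER is the default instance service SID - named instances differ
icacls "M:\SQLData\MSSQL" /grant "NT SERVICE\MSSQLSERVER:(OI)(CI)F"
icacls "M:\SQLLog\MSSQL"  /grant "NT SERVICE\MSSQLSERVER:(OI)(CI)F"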

 

Going further in another example, you could employ multiple root disks and multiple instances of SQL Server in the following manner

Root disk 1 D:

Root disk 2 E:

Root disk 3 M:

 

You would have multiple mount points, each instance would have 4 separate physical disks mounted under

Instance1 D:\INST1Data, D:\INST1Log, D:\INST1Tempdb, D:\INST1Bak

Instance 2 E:\INST2Data, E:\INST2Log, E:\INST2Tempdb, E:\INST2Bak

Instance 3 M:\INST3Data, M:\INST3Log, M:\INST3Tempdb, M:\INST3Bak

 

The SQL server instances would be created along the following paths

D:\INST1Data\MSSQL\Data, D:\INST1Log\MSSQL\Log, D:\INST1Tempdb\MSSQL\TempDB, D:\INST1Bak\MSSQL\Backups

E:\INST2Data\MSSQL\Data, E:\INST2Log\MSSQL\Log, E:\INST2Tempdb\MSSQL\TempDB, E:\INST2Bak\MSSQL\Backups

M:\INST3Data\MSSQL\Data, M:\INST3Log\MSSQL\Log, M:\INST3Tempdb\MSSQL\TempDB, M:\INST3Bak\MSSQL\Backups

 

It's easy to see how management becomes more complex with multiple root disks and mounted volumes. It can't be stressed enough that documenting your setup as you go is more valuable than anything else (you do have a high-level and a low-level design provided by the architect?). Concise documentation gives you, and anyone who follows you, a reference point for your system setup and can make troubleshooting easier too.


With Mount Points it's important to remember the following pointers

  • Do not install software to the root of the mount point. Rather, create and install into a folder under the mount point to allow the correct propagation of permissions.
  • Ideally, the root drive should not have any data created upon it, it is merely the root tree for the new volumes.
  • Permissions applied to the root drive file system and the mount point root folders do not propagate to the mount point file systems, they are controlled separately.
  • Mount points are not supported for all SQL Server 2000 configurations (in case you had a burning desire to use them). This is detailed in the relevant section above.

Windows requires online disks with formatted partitions before you are able to utilise mount points, so let's look at those next, as they form the basis for mounting any partition\volume.

Choosing Partition Types

When initialising and before formatting your disks within Windows disk management, you should first decide on the disk type and partition type to be used.

In brief, the two possible disk types are Basic and Dynamic. Basic disks are the default disk type used by the Windows operating system. Dynamic disks are rarely used within Windows and are really only useful for creating spanned volumes or software RAID systems. In reality, the Windows disk you are formatting likely already sits on an underlying RAID system (such as a local controller) or is presented from a SAN or other network storage device. For this reason you should generally leave your disks as the Basic type, especially for cluster support.

As well as the disk type, there are also 2 different partition types that may be used. The 2 types available are MBR (master boot record) or GPT (GUID partition type).

  • MBR partition types only allow a maximum of 4 partitions per disk (3 primary and 1 extended).
  • GPT partition types support very large disks, allow a maximum of 128 partitions and are generally much more efficient and resistant to corruption.

However, I would recommend that if your disk is under 500GB and has only one partition, you leave it as MBR.

The following Microsoft KB article provides further details on Basic and Dynamic disks and also MBR and GPT partition types. You should digest this before creating disk partitions:

 http://msdn.microsoft.com/en-us/library/aa363785(VS.85).aspx

 

Windows Storage Spaces

Storage Spaces is a feature first introduced in Windows Server 2012. Storage Spaces takes groups of disks and uses a software RAID implementation to create highly redundant disk structures; it can be used for standalone Windows servers or for clustered implementations.

For more info check this Microsoft link. 

To recap, the items you will require to create your mount points are;

  • A root disk (or disks) with a formatted partition and a drive letter assigned within Windows
  • A defined set of mount point folders to be created upon the root disk.
  • Disk devices available for use within Windows Storage Spaces.

Creating a Mount Point Demonstration

We will now work through a short demonstration showing how to create mount points within Windows. For the demonstration we need a small root disk, the smaller the better (mine's 500MB, but it could\should be as small as a few MB).

Note: Remember, the root disk is merely a file system placeholder for the mounted drives; it will only contain empty folders which are then used to mount the NTFS volumes. The temptation for users to dump files on the root drive is high, and something I've seen a lot over the years.


The first requirement is to create a storage pool; this is a group of disks that will act as one array. The disks I have presented initially are shown below;



The following cmdlet is used to create the new disk pool. I have supplied default parameters for provisioning type and resiliency setting, which will be inherited by each virtual disk and volume;
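A sketch of that command; the pool name is a placeholder, and Fixed\Mirror are simply example defaults rather than the only sensible choices:

# Create the pool from all poolable disks; the defaults set here are inherited
# by each virtual disk and volume created from the pool later
$sub = Get-StorageSubSystem -FriendlyName "Windows Storage*"
New-StoragePool -FriendlyName "SQLPool01" `
    -StorageSubSystemFriendlyName $sub.FriendlyName `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true) `
    -ProvisioningTypeDefault Fixed `
    -ResiliencySettingNameDefault Mirror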

 


To partition and format a volume on the root disk I use a single PowerShell cmdlet; you could use the individual cmdlets to achieve each step, but the following completes the operations in a single request, as shown below;
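Along these lines; the friendly name is an assumption, while F: and the tiny size match the demo root disk:

# Create, partition and format the small root volume in one request
New-Volume -StoragePoolFriendlyName "SQLPool01" -FriendlyName "Root" `
    -Size 500MB -DriveLetter F -FileSystem NTFS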

 

On the root drive (F:) I have created the following folders; these folders currently display the typical Windows folder icon.


The next step is to create each new volume and specify an access path on the new root disk. This still uses the New-Volume cmdlet but with slightly different parameters. The cmdlet syntax shown below is executed once for each of the 4 new volumes (the FriendlyNames and AccessPaths change);
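Roughly as follows for the first of the four. The size is a placeholder, -AccessPath mounts the volume on the folder rather than a drive letter, and the 64KB allocation unit is the usual choice for SQL Server data volumes:

# Repeat for SQLLog, SQLTempdb and SQLBak, changing -FriendlyName and -AccessPath
New-Volume -StoragePoolFriendlyName "SQLPool01" -FriendlyName "SQLData" `
    -Size 100GB -FileSystem NTFS -AllocationUnitSize 65536 `
    -AccessPath "F:\SQLData"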


Once you have created the 4 volumes and mounted them to the specific folders, you will see the following in Server Manager. Your mount points are now ready to use for a standalone SQL Server installation.

 

 

Once the folders have mount points the icons change as shown below.

 

Mount Points for Clustered Instances

For clustered scenarios the storage pools are managed by Storage Spaces Direct (S2D); this feature is enabled via PowerShell once the WSFC has been created and the storage has been presented to the nodes.
S2D claims all available and suitable disks on each node and creates a storage pool inside the WSFC. What does this mean? Let's look at a scenario for a 3 node cluster;

 

Each of the nodes above has 3 x 1TB disks attached to a SCSI adapter; they're local disks on local controllers. When enabling S2D you will be asked to confirm that all disks are to be claimed and added to the storage pool name that you supply.
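Enabling the feature is a single cmdlet once the WSFC exists; a sketch with a placeholder pool name:

# Claims all available, suitable disks on every node into one clustered pool
Enable-ClusterStorageSpacesDirect -PoolFriendlyName "S2DPool01"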

Once the operation is complete, one large disk pool is created across all of the disks. To create the volumes and assign the paths, let's look through the example below;

First create the root drive. In the Server Manager storage blade, select Storage Pools and select the storage pool you created. Click Tasks under Virtual Disks and select New Virtual Disk:

 

Click OK to create the virtual disk on the storage pool.

 

Supply a virtual disk name and click Next.

 

Specify the virtual disk size (this is the root drive, so keep it as small as possible) and click Next.

 

 Review the selections and click Create.

 

Once the virtual disk has been created you can close the wizard. If the check box to create a volume is left checked, the new volume wizard will start when the virtual disk wizard dialog closes.

 

The new volume wizard starts and first requires the new disk target, select the new Virtual Disk as your target and click Next.


Confirm the size of the new volume and click Next.


Assign your drive letter and click Next.


Specify any volume settings and a label, for the root disk, the filesystem and allocation unit size can be default. Click Next.


Review the selections and click Create.



Once the new volume wizard completes successfully, close the dialog and review what you have on the Server Manager storage blade.

The steps to create the mount points are the same, with the exception of the Assign Drive Letter or Folder step. Below are the pop-out steps in this section of the wizard.
Select the option for "The following folder" and open the folder browser; within the "Select folder" dialog you are able to create and then select\assign a folder for your new volume, rather than a drive letter.

 

 

Once you have created the folder, select the appropriate folder and click Next.

 

 On the volume settings dialog, select your filesystem and allocation unit size along with your label and click Next.

 

Proceed with the rest of the wizard as you did for the root drive. That's really all that's required; check Server Manager for the newly created virtual disks and volumes and, finally, open Failover Cluster Manager to view the new clustered disk objects. Install SQL Server using the section in my stairway to HA guide at this link.

This should give you more of an understanding of mount points and how they are set up. They operate just the same as any other folder, with the exception that permission grants are handled by each individual mount point and not by the root directory. Granting permissions at the root drive and pushing child inheritance will not affect the mount points. Still, they are essentially NTFS folders, and they are also an excellent way to bypass the Windows 26 drive letter limitation.

As always, test this in an offline environment; virtual machines make a great test lab and are extremely easy to set up. Have fun and of course, post back if you have any troubles.



Creating and Using Mount Points in SQL Server

May 22, 2024 11:05 by author TopDog

Mount points seem to be the subject of much confusion on various SQL Server forums. In light of this, the following post will seek to detail exactly how mount points are supposed to work and provide a demonstration on how to create mount points within Windows.

You should have already checked whether your planned SQL Server instance will support mount points, as not all versions provide this. With that in mind, let's briefly look at this now.

Validating SQL Server Compatibility

Check whether or not your chosen version of SQL Server supports mount points. Not all editions of SQL Server support their use; therefore, before configuring mount points and attempting to install SQL Server, you must refer to the following Microsoft KB article for further details: 

http://support.microsoft.com/kb/819546

In brief, versions of SQL Server from 2005 onwards fully support mount points for standalone and clustered environments.

What are Mount Points?

Anybody who has ever used a Unix system will have come across mount points before; they are fundamental to that family of operating systems. Anything from a disk partition to a CD drive or floppy drive requires mounting before use, and these file systems are usually mounted under a folder on the root partition.

Within the Windows operating system we have the ability to perform similar actions with multiple Windows disks. We have 26 drive letters initially available for assignment; subtract the reserved set of drives (A, B, C) and you are commonly left with 23. This doesn't leave much scope, especially if you plan to use many data locations per instance. You may have multiple SQL Server instances and require multiple disk sets to offload I/O; for each instance you may originally plan to use separate assignments for

  • SQL Server data
  • SQL Server logs
  • SQL Server backups
  • SQL Server tempdb

That's 4 drive letters for a single instance. Mount points enable you to use a single drive letter for all of the chosen locations but still implement separate physical drives for each of the individual data locations. How is this possible?


Mount points are simply file system locations (in the form of folders) under which a physical disk partition may be mounted\referenced. You start with a root reference known as the root disk. The root disk can be, and in fact should be, as small as possible. This avoids multiple users creating data on the root disk and makes administration easier. You must also bear in mind that all I/O destined for the mount point occurs on the mounted physical disk and not the root disk.

We'll go through actually creating mount points later in the article; for now, let's look at a little theory. Our basic example is as follows:

Root disk 1 M:      (Windows disk 2)

 

Multiple mount points are employed in the following configuration for a single instance; remember, each of these mount points is actually a separate physical drive in the Windows disk management snap-in.

M:\SQLData         (Windows disk 3)

M:\SQLLog           (Windows disk 4)

M:\SQLTmp          (Windows disk 5)

M:\SQLBak           (Windows disk 6)

 

The instance may be installed using the paths below. To assign permissions we apply them at the MSSQL folder level and they then propagate down to the child folders\files. Applying permissions at the root level M: or at the mount point folders would not affect the mount point child file structures.

M:\SQLData\MSSQL\Data

M:\SQLLog\MSSQL\Log

M:\SQLTmp\MSSQL\TempDB

M:\SQLBak\MSSQL\Bak

 

Going further in another example, you could employ multiple root disks and multiple instances of SQL Server in the following manner

Root disk 1 D:           (Windows disk 2)

Root disk 2 E:           (Windows disk 3)

Root disk 3 M:           (Windows disk 4)

 

You would have multiple mount points, each instance would have 4 separate physical disks mounted under

Instance1 D:\INST1Data, D:\INST1Log, D:\INST1Tempdb, D:\INST1Bak     (Windows disks 6 - 9)

Instance 2 E:\INST2Data, E:\INST2Log, E:\INST2Tempdb, E:\INST2Bak    (Windows disks 10 - 13)

Instance 3 M:\INST3Data, M:\INST3Log, M:\INST3Tempdb, M:\INST3Bak    (Windows disks 14 - 17)

 

The SQL server instances would be created along the following paths

D:\INST1Data\MSSQL\Data, D:\INST1Log\MSSQL\Log, D:\INST1Tempdb\MSSQL\TempDB, D:\INST1Bak\MSSQL\Backups

E:\INST2Data\MSSQL\Data, E:\INST2Log\MSSQL\Log, E:\INST2Tempdb\MSSQL\TempDB, E:\INST2Bak\MSSQL\Backups

M:\INST3Data\MSSQL\Data, M:\INST3Log\MSSQL\Log, M:\INST3Tempdb\MSSQL\TempDB, M:\INST3Bak\MSSQL\Backups

 

It's easy to see how management becomes more complex with multiple root disks and mounted volumes. It can't be stressed enough that documenting your setup as you go is more valuable than anything else (you do have a high-level and a low-level design provided by the architect?). Concise documentation gives you, and anyone who follows you, a reference point for your system setup and can make troubleshooting easier too.


With Mount Points it's important to remember the following pointers

  • Do not install software to the root of the mount point. Rather, create and install into a folder under the mount point to allow the correct propagation of permissions.
  • Ideally, the root drive should not have any data created upon it, it is merely the root tree for the new volumes.
  • Permissions applied to the root drive file system and the mount point root folders do not propagate to the mount point file systems, they are controlled separately.
  • Mount points are not supported for all SQL Server 2000 configurations (in case you had a burning desire to use them). This is detailed in the relevant section above.

Windows requires online disks with formatted partitions before you are able to utilise mount points, so let's look at those next, as they form the basis for mounting any partition\volume.

Setting up the New Partitions

When initialising and before formatting your disks within Windows disk management, you should first decide on the disk type and partition type to be used.

In brief, the two possible disk types are Basic and Dynamic. Basic disks are the default disk type used by the Windows operating system. Dynamic disks are rarely used within Windows and are really only useful for creating spanned volumes or software RAID systems. In reality, the Windows disk you are formatting likely already sits on an underlying RAID system (such as a local controller) or is presented from a SAN or other network storage device. For this reason you should generally leave your disks as the Basic type, especially for cluster support.

As well as the disk type, there are also 2 different partition types that may be used. The 2 types available are MBR (master boot record) or GPT (GUID partition type).

  • MBR partition types only allow a maximum of 4 partitions per disk (3 primary and 1 extended).
  • GPT partition types support very large disks, allow a maximum of 128 partitions and are generally much more efficient and resistant to corruption.

However, I would suggest that if your disk is under 500GB and has only one partition, you leave it as MBR.

The following Microsoft KB article provides further details on Basic and Dynamic disks and also MBR and GPT partition types. You should digest this before creating disk partitions:

 http://msdn.microsoft.com/en-us/library/aa363785(VS.85).aspx

 

To recap, the items you will require to create your mount points are;

  • A root disk (or disks) with a formatted partition and a drive letter assigned within Windows
  • A defined set of mount point folders to be created upon the root disk.
  • Disk devices available for use within Windows disk management (be they local or SAN\NAS attached).

Creating a Mount Point Demonstration

We will now work through a short demonstration showing how to create mount points within Windows. For the demonstration we need a small root disk, the smaller the better (mine's 500MB, but it could\should be as small as a few MB).

Note: Remember, the root disk is merely a file system placeholder for the mounted drives; it will only contain empty folders which are then used to mount the NTFS volumes. The temptation for users to dump files on the root drive is high, and something I've seen a lot over the years.


My partitioned and formatted root disk is shown below

 

On the root drive (M:) I have created the following folders; these folders currently display the typical Windows folder icon.

 

Open Windows Disk Management, as we first need to create the new volumes. Create the first volume by right-clicking an unallocated disk and selecting "New simple volume" as shown below.

 

Click “Next” in the new volume wizard to continue.

 

Specify the volume size and click “Next”.

 

It's at this point we specify where we want to mount this new volume; we could have chosen a drive letter, but we want to use a mount point. Select the option to “Mount in the following empty NTFS folder:”, browse to the path and click “OK”.

 

With the mount point specified, click “Next” to continue.

 

Specify the format options and click “Next” to continue.

 

Click “Finish” to create the new volume. Repeat these steps for each volume to be created\mounted, or script them as sketched below.
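If you'd rather script these wizard steps, the storage cmdlets on Windows Server 2012 and later do the same job. A sketch for one data disk, where the disk number, label and access path are assumptions for the example:

# Bring the disk online, partition it, format NTFS and mount it on the folder
Initialize-Disk -Number 3 -PartitionStyle GPT
$part = New-Partition -DiskNumber 3 -UseMaximumSize
Format-Volume -Partition $part -FileSystem NTFS -NewFileSystemLabel "SQLData"
Add-PartitionAccessPath -DiskNumber 3 -PartitionNumber $part.PartitionNumber -AccessPath "M:\SQLData"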

 

Once you have created the 3 volumes and mounted them to the specific folders, you will see the following in disk management. Your mount points are now ready to use for a standalone SQL Server installation.

 

Mount Points for Clustered Instances

For clustered scenarios there are further checks to be made when utilising mount points for your clustered SQL Server instance drives. Those steps are detailed in the following Microsoft KB: 

http://support.microsoft.com/kb/280297

In brief, you now need to add this storage to the cluster via Failover Cluster Manager. Right-click “Storage” and select “Add a disk” as shown below.
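The same step can be done from PowerShell with the FailoverClusters module, for reference:

# Add every disk that is visible to the cluster but not yet clustered
Get-ClusterAvailableDisk | Add-ClusterDisk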

 

With the required disks selected, click “OK”.

 

Cluster disks 6, 8, 9 and 11 added. Note cluster disk 6 is the Root drive (M:).

 

Now add the storage to our newly created service\application group. My example uses a pre-created application titled “Denali Mount Points”.

 

You must now set the dependencies between the root drive and the new volumes. Right-click the first mounted volume and select “Properties” as shown below.

Note: the root drive must come online before the mounted volumes to ensure the full mount path and drive letter are available.

 

Set the root disk as the resource dependency, in this case cluster disk 6. Click “Apply” and “OK”. Do this for each of the 3 mounted volumes; the PowerShell equivalent is sketched below.
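The same dependencies can be set with the FailoverClusters PowerShell module; the resource names below are from this example and will differ in yours:

# Make each mounted volume depend on the root drive resource (cluster disk 6)
Set-ClusterResourceDependency -Resource "Cluster Disk 8"  -Dependency "[Cluster Disk 6]"
Set-ClusterResourceDependency -Resource "Cluster Disk 9"  -Dependency "[Cluster Disk 6]"
Set-ClusterResourceDependency -Resource "Cluster Disk 11" -Dependency "[Cluster Disk 6]"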

 

During the SQL Server instance installation, select the existing Denali service\application group shown with a green tick.

 

At the cluster disk selection you must select the root drive for installation. The mounted drives are not available; attempting to select one of the mounted volumes causes setup to direct you to select the root tree drive (in my case cluster disk 6, M:).

 

On the database engine "Data Directories" tab, set your paths for the requested disk locations.

 

Post installation, these are the new SQL server folders under the mount points when viewed through explorer. Notice that the mount points now have a different icon and not the standard Windows folder icon.

 

This should give you more of an understanding of mount points and how they are set up. They operate just the same as any other folder, with the exception that permission grants are handled by each individual mount point and not by the root directory. Granting permissions at the root drive and pushing child inheritance will not affect the mount points. Still, they are essentially NTFS folders and, as the screenshot above shows, look seamless to the Windows filesystem. As you can see, they are also an excellent way to bypass the Windows 26 drive letter limitation.

As always, test this in an offline environment; virtual machines make a great test lab and are extremely easy to set up using my three-part guide on this site. Have fun and of course, post back if you have any troubles.


