SQL_Pez

This is the SQL Server Blog for Perry Whittle

Encrypted Backup Feature in SQL Server

clock June 18, 2025 14:26 by author TopDog

Welcome to my latest post, detailing encrypted backups in SQL Server 2014. In this article we'll be looking at the security employed to encrypt the backups
and how to implement them. We'll also look at how it interacts with backup compression. For the purposes of this article the following apply,

ACL: Access control list, used to secure files on an NTFS filesystem
TDE: Transparent Data Encryption
NTFS: New Technology File System, the filesystem type used by Windows Server
OS: operating system
SMK: Service Master Key
DMK: Database Master Key

 

Previously, unless you were using a 3rd party product, the options available to secure database backups consisted solely of a backup media password and\or
NTFS filesystem ACLs. However, the password option is more appropriate for preventing an accidental restore of the wrong media than for securing the
backup from unauthorised users; as early as SQL Server 2005, Books Online detailed the following,

For some time now, although it really doesn't seem that long ago, we have been offered the ability to use Transparent Data Encryption (TDE) to
protect the data at rest. This provided the ability to protect not only the raw database files, but also any backups that were created using
the BACKUP DATABASE command.
The problem with TDE is that when combined with native compression, the compression ratio was pretty much zero; the encryption negates
the ability to compress the data.

Now there's a viable alternative: backup encryption is a new feature in SQL Server 2014 and offers the ability to securely encrypt your backups
while still utilising native compression, something which was not previously possible natively.

How Does Encryption Work?

The encryption option in the BACKUP DATABASE command, when selected, uses an encryption algorithm to encrypt the backup.
The algorithm options available are,

  • AES_128

  • AES_192

  • AES_256

  • TRIPLE_DES_3KEY

  • NO_ENCRYPTION  

The algorithm in turn uses an encryptor, either in the form of a server certificate or a server asymmetric key. Both of these options
are fairly similar: the certificate uses the CREATE CERTIFICATE command, whereas the asymmetric key uses the
CREATE ASYMMETRIC KEY command. Both utilise a public and private key pair; the main difference is that an asymmetric
key cannot be exported, and it also has no expiry options.

The default protection option for both these encryptors is via the database master key. If this key does not exist when attempting to
create the encryptor, the command will fail and you will be asked to either supply an encryption password or create a database master key.

Since I've opted for protection by certificate, let's look at the creation process. On my new instance of
SQL Server I first need to create a Database Master Key, using the CREATE MASTER KEY command as follows:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'P@ssw0rd1'

 

A quick check via the sys.databases catalog view shows the following: the master key is also encrypted by the Service Master Key and may
be opened\closed automatically.

Note: This behaviour can be altered by dropping encryption by the SMK but this requires the DMK to then be manually opened\closed.
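As a sketch, the check above can be run with T-SQL like the following against the standard catalog views:

```sql
USE master
GO
-- a value of 1 indicates the DMK is protected by the SMK
-- and can be opened automatically
SELECT name, is_master_key_encrypted_by_server
FROM sys.databases
WHERE name = 'master'

-- the DMK itself is visible in sys.symmetric_keys
SELECT name, algorithm_desc, key_length
FROM sys.symmetric_keys
WHERE name = '##MS_DatabaseMasterKey##'
```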

 

 

Now that we have our DMK created, we may create the encryptor in the form of a certificate. This is done using the following T-SQL:

USE master
GO
CREATE CERTIFICATE [MyNewCert] AUTHORIZATION [dbo]
--ENCRYPTION BY PASSWORD = 'P@sswordt0encryptcert'
WITH SUBJECT = 'My New Certificate'
, START_DATE = '2014-01-01 00:00:00'
, EXPIRY_DATE = '2020-01-01 00:00:00'


Depending on how much you trust your administrators, you may wish to specify encryption by a password instead of the DMK; to do so,
uncomment the line above. Password protecting the certificate further restricts who may back up and restore it.

A quick check via the sys.certificates catalog view shows the following:

Note: the certificate is protected by the DMK
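A query along these lines (a sketch using the standard catalog view) shows how the private key is protected:

```sql
-- ENCRYPTED_BY_MASTER_KEY indicates protection by the DMK
SELECT name, pvt_key_encryption_type_desc, subject, start_date, expiry_date
FROM master.sys.certificates
WHERE name = 'MyNewCert'
```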

 

As soon as you have created the new certificate you must back it up immediately to avoid the possibility of data loss due to a missing certificate.
The backup is created via the following T-SQL:

USE master 
GO 
BACKUP CERTIFICATE [MyNewCert]
TO FILE = 'E:\Bak\MSSQL12.INST1\MSSQL\Backup\MyNewCert.cer'
WITH PRIVATE KEY (
--DECRYPTION BY PASSWORD = 'P@sswordt0encryptcert',
FILE = 'E:\Bak\MSSQL12.INST1\MSSQL\Backup\MyNewCert.pky',
ENCRYPTION BY PASSWORD = 'P@sswordt0encryptcertbackup'
)


If, when creating the certificate in the previous section, you encrypted the certificate's private key with a password rather than the DMK,
you will need to uncomment the line above and provide that password to decrypt the private key.

Backup The Database

The backups may now be encrypted using T-SQL, or you may use the backup GUI within SSMS. To back up via the GUI, see the screenshots below

 

On the media options, you must specify a new media set to use backup encryption,

 

On the backup options page the "Encrypt backup" option is available.

Note: if you do not create a new media set, the encrypt backup option will be greyed out!

 

Alternatively, to backup via T-SQL use the following command

BACKUP DATABASE [AdventureWorks2014_ENCBACKUP] TO
DISK = N'E:\Bak\MSSQL12.INST1\MSSQL\Backup\AdventureWorks2014_ENCBACKUP.bak'
WITH INIT, MEDIANAME = N'Encrypted backup',
NAME = N'AdventureWorks2014_ENCBACKUP-Full Database Backup',
COMPRESSION, ENCRYPTION ( ALGORITHM = AES_256, SERVER CERTIFICATE = [MyNewCert] )
GO

 

Restore The Database

What happens when we want to restore our backup to another server? Let's imagine for a moment we are going to create an
AlwaysOn availability group and we are using these backups to initialise the secondary database(s) on our new instance(s). If you attempt
to restore the backup immediately via the GUI you will see the following screenshot. There's an error, but it's not helpful.

  

An attempt to read the backup header also produces an error, more meaningful this time.
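The header read can be attempted with RESTORE HEADERONLY; a sketch reusing the backup path from earlier:

```sql
-- on an instance without the certificate this fails; where it succeeds,
-- the KeyAlgorithm, EncryptorThumbprint and EncryptorType columns
-- identify the certificate required to restore
RESTORE HEADERONLY
FROM DISK = N'E:\Bak\MSSQL12.INST1\MSSQL\Backup\AdventureWorks2014_ENCBACKUP.bak'
```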

 

We need to create the certificate on each instance where we intend to restore the database. This is done by copying the public key
and private key certificate backups that were taken previously, to a location accessible by the new instance and issuing a
CREATE CERTIFICATE command from a file. You must also create a Database Master Key on the new instance if it doesn't exist already.

The DMK does not have to match the source instance, they have no dependency on each other.

The following T-SQL shows how to create the certificate from a file on the new instance

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Somesecurepassword'
GO
CREATE CERTIFICATE [MyNewCert] AUTHORIZATION [dbo]
FROM FILE = 'E:\Bak\MSSQL12.INST1\MSSQL\Backup\MyNewCert.cer'
WITH PRIVATE KEY (
DECRYPTION BY PASSWORD = 'P@sswordt0encryptcertbackup'
, FILE = 'E:\Bak\MSSQL12.INST1\MSSQL\Backup\MyNewCert.pky'
--, ENCRYPTION BY PASSWORD = 'P@sswordt0encryptcert'
)

When restoring the certificate backup, if you wish to encrypt the private key by password instead of the DMK, uncomment the line above.

A fresh attempt at reading the backup header shows us the certificate has been correctly installed.

  

The restore via the GUI is now possible.

 

Effects of Encryption on Compressed Backups

To test the effects I took three backups. Backup A used no compression or encryption. Backup B used compression only. Backup C
used compression and AES256 encryption. The results were as follows;

Backup Name                     Time Taken (secs)   Size (MB)
Backup A (none)                 11.88               190
Backup B (compressed)           3.66                44.5
Backup C (compressed\AES 256)   3.62                44.5
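If you want to reproduce this comparison, the backup history in msdb records both duration and size; a sketch:

```sql
SELECT bs.name,
       DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS time_taken_secs,
       bs.backup_size / 1048576.0 AS size_mb,
       bs.compressed_backup_size / 1048576.0 AS compressed_size_mb,
       bs.key_algorithm -- NULL for unencrypted backups
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = 'AdventureWorks2014_ENCBACKUP'
ORDER BY bs.backup_start_date DESC
```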


In summary, SQL Server now offers the ability to encrypt your backups alongside efficient native compression. Whether or not it will rival popular
3rd party backup products remains to be seen, but at least natively you can be sure your data is both stored efficiently to conserve
space and safe from prying eyes. Just ensure you manage the certificates correctly, along with any passwords used to secure them.
Have fun and as always, if you have any questions, send me an email and I'll help all I can.



Database File Growths & File Shrinking in SQL Server

clock February 21, 2025 15:57 by author TopDog

Welcome to this post on file growths and shrinking in SQL Server. In this post I would like to discuss file growths, sizes
and shrinking, and how they impact daily use.

First the elephant in the room

File Shrinking

Not much to say here but don't, just don't!
It's very well documented that shrinking files, whether data or log, is a bad practice and totally unnecessary.
Typically, what I have seen over the years is an administrator or DBA (junior or cross platform) spotting
free space in a file and shrinking it, because the free space is viewed as an unnecessary use of storage.

We all know that for data files, shrinking will introduce heavy fragmentation into the indexes, which is not ideal, especially
when your maintenance jobs spend a lot of time keeping them healthy. When it comes to log files, I've seen
scenarios where administrators have set up daily jobs to shrink the log (it will only grow again!).

If proper capacity planning has been performed, you will naturally have a set amount of free space within your files
to prevent constant growth placing undue stress on the storage subsystems. The files should be right-sized and grown
out to your expected 3 year size to provide sufficient storage for the incoming data.

For the transaction log, testing will help you find the sweet-spot size required to prevent constant growth
and an unnecessary increase in the number of VLFs. If you know the space will be used, why remove it?
This leads us nicely onto file growths and sizes.

File Sizes and Growths

Where large transaction logs are encountered, it's important to realise that the transaction log has grown to that size
as that is the space it requires at busy periods to handle the transaction rate between log backups. To help maintain a
more controlled log size consider increasing the log backup frequency during busy times.

For example you may have your log backups run every 20 mins throughout the day, but if you know that from
11pm to 1am you have a large daily import it would make sense to add a new schedule that takes log backups every
5 minutes or so during that time window.
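The extra schedule is just a normal log backup running more frequently; a sketch, with hypothetical database and path names:

```sql
-- run every 5 minutes between 23:00 and 01:00 via an additional
-- SQL Agent schedule on the existing log backup job
BACKUP LOG [MyDatabase]
TO DISK = N'E:\Bak\MyDatabase_LOG.trn'
WITH COMPRESSION

-- check current log size and percentage used per database
DBCC SQLPERF(LOGSPACE)
```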

The mileage will vary on this and you should test to find the sweet spot but the flexibility is there if required.
There is no need to continually shrink log files only for them to grow rapidly again when an import job or
index maintenance job launches. Furthermore, your storage admins will likely thank you for reducing the stress on the
storage subsystem and may even be impressed that you have taken steps to ensure that the storage system suffers
no severe consequences.

What's the way forward

A positive start can be achieved by right sizing the storage and the database files by understanding the possible sizes
required for each. When implementing any application deployment, engaging with the vendor to obtain expected database
size information is the best route available. Microsoft also provide a database sizing tool to help with size estimations.

The Microsoft sizing calculations can be found at the following link:

Microsoft Database Size Calculations

For in house developed applications, the sizing calculations are even more valuable and provide an opportunity to ratify
database design by spotting columns that may benefit from a reduced\different data type to that which is proposed.

As an example, if a column named AddressLine1 is spotted with a data type of varchar(MAX), there's a valid
reason to challenge this data type usage and suggest something a little more appropriate.

Be mindful of the database file sizes, track usage over time, and where auto grow is used set realistic
file growth increments to prevent regular unwanted growths.
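Current sizes and growth settings can be tracked with a query such as this sketch against sys.master_files:

```sql
SELECT DB_NAME(database_id) AS database_name,
       name AS logical_file_name,
       type_desc,
       size * 8 / 1024 AS size_mb, -- size is stored in 8KB pages
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + ' percent'
            ELSE CAST(growth * 8 / 1024 AS VARCHAR(10)) + ' MB'
       END AS growth_setting
FROM sys.master_files
ORDER BY database_name, type_desc
```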



Azure Lifecycle Management Policies and Backup Blob Management

clock September 26, 2024 10:12 by author TopDog

In this post I want to cover the use of Azure Lifecycle Management Policies and the purging of SQL Server backup blobs in storage account containers. Anyone now using BackupToURL for their SQL Server instances will likely have an Azure storage account steadily accruing a large number of backup files.

I will cover BackupToURL in another post, but for now I want to focus on the management of the accrued backup files. If, when you set up your backup strategy, you did nothing more than configure the instance and create the jobs, then files will be accruing at a fair rate. The delete-file option in some backup scripts does not reliably manage this for you either; have you checked your storage account usage recently to see just how much data is stored there?

This is where the Lifecycle Policy option comes into play. You can find this feature on the Azure Storage Account blade, as shown below:

 

 

The typical view for a storage account with containers is shown below:

 

 



Within the container I have folders for each logical backup type, and these are easily configurable using everyone's favourite backup scripts. The folders are virtual (derived from blob name prefixes) rather than real folders; deleting all files from a folder will make it disappear from the container. The structure I have is shown below:



I have separate folders created for each backup type of FULL, DIFF and LOG. Looking at the FULL backup folder, I have a couple of test files as shown below:

 

The policy used to purge these folders is detailed below. You can create the policy either by list view or code view.

{
  "rules": [
    {
      "enabled": true,
      "name": "rm-diff-log-sqlpw01",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": {
              "daysAfterModificationGreaterThan": 2
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "mycontainer01/LOG/LOG",
            "mycontainer01/DIFF/DIFF"
          ]
        }
      }
    },
    {
      "enabled": true,
      "name": "rm-full-sqlpw01",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": {
            "delete": {
              "daysAfterModificationGreaterThan": 4
            }
          }
        },
        "filters": {
          "blobTypes": [
            "blockBlob"
          ],
          "prefixMatch": [
            "mycontainer01/FULL/FULL"
          ]
        }
      }
    }
  ]
}

 

The policy performs a prefix match to find files to action. The best way I found to ensure the policy purged the files was to prefix each backup file name with the backup type, rather than using a naming convention where the backup type is a middle part of the string. This is easily configurable in any backup script; a typical naming convention I use is:

backuptype_servername_dbname_yyyymmdd_hhmmss.bak
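Applied to a backup command, the convention looks like this sketch (the storage account, server and database names are hypothetical, and a credential for the container is assumed to already exist on the instance):

```sql
BACKUP DATABASE [MyDB]
TO URL = N'https://mystorageacct.blob.core.windows.net/mycontainer01/FULL/FULL_MYSERVER_MyDB_20240926_101200.bak'
WITH COMPRESSION, CHECKSUM
-- the blob name starts with the backup type (FULL/DIFF/LOG),
-- matching the prefixMatch entries in the lifecycle policy
```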

 

Since you will usually retain full backups for longer than the point-in-time (PIT) backups, I have a longer retention for full backups in the lifecycle policies. Typically, full backups may be retained for 6 or even 12 months or more, whereas PIT recovery would not usually be required after 35 days or so; this is the logic applied to Azure SQL Database PITR periods. You can also move older backup files to cooler storage tiers to make further cost savings while still retaining the data for longer.
For your environment you should test to find the solution that fits best; offloading database backups to the cloud offers high redundancy and flexibility. For backups stored in a locally redundant storage account in the UK South region, 3 copies of the data are kept within a single data centre. There will be some work required to ensure that your environment and network configuration supports BackupToURL; more can be found on this topic at the following Microsoft links:
 

SQL Server BackupToURL 

Troubleshooting BackupToURL



Database Placement and Recovery in Availability Groups

clock July 12, 2024 09:03 by author TopDog

Hi, In this post I want to discuss database placement and recovery within a SQL Server Availability Group.

I've seen posts in the past offering a wide range of recommendations and solutions to placement and recovery
and it's important to clarify the ideal practice.

As we know, there is no current maximum for the number of databases in an AG, although, Microsoft have
tested various configurations up to 100 databases.

But first, what's the point of an AG, why use one? If we understand this we have a guide for our maximum number.

 

First let's recall what happens during the AG failover:

All databases in an AG go through a recovery process whereby the committed log records are replayed into the database
and uncommitted records are rolled back. Since SQL Server 2016 this operation is multi-threaded, but it must be kept
in mind that the instance only has a finite number of worker threads available.
You can track the recovery process inside the SQL Server error log, a sample is shown below:
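One way to search the error log for these messages is xp_readerrorlog; a sketch:

```sql
-- first parameter 0 = current log, second parameter 1 = SQL Server error log;
-- the third parameter filters for the recovery progress messages
EXEC master.sys.xp_readerrorlog 0, 1, N'Recovery of database'
```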

 

 


Once all of the databases have been recovered, the group is brought online and the databases become accessible;
this is the important part to realise. If you have 5 or even 10 databases this could be fairly quick. If you have 100 or more
databases this could be a lengthy task. Databases with a long outstanding redo queue will also affect the group recovery time,
and very large databases can have a lengthy redo queue even with multiple threads active.

 

Second, what's the point of an AG?

A SQL Server Availability Group is designed to ensure that logically grouped sets of databases fail over together as a unit in the event
of an issue. You may have an application with 3 or more logically grouped databases, all of which are required to be online for the
application to function correctly. Over the years I have seen many configurations; one in particular was an AG containing
just about every database on the instance (around 130 databases), none of which were logically linked, and the group recovery
time took over 15 minutes.

Also, with separate AGs and listeners per application you can expose dedicated endpoints into the databases.

 

So what is the best\ideal configuration to use?

Understanding the application architecture is an important point; knowing how the application databases are utilised will give a clear
picture of how the highly available database placements can be configured. Engage with the architect team to highlight preferences
for application design\topology and to ensure scenarios they may have overlooked have been considered.
The takeaway here is: don't just put all of your databases into one AG and hope for the best.
Plan based on application requirements and find the best configuration.



Creating and Using Storage Spaces Mount Points in SQL Server

clock July 1, 2024 10:33 by author TopDog

Further to my original post Creating and Using Mount Points in SQL Server, this post contains updated content using Windows Storage Spaces. Storage Spaces is a storage feature introduced in Windows Server 2012; it replaces the old 3rd party Disk Management snap-in with Microsoft's own disk management tooling, although the original snap-in is still present even in Windows Server 2022. While the storage can be easily managed via the Server Manager GUI, the intended management interface for the disk objects is PowerShell, and we'll look at the cmdlets in detail as we go through an example.

You should have already checked whether your planned SQL Server instance will support Mount Points as not all versions provide this. With that in mind, let's briefly look at this now

Validating SQL Server Compatibility

Check whether or not your chosen version of SQL Server supports mount points. Not all versions do, therefore, before configuring mount points and attempting to install SQL Server you should refer to the following Microsoft KB article for further details: 

http://support.microsoft.com/kb/819546

In brief, SQL Server 2005 and later fully support mount points for stand-alone and clustered environments.

What are Mount Points?

Anybody who has ever used a Unix system will have come across mount points before; they are common in that family of operating systems. Anything from a disk partition to a CD drive or floppy drive requires mounting before use, and these file systems are usually mounted under a folder of the root partition.

Within the Windows operating system we have the ability to perform similar actions with multiple Windows disks. We have 26 drive letters initially available for assignment; subtract the reserved set of drives (A, B, C) and you are commonly left with 23. This doesn't leave much scope, especially if you plan to use many data locations per instance. You may have multiple SQL Server instances and require multiple disk sets to offload I\O. For each instance you may originally plan to use separate assignments for

  • SQL Server data
  • SQL Server logs
  • SQL Server backups
  • SQL Server tempdb

That's 4 drive letters for a single instance. Mount points enable you to use a single drive letter for all of the chosen locations but still implement separate physical drives for each of the individual data locations. How is this possible?


Mount points are simply file system locations (in the form of folders) under which a physical disk partition may be mounted\referenced. You start with a root reference known as the root disk. The root disk can, and in fact should, be as small as possible. This avoids multiple users creating data on the root disk and makes administration easier. Bear in mind that all I\O destined for the mount point occurs on the mounted physical disk, not the root disk.

We'll go through actually creating mount points later in the article, for now let's look at a little theory. We have a new SQL instance required and we need to separate out the SQL Server files to separate disks, our basic example is as follows

Root disk 1 M:

 

Multiple mount points are then employed in the following configuration for our single instance, remember each of these mount points is actually a separate physical drive from the Server Manager File and Storage view.

M:\SQLData

M:\SQLLog

M:\SQLTempdb

M:\SQLBak

 

The instance may be installed using the paths below. To assign permissions we apply the ACLs at the MSSQL folder level on each mount point and they then propagate down to the child folders\files. Applying permissions at the root level M: or the mount point folders would not affect the mount point child folder\file structures.

M:\SQLData\MSSQL\Data

M:\SQLLog\MSSQL\Log

M:\SQLTempdb\MSSQL\TempDB

M:\SQLBak\MSSQL\Bak

 

Going further in another example, you could employ multiple root disks and multiple instances of SQL Server in the following manner

Root disk 1 D:

Root disk 2 E:

Root disk 3 M:

 

You would have multiple mount points, each instance would have 4 separate physical disks mounted under

Instance1 D:\INST1Data, D:\INST1Log, D:\INST1Tempdb, D:\INST1Bak

Instance 2 E:\INST2Data, E:\INST2Log, E:\INST2Tempdb, E:\INST2Bak

Instance 3 M:\INST3Data, M:\INST3Log, M:\INST3Tempdb, M:\INST3Bak

 

The SQL server instances would be created along the following paths

D:\INST1Data\MSSQL\Data, D:\INST1Log\MSSQL\Log, D:\INST1Tempdb\MSSQL\TempDB, D:\INST1Bak\MSSQL\Backups

E:\INST2Data\MSSQL\Data, E:\INST2Log\MSSQL\Log, E:\INST2Tempdb\MSSQL\TempDB, E:\INST2Bak\MSSQL\Backups

M:\INST3Data\MSSQL\Data, M:\INST3Log\MSSQL\Log, M:\INST3Tempdb\MSSQL\TempDB, M:\INST3Bak\MSSQL\Backups

 

It's easy to see how management becomes more complex with multiple root disks and mounted volumes. It can't be stressed enough that documenting your setup as you go is more valuable than anything else (you do have an HLD and an LLD provided by the architect?). Concise documentation gives you, and anyone who follows you, a reference point for your system setup and can make troubleshooting easier too.


With Mount Points it's important to remember the following pointers

  • Do not install software to the root of the mount point. Rather, create and install into a folder under the mount point to allow the correct propagation of permissions.
  • Ideally, the root drive should not have any data created upon it, it is merely the root tree for the new volumes.
  • Permissions applied to the root drive file system and the mount point root folders do not propagate to the mount point file systems, they are controlled separately.
  • Mount points are not supported for all SQL Server 2000 configurations (in case you had a burning desire to use them). This is detailed in the relevant section above.

Windows requires online disks with formatted partitions before you are able to utilise mount points, let's look at those next as they form the basis for mounting any partition\volume.

Choosing Partition Types

When initialising and before formatting your disks within Windows disk management, you should first decide on the disk type and partition type to be used.

In brief, the 2 possible disk types are Basic and Dynamic. Basic disks are the default disk type used by the Windows operating system. Dynamic disks are rarely used within Windows and are really only useful for creating spanned volumes or software RAID systems. In reality, the Windows disk you are formatting likely already sits on an underlying RAID system (such as a local controller) or is a disk presented from a SAN or other network storage device. For this reason you should generally leave your disks as the Basic type, especially for cluster support.

As well as the disk type, there are also 2 different partition types that may be used. The 2 types available are MBR (master boot record) or GPT (GUID partition type).

  • MBR partition types only allow a maximum of 4 partitions per disk (3 primary and 1 extended).
  • GPT partition types support very large disk types, a maximum of 128 partitions and are generally much more efficient and resistant to corruption.

However, if your disk is under 500GB and has only 1 partition, I would recommend leaving it as MBR.

The following Microsoft KB article provides further details on Basic and Dynamic disks and also MBR and GPT partition types. You should digest this before creating disk partitions:

 http://msdn.microsoft.com/en-us/library/aa363785(VS.85).aspx

 

Windows Storage Spaces

Storage Spaces is a feature first introduced in Windows 2012. Storage Spaces takes groups of disks and uses a software RAID implementation to create highly redundant disk structures; it can be used for standalone Windows Servers or for clustered implementations.

For more info check this Microsoft link. 

To recap, the items you will require to create your mount points are:

  • A root disk (or disks) with a formatted partition and a drive letter assigned within Windows
  • A defined set of mount point folders to be created upon the root disk.
  • Disk devices available for use within Windows Storage Spaces.

Creating a Mount Point Demonstration.

We will now work through a short demonstration showing how to create mount points within Windows. For the demonstration we need a small root disk, the smaller the better (mine's 500MB but it could\should be as small as a few MB).

Note: Remember, the root disk is merely a file system place holder for the mounted drives, it will only contain empty folders which are then used to mount the NTFS volumes. The temptation for users to dump files on the root drive is high and something I've seen a lot over the years.


The first requirement is to create a storage pool, this is a group of disks that will act as one array. The disks I have presented initially are shown below;



The following cmdlet is used to create the new disk pool. I have supplied default parameters for provisioning type and resiliency setting, which will be inherited by each virtual disk and volume;
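The screenshot is missing here, but the cmdlet is along these lines; a sketch in which the pool name is an example and the subsystem name pattern may vary by Windows version:

```powershell
# list the disks eligible to be pooled
Get-PhysicalDisk -CanPool $true

# create the pool; the default provisioning type and resiliency
# setting are inherited by virtual disks and volumes created later
New-StoragePool -FriendlyName 'SQLPool01' `
    -StorageSubSystemFriendlyName 'Windows Storage*' `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true) `
    -ProvisioningTypeDefault Fixed `
    -ResiliencySettingNameDefault Simple
```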

 


To partition and format a volume on the root disk I use a single PowerShell cmdlet, you could use the individual cmdlets to achieve each step but the following cmdlet completes the operations in a single request as shown below;

 

On the root drive (F:) I have created the following folders, these folders currently display the typical Windows folder icon.


The next step is to create each new volume and specify an access path on the new root disk, this still uses the new volume cmdlet but with slightly different parameters. The cmdlet syntax shown below is executed once for each of the 4 new volumes (the FriendlyNames and AccessPaths change);
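As a sketch (friendly names, sizes and paths are examples, and exact parameters vary by Windows version), the root volume and a mounted volume can be created like this:

```powershell
# root volume with the drive letter
New-Volume -StoragePoolFriendlyName 'SQLPool01' -FriendlyName 'Root' `
    -FileSystem NTFS -DriveLetter F -Size 500MB

# data volume, mounted under a folder on the root drive
# rather than being given a drive letter of its own
New-Volume -StoragePoolFriendlyName 'SQLPool01' -FriendlyName 'SQLData' `
    -FileSystem NTFS -Size 100GB
Get-VirtualDisk -FriendlyName 'SQLData' | Get-Disk | Get-Partition |
    Where-Object Type -EQ 'Basic' |
    Add-PartitionAccessPath -AccessPath 'F:\SQLData'
```

Repeat the second part for each of the 4 volumes, changing the FriendlyName and AccessPath each time.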


Once you have created the 4 volumes and mounted them to the specific folders, you will see the following in Server Manager. Your mount points are now ready to use for a stand alone SQL Server installation.

 

 

Once the folders have mount points the icons change as shown below.

 

Mount Points for Clustered Instances

For clustered scenarios the storage pools are managed by Storage Spaces Direct (S2D); this feature is enabled via PowerShell once the WSFC has been created and the storage has been presented to the nodes.
S2D claims all available and suitable disks on each node and creates a storage pool inside the WSFC. What does this mean? Let's look at a scenario for a 3 node cluster;

 

Each of the nodes above has 3 x 1TB disks attached to a SCSI adapter, they're local disks on the local controllers. When enabling S2D you will be asked to confirm that all disks are to be claimed and added to the storage pool name that you supply.

Once the operation is complete, one large disk pool is created across all of the disks. To create the volumes and assign the paths, let's look through the example below;

First create the root drive, in the Server Manager storage blade, select storage pools and select the storage pool you created. Click Tasks under Virtual Disks and select New Virtual Disk:

 

Click OK to create the virtual disk on the storage pool.

 

Supply a virtual disk name and click Next.

 

 Specify the virtual disk size, this is the root drive so as small as possible, and click Next.

 

 Review the selections and click Create.

 

 Once the Virtual Disk has been created you can close the wizard, if the check box to create a volume is left checked, the new volume wizard will start when the virtual disk wizard dialog closes.

 

The new volume wizard starts and first requires the new disk target, select the new Virtual Disk as your target and click Next.


Confirm the size of the new volume and click Next.


Assign your drive letter and click Next.


Specify any volume settings and a label, for the root disk, the filesystem and allocation unit size can be default. Click Next.


Review the selections and click Create.



Once the new volume wizard completes successfully, close the dialog and review what you have on the Server Manager storage blade.

The steps to create the mount points are the same with exception to the Assign Drive Letter or Folder step. Below are the pop out steps in this section of the wizard.
Select the option for "The following folder" and open the folder browser, within the "Select folder" dialog you are able to create and then select\assign a folder for your new volume, rather than a drive letter.

 

 

Once you have created the folder, select the appropriate folder and click Next.

 

 On the volume settings dialog, select your filesystem and allocation unit size along with your label and click Next.

 

Proceed with the rest of the wizard as you did for the root drive. That's really all that's required, check Server Manager for the newly created Virtual Disks and volumes and finally, open Failover Cluster Manager to view the new clustered disk objects. Install SQL server using the section in my stairway to HA guide at this link.

This should give you more of an understanding of mount points and how they are set up. They operate just the same as any other folder, with the exception that permission grants are handled by each individual mount point and not by the root directory. Granting permissions at the root drive and pushing child inheritance will not affect the mount points. Still, they are essentially NTFS folders, and they are also an excellent way to bypass the Windows 26 drive letter limitation.
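As a quick illustration of the end result, a database can be created directly on the mounted volumes. The paths and database name below are hypothetical, and assume the mount points already exist with rights granted to the SQL Server service account:

```sql
-- Hypothetical example: E:\SQLData\Data1 and E:\SQLLogs\Log1 are NTFS mount points
-- under the root drive E:, each with permissions granted to the SQL Server service account.
CREATE DATABASE [MountPointTest]
ON PRIMARY
    (NAME = N'MountPointTest_Data', FILENAME = N'E:\SQLData\Data1\MountPointTest.mdf', SIZE = 512MB)
LOG ON
    (NAME = N'MountPointTest_Log', FILENAME = N'E:\SQLLogs\Log1\MountPointTest.ldf', SIZE = 128MB);
```

The database engine sees each mount point as an ordinary folder path; only the permissions behaviour described above differs.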

As always, test this in an offline environment, virtual machines make a great test lab and are extremely easy to set up. Have fun and of course, post back if you have any troubles.



Restore database code

clock June 25, 2024 14:27 by author TopDog

USE [master]
GO

DECLARE @dbname VARCHAR(128)
,@TGTDBName SYSNAME = ''  --'TestDB'
,@BackupPathandFile NVARCHAR(2048) =
--'\\server\Backup\folder\FULL\dbname_FULL_20220214_001157.bak'
'F:\Data\MSSQL14.CORP\MSSQL\Backup\agtest_full_20220215.bak'
,@DataFilePath VARCHAR(256) = ''
--@DataFilePath VARCHAR(256) = 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\DATA'
,@LogFilePath VARCHAR(256) = ''
--@LogFilePath VARCHAR(256) = 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\DATA'
,@Media VARCHAR(5) = 'disk'
,@Debug INT = 1
,@recover CHAR(1) = 'y'
,@replace INT = 1
,@RestoreCommand NVARCHAR(MAX)
,@SQLCommand NVARCHAR(MAX)

SET NOCOUNT ON
SET CONCAT_NULL_YIELDS_NULL OFF

 

--Does the backup path and file name exist?
--If not raise an error and exit the proc
DECLARE @result INT
IF UPPER(@Media) = 'DISK'
BEGIN
    EXEC master.sys.xp_fileexist @BackupPathandFile, @result OUTPUT
    IF @result = 0
    BEGIN
        PRINT '--The file "' + @BackupPathandFile + '" does not exist, check path and filename!' + CHAR(10) + '--Ensure you have not specified a URL backup for a disk based path.' + CHAR(10)
        RETURN
    END
    ELSE PRINT '--The file "' + @BackupPathandFile + '" is present' + CHAR(10)
END

--Check the media type, if invalid return
IF UPPER(@Media) <> 'DISK'
BEGIN
    PRINT 'Media type invalid, check the media type specified and re-run the procedure'
    RETURN
END

 

 

--Are the database file paths null or empty string?
--If they are, get the default locations from the instance
DECLARE @DefaultData VARCHAR(256), @DefaultLog VARCHAR(256), @instancename VARCHAR(64), @instance VARCHAR(128), @prodmajver VARCHAR(4)

SELECT @prodmajver = CAST(SERVERPROPERTY('ProductMajorVersion') AS VARCHAR(4))
SELECT @instance = 'SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL' + @prodmajver + '.' +
    CASE
        WHEN SERVERPROPERTY('InstanceName') IS NULL THEN 'MSSQLSERVER'
        ELSE CAST(SERVERPROPERTY('InstanceName') AS VARCHAR(64))
    END
    + '\MSSQLServer'

IF @DataFilePath = '' OR @DataFilePath IS NULL
BEGIN
    --EXEC [master].dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', @DefaultData OUTPUT
    EXEC [master].dbo.xp_regread N'HKEY_LOCAL_MACHINE', @instance, N'DefaultData', @DefaultData OUTPUT
    IF (@DefaultData IS NULL)
    BEGIN
        PRINT 'Check the DefaultData registry key is set in the SQL Server instance hive at:' + CHAR(10) + 'HKEY_LOCAL_MACHINE\' + @instance + CHAR(10) +
            'and retry, or supply a valid datafile path to the procedure'
        RETURN
    END
    ELSE SELECT @DataFilePath = @DefaultData
END

IF @LogFilePath = '' OR @LogFilePath IS NULL
BEGIN
    --EXEC [master].dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', @DefaultLog OUTPUT
    EXEC [master].dbo.xp_regread N'HKEY_LOCAL_MACHINE', @instance, N'DefaultLog', @DefaultLog OUTPUT
    IF (@DefaultLog IS NULL)
    BEGIN
        PRINT 'Check the DefaultLog registry key is set in the SQL Server instance hive at:' + CHAR(10) + 'HKEY_LOCAL_MACHINE\' + @instance + CHAR(10) +
            'and retry, or supply a valid logfile path to the procedure'
        RETURN
    END
    ELSE SELECT @LogFilePath = @DefaultLog
END

 

--Create a temporary table to store the results from the RESTORE FILELISTONLY command
IF OBJECT_ID('tempdb..#filelist', 'U') IS NOT NULL
BEGIN
    DROP TABLE #filelist
END

CREATE TABLE #filelist (
    [LogicalName] NVARCHAR(128)
    ,[PhysicalName] NVARCHAR(260)
    ,[Type] CHAR(1)
    ,[FileGroupName] NVARCHAR(128)
    ,[Size] NUMERIC(20,0)
    ,[MaxSize] NUMERIC(20,0)
    ,[FileID] INT
    ,[CreateLSN] NUMERIC(25,0)
    ,[DropLSN] NUMERIC(25,0) NULL
    ,[UniqueID] UNIQUEIDENTIFIER
    ,[ReadOnlyLSN] NUMERIC(25,0) NULL
    ,[ReadWriteLSN] NUMERIC(25,0) NULL
    ,[BackupSizeInBytes] BIGINT
    ,[SourceBlockSize] INT
    ,[FileGroupID] INT
    ,[LogGroupGUID] UNIQUEIDENTIFIER NULL
    ,[DifferentialBaseLSN] NUMERIC(25,0) NULL
    ,[DifferentialBaseGUID] UNIQUEIDENTIFIER
    ,[IsReadOnly] BIT
    ,[IsPresent] BIT
    ,[TDEThumbprint] VARBINARY(32)
    ,[SnapshotURL] VARCHAR(1024)
)

 

--Create a temporary table to store the results from the RESTORE HEADERONLY command
IF OBJECT_ID('tempdb..#filehead', 'U') IS NOT NULL
BEGIN
    DROP TABLE #filehead
END

CREATE TABLE #filehead (
    BackupName VARCHAR(128)
    ,BackupDescription VARCHAR(255)
    ,BackupType SMALLINT
    ,ExpirationDate DATETIME
    ,Compressed BIT
    ,Position SMALLINT
    ,DeviceType TINYINT
    ,UserName VARCHAR(128)
    ,ServerName VARCHAR(128)
    ,DatabaseName VARCHAR(128)
    ,DatabaseVersion INT
    ,DatabaseCreationDate DATETIME
    ,BackupSize NUMERIC(20,0)
    ,FirstLSN NUMERIC(25,0)
    ,LastLSN NUMERIC(25,0)
    ,CheckpointLSN NUMERIC(25,0)
    ,DatabaseBackupLSN NUMERIC(25,0)
    ,BackupStartDate DATETIME
    ,BackupFinishDate DATETIME
    ,SortOrder SMALLINT
    ,[CodePage] SMALLINT
    ,UnicodeLocaleId INT
    ,UnicodeComparisonStyle INT
    ,CompatibilityLevel TINYINT
    ,SoftwareVendorId INT
    ,SoftwareVersionMajor INT
    ,SoftwareVersionMinor INT
    ,SoftwareVersionBuild INT
    ,MachineName VARCHAR(128)
    ,Flags INT
    ,BindingID UNIQUEIDENTIFIER
    ,RecoveryForkID UNIQUEIDENTIFIER
    ,Collation VARCHAR(128)
    ,FamilyGUID UNIQUEIDENTIFIER
    ,HasBulkLoggedData BIT
    ,IsSnapshot BIT
    ,IsReadOnly BIT
    ,IsSingleUser BIT
    ,HasBackupChecksums BIT
    ,IsDamaged BIT
    ,BeginsLogChain BIT
    ,HasIncompleteMetaData BIT
    ,IsForceOffline BIT
    ,IsCopyOnly BIT
    ,FirstRecoveryForkID UNIQUEIDENTIFIER
    ,ForkPointLSN NUMERIC(25,0)
    ,RecoveryModel VARCHAR(60)
    ,DifferentialBaseLSN NUMERIC(25,0)
    ,DifferentialBaseGUID UNIQUEIDENTIFIER
    ,BackupTypeDescription VARCHAR(60)
    ,BackupSetGUID UNIQUEIDENTIFIER
    ,CompressedBackupSize BIGINT
    ,containment TINYINT
    ,KeyAlgorithm VARCHAR(32)
    ,EncryptorThumbprint VARBINARY(20)
    ,EncryptorType VARCHAR(32)
)

 

SET @RestoreCommand = 'RESTORE HEADERONLY FROM ' + UPPER(@Media) + '=''' + @BackupPathandFile + ''''

BEGIN TRY
    INSERT INTO #filehead
    EXEC (@RestoreCommand)
    IF @Debug = 1
    BEGIN
        PRINT '--DEBUG MSG: The header restore command used is;' + CHAR(13) + @RestoreCommand + CHAR(10)
    END
END TRY
BEGIN CATCH
    PRINT 'Error restoring header, check the media type matches the backup file path\name specified'
END CATCH

SELECT @dbname = DatabaseName FROM #filehead
IF @Debug = 1
    PRINT '--The database name from the backup file is [' + @dbname + ']' + CHAR(13)

/* fetch list of database files from source backup file */
SET @RestoreCommand = ''
SET @RestoreCommand = 'RESTORE FILELISTONLY FROM ' + UPPER(@Media) + '=''' + @BackupPathandFile + ''''

BEGIN TRY
    INSERT INTO #filelist
    EXEC (@RestoreCommand)
    IF @Debug = 1
    BEGIN
        PRINT '--DEBUG MSG: The filelist restore command used is;' + CHAR(13) + @RestoreCommand + CHAR(10)
    END
END TRY
BEGIN CATCH
    PRINT 'Error restoring filelist, check the media type matches the backup file path\name specified'
END CATCH

IF EXISTS (SELECT 1 FROM #filelist WHERE [Type] = 'S')
BEGIN
    IF (SELECT [value] FROM sys.configurations WHERE [name] = N'filestream access level') = 0
    BEGIN
        PRINT '--Filestream db detected, check filestream is enabled'
        IF @Debug = 1
        BEGIN
            PRINT '--DEBUG MSG: Filestream is not enabled, enable FileStream and retry' + CHAR(10)
            RETURN
        END
    END
END

 

--Restoring to same db or to new name?
--If the target name is null or empty string get the dbname from the backup file header
IF @TGTDBName IS NULL OR @TGTDBName = ''
BEGIN
    IF @Debug = 1
        PRINT '--DEBUG MSG: Target name not supplied, using name ' + QUOTENAME(@dbname) + ' from the backupset' + CHAR(10) + CHAR(13)
    SELECT @TGTDBName = @dbname
END
ELSE
BEGIN
    IF @Debug = 1
        PRINT '--DEBUG MSG: Target name supplied, using name ' + QUOTENAME(@TGTDBName) + CHAR(10)
END

--If the existing database is online, take it offline first to kill connections
IF DATABASEPROPERTYEX(@TGTDBName, 'Status') = 'ONLINE'
BEGIN
    SET @SQLCommand = 'ALTER DATABASE [' + @TGTDBName + '] SET OFFLINE WITH ROLLBACK IMMEDIATE' + CHAR(10) + CHAR(13)
    IF @Debug = 0
    BEGIN
        EXEC (@SQLCommand)
    END
    ELSE PRINT '--DEBUG MSG: ' + @SQLCommand
END

 

/* Construct the restore command */
SELECT @RestoreCommand = ''
SET @RestoreCommand = 'RESTORE DATABASE [' + @TGTDBName + '] FROM ' + UPPER(@Media) + '=N''' + @BackupPathandFile + ''' WITH ' + CHAR(10)

--Data and log files
SELECT @RestoreCommand += 'MOVE N''' + LogicalName +
    CASE [Type]
        WHEN 'D' THEN ''' TO N''' + @DataFilePath + '\'
        WHEN 'L' THEN ''' TO N''' + @LogFilePath + '\'
    END +
    REPLACE(REVERSE(SUBSTRING(REVERSE(PhysicalName), 0, CHARINDEX('\', REVERSE(PhysicalName), 0))), @dbname, @TGTDBName)
    + ''',' + CHAR(10)
FROM #filelist
WHERE [Type] <> 'S'
ORDER BY [Type], [FileID]

--Get the FileStream paths too if there are any
SELECT @RestoreCommand += 'MOVE N''' + LogicalName +
    CASE [Type]
        WHEN 'S' THEN ''' TO N''' + @DataFilePath + '\'
    END +
    REPLACE(REVERSE(SUBSTRING(REVERSE(PhysicalName), 0, CHARINDEX('\', REVERSE(PhysicalName), 0))), @dbname, @TGTDBName)
    + ''',' + CHAR(10)
FROM #filelist
WHERE [Type] = 'S'

SELECT @RestoreCommand +=
    CASE @replace
        WHEN 0 THEN 'STATS=10'
        ELSE 'REPLACE,STATS=10'
    END +
    CASE UPPER(@recover)
        WHEN 'Y' THEN ''
        WHEN 'N' THEN ',NORECOVERY'
    END

 

/* Perform the restore */
PRINT '--Restore command is: ' + CHAR(10) + @RestoreCommand
IF @Debug = 0
BEGIN TRY
    EXEC (@RestoreCommand)
END TRY
BEGIN CATCH
    PRINT 'Restore failed, check the SQL Server error log for more detail'
END CATCH

 


 

/* Tidy up */

DROP TABLE #filelist

DROP TABLE #filehead

GO



Listing server logins holding roles and server permissions

clock June 17, 2024 23:26 by author TopDog

The following script will list all server principals and any elevated roles or permissions they hold.
The script is valid for SQL Server instances and Azure SQL Managed Instances.

 

SELECT 	sp.name										AS ServerPrincipal, 
	sp.type_desc									AS LoginType,
	sp.type,
	CASE sp.is_disabled
	WHEN 0 THEN 'No'
	WHEN 1 THEN 'Yes'
	END												AS UserDisabled, 
	sp.create_date									AS DateCreated, 
	sp.modify_date									AS DateModified,
	sp.default_database_name						AS DefaultDB, 
	sp.default_language_name						AS DefaultLang,
	ISNULL(STUFF((SELECT ',' + 
		CASE ssp22.name
		WHEN 'sysadmin' THEN ssp22.name + ' "Danger Will Robinson"'
		ELSE ssp22.name
		END		   
		FROM sys.server_principals ssp2 
		INNER JOIN sys.server_role_members ssrm2 
		ON ssp2.principal_id = ssrm2.member_principal_id
		INNER JOIN sys.server_principals ssp22
		ON ssrm2.role_principal_id = ssp22.principal_id
		WHERE ssp2.principal_id = sp.principal_id
		ORDER BY ssp2.name	 
		FOR XML PATH(N''), TYPE).value(N'.[1]', 
		N'nvarchar(max)'), 1, 1, N''), 'NoRolesHeld')			AS ListofServerRoles,		
	ISNULL(STUFF((SELECT ';' + ' Permission [' + sspm3.permission_name + '] is [' +
		CASE 
		WHEN sspm3.state_desc LIKE 'GRANT%' THEN 'Granted]'
		WHEN sspm3.state_desc = 'DENY' THEN 'Denied]'
		END
		FROM sys.server_principals ssp3 
		INNER JOIN sys.server_permissions sspm3 
		ON ssp3.principal_id = sspm3.grantee_principal_id		
		WHERE sspm3.class = 100 AND 
		sspm3.grantee_principal_id = sp.principal_id AND
		sspm3.permission_name <> 'CONNECT SQL'
		FOR XML PATH(N''), TYPE).value(N'.[1]', 
		N'nvarchar(max)'), 1, 1, N''), 'NoServerPermissions') 
		+ ' in Server::' + @@ServerName + ''					AS PermGrants	
						
FROM sys.server_principals sp 
WHERE sp.type <> 'R' AND sp.name NOT LIKE '##%##' 
ORDER BY ServerPrincipal


Guide to Azure SQL Database

clock May 31, 2024 16:30 by author TopDog

Welcome to my blog. In this post I want to discuss Azure SQL DB.

There's a fair level of misunderstanding around this Azure PaaS offering, and questions about it have recently appeared on the forums, so this post will seek to clarify what the feature is and what it can do.

What is SQLDB?

Azure SQL DB is a single SQL Server database deployed within a subscription in the Azure cloud.
The platform has different service tiers and redundancy levels to suit different performance and resilience requirements. Provisioning is either vCore or DTU based; the tiers are:

Service Tier               HA model         Locally Redundant   Zone Redundant
General Purpose (vCore)    Remote Storage   Yes                 Yes
Business Critical (vCore)  Local Storage    Yes                 Yes
Hyperscale (vCore)         Hyperscale       Yes                 Yes
Basic (DTU)                Remote Storage   Yes                 No
Standard (DTU)             Remote Storage   Yes                 No
Premium (DTU)              Local Storage    Yes                 Yes

 

vCore

This tier level represents virtual CPU core provisioning much like a Virtual Machine. Logical CPUs are defined and in turn dictate the amount of memory available and the maximum storage size. Sizing details are shown below;

Use case          General Purpose | Business Critical | Hyperscale
Best for          Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options. | Offers business applications the highest resilience to failures by using several high availability secondary replicas, and provides the highest I/O performance. | The widest variety of workloads, including those with highly scalable storage and read-scale requirements. Offers higher resilience to failures by allowing configuration of more than one high availability secondary replica.
Compute size      2 to 128 vCores | 2 to 128 vCores | 2 to 128 vCores
Storage type      Premium remote storage (per instance) | Super-fast local SSD storage (per instance) | Decoupled storage with local SSD cache (per compute replica)
Storage size      1 GB – 4 TB | 1 GB – 4 TB | 10 GB – 100 TB
IOPS              320 IOPS per vCore, 16,000 maximum | 4,000 IOPS per vCore, 327,680 maximum | 327,680 IOPS with max local SSD (Hyperscale is a multi-tiered architecture with caching at multiple levels; effective IOPS depend on the workload)
Memory/vCore      5.1 GB | 5.1 GB | 5.1 GB or 10.2 GB
Backups           A choice of geo-redundant, zone-redundant, or locally redundant backup storage; 1-35 day retention (default 7 days); long term retention available up to 10 years (all tiers)
Availability      One replica, no read-scale replicas, zone-redundant high availability (HA) | Three replicas, one read-scale replica, zone-redundant HA | Zone-redundant HA
Pricing/billing   vCore, reserved storage, and backup storage are charged; IOPS aren't charged | vCore, reserved storage, and backup storage are charged; IOPS aren't charged | vCore for each replica and used storage are charged; IOPS aren't charged
Discount models   Reserved instances; Azure Hybrid Benefit (not available on dev/test subscriptions); Enterprise and Pay-As-You-Go Dev/Test subscriptions | Reserved instances; Azure Hybrid Benefit (not available on dev/test subscriptions); Pay-As-You-Go Dev/Test subscriptions | Azure Hybrid Benefit (not available on dev/test subscriptions); Enterprise and Pay-As-You-Go Dev/Test subscriptions

 

DTU

The Database Transaction Unit tier level represents a blended unit of work covering memory, CPU and I\O requirements. Each DTU tier has a maximum storage size, which is generally less than the vCore options.

                          Basic | Standard | Premium
Target workload           Development and production (all tiers)
Uptime SLA                99.99% (all tiers)
Backup                    A choice of geo-redundant, zone-redundant, or locally redundant backup storage; Basic: 1-7 day retention (default 7 days), Standard and Premium: 1-35 day retention (default 7 days); long term retention available up to 10 years (all tiers)
CPU                       Low | Low, Medium, High | Medium, High
IOPS (approximate)        1-4 IOPS per DTU | 1-4 IOPS per DTU | >25 IOPS per DTU
IO latency (approximate)  5 ms (read), 10 ms (write) | 5 ms (read), 10 ms (write) | 2 ms (read/write)
Columnstore indexing      N/A | Standard S3 and higher | Supported
In-memory OLTP            N/A | N/A | Supported
Maximum storage size      2 GB | 1 TB | 4 TB
Maximum DTUs              5 | 3000 | 4000
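Tier and objective changes can be made with T-SQL as well as through the portal. As a sketch, assuming a database named SalesDB, the following scales it to the Standard S3 objective; the operation is asynchronous and the database remains online while it completes:

```sql
-- Scale an Azure SQL DB to a different DTU service objective.
ALTER DATABASE [SalesDB] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');

-- Check the current (or in-progress target) objective.
SELECT DATABASEPROPERTYEX('SalesDB', 'ServiceObjective') AS CurrentObjective;
```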



Elastic Pools and Limits

You can also configure a feature called "Elastic Pools". These are pools configured with a set amount of compute and storage, and the databases within the pool use the provisioned resource as and when they require it.

To learn more, review Resource limits for pooled databases.

                                      Basic    Standard   Premium
Maximum storage size per database     2 GB     1 TB       1 TB
Maximum storage size per pool         156 GB   4 TB       4 TB
Maximum eDTUs per database            5        3000       4000
Maximum eDTUs per pool                1600     3000       4000
Maximum number of databases per pool  500      500        100
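A database can be moved into a pool with the same ALTER DATABASE syntax; the pool name below is a hypothetical example and the pool must already exist on the logical server:

```sql
-- Assign an existing database to an elastic pool.
ALTER DATABASE [SalesDB] MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL (name = [StandardPool1]));
```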

 

Logical Server

The logical server is merely a connection management point for the Azure SQL databases that it manages. There is no need to configure server principals or server roles\permissions at the virtual master database level. The server name is supplied during the deployment process and must be globally unique, taking the form:

yourservername.database.windows.net


The network connection and policy will also be specified during the deployment process. We can see the options chosen on the create database blade below.

 

 

Of note from the above screenshot is the connection policy. The default uses either Proxy or Redirect, based upon where the traffic is initiated.

A connection from the public internet to the Public Endpoint would be proxied through one of the Azure SQLDB gateways for that region. This has the potential for high latency and is generally not recommended.

A connection from a private endpoint (which could be an extension of your on-premise network) would be redirected directly to the host servicing the database.
There are also firewall rules, evaluated firstly at the database level and then at the logical server level; the two stored procedures used to manage them are "sp_set_firewall_rule" and "sp_set_database_firewall_rule". Of course, you can also set the server level rules on the blade for the logical server. Database level rules can only be managed via T-SQL within the database(s) themselves.

For connections that come in;

  • For connections via the internet, the database rules are checked first; if a match for the range is found, the connection is granted.
  • If no database rule is matched, the server rules are enumerated; if a matching server level rule is found, the connection is granted.
  • If there is no server rule match, the connection fails.


Just remember that server level rules cover all databases on the logical server, so for security reasons consider which databases the incoming connections actually require access to.

For Azure based connections an open rule is configured, with a start and end IP of 0.0.0.0 - 0.0.0.0; this is the only scenario in which such an open rule is recommended.
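The two firewall stored procedures mentioned above are used as follows; the rule names and IP ranges are examples only. sp_set_firewall_rule runs in the virtual master database, while sp_set_database_firewall_rule runs in the user database itself:

```sql
-- Server level rule, run in the virtual master database.
EXECUTE sp_set_firewall_rule
    @name = N'OfficeRange',
    @start_ip_address = '203.0.113.1',
    @end_ip_address = '203.0.113.50';

-- Database level rule, run in the user database.
EXECUTE sp_set_database_firewall_rule
    @name = N'AppServer',
    @start_ip_address = '203.0.113.60',
    @end_ip_address = '203.0.113.60';
```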

Logical Server Permissions

Logical server permissions and roles have changed quite a bit for Azure SQL DB. There is no need to (and indeed you shouldn't) create logins at the virtual server level. Administrators are defined during the logical server deployment and typically consist of a SQL Server admin account and a Microsoft Entra user or group. The only requirement for server level and database level administrator rights would be to configure the firewall rules mentioned above; apart from that, no users should hold any elevated roles or permissions.
The old server roles we've all been used to are gone, along with most of the server level securables. In fact, the system catalogs that tracked these are now gone from the virtual master database; any attempt to grant a legacy role or permission fails with the errors shown below.

 

There is now, a new set of fixed server roles and database roles. The server roles consist of the following;

Fixed server-level role: Description
##MS_DatabaseConnector## Members of the ##MS_DatabaseConnector## fixed server role can connect to any database without requiring a User-account in the database to connect to.

To deny the CONNECT permission to a specific database, users can create a matching user account for this login in the database and then DENY the CONNECT permission to the database-user. This DENY permission overrules the GRANT CONNECT permission coming from this role.
##MS_DatabaseManager## Members of the ##MS_DatabaseManager## fixed server role can create and delete databases. A member of the ##MS_DatabaseManager## role that creates a database, becomes the owner of that database, which allows that user to connect to that database as the dbo user. The dbo user has all database permissions in the database. Members of the ##MS_DatabaseManager## role don't necessarily have permission to access databases that they don't own. You should use this server role over the dbmanager database level role that exists in master.
##MS_DefinitionReader## Members of the ##MS_DefinitionReader## fixed server role can read all catalog views that are covered by VIEW ANY DEFINITION, respectively VIEW DEFINITION on any database on which the member of this role has a user account.
##MS_LoginManager## Members of the ##MS_LoginManager## fixed server role can create and delete logins. You should use this server role over the loginmanager database level role that exists in master.
##MS_SecurityDefinitionReader## Members of the ##MS_SecurityDefinitionReader## fixed server role can read all catalog views that are covered by VIEW ANY SECURITY DEFINITION, and respectively has VIEW SECURITY DEFINITION permission on any database on which the member of this role has a user account. This is a small subset of what the ##MS_DefinitionReader## server role has access to.
##MS_ServerStateManager## Members of the ##MS_ServerStateManager## fixed server role have the same permissions as the ##MS_ServerStateReader## role. Also, it holds the ALTER SERVER STATE permission, which allows access to several management operations, such as: DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE ('ALL'), DBCC SQLPERF();
##MS_ServerStateReader## Members of the ##MS_ServerStateReader## fixed server role can read all dynamic management views (DMVs) and functions that are covered by VIEW SERVER STATE, respectively VIEW DATABASE STATE on any database on which the member of this role has a user account.
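Membership of these roles is granted with standard ALTER SERVER ROLE syntax, run in the virtual master database; the login name below is a placeholder:

```sql
-- Allow a monitoring login to read DMVs without granting broader rights.
ALTER SERVER ROLE ##MS_ServerStateReader## ADD MEMBER [MonitorLogin];
```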

 

The database roles are now listed as;

Role name: Description
dbmanager Can create and delete databases. A member of the dbmanager role that creates a database, becomes the owner of that database, which allows that user to connect to that database as the dbo user. The dbo user has all database permissions in the database. Members of the dbmanager role don't necessarily have permission to access databases that they don't own.
db_exporter Applies only to Azure Synapse Analytics dedicated SQL pools (formerly SQL DW).
Members of the db_exporter fixed database role can perform all data export activities. Permissions granted via this role are CREATE TABLE, ALTER ANY SCHEMA, ALTER ANY EXTERNAL DATA SOURCE, ALTER ANY EXTERNAL FILE FORMAT.
loginmanager Can create and delete logins in the virtual master database.

 

Again, you should only use these roles when a specific need arises and they should not be used freely just because they exist.

Authorisation is handled directly by the database the users connect to, much in the same way Contained databases worked in on-premise environments. No server level interaction is required and security best practices should be followed. Role Based Access Control should be employed to ensure only the required permissions are granted in order to carry out business tasks.
Microsoft Defender for Cloud brings rich configuration checking for Azure SQL DB and can alert when conditions are breached, for example a user is added to an elevated role or granted elevated permissions outside of a known role\permission set.
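As a sketch of the contained authentication model, a user can be created directly in the database, with no corresponding server login, and granted least-privilege role membership; the names and password below are placeholders:

```sql
-- Contained database user with password, created in the user database.
CREATE USER [AppUser] WITH PASSWORD = 'Str0ng!Placeholder#1';
ALTER ROLE db_datareader ADD MEMBER [AppUser];

-- Or a Microsoft Entra based user.
CREATE USER [someone@contoso.com] FROM EXTERNAL PROVIDER;
```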
That's it for this basic guide. It should help clear up some misunderstandings around the platform, and I would urge you to read up in detail on the topics discussed in this blog post.



Guide to Azure SQL Managed Instance

clock May 31, 2024 16:29 by author TopDog

Welcome to my blog. In this post I want to discuss Azure SQL Managed Instance. Azure MI is the second SQL PaaS offering from Microsoft, and questions around it regularly appear on the forums, so this post will seek to clarify what the feature is and what it can do.

What is Azure SQL MI?

Azure SQL Managed Instance is a fully provisioned and managed database engine, including the SQL Server Agent. The platform, including its base OS, is fully managed and no access is possible at this layer; you only manage and provision objects within the instance itself.
For General Purpose instances, you are able to stop and start the instance, which can help save on costs when the instance is idle for periods of time.
Azure MI is provisioned based on service tiers and hardware generations. For the full list, check the Microsoft website for detailed information, but in summary:

Feature                        Standard-series (Gen5) | Premium-series | Memory optimized premium-series
CPU                            Intel® E5-2673 v4 (Broadwell) 2.3 GHz, Intel® SP-8160 (Skylake), and Intel® 8272CL (Cascade Lake) 2.5 GHz | Intel® 8370C (Ice Lake) 2.8 GHz | Intel® 8370C (Ice Lake) 2.8 GHz
Number of vCores               4-80 vCores | 4-128 vCores | 4-128 vCores (1 vCore = 1 logical processor, hyper-thread)
Max memory (memory/vCore)      5.1 GB per vCore, 408 GB maximum; add more vCores to get more memory | 7 GB per vCore up to 80 vCores, 560 GB maximum | 13.6 GB per vCore up to 64 vCores, 870.4 GB maximum
Max In-Memory OLTP memory      Instance limit: 0.8 - 1.65 GB per vCore | Instance limit: 1.1 - 2.3 GB per vCore | Instance limit: 2.2 - 4.5 GB per vCore
Max instance reserved storage  General Purpose: up to 16 TB; Business Critical: up to 4 TB | General Purpose: up to 16 TB; Business Critical: up to 16 TB | General Purpose: up to 16 TB; Business Critical: up to 16 TB



Backups and patching are fully managed and these are configured on the Azure SQL MI blade in the portal. For patching and upgrades, specify the maintenance window.

For the backups, specify the recovery interval time and the retention periods for short and long term retention. 35 days is the maximum for point in time recovery; generally you shouldn't require any more than that. It's important to note that you do not get access to the storage account that contains the managed backups. The reason should be obvious: how could Microsoft maintain an SLA and backup resilience if users were able to access the storage account and potentially remove files?

As SQL MI is a managed platform there are some points to be aware of, these are:

  • No access to the base operating system for CLR modules.
  • SQL Server version uplifts automatically.
  • Reliance on virtual network IP configuration, requires at least 32 IPs and must be dedicated to the MI.
  • Max number of database and files per instance based on tier type.
  • Instance storage sizes must be specified in multiples of 32GB.
  • Lead time required to deploy SQL MI.
  • One transaction log file per database.
  • Limited number of agent job step types.
  • Limited support for SQL Agent notifications.


Let's take a look at these in more detail to get an idea of how they affect MI deployment and management.

CLR Integration

There are no physical CLR modules that can be deployed, since you have no access to the OS drive. CLR assemblies are supported, but must be created using the CREATE ASSEMBLY command, passing in the assembly bits instead of a file path\name; an example is shown below.

CREATE ASSEMBLY HelloWorld FROM 0x4D5A900000000000 WITH PERMISSION_SET = SAFE;

  

SQL Server Uplift

When setting up the maintenance window on the Azure SQL MI blade, you are specifying a window within which patches can be applied, but also, if the next SQL Server version becomes available, the instance will be upgraded. You do not have the option to remain at a particular SQL Server version; for that, you would need to deploy an IaaS SQL Server instance. That said, Microsoft still want you to uplift versions to remain in support. Legacy platforms can be maintained, but you are agreeing to provide a "best efforts" approach to uplift to supported product versions; Azure SQL MI provides this automatically for you.

Virtual Network Reliance

Azure SQL Managed Instances are placed into dedicated subnets within a virtual network. An Azure SQL MI is placed onto a set of VMs within a virtual cluster, and each node needs an IP within the dedicated subnet for the SQL MI. A subnet for an SQL MI must have at least 32 IP addresses available per instance. Azure networking will consume 5 of those IP addresses, so you should bear this in mind when defining the subnet size.

No other resources apart from Azure SQL MIs may be placed into the subnet. You should be especially careful when applying resource locks on:

  • subnets
  • SQL Managed Instances
  • the resource groups they may be a member of

as this can affect functionality.

Max Number of Databases and Files

Azure SQL Managed Instance has maximum limits on the number of databases, the number of files and, for General Purpose instances, the assigned backend storage size.

When you create or restore a database in Azure SQL Managed Instance, each database file is placed on its own disk. The unused space is not charged, but it does contribute to a maximum backend limit. The disk size is one of the following, based on the file size:

  • 128 GB.
  • 256 GB.
  • 512 GB.
  • 1 TB.
  • 4 TB.


You have a maximum backend storage size of 35 TB. Because of this limitation, exceeding the maximum storage space with small database files may restrict the number of databases you can host on the instance to fewer than the supported maximum, and once the backend limit is exceeded, any operation to create or restore a database or alter a file size will fail. A typical example of how the limit is exceeded is as follows:
 

A General Purpose deployment of SQL Managed Instance might have one large file that's 1.2 TB in size placed on a 4 TB disk. It may then have 248 x 1 GB files, each placed on its own 128 GB disk.
In this example, the total allocated disk storage size is 1 x 4 TB + 248 x 128 GB = 35 TB, yet the total reserved space for databases on the instance is only 1 x 1.2 TB + 248 x 1 GB = 1.4 TB.
This shows how file creation and distribution can easily exceed the hard limits; those files might be spread across only 25 databases, for example. After all, it's not uncommon for a database to have multiple data files, but this can eat into the instance's 35 TB limit.
In this example, existing databases continue to work and can grow without any problem, as long as new files aren't added and no file exceeds its current size allocation. New databases can't be created or restored, because there isn't enough space for new disk drives, even if the total size of all databases doesn't reach the instance size limit.
The error message you receive when you hit the limit and try to add a new file or database is completely unhelpful, and it's not immediately clear what the issue is. If you are aware of these limitations, you can get to the root of the problem fairly quickly.

There are system catalogs that can be used to check the file sizes. If you do reach the hard limit, the options are to switch to the Business Critical tier or to shrink database files that carry more allocated space than they need. The full set of limits is shown in the table below.
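As a starting point, the following is a minimal sketch of such a catalog query. It estimates the Azure disk allocated to each file on a General Purpose instance by rounding the file up to the next disk-size tier; the tier boundaries are those listed above, and the query assumes it is run on the managed instance itself where sys.master_files is available.

```sql
-- Hedged sketch: estimate per-file disk allocation on a General Purpose MI
-- by rounding each file up to the next disk-size tier (see list above).
SELECT DB_NAME(database_id)  AS database_name,
       name                  AS logical_file_name,
       size / 128            AS file_size_mb,   -- size is reported in 8-KB pages
       CASE
           WHEN size / 128 <= 128 * 1024  THEN '128 GB'
           WHEN size / 128 <= 256 * 1024  THEN '256 GB'
           WHEN size / 128 <= 512 * 1024  THEN '512 GB'
           WHEN size / 128 <= 1024 * 1024 THEN '1 TB'
           ELSE '4 TB'
       END                   AS assumed_disk_tier
FROM sys.master_files
WHERE type IN (0, 1);   -- data and log files
```

Summing the assumed disk tiers gives a rough view of how much of the 35-TB backend allocation is already consumed.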

Feature | General Purpose | Next-gen General Purpose | Business Critical
Max database size | Up to currently available instance size (depending on the number of vCores). | Up to currently available instance size (depending on the number of vCores). | Up to currently available instance size (depending on the number of vCores).
Max tempdb database size | Limited to 24 GB/vCore (96 - 1,920 GB) and currently available instance storage size. Add more vCores to get more tempdb space. Log file size is limited to 120 GB. | Limited to 24 GB/vCore (96 - 1,920 GB) and currently available instance storage size. Add more vCores to get more tempdb space. Log file size is limited to 120 GB. | Up to currently available instance storage size.
Max number of tempdb files | 128 | 128 | 128
Max number of databases per instance | 100 user databases, unless the instance storage size limit has been reached. | 500 user databases | 100 user databases, unless the instance storage size limit has been reached.
Max number of database files | 280 per instance, unless the instance storage size or Azure Premium Disk storage allocation space limit has been reached. | 4,096 files per database | 32,767 files per database, unless the instance storage size limit has been reached.
Max data file size | Maximum size of each data file is 8 TB. Use at least two data files for databases larger than 8 TB. | Up to currently available instance size (depending on the number of vCores). | Up to currently available instance size (depending on the number of vCores).
Max log file size | Limited to 2 TB and currently available instance storage size. | Limited to 2 TB and currently available instance storage size. | Limited to 2 TB and currently available instance storage size.
Data/Log IOPS (approximate) | 500 - 7,500 per file. Increase the file size to get more IOPS. | Reserved storage x 3, up to the VM limit (300 IOPS for 32 GB, 64 GB and 96 GB of reserved storage). The VM limit depends on the number of vCores: 6,400 IOPS for a VM with 4 vCores up to 80 K IOPS for a VM with 128 vCores. | 16 K - 320 K (4,000 IOPS/vCore). Add more vCores to get better IO performance.
Data throughput (approximate) | 100 - 250 MiB/s per file. Increase the file size to get better IO performance. | IOPS / 30 MBps, up to the VM limit (75 MBps for 32 GB, 64 GB and 96 GB of reserved storage). | Not limited.
Log write throughput limit (per instance) | 4.5 MiB/s per vCore, max 120 MiB/s per instance; 22 - 65 MiB/s per DB (depending on log file size). Increase the file size to get better IO performance. | 4.5 MiB/s per vCore, max 192 MiB/s. | 4.5 MiB/s per vCore. For Standard-series: max 96 MiB/s. For Premium-series and Memory optimized premium-series: max 192 MiB/s.
Storage IO latency (approximate) | 5-10 ms | 3-5 ms | 1-2 ms

 

Another point to take from the table above: if a database file on a General Purpose instance needs more performance, the best option is to increase the file (and therefore disk) size so that the IOPS provision is uplifted. Again though, be aware that this contributes to the backend hard limit.
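Growing a file past a tier boundary is an ordinary file resize. The sketch below, with illustrative database and file names, pushes a data file just over the 128-GB boundary so it is placed on the next (larger, faster) disk tier:

```sql
-- Hedged sketch: grow a data file past the next disk-size boundary so it is
-- re-provisioned on a larger disk with a higher IOPS allowance.
-- Database and logical file names here are illustrative.
ALTER DATABASE [SalesDB]
MODIFY FILE (NAME = N'SalesDB_Data', SIZE = 140GB);  -- > 128 GB, so a 256-GB disk
```

The trade-off, as noted above, is that the larger disk allocation counts against the 35-TB backend limit whether or not the space is used.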

 

Deployment Lead Time

When deploying Azure SQL Managed Instance there is an associated lead time, mainly due to the infrastructure requirements behind the scenes. For certain configurations, though, Microsoft now offers fast provisioning, which reduces the deployment time to 30 minutes or under.
Microsoft plans to extend this to more configurations, but at the time of writing the deployment time for other configurations can still take up to 4 hours. The table below shows the deployment steps and the anticipated run times.

Operation | Long-running segment | Estimated duration
First instance in an empty subnet 1 | Virtual cluster creation (fast provisioning) | 90% of operations finish in 30 minutes.
First instance in an empty subnet | Virtual cluster creation | 90% of operations finish in less than 4 hours.
First instance with a different hardware generation or maintenance window in a non-empty subnet (for example, the first Premium-series instance in a subnet with Standard-series instances) | Adding a new virtual machine group to the virtual cluster 2 | 90% of operations finish in less than 4 hours.
Subsequent instance creation within the non-empty subnet (2nd, 3rd, etc. instance) | Virtual cluster resizing | 90% of operations finish in 60 minutes.

1. Fast provisioning is currently supported only for the first instance in the subnet, with 4 or 8 vCores, and with default maintenance window configuration.
2. A separate virtual machine group is created for each hardware generation and maintenance window configuration.

 

SQL Agent

SQL Server Agent is available in Azure SQL MI, but it has certain limitations due to the managed platform. You can still use Agent jobs to perform all the usual tasks you would on a traditional instance of SQL Server, such as maintenance, data movement and data collection. The following list from the Microsoft documentation details the features that are unavailable or only partially available in the Azure SQL MI SQL Server Agent.

  • Enabling and disabling SQL Server Agent is currently not supported in SQL Managed Instance. SQL Agent is always running.
  • Job schedule trigger based on an idle CPU isn't supported.
  • SQL Server Agent settings are read only. The procedure "sp_set_agent_properties" isn't supported in SQL Managed Instance.
  • Jobs
    • T-SQL job steps are supported.
    • The following replication jobs are supported:
      • Transaction-log reader
      • Snapshot
      • Distributor
    • SSIS job steps are supported.
    • Other types of job steps aren't currently supported:
      • The merge replication job step isn't supported.
      • Queue Reader isn't supported.
      • Command shell isn't yet supported.
    • SQL Managed Instance can't access external resources, for example, network shares via robocopy.
    • SQL Server Analysis Services isn't supported.
  • Notifications are partially supported.
    • Email notification is supported, although it requires that you configure a Database Mail profile. SQL Server Agent can use only one Database Mail profile, and it must be called "AzureManagedInstance_dbmail_profile".
    • Pager isn't supported.
    • NetSend isn't supported.
    • Alerts aren't yet supported.
    • Proxies aren't supported.
  • EventLog isn't supported.
  • User must be directly mapped to the Microsoft Entra server login to create, modify, or execute SQL Agent jobs. Users that aren't directly mapped, for example, users that belong to a Microsoft Entra group that has the rights to create, modify or execute SQL Agent jobs, will not effectively be able to perform those actions. This is due to SQL Managed Instance impersonation and EXECUTE AS limitations.
  • The Multi Server Administration feature for master/target (MSX/TSX) jobs isn't supported.
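The Database Mail requirement noted above can be sketched as follows. The SMTP server, account name and email address are placeholders you would replace with your own; the profile name, however, is the fixed name SQL Agent expects:

```sql
-- Hedged sketch: minimal Database Mail setup for SQL Agent notifications
-- on Azure SQL MI. Account name, address and SMTP host are placeholders;
-- the profile name is the one fixed name SQL Agent will use.
EXEC msdb.dbo.sysmail_add_account_sp
     @account_name    = N'AgentMailAccount',     -- illustrative name
     @email_address   = N'dba@contoso.com',      -- placeholder address
     @mailserver_name = N'smtp.contoso.com';     -- placeholder SMTP host

EXEC msdb.dbo.sysmail_add_profile_sp
     @profile_name = N'AzureManagedInstance_dbmail_profile';

EXEC msdb.dbo.sysmail_add_profileaccount_sp
     @profile_name    = N'AzureManagedInstance_dbmail_profile',
     @account_name    = N'AgentMailAccount',
     @sequence_number = 1;
```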

Some of the options, such as NetSend and EventLog, depend on the host in a traditional instance of SQL Server, which is why they are not available. In general, all of the features you would use to run day-to-day tasks carry over to Azure SQL MI.

Summary

The detail above should provide a quick reference to Azure SQL Managed Instance and help you avoid some of the pitfalls of initial deployment and management. It cannot be stressed enough how important it is to fully understand the needs of your application's database tier and whether or not they fit a PaaS solution or an IaaS solution. Due diligence and proof-of-concept work can avoid wasted deployment time and embarrassed faces down the line.
Microsoft does offer tools to help decide between the PaaS and IaaS routes, but even then, careful planning and decision making lie with the migration team.
Organisations often adopt a cloud-first migration strategy, but it should be recognised that not everything fits this methodology; there will usually be systems bespoke or ad hoc enough to require a different approach.

With that said, IaaS migrations allow a rapid datacenter exit, which provides the opportunity to repurpose assets once they are deployed to the cloud environment.



Repairing Log Shipping Plan Using Primary Differential Backup

clock May 22, 2024 11:12 by author TopDog

Have you ever had the scenario where your Log Shipping plan was interrupted?

If so, how did you repair the broken Log Shipping Plan?

These issues are common and regularly seen on the SSC forum: a Log Shipping plan is interrupted, and the standard recommendation is to completely re-initialise Log Shipping from full and log backups. The suggestion to resume the plan with a differential backup from the primary database was met with some scepticism, so this article seeks to explain why it works. Re-initialising a Log Shipping plan can be a massive task, especially when your database is a little on the large side, so any action that minimises the effort and downtime has to be considered.

OK, so you know the scenario:

The junior DBA took a log backup outside of the Log Shipping plan and this backup was not restored to the secondary database, instead it was deleted from its disk location and has been lost. Your pager starts bleeping and already the boss wants to know why no new data is being sent to the secondary reporting database!

You're about to re-initialise the whole Log Shipping plan, which is a real killer as this database is a monster! Then you stop, think, and remember: didn't I read somewhere that a differential backup will bridge that LSN gap? Well, you're right, and here's why it works.

 

Why Does the Differential Restore Work?

So, why does the differential restore work even when there are continual full backups occurring on the primary database?

The key here is the Log Shipping plan's three SQL Server Agent jobs that back up, copy and restore the transaction log backups, constantly replicating data from the primary to the secondary. Embedded within this 'replication' is the Differential Base LSN, which is kept in step with the primary database. Let's look at the steps involved in a typical Log Shipping process and the Differential Base LSN values at each point.
                                                       

Step No | Step Name | Primary Diff Base LSN | Secondary Diff Base LSN
1 | Log shipping implemented via backup and restore | 62000000061100036 | 62000000061100036
2 | First log backup runs, is copied and restored | 62000000061100036 | 62000000061100036
3 | Subsequent log backups copied and restored | 62000000061100036 | 62000000061100036
4 | Full backup occurs on primary | 62000000064400043 | 62000000061100036
5 | Log backup taken, copied and restored | 62000000064400043 | 62000000064400043

 

Now, what this means is that, as long as log backups are restored to the secondary, the Differential Base LSN remains in step, allowing you to bridge LSN gaps between the primary and secondary databases using a differential backup.
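You can check this for yourself: sys.master_files exposes the differential base for each data file. A sketch, using an illustrative database name, run against both the primary and the secondary:

```sql
-- Hedged sketch: compare the differential base LSN on primary and secondary.
-- Run against each server; the values should match for a differential backup
-- to be restorable on the secondary. Database name is illustrative.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_file_name,
       differential_base_lsn,
       differential_base_time
FROM sys.master_files
WHERE database_id = DB_ID(N'LogShippedDB')
  AND type = 0;   -- data files carry the differential base
```

If the two servers report the same differential_base_lsn, a differential backup from the primary will restore on the secondary.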

Looking at a typical backup configuration on your log shipped database, you could well have the following in place:

Backup Type | Frequency
Full | Every Sunday at 20:00
Differential | Every night Monday - Saturday at 20:00
Log | Via the LS plan, every 15 minutes during the day

 

Given the scenario above, regular log restores keep the secondary in step with the primary, and any break in the Log Shipping plan may be bridged by restoring a differential backup. The only caveat would be the following scenario:

  • Sunday 19:30 - the Log Shipping plan breaks; a log backup is missing and the restore job is now skipping backup files. No new data is being sent to the secondary.
  • Sunday 20:00 - the full backup occurs and completes successfully.
  • Monday 09:00 - the DBA comes into work and wishes they'd stayed at home.
  • Monday 09:15 - the DBA takes a differential backup of the primary and attempts to restore it to the secondary, only to receive an error dialog.

 

Why is this?

If the full backup occurs after Log Shipping has broken but before you take the differential backup, no further logs have been restored to the secondary, so the Differential Base LSNs are no longer synchronised and you will need to re-initialise Log Shipping!

So, as we have already discovered, it's the transaction logs shipped and restored to the secondary that keep the Differential Base LSNs in step.

The situation above can be avoided, to a certain extent, by having suitable monitoring and notification in place to alert you immediately when a Log Shipping plan breaks. BAU DBAs can then respond quickly and effect an immediate repair; you just need to educate them on the process to be used.
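The repair itself is short. A hedged sketch, with illustrative database name and backup path:

```sql
-- Hedged sketch of the repair; names and the UNC path are illustrative.
-- 1. On the primary: take a differential backup.
BACKUP DATABASE [LogShippedDB]
TO DISK = N'\\backupshare\LogShippedDB_bridge.dif'
WITH DIFFERENTIAL, INIT;

-- 2. On the secondary: restore it without recovery so log restores can resume.
RESTORE DATABASE [LogShippedDB]
FROM DISK = N'\\backupshare\LogShippedDB_bridge.dif'
WITH NORECOVERY;

-- 3. Re-run the copy and restore jobs; subsequent log backups restore
--    normally and the plan is back in step.
```

If the secondary is in STANDBY rather than NORECOVERY mode, restore with the appropriate STANDBY clause instead so read access is preserved.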

As always, test thoroughly in your sandbox\test environment, and if you're still stuck, post back and I'll help all I can.


