SQL_Guy

This is the SQL Server Blog for Perry Whittle

Creating and Using Storage Spaces Mount Points in SQL Server

July 1, 2024 10:33 by author TopDog

Further to my original post Creating and Using Mount Points in SQL Server, this post contains updated content using Windows Storage Spaces. Storage Spaces is a storage feature introduced in Windows Server 2012; the old third-party-derived Disk Management snap-in is superseded by Microsoft's own storage management tooling, although the original snap-in is still present even in Windows Server 2022. Although the storage can be easily managed via the Server Manager GUI, the intended management interface for the disk objects is PowerShell, and we'll look at the cmdlets in detail as we go through an example.

You should have already checked whether your planned SQL Server instance will support mount points, as not all versions provide this. With that in mind, let's briefly look at this now.

Validating SQL Server Compatibility

Check whether or not your chosen version of SQL Server will support mount points. Not all editions of SQL Server support the use of mount points; therefore, before configuring mount points and attempting to install SQL Server, you must refer to the following Microsoft KB article for further details: 

http://support.microsoft.com/kb/819546

In brief, versions of SQL Server 2005 and later fully support mount points for stand-alone and clustered environments.

What are Mount Points?

Anybody who has ever used a Unix system will have come across mount points before; they are fundamental to that operating system. Anything from a disk partition to a CD drive or floppy drive requires mounting before use, and these file systems are usually mounted under a folder beneath the root partition.

Within the Windows operating system we have the ability to perform similar actions with multiple Windows disks. We have 26 drive letters initially available for assignment; subtract from this the reserved set of drives (A, B, C) and you are commonly left with 23. This doesn't leave much scope, especially if you plan to use many data locations per instance. You may have multiple SQL Server instances and require multiple disk sets to spread I/O. For each instance you may originally plan to use separate assignments for

  • SQL Server data
  • SQL Server logs
  • SQL Server backups
  • SQL Server tempdb

That's 4 drive letters for a single instance. Mount points enable you to use a single drive letter for all of the chosen locations but still implement separate physical drives for each of the individual data locations. How is this possible?


Mount points are simply file system locations (in the form of folders) under which a physical disk partition may be mounted/referenced. You start with a root reference known as the root disk. The root disk can, and in fact should, be as small as possible; this will avoid multiple users creating data on the root disk and make administration easier. You must also bear in mind that all I/O destined for the mount point occurs on the mounted physical disk and not the root disk.

We'll go through actually creating mount points later in the article; for now, let's look at a little theory. We have a new SQL Server instance required and we need to separate the SQL Server files onto separate disks. Our basic example is as follows

Root disk 1 M:

 

Multiple mount points are then employed in the following configuration for our single instance. Remember, each of these mount points is actually a separate physical drive in the Server Manager File and Storage Services view.

M:\SQLData

M:\SQLLog

M:\SQLTempdb

M:\SQLBak

 

The instance may be installed using the paths below. To assign permissions we apply the ACLs at the MSSQL folder level on each mount point and they then propagate down to the child folders/files; a sketch of applying such an ACL follows the path list. Applying permissions at the root level M: or at the mount point folders themselves would not affect the mount point child folder/file structures.

M:\SQLData\MSSQL\Data

M:\SQLLog\MSSQL\Log

M:\SQLTempdb\MSSQL\TempDB

M:\SQLBak\MSSQL\Bak
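
As a minimal illustration of applying the ACL at the MSSQL folder level, the following could be run from an elevated PowerShell prompt. The domain service account name is an example only; on newer versions you would typically grant the per-service SID (for example NT Service\MSSQLSERVER) instead.

# Grant the SQL Server service account Full Control on the MSSQL folder; (OI)(CI) propagates the ACE to child folders and files
icacls "M:\SQLData\MSSQL" /grant "CONTOSO\svcSQLEngine:(OI)(CI)F"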

 

Going further in another example, you could employ multiple root disks and multiple instances of SQL Server in the following manner

Root disk 1 D:

Root disk 2 E:

Root disk 3 M:

 

You would have multiple mount points; each instance would have 4 separate physical disks mounted under

Instance 1 D:\INST1Data, D:\INST1Log, D:\INST1Tempdb, D:\INST1Bak

Instance 2 E:\INST2Data, E:\INST2Log, E:\INST2Tempdb, E:\INST2Bak

Instance 3 M:\INST3Data, M:\INST3Log, M:\INST3Tempdb, M:\INST3Bak

 

The SQL Server instances would be created along the following paths

D:\INST1Data\MSSQL\Data, D:\INST1Log\MSSQL\Log, D:\INST1Tempdb\MSSQL\TempDB, D:\INST1Bak\MSSQL\Backups

E:\INST2Data\MSSQL\Data, E:\INST2Log\MSSQL\Log, E:\INST2Tempdb\MSSQL\TempDB, E:\INST2Bak\MSSQL\Backups

M:\INST3Data\MSSQL\Data, M:\INST3Log\MSSQL\Log, M:\INST3Tempdb\MSSQL\TempDB, M:\INST3Bak\MSSQL\Backups

 

It's easy to see how management becomes more complex with multiple root disks and mounted volumes. It can't be stressed enough that documenting your setup as you go is more valuable than anything else (you do have a high-level design (HLD) and a low-level design (LLD) provided by the architect?). Concise documentation gives you, and anyone who follows you, a reference point for your system setup and can make troubleshooting easier too.


With Mount Points it's important to remember the following pointers

  • Do not install software to the root of the mount point. Rather, create and install into a folder under the mount point to allow the correct propagation of permissions.
  • Ideally, the root drive should not have any data created upon it, it is merely the root tree for the new volumes.
  • Permissions applied to the root drive file system and the mount point root folders do not propagate to the mount point file systems, they are controlled separately.
  • Mount points are not supported for all SQL Server 2000 configurations (in case you had a burning desire to use them). This is detailed in the relevant section above.

Windows requires online disks with formatted partitions before you are able to utilise mount points, so let's look at those next as they form the basis for mounting any partition/volume.

Choosing Partition Types

When initialising and before formatting your disks within Windows disk management, you should first decide on the disk type and partition type to be used.

In brief, the 2 possible disk types are Basic and Dynamic. Basic disks are the default disk type used by the Windows Operating System. Dynamic disks are rarely used within Windows and are really only useful for creating spanned volumes or software RAID systems. In reality, the Windows disk you are formatting likely already sits on an underlying RAID system (such as a local controller) or is a disk presented from a SAN or other network storage device. For this reason you should generally leave your disks as the Basic type, especially for cluster support.

As well as the disk type, there are also 2 different partition types that may be used. The 2 types available are MBR (Master Boot Record) and GPT (GUID Partition Table).

  • MBR partition types only allow a maximum of 4 partitions per disk (3 primary and 1 extended).
  • GPT partition types support very large disks, a maximum of 128 partitions, and are generally much more efficient and resistant to corruption.

However, if your disk is under 500GB and needs only 1 partition, I would recommend leaving it as MBR.

The following Microsoft KB article provides further details on Basic and Dynamic disks and also MBR and GPT partition types. You should digest this before creating disk partitions:

 http://msdn.microsoft.com/en-us/library/aa363785(VS.85).aspx

 

Windows Storage Spaces

Storage Spaces is a feature first introduced in Windows Server 2012. Storage Spaces takes groups of disks and uses a software RAID implementation to create highly redundant disk structures; it can be used for standalone Windows Servers or for clustered implementations.

For more info check this Microsoft link. 

To recap, the items you will require to create your mount points are;

  • A root disk (or disks) with a formatted partition and a drive letter assigned within Windows
  • A defined set of mount point folders to be created upon the root disk.
  • Disk devices available for use within Windows Storage Spaces.

Creating a Mount Point Demonstration.

We will now work through a short demonstration showing how to create mount points within Windows. For the demonstration we need a small root disk, the smaller the better (mine's 500MB, but it could/should be as small as a few MB).

Note: Remember, the root disk is merely a file system place holder for the mounted drives, it will only contain empty folders which are then used to mount the NTFS volumes. The temptation for users to dump files on the root drive is high and something I've seen a lot over the years.


The first requirement is to create a storage pool; this is a group of disks that will act as one array. The disks I have presented initially are shown below.



The following cmdlet is used to create the new disk pool. I have supplied default parameters for the provisioning type and resiliency setting, which will be inherited by each virtual disk and volume;
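
The original screenshot is not reproduced here, but a minimal sketch of the command would look something like the following. The pool name, and the Fixed/Simple defaults, are illustrative choices only.

# Gather the disks that are eligible for pooling
$poolDisks = Get-PhysicalDisk -CanPool $true

# Create the pool and set default provisioning and resiliency values to be inherited by new virtual disks
New-StoragePool -FriendlyName "SQLPool01" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $poolDisks `
    -ProvisioningTypeDefault Fixed `
    -ResiliencySettingNameDefault Simple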

 


To partition and format a volume on the root disk I use a single PowerShell cmdlet. You could use the individual cmdlets to achieve each step, but the following cmdlet completes the operations in a single request as shown below;
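
A sketch of that single request, assuming the pool name used above and the small F: root drive described in this demo, is:

# Create, partition and format the small root volume in one step and assign it the drive letter F:
New-Volume -StoragePoolFriendlyName "SQLPool01" `
    -FriendlyName "SQLRoot" `
    -Size 500MB `
    -DriveLetter F `
    -FileSystem NTFS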

 

On the root drive (F:) I have created the following folders; these folders currently display the typical Windows folder icon.


The next step is to create each new volume and specify an access path on the new root disk. This still uses the New-Volume cmdlet but with slightly different parameters. The cmdlet syntax shown below is executed once for each of the 4 new volumes (the FriendlyNames and AccessPaths change);
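
As a sketch, the first of the four volumes (the size and names are examples) might be created like this, mounting it to the folder rather than to a drive letter:

# Create the data volume and mount it under the F:\SQLData folder on the root drive
New-Volume -StoragePoolFriendlyName "SQLPool01" `
    -FriendlyName "SQLData" `
    -Size 100GB `
    -FileSystem NTFS `
    -AccessPath "F:\SQLData"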


Once you have created the 4 volumes and mounted them to the specific folders, you will see the following in Server Manager. Your mount points are now ready to use for a stand-alone SQL Server installation.

 

 

Once the folders have mount points the icons change as shown below.

 

Mount Points for Clustered Instances

For clustered scenarios the storage pools are managed by Storage Spaces Direct (S2D). This feature is enabled via PowerShell once the WSFC has been created and the storage has been presented to the nodes.
S2D claims all available and suitable disks on each node and creates a storage pool inside the WSFC. What does this mean? Let's look at a scenario for a 3 node cluster;

 

Each of the nodes above has 3 x 1TB disks attached to a SCSI adapter; they're local disks on the local controllers. When enabling S2D you will be asked to confirm that all disks are to be claimed and added to the storage pool name that you supply, as sketched below.
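
A minimal sketch of enabling the feature (the pool name is an example) is:

# Run on one cluster node once the WSFC exists; this claims all eligible local disks from every node into a single pool
Enable-ClusterStorageSpacesDirect -PoolFriendlyName "S2DPool01"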

Once the operation is complete, one large disk pool is created across all of the disks. To create the volumes and assign the paths, let's look through the example below;

First create the root drive. In the Server Manager storage blade, select Storage Pools and select the storage pool you created. Click Tasks under Virtual Disks and select New Virtual Disk:

 

Click OK to create the virtual disk on the storage pool.

 

Supply a virtual disk name and click Next.

 

 Specify the virtual disk size, this is the root drive so as small as possible, and click Next.

 

 Review the selections and click Create.

 

Once the Virtual Disk has been created you can close the wizard. If the check box to create a volume is left checked, the new volume wizard will start when the virtual disk wizard dialog closes.

 

The new volume wizard starts and first requires the new disk target. Select the new Virtual Disk as your target and click Next.


Confirm the size of the new volume and click Next.


Assign your drive letter and click Next.


Specify any volume settings and a label; for the root disk the filesystem and allocation unit size can be left at their defaults. Click Next.


Review the selections and click Create.



Once the new volume wizard completes successfully, close the dialog and review what you have on the Server Manager storage blade.

The steps to create the mount points are the same, with the exception of the Assign Drive Letter or Folder step. Below are the pop-out steps in this section of the wizard.
Select the option "The following folder" and open the folder browser. Within the "Select folder" dialog you are able to create and then select/assign a folder for your new volume, rather than a drive letter.

 

 

Once you have created the folder, select the appropriate folder and click Next.

 

 On the volume settings dialog, select your filesystem and allocation unit size along with your label and click Next.

 

Proceed with the rest of the wizard as you did for the root drive. That's really all that's required; check Server Manager for the newly created Virtual Disks and volumes and finally, open Failover Cluster Manager to view the new clustered disk objects. Install SQL Server using the section in my stairway to HA guide at this link.

This should give you more of an understanding of mount points and how they are set up. They operate just the same as any other folder, with the exception that permission grants are handled by each individual mount point and not by the root directory. Granting permissions at the root drive and pushing child inheritance will not affect the mount points. Still, they are essentially NTFS folders and they are also an excellent way to bypass the Windows 26 drive letter limitation.

As always, test this in an offline environment, virtual machines make a great test lab and are extremely easy to set up. Have fun and of course, post back if you have any troubles.



Restore database code

June 25, 2024 14:27 by author TopDog

USE [master]

GO

 

DECLARE @dbname VARCHAR(128)
,@TGTDBName SYSNAME = '' --'TestDB'
,@BackupPathandFile NVARCHAR(2048) =
--'\\server\Backup\folder\FULL\dbname_FULL_20220214_001157.bak'
'F:\Data\MSSQL14.CORP\MSSQL\Backup\agtest_full_20220215.bak'
,@DataFilePath VARCHAR(256) = ''
--@DataFilePath VARCHAR(256) = 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\DATA'
,@LogFilePath VARCHAR(256) = ''
--@LogFilePath VARCHAR(256) = 'C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\DATA'
,@Media VARCHAR(5) = 'disk'
,@Debug INT = 1
,@recover CHAR(1) = 'y'
,@replace INT = 1
,@RestoreCommand NVARCHAR(MAX)
,@SQLCommand NVARCHAR(MAX)

 

SET NOCOUNT ON

SET CONCAT_NULL_YIELDS_NULL OFF

 

--Does the backup path and file name exist?

--If not raise an error and exit the proc

DECLARE @result INT

IF UPPER(@Media) = 'DISK'

BEGIN

EXEC master.sys.xp_fileexist @BackupPathandFile, @result OUTPUT

IF @result = 0

BEGIN 

PRINT '--The file "' + @BackupPathandFile + '" does not exist, check path and filename!' + CHAR(10) + '--Ensure you have not specified a URL backup for a disk based path.' + CHAR(10)

RETURN

END

ELSE PRINT '--The file "' + @backuppathandfile + '" is present' + CHAR(10)

END

 

--Check the media type, if invalid return

IF UPPER(@Media) <> 'DISK'

BEGIN

PRINT 'Media type invalid, check the media type specified and re run the procedure'

RETURN

END

 

 

--Are the database file paths null or empty string?

--If they are get the default locations from the instance

DECLARE @DefaultData VARCHAR(256),@DefaultLog VARCHAR(256),@instancename VARCHAR(64),@instance VARCHAR(128),@prodmajver VARCHAR(4)

 

SELECT @prodmajver = CAST(SERVERPROPERTY('ProductMajorVersion') AS VARCHAR(4))

SELECT @Instance= 'SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL' + @prodmajver + '.' + 

CASE 

WHEN SERVERPROPERTY('InstanceName') IS NULL THEN 'MSSQLSERVER'

ELSE CAST(SERVERPROPERTY('InstanceName') AS VARCHAR(64))

END

+ '\MSSQLServer'

 

IF @DataFilepath = '' or @DataFilepath IS NULL

BEGIN

--EXEC [master].dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', @DefaultData OUTPUT

EXEC [master].dbo.xp_regread N'HKEY_LOCAL_MACHINE', @instance, N'DefaultData', @DefaultData OUTPUT

IF (@DefaultData IS NULL)

BEGIN

PRINT 'Check the DefaultData registry key is set in the SQL Server instance hive at:' + CHAR(10) + 'HKEY_LOCAL_MACHINE\' + @instance + CHAR(10) +

'and retry, or supply a valid datafile path to the procedure'

RETURN

END

ELSE SELECT @DataFilePath = @DefaultData

END 

 

IF @LogFilePath = '' OR @LogFilePath IS NULL

BEGIN

--EXEC [master].dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE', N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', @DefaultLog OUTPUT

EXEC [master].dbo.xp_regread N'HKEY_LOCAL_MACHINE', @instance, N'DefaultLog', @DefaultLog OUTPUT

IF (@DefaultLog IS NULL)

BEGIN

PRINT 'Check the DefaultLog registry key is set in the SQL Server instance hive at:' + CHAR(10) + 'HKEY_LOCAL_MACHINE\' + @instance + CHAR(10) +

'and retry, or supply a valid logfile path to the procedure'

RETURN

END

ELSE SELECT @LogFilePath = @DefaultLog

END

 

--Create a temporary table to store the results from the RESTORE FILELISTONLY command

IF OBJECT_ID('tempdb..#filelist', 'U') IS NOT NULL

BEGIN

DROP TABLE #filelist

END

 

CREATE TABLE #filelist (
[LogicalName] NVARCHAR(128)
,[PhysicalName] NVARCHAR(260)
,[Type] CHAR(1)
,[FileGroupName] NVARCHAR(128)
,[Size] NUMERIC(20,0)
,[MaxSize] NUMERIC(20,0)
,[FileID] INT
,[CreateLSN] NUMERIC(25,0)
,[DropLSN] NUMERIC(25,0) NULL
,[UniqueID] UNIQUEIDENTIFIER
,[ReadOnlyLSN] NUMERIC(25,0) NULL
,[ReadWriteLSN] NUMERIC(25,0) NULL
,[BackupSizeInBytes] BIGINT
,[SourceBlockSize] INT
,[FileGroupID] INT
,[LogGroupGUID] UNIQUEIDENTIFIER NULL
,[DifferentialBaseLSN] NUMERIC(25,0) NULL
,[DifferentialBaseGUID] UNIQUEIDENTIFIER
,[IsReadOnly] BIT
,[IsPresent] BIT
,[TDEThumbprint] VARBINARY(32)
,[SnapshotURL] VARCHAR(1024)
)

 

--Create a temporary table to store the results from the RESTORE HEADERONLY command

IF OBJECT_ID('tempdb..#filehead', 'U') IS NOT NULL

BEGIN

DROP TABLE #filehead

END

 

CREATE TABLE #filehead (
BackupName VARCHAR(128)
,BackupDescription VARCHAR(255)
,BackupType SMALLINT
,ExpirationDate DATETIME
,Compressed BIT
,Position SMALLINT
,DeviceType TINYINT
,UserName VARCHAR(128)
,ServerName VARCHAR(128)
,DatabaseName VARCHAR(128)
,DatabaseVersion INT
,DatabaseCreationDate DATETIME
,BackupSize NUMERIC(20,0)
,FirstLSN NUMERIC(25,0)
,LastLSN NUMERIC(25,0)
,CheckpointLSN NUMERIC(25,0)
,DatabaseBackupLSN NUMERIC(25,0)
,BackupStartDate DATETIME
,BackupFinishDate DATETIME
,SortOrder SMALLINT
,[CodePage] SMALLINT
,UnicodeLocaleId INT
,UnicodeComparisonStyle INT
,CompatibilityLevel TINYINT
,SoftwareVendorId INT
,SoftwareVersionMajor INT
,SoftwareVersionMinor INT
,SoftwareVersionBuild INT
,MachineName VARCHAR(128)
,Flags INT
,BindingID UNIQUEIDENTIFIER
,RecoveryForkID UNIQUEIDENTIFIER
,Collation VARCHAR(128)
,FamilyGUID UNIQUEIDENTIFIER
,HasBulkLoggedData BIT
,IsSnapshot BIT
,IsReadOnly BIT
,IsSingleUser BIT
,HasBackupChecksums BIT
,IsDamaged BIT
,BeginsLogChain BIT
,HasIncompleteMetaData BIT
,IsForceOffline BIT
,IsCopyOnly BIT
,FirstRecoveryForkID UNIQUEIDENTIFIER
,ForkPointLSN NUMERIC(25,0)
,RecoveryModel VARCHAR(60)
,DifferentialBaseLSN NUMERIC(25,0)
,DifferentialBaseGUID UNIQUEIDENTIFIER
,BackupTypeDescription VARCHAR(60)
,BackupSetGUID UNIQUEIDENTIFIER
,CompressedBackupSize BIGINT
,containment TINYINT
,KeyAlgorithm VARCHAR(32)
,EncryptorThumbprint VARBINARY(20)
,EncryptorType VARCHAR(32)
)

 

SET @RestoreCommand = 'RESTORE HEADERONLY FROM ' + UPPER(@Media) + '=''' + @BackupPathandFile + ''''

 

BEGIN TRY

INSERT INTO #filehead

EXEC (@RestoreCommand)

IF @Debug = 1

BEGIN

PRINT '--DEBUG MSG: The header restore command used is;' + CHAR(13) + @RestoreCommand + CHAR(10)

END

END TRY

BEGIN CATCH

PRINT 'Error restoring the backup header, check the media type matches the backup file path\name specified'

END CATCH

 

SELECT @dbname = databasename FROM #filehead

IF @Debug = 1

PRINT '--The database name from the backup file is [' + @dbname + ']' + CHAR(13)

 

/* fetch list of database files from source backup file */

SET @RestoreCommand = ''

SET @RestoreCommand = 'RESTORE FILELISTONLY FROM ' + UPPER(@Media) + '=''' + @BackupPathandFile + ''''

 

BEGIN TRY

INSERT INTO #filelist

EXEC (@RestoreCommand)

IF @Debug = 1

BEGIN

PRINT '--DEBUG MSG: The filelist restore command used is;' + CHAR(13) + @RestoreCommand + CHAR(10)

END

END TRY

BEGIN CATCH

PRINT 'Error restoring filelist, check the media type matches the backup file path\name specified'

END CATCH

 

 

IF EXISTS (SELECT 1 FROM #filelist WHERE [type] = 'S')

BEGIN

IF (SELECT [value] FROM sys.configurations WHERE [name] = N'filestream access level') = 0

BEGIN

PRINT '--Filestream db detected, check filestream is enabled'

IF @Debug = 1

BEGIN

PRINT '--DEBUG MSG: Filestream is not enabled, enable FileStream and retry' + CHAR(10)

RETURN

END

END

END

 

--Restoring to same db or to new name?

--If the target name is null or empty string get the dbname from the backupfile header

IF @TGTDBName IS NULL OR @TGTDBName = ''

BEGIN

IF @Debug = 1

PRINT '--DEBUG MSG: Target name not supplied, using name ' + QUOTENAME(@dbname) + ' from the backupset' + CHAR(10) + CHAR(13)

SELECT @TGTDBName = @dbname

END

ELSE

BEGIN

IF @Debug = 1

PRINT '--DEBUG MSG: Target name supplied, using name ' + QUOTENAME(@TGTDBName) + CHAR(10) --+ CHAR(13)

END

 

 

--If the existing database is online, take it offline first to kill connections

IF DATABASEPROPERTYEX(@TGTDBName,'Status') = 'ONLINE'

BEGIN

SET @SQLCommand = 'ALTER DATABASE [' + @TGTDBName + '] SET OFFLINE WITH ROLLBACK IMMEDIATE' + CHAR(10) + CHAR(13)

 

IF @Debug = 0

BEGIN

EXEC (@SQLCommand)

END

ELSE PRINT '--DEBUG MSG: ' + @SQLCommand

END

 

/* Construct the restore command */

SELECT @RestoreCommand = ''

SET @RestoreCommand = 'RESTORE DATABASE [' + @TGTDBName + '] FROM ' + UPPER(@Media) + '=N''' + @BackupPathandFile + ''' WITH ' + CHAR(10)

 

--Data and Log files

SELECT @RestoreCommand += 'MOVE N''' + LogicalName + 

CASE [type]

WHEN 'D' THEN ''' TO N''' + @DataFilePath + '\' 

WHEN 'L' THEN ''' TO N''' + @LogFilePath + '\' 

END  + 

REPLACE(REVERSE(SUBSTRING(REVERSE(PhysicalName),0, CHARINDEX('\', REVERSE(PhysicalName), 0))), @dbname, @TGTDBName)

+ ''',' + CHAR(10) 

FROM #filelist 

WHERE type <> 'S'

ORDER BY [Type],[FileID]

 

 

--Get the FileStream paths too if there are any

SELECT @RestoreCommand += 'MOVE N''' + LogicalName + 

CASE type

WHEN 'S' THEN ''' TO N''' + @DataFilePath + '\'

END +

REPLACE(REVERSE(SUBSTRING(REVERSE(PhysicalName),0, CHARINDEX('\', REVERSE(PhysicalName), 0))), @dbname, @TGTDBName)

+ ''',' + CHAR(10)

FROM #filelist

WHERE type = 'S'

 

SELECT @RestoreCommand +=

CASE @replace 

WHEN 0 THEN 'STATS=10'

ELSE 'REPLACE,STATS=10'

END +

CASE UPPER(@recover)

WHEN 'Y' THEN ''

WHEN 'N' THEN ',norecovery'

END

 

/* Perform the restore */

PRINT '--Restore command is: ' + CHAR(10) + @RestoreCommand

IF @Debug = 0

BEGIN TRY

EXEC (@RestoreCommand)

END TRY

BEGIN CATCH

PRINT 'Restore failed, check the SQL Server error log for more detail'

END CATCH

 


 

/* Tidy up */

DROP TABLE #filelist

DROP TABLE #filehead

GO



Listing server logins holding roles and server permissions

June 17, 2024 23:26 by author TopDog

The following script will list all server principals and any elevated roles or permissions they hold.
The script is valid for SQL Server instances and Azure SQL Managed Instances.

 

SELECT 	sp.name										AS ServerPrincipal, 
	sp.type_desc									AS LoginType,
	sp.type,
	CASE sp.is_disabled
	WHEN 0 THEN 'No'
	WHEN 1 THEN 'Yes'
	END												AS UserDisabled, 
	sp.create_date									AS DateCreated, 
	sp.modify_date									AS DateModified,
	sp.default_database_name						AS DefaultDB, 
	sp.default_language_name						AS DefaultLang,
	ISNULL(STUFF((SELECT ',' + 
		CASE ssp22.name
		WHEN 'sysadmin' THEN ssp22.name + ' "Danger Will Robinson"'
		ELSE ssp22.name
		END		   
		FROM sys.server_principals ssp2 
		INNER JOIN sys.server_role_members ssrm2 
		ON ssp2.principal_id = ssrm2.member_principal_id
		INNER JOIN sys.server_principals ssp22
		ON ssrm2.role_principal_id = ssp22.principal_id
		WHERE ssp2.principal_id = sp.principal_id
		ORDER BY ssp2.name	 
		FOR XML PATH(N''), TYPE).value(N'.[1]', 
		N'nvarchar(max)'), 1, 1, N''), 'NoRolesHeld')			AS ListofServerRoles,		
	ISNULL(STUFF((SELECT ';' + ' Permission [' + sspm3.permission_name + '] is [' +
		CASE 
		WHEN sspm3.state_desc LIKE 'GRANT%' THEN 'Granted]'
		WHEN sspm3.state_desc = 'DENY' THEN 'Denied]'
		END AS PermGrants
		FROM sys.server_principals ssp3 
		INNER JOIN sys.server_permissions sspm3 
		ON ssp3.principal_id = sspm3.grantee_principal_id		
		WHERE sspm3.class = 100 AND 
		sspm3.grantee_principal_id = sp.principal_id AND
		sspm3.permission_name <> 'CONNECT SQL'
		FOR XML PATH(N''), TYPE).value(N'.[1]', 
		N'nvarchar(max)'), 1, 1, N''), 'NoServerPermissions') 
		+ ' in Server::' + @@ServerName + ''					AS PermGrants	
						
FROM sys.server_principals sp 
WHERE sp.type <> 'R' AND sp.name NOT LIKE '##%##' 
ORDER BY ServerPrincipal


Guide to Azure SQL Database

May 31, 2024 16:30 by author TopDog

Welcome to my blog; in this post I want to discuss Azure SQLDB.

There's a fair level of misunderstanding around this Azure PaaS feature, and questions around it have recently appeared on the forums, so this post will seek to clarify what the feature is and what it can do.

What is SQLDB?

Azure SQLDB is a single instance of a SQL Server database deployed within a subscription in the Azure Cloud.
The platform has different service tiers and redundancy levels to suit different performance and resilience requirements. The tiers are based on either vCore or DTU provisioning and are summarised below;

Service Tier | HA model | Locally Redundant | Zone Redundant
General Purpose (vCore) | Remote Storage | Yes | Yes
Business Critical (vCore) | Local Storage | Yes | Yes
Hyperscale (vCore) | Hyperscale | Yes | Yes
Basic (DTU) | Remote Storage | Yes | No
Standard (DTU) | Remote Storage | Yes | No
Premium (DTU) | Local Storage | Yes | Yes

 

vCore

This tier level represents virtual CPU core provisioning much like a Virtual Machine. Logical CPUs are defined and in turn dictate the amount of memory available and the maximum storage size. Sizing details are shown below;

Use case | General Purpose | Business Critical | Hyperscale
Best for | Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options. | Offers business applications the highest resilience to failures by using several high availability secondary replicas, and provides the highest I/O performance. | The widest variety of workloads, including those workloads with highly scalable storage and read-scale requirements. Offers higher resilience to failures by allowing configuration of more than one high availability secondary replica.
Compute size | 2 to 128 vCores | 2 to 128 vCores | 2 to 128 vCores
Storage type | Premium remote storage (per instance) | Super-fast local SSD storage (per instance) | Decoupled storage with local SSD cache (per compute replica)
Storage size | 1 GB – 4 TB | 1 GB – 4 TB | 10 GB – 100 TB
IOPS | 320 IOPS per vCore with 16,000 maximum IOPS | 4,000 IOPS per vCore with 327,680 maximum IOPS | 327,680 IOPS with max local SSD (Hyperscale is a multi-tiered architecture with caching at multiple levels; effective IOPS depend on the workload)
Memory/vCore | 5.1 GB | 5.1 GB | 5.1 GB or 10.2 GB
Backups | A choice of geo-redundant, zone-redundant, or locally redundant backup storage, 1-35 day retention (default 7 days); long-term retention available up to 10 years | A choice of geo-redundant, zone-redundant, or locally redundant backup storage, 1-35 day retention (default 7 days); long-term retention available up to 10 years | A choice of locally redundant (LRS), zone-redundant (ZRS), or geo-redundant (GRS) storage; 1-35 days (7 days by default) retention, with up to 10 years of long-term retention available
Availability | One replica, no read-scale replicas, zone-redundant high availability (HA) | Three replicas, one read-scale replica, zone-redundant high availability (HA) | Zone-redundant high availability (HA)
Pricing/billing | vCore, reserved storage, and backup storage are charged; IOPS aren't charged | vCore, reserved storage, and backup storage are charged; IOPS aren't charged | vCore for each replica and used storage are charged; IOPS aren't charged
Discount models | Reserved instances; Azure Hybrid Benefit (not available on dev/test subscriptions); Enterprise and Pay-As-You-Go Dev/Test subscriptions | Reserved instances; Azure Hybrid Benefit (not available on dev/test subscriptions); and Pay-As-You-Go Dev/Test subscriptions | Azure Hybrid Benefit (not available on dev/test subscriptions); Enterprise and Pay-As-You-Go Dev/Test subscriptions

 

DTU

The Database Transaction Unit tier level represents a unit of work grouped by memory, CPU and I/O requirements. The DTU model also has a maximum storage size, which is generally less than the vCore options.

Feature | Basic | Standard | Premium
Target workload | Development and production | Development and production | Development and production
Uptime SLA | 99.99% | 99.99% | 99.99%
Backup | A choice of geo-redundant, zone-redundant, or locally redundant backup storage, 1-7 day retention (default 7 days); long-term retention available up to 10 years | A choice of geo-redundant, zone-redundant, or locally redundant backup storage, 1-35 day retention (default 7 days); long-term retention available up to 10 years | A choice of locally redundant (LRS), zone-redundant (ZRS), or geo-redundant (GRS) storage; 1-35 days (7 days by default) retention, with up to 10 years of long-term retention available
CPU | Low | Low, Medium, High | Medium, High
IOPS (approximate) | 1-4 IOPS per DTU | 1-4 IOPS per DTU | >25 IOPS per DTU
IO latency (approximate) | 5 ms (read), 10 ms (write) | 5 ms (read), 10 ms (write) | 2 ms (read/write)
Columnstore indexing | N/A | Standard S3 and higher | Supported
In-memory OLTP | N/A | N/A | Supported
Maximum storage size | 2 GB | 1 TB | 4 TB
Maximum DTUs | 5 | 3000 | 4000



Elastic Pools and Limits

You can also configure a feature called "Elastic Pools"; these are pools configured with a set amount of compute and storage, and the databases within the pool use the provisioned resource as and when they require it.

To learn more, review Resource limits for pooled databases.

Limit | Basic | Standard | Premium
Maximum storage size per database | 2 GB | 1 TB | 1 TB
Maximum storage size per pool | 156 GB | 4 TB | 4 TB
Maximum eDTUs per database | 5 | 3000 | 4000
Maximum eDTUs per pool | 1600 | 3000 | 4000
Maximum number of databases per pool | 500 | 500 | 100

 

Logical Server

The logical server is merely a connection management point for the Azure SQL databases that it manages. There is no need to configure server principals or server roles/permissions at the virtual master database level. The server name is chosen during the deployment process, must be globally unique, and takes the following form;

yourservername.database.windows.net


The network connectivity and connection policy are also specified during the deployment process. We can see the chosen options on the create database blade below.

 

 

Of note from the above screenshot is the connection policy. The default uses either Proxy or Redirect, based upon where the traffic is initiated.

A connection from the public internet to the Public Endpoint would be proxied through one of the Azure SQLDB gateways for that region. This has the potential for high latency and is generally not recommended.

A connection from a private endpoint (which could be an extension of your on-premise network) would be redirected directly to the host servicing the database.
There are also firewall rules, configured firstly at the logical server level and then at the database level; the 2 stored procedures used for these are "sp_set_firewall_rule" and "sp_set_database_firewall_rule". Of course, you can also set these rules on the blade for the logical server and via TSQL in the database(s), as sketched after the list below.

For connections that come in;

  • via the internet, the database rules for the incoming connection are checked first; if a match for the range is found the connection is granted.
  • If no database rule is matched, the server rules are enumerated; if a server-level rule matches, the connection is granted.
  • If there is no server rule match, the connection fails.


Just remember that server-level rules cover all databases on the logical server, so for security reasons consider which databases the connections actually require access to before choosing where to define the rule.

For Azure-based connections an open rule is configured, and this is the only time this is recommended; the start and end IP addresses for this rule are 0.0.0.0 - 0.0.0.0.
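
As a brief hedged sketch of the two procedures (the rule names and address ranges are examples only), the server-level rules are created in the virtual master database and the database-level rule in the user database itself:

-- Server-level rule, run in the virtual master database
EXECUTE sp_set_firewall_rule
    @name = N'OfficeRange',
    @start_ip_address = '203.0.113.1',
    @end_ip_address = '203.0.113.254';

-- Database-level rule, run in the user database
EXECUTE sp_set_database_firewall_rule
    @name = N'ReportingClient',
    @start_ip_address = '203.0.113.50',
    @end_ip_address = '203.0.113.50';

-- The open "allow Azure services" style rule described above
EXECUTE sp_set_firewall_rule
    @name = N'AllowAllWindowsAzureIps',
    @start_ip_address = '0.0.0.0',
    @end_ip_address = '0.0.0.0';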

Logical Server Permissions

Logical server permissions and roles have changed quite a bit for Azure SQLDB. There is no need to (and indeed you shouldn't) create logins at the virtual server level; administrators are defined during the logical server deployment and typically consist of a SQL Server admin account and a Microsoft Entra user or group. The only requirement for server-level and database-level administrator rights would be to configure the firewall rules mentioned above; apart from that no users should hold any elevated roles or permissions.
The old server roles we've all been used to are gone, along with most of the server-level securables. In fact, the system catalogs that tracked these are now gone from the virtual master database, and any attempt to grant a legacy role or permission fails with the errors shown below.

 

There is now, a new set of fixed server roles and database roles. The server roles consist of the following;

Fixed server-level role | Description
##MS_DatabaseConnector## Members of the ##MS_DatabaseConnector## fixed server role can connect to any database without requiring a User-account in the database to connect to.

To deny the CONNECT permission to a specific database, users can create a matching user account for this login in the database and then DENY the CONNECT permission to the database-user. This DENY permission overrules the GRANT CONNECT permission coming from this role.
##MS_DatabaseManager## Members of the ##MS_DatabaseManager## fixed server role can create and delete databases. A member of the ##MS_DatabaseManager## role that creates a database, becomes the owner of that database, which allows that user to connect to that database as the dbo user. The dbo user has all database permissions in the database. Members of the ##MS_DatabaseManager## role don't necessarily have permission to access databases that they don't own. You should use this server role over the dbmanager database level role that exists in master.
##MS_DefinitionReader## Members of the ##MS_DefinitionReader## fixed server role can read all catalog views that are covered by VIEW ANY DEFINITION, respectively VIEW DEFINITION on any database on which the member of this role has a user account.
##MS_LoginManager## Members of the ##MS_LoginManager## fixed server role can create and delete logins. You should use this server role over the loginmanager database level role that exists in master.
##MS_SecurityDefinitionReader## Members of the ##MS_SecurityDefinitionReader## fixed server role can read all catalog views that are covered by VIEW ANY SECURITY DEFINITION, and respectively has VIEW SECURITY DEFINITION permission on any database on which the member of this role has a user account. This is a small subset of what the ##MS_DefinitionReader## server role has access to.
##MS_ServerStateManager## Members of the ##MS_ServerStateManager## fixed server role have the same permissions as the ##MS_ServerStateReader## role. Also, it holds the ALTER SERVER STATE permission, which allows access to several management operations, such as: DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE ('ALL'), DBCC SQLPERF();
##MS_ServerStateReader## Members of the ##MS_ServerStateReader## fixed server role can read all dynamic management views (DMVs) and functions that are covered by VIEW SERVER STATE, respectively VIEW DATABASE STATE on any database on which the member of this role has a user account.

 

The database roles are now listed as;

Role name | Description
dbmanager Can create and delete databases. A member of the dbmanager role that creates a database, becomes the owner of that database, which allows that user to connect to that database as the dbo user. The dbo user has all database permissions in the database. Members of the dbmanager role don't necessarily have permission to access databases that they don't own.
db_exporter Applies only to Azure Synapse Analytics dedicated SQL pools (formerly SQL DW).
Members of the db_exporter fixed database role can perform all data export activities. Permissions granted via this role are CREATE TABLE, ALTER ANY SCHEMA, ALTER ANY EXTERNAL DATA SOURCE, ALTER ANY EXTERNAL FILE FORMAT.
loginmanager Can create and delete logins in the virtual master database.

 

Again, you should only use these roles when a specific need arises and they should not be used freely just because they exist.

Authorisation is handled directly by the database the users connect to, much in the same way contained databases work in on-premises environments. No server-level interaction is required and security best practices should be followed. Role Based Access Control should be employed to ensure only the permissions required to carry out business tasks are granted, for example:
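
A minimal sketch of database-scoped users (the user names, password and role are illustrative):

-- Contained SQL authentication user; no login exists in the virtual master database
CREATE USER [ReportReader] WITH PASSWORD = 'Str0ng!Passw0rd1';

-- Microsoft Entra based user or group, also scoped to the database
CREATE USER [app-team@contoso.com] FROM EXTERNAL PROVIDER;

-- Grant least-privilege access through a database role
ALTER ROLE db_datareader ADD MEMBER [ReportReader];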
Microsoft Defender for Cloud brings rich configuration checking for Azure SQL DB and can alert when conditions are breached, for example when a user is added to an elevated role or granted elevated permissions outside of a known role/permission set.
That's it for this basic guide. It should help clear up some misunderstandings around the platform, and I would urge you to read up in detail on the topics discussed in this blog post.



Repairing Log Shipping Plan Using Primary Differential Backup

May 22, 2024 11:12 by author TopDog

Have you ever had the scenario where your Log Shipping plan was interrupted?

If so, how did you repair the broken Log Shipping Plan?

These issues are common and regularly seen on the SSC forum, whereby Log Shipping plans have been interrupted and the recommendation has been to completely re-initialise Log Shipping from Full and Log backups. The suggestion to resume the Log Shipping plan with a differential backup from the primary database was met with some scepticism, so this article seeks to address this and explain why it works. Having to re-initialise a Log Shipping plan can be a massive task, especially when your database is a little on the large side, so any action that can minimise the effort and downtime has to be considered.

Ok, so you know the Scenario;

The junior DBA took a log backup outside of the Log Shipping plan and this backup was not restored to the secondary database; instead it was deleted from its disk location and has been lost. Your pager starts bleeping and already the boss wants to know why no new data is being sent to the secondary reporting database!

You're about to re-initialise the whole Log Shipping plan, which is a real killer as this database is a monster! Then you stop, you think and remember: didn't I read somewhere that a differential backup will bridge that LSN gap? Well, you're right, and here's why it works.

 

Why Does the Differential Restore Work?

So, why does the differential restore work even when there are continual full backups occurring on the primary database?

The key here is the Log Shipping plan's 3 SQL Server Agent jobs that back up, copy and restore the transaction log backups and constantly replicate data from the primary to the secondary. Embedded within this 'replication' is the Differential Base LSN, which is incremented in line with the primary database. Let's look at the following steps involved in the typical Log Shipping process and the Differential Base LSN points that are created.

Step No | Step Name | Primary Diff BaseLSN | Secondary Diff BaseLSN
1 | Log shipping implemented via backup and restore | 62000000061100036 | 62000000061100036
2 | First log backup runs, is copied and restored | 62000000061100036 | 62000000061100036
3 | Subsequent log backups copied and restored | 62000000061100036 | 62000000061100036
4 | Full backup occurs on primary | 62000000064400043 | 62000000061100036
5 | Log backup taken, copied and restored | 62000000064400043 | 62000000064400043

 

Now, what this means is that, as long as log backups are restored to the secondary, the Differential Base LSNs remain in step, allowing you to bridge LSN gaps between the primary and the secondary databases using a differential backup, as sketched below.
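
A minimal hedged sketch of the repair (the database name and backup path are examples) is:

-- On the primary: take a differential backup
BACKUP DATABASE [BigDB]
TO DISK = N'\\backupserver\share\BigDB_bridge.dif'
WITH DIFFERENTIAL, INIT;

-- On the secondary: restore it WITH NORECOVERY (or STANDBY) so log restores can continue
RESTORE DATABASE [BigDB]
FROM DISK = N'\\backupserver\share\BigDB_bridge.dif'
WITH NORECOVERY;

-- Re-enable the Log Shipping copy and restore jobs; subsequent log backups now restore past the gap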

Looking at your typical backup configuration on your Log Shipped database you could well have the following in place

Backup Type | Frequency
Full | Every Sunday 20:00pm
Differential | Every night Monday - Saturday 20:00pm
Log | Via LS plan every 15 minutes during the day

 

Given the scenario above, regular log restores will keep the secondary in step with the primary, and any break in the Log Shipping plan may be bridged by restoring a differential backup. The only caveat here would be the following scenario;

  • Sunday 19:30pm the Log Shipping plan breaks, a log backup is missing and the restore job is now skipping backup files. No new data is being sent to the secondary
  • Sunday 20:00pm the Full backup occurs and completes successfully
  • Monday 09:00am the DBA comes into work and wishes they'd stayed at home
  • Monday 09:15am the DBA takes a differential backup of the primary and attempts to restore to the secondary but receives the following error message dialog.

 

Why is this?

If the full backup occurs after Log Shipping has broken but before you take the differential backup, no further logs have been restored and the Differential Base LSNs are no longer synchronised; you will need to re-initialise Log Shipping!

So, as we have already discovered, it's the transaction logs shipped and restored to the secondary that keep the Differential Base LSNs in step.

The situation above can be avoided to a certain extent by having suitable monitoring and notification to alert you immediately once a Log Shipping plan breaks. BAU DBAs can respond quicker to the alert and effect an immediate repair. You now just need to educate them on the process to be used.

As always, test thoroughly in your sandbox/test environment, and if you're still stuck, post back and I'll help all I can.



Creating and Using Mount Points in SQL Server

May 22, 2024 11:05 by author TopDog

Mount points seem to be the subject of much confusion on various SQL Server forums. In light of this, the following post will seek to detail exactly how mount points are supposed to work and provide a demonstration on how to create mount points within Windows.

You should have already checked whether your planned SQL Server instance will support mount points, as not all versions provide this. With that in mind, let's briefly look at this now.

Validating SQL Server Compatibility

Check whether or not your chosen version of SQL Server will support mount points. Not all editions of SQL Server support the use of mount points; therefore, before configuring mount points and attempting to install SQL Server, you must refer to the following Microsoft KB article for further details: 

http://support.microsoft.com/kb/819546

In brief, versions of SQL Server 2005 and later fully support mount points for stand-alone and clustered environments.

What are Mount Points?

Anybody who has ever used a Unix system will have come across mount points before; they are fundamental to that operating system. Anything from a disk partition to a CD drive or floppy drive requires mounting before use, and these file systems are usually mounted under a folder beneath the root partition.

Within the Windows operating system we have the ability to perform similar actions with multiple Windows disks. We have 26 drive letters initially available for assignment; subtract from this the reserved set of drives (A, B, C) and you are commonly left with 23. This doesn't leave much scope, especially if you plan to use many data locations per instance. You may have multiple SQL Server instances and require multiple disk sets to spread I/O. For each instance you may originally plan to use separate assignments for

  • SQL Server data
  • SQL Server logs
  • SQL Server backups
  • SQL Server tempdb

That's 4 drive letters for a single instance. Mount points enable you to use a single drive letter for all of the chosen locations but still implement separate physical drives for each of the individual data locations. How is this possible?


Mount points are simply file system locations (in the form of folders) under which a physical disk partition may be mounted/referenced. You start with a root reference known as the root disk. The root disk can, and in fact should, be as small as possible; this will avoid multiple users creating data on the root disk and make administration easier. You must also bear in mind that all I/O destined for the mount point occurs on the mounted physical disk and not the root disk.

We'll go through actually creating mount points later in the article; for now, let's look at a little theory. Our basic example is as follows

Root disk 1 M:      (Windows disk 2)

 

Multiple mount points are employed in the following configuration for a single instance. Remember, each of these mount points is actually a separate physical drive in the Windows Disk Management snap-in.

M:\SQLData         (Windows disk 3)

M:\SQLLog           (Windows disk 4)

M:\SQLTmp          (Windows disk 5)

M:\SQLBak           (Windows disk 6)

 

The instance may be installed using the paths below. To assign permissions we apply the ACLs at the MSSQL folder level and they then propagate down to the child folders/files. Applying permissions at the root level M: or at the mount point folders themselves would not affect the mount point child file structures.

M:\SQLData\MSSQL\Data

M:\SQLLog\MSSQL\Log

M:\SQLTmp\MSSQL\TempDB

M:\SQLBak\MSSQL\Bak

 

Going further in another example, you could employ multiple root disks and multiple instances of SQL Server in the following manner

Root disk 1 D:           (Windows disk 2)

Root disk 2 E:           (Windows disk 3)

Root disk 3 M:           (Windows disk 4)

 

You would have multiple mount points; each instance would have 4 separate physical disks mounted under

Instance 1 D:\INST1Data, D:\INST1Log, D:\INST1Tempdb, D:\INST1Bak     (Windows disks 6 - 9)

Instance 2 E:\INST2Data, E:\INST2Log, E:\INST2Tempdb, E:\INST2Bak    (Windows disks 10 - 13)

Instance 3 M:\INST3Data, M:\INST3Log, M:\INST3Tempdb, M:\INST3Bak    (Windows disks 14 - 17)

 

The SQL Server instances would be created along the following paths

D:\INST1Data\MSSQL\Data, D:\INST1Log\MSSQL\Log, D:\INST1Tempdb\MSSQL\TempDB, D:\INST1Bak\MSSQL\Backups

E:\INST2Data\MSSQL\Data, E:\INST2Log\MSSQL\Log, E:\INST2Tempdb\MSSQL\TempDB, E:\INST2Bak\MSSQL\Backups

M:\INST3Data\MSSQL\Data, M:\INST3Log\MSSQL\Log, M:\INST3Tempdb\MSSQL\TempDB, M:\INST3Bak\MSSQL\Backups

 

It's easy to see how management becomes more complex with multiple root disks and mounted volumes. It can't be stressed enough that documenting your setup as you go is more valuable than anything else (you do have a high-level design (HLD) and a low-level design (LLD) provided by the architect?). Concise documentation gives you, and anyone who follows you, a reference point for your system setup and can make troubleshooting easier too.


With Mount Points it's important to remember the following pointers

  • Do not install software to the root of the mount point. Rather, create and install into a folder under the mount point to allow the correct propagation of permissions.
  • Ideally, the root drive should not have any data created upon it, it is merely the root tree for the new volumes.
  • Permissions applied to the root drive file system and the mount point root folders do not propagate to the mount point file systems, they are controlled separately.
  • Mount points are not supported for all SQL Server 2000 configurations (in case you had a burning desire to use them). This is detailed in the relevant section above.

Windows requires online disks with formatted partitions before you are able to utilise mount points, so let's look at those next as they form the basis for mounting any partition/volume.

Setting up the New Partitions

When initialising and before formatting your disks within Windows disk management, you should first decide on the disk type and partition type to be used.

In brief, the 2 possible disk types are Basic and Dynamic. Basic disks are the default disk type used by the Windows Operating System. Dynamic disks are rarely used within Windows and are really only useful for creating spanned volumes or software RAID systems. In reality, the Windows disk you are formatting likely already sits on an underlying RAID system (such as a local controller) or is a disk presented from a SAN or other network storage device. For this reason you should generally leave your disks as the Basic type, especially for cluster support.

As well as the disk type, there are also 2 different partition types that may be used. The 2 types available are MBR (Master Boot Record) and GPT (GUID Partition Table).

  • MBR partition types only allow a maximum of 4 partitions per disk (3 primary and 1 extended).
  • GPT partition types support very large disks, a maximum of 128 partitions, and are generally much more efficient and resistant to corruption.

However, if your disk is under 500GB and needs only 1 partition, I would suggest leaving it as MBR.

The following Microsoft KB article provides further details on Basic and Dynamic disks and also MBR and GPT partition types. You should digest this before creating disk partitions:

 http://msdn.microsoft.com/en-us/library/aa363785(VS.85).aspx

 

To recap, the items you will require to create your mount points are;

  • A root disk (or disks) with a formatted partition and a drive letter assigned within Windows
  • A defined set of mount point folders to be created upon the root disk.
  • Disk devices available for use within Windows disk management (be they local or SAN\NAS attached).

Creating a Mount Point Demonstration.

We will now work through a short demonstration showing how to create mount points within Windows. For the demonstration we need a small root disk, the smaller the better (mine's 500MB, but it could/should be as small as a few MB).

Note: Remember, the root disk is merely a file system place holder for the mounted drives, it will only contain empty folders which are then used to mount the NTFS volumes. The temptation for users to dump files on the root drive is high and something I've seen a lot over the years.


My partitioned and formatted root disk is shown below

 

On the root drive (M:) I have created the following folders, these folders currently display the typical Windows folder icon.

 

Open Windows Disk Management as we first need to create the new volumes. Create the first volume by right-clicking an unallocated disk and selecting "New simple volume" as shown below.

 

Click “Next” in the new volume wizard to continue.

 

Specify the volume size and click “Next”.

 

It's at this point we specify where we want to mount this new volume; we could have chosen a drive letter but we want to use a mount point. Select the option to “Mount in the following empty NTFS folder:”, browse to the path and click “OK”.

 

With the mount point specified, click “Next” to continue.

 

Specify the format options and click “Next” to continue.

 

Click “Finish” to create the new volume. Repeat these steps for each volume to be created/mounted; a scripted alternative is sketched below.
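
If you'd rather script these steps than click through the wizard, and you're on Windows Server 2012 or later, a minimal PowerShell sketch (the disk number, label and folder path are examples) might be:

# Partition and format the disk, then mount it under the folder instead of assigning a drive letter
$part = New-Partition -DiskNumber 3 -UseMaximumSize
$part | Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQLData"
$part | Add-PartitionAccessPath -AccessPath "M:\SQLData"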

 

Once you have created the 3 volumes and mounted them to the specific folders, you will see the following in disk management. Your mount points are now ready to use for a stand alone SQL Server installation.

 

Mount Points for Clustered Instances

For clustered scenarios there are further checks to be made when utilising mount points for your clustered SQL Server instance drives. Those steps are detailed in the following Microsoft KB: 

http://support.microsoft.com/kb/280297

In brief, you now need to add this storage to the cluster via Failover Cluster Manager. Right click “Storage” and select “Add a disk” as shown below.

 

With the required disks selected, click “OK”.

 

Cluster disks 6, 8, 9 and 11 added. Note cluster disk 6 is the Root drive (M:).

 

Now add the storage to our newly created service/application group. My example uses a pre-created application titled “Denali Mount Points”.

 

You must now set the dependencies between the root drive and the new volumes. Right click the first mounted volume and select “Properties” as shown below.

Note: the root drive must come online before the mounted volumes to ensure the full mount path and drive letter are available.

 

Set the root disk as the resource dependency, in this case cluster disk 6. Click “Apply” and “OK”. Do this for each of the 3 mounted volumes; a scripted equivalent is sketched below.
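
You could also set the dependency from PowerShell with the FailoverClusters module; a hedged sketch (the resource names are examples) is:

# Make the mounted volume depend on the root drive so the root always comes online first
Import-Module FailoverClusters
Set-ClusterResourceDependency -Resource "Cluster Disk 8" -Dependency "[Cluster Disk 6]"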

 

During the SQL server instance installation, select the existing Denali service\application group shown with a green tick.

 

At the cluster disk selection you must select the root drive for installation. The mounted drives are not available, attempting to select one of the mounted volumes causes setup to direct you to select the root tree drive (in my case cluster Disk 6 M: ).

 

On the database engine "Data Directories" tab, set your paths for the requested disk locations.

 

Post installation, these are the new SQL server folders under the mount points when viewed through explorer. Notice that the mount points now have a different icon and not the standard Windows folder icon.

 

This should give you more of an understanding of mount points and how they are set up. They operate just the same as any other folder, with the exception that permission grants are handled by each individual mount point and not by the root directory. Granting permissions at the root drive and pushing child inheritance will not affect the mount points. Still, they are essentially NTFS folders and, as the screenshot above shows, look seamless to the Windows filesystem. As you can see, they are also an excellent way to bypass the Windows 26 drive letter limitation.

As always, test this in an offline environment, virtual machines make a great test lab and are extremely easy to set up using my three part guide on this site. Have fun and of course, post back if you have any troubles.



Always Encrypted - The Definitive Guide

May 18, 2024 19:46 by author TopDog

Hello and welcome to my latest post detailing the Always Encrypted feature within SQL Server. Much has happened within SQL Server these last few years and Always Encrypted is another big feature in the SQL Server encryption arena.

But what is it, and how can it help my organisation?

What is "Always Encrypted"?

For a full and detailed explanation, check the Microsoft documentation at the following link;

Always Encrypted (Database Engine)

Always Encrypted was initially made available in the Enterprise Edition of SQL Server 2016 RTM; it was later introduced into the Standard, Web and Express Editions with Service Pack 1 for SQL Server 2016.
Always Encrypted uses a series of steps utilising symmetric and asymmetric keys; the steps depend upon who will create and manage the keys and the tools used to create them. Different organisations have different levels of support and role separation. We'll look at these later, but for now let's look at exactly what it does and why.

What does Always Encrypted Do and Why?

Traditionally, if you used Transparent Data Encryption, you would have a Certificate and Database Master Key (DMK) in the master database and then a Database Encryption Key (DEK) in the user database. As the certificate and DMK are SQL Server instance-level objects, the encryption is effectively available to anyone with sufficient access to the instance. Obviously, this cannot be used to truly prevent SQL Server users (including DBAs) from reading the encrypted data.

This is where the Column Master Key (CMK) and Column Encryption Key (CEK) come into play. Always Encrypted uses 2 different types of key and an external encryptor to achieve the secure data encryption. The external key provisioning options are; 

  • Hardware Security Module using CNG/KSP (Cryptographic Next Generation using Key Storage Provider)
  • Hardware Security Module using CAPI/CSP (Cryptographic API using Cryptography Service Provider)
  • Azure Key Vault
  • Windows Certificate Store (Computer)
  • Windows Certificate Store (User)

The first 2 options generate a column master key from an external secure source, while the 3rd option uses a certificate in an Azure Key Vault.
The last 2 options use the Windows Certificate store. There are vast differences between the above options and the choice depends on your organisation's preferred method of key storage.
Of the first 2 options above, CNG\KSP should be preferred over the legacy CAPI/CSP. Both options require the installation of a client side driver to provide interaction between the key store and the Always Encrypted system.
A typical example of a Windows Certificate key store cert would look similar to the screenshot below;

 

Column Master Key (CMK)

The CMK is an asymmetric key which is stored externally to the SQL Server instance, usually in the form of a certificate. If required, DBAs need have no access to the CMK at all.
A reference to the CMK is held inside the user database, but it holds only the path (including the thumbprint) to the external key, not the key itself; the key is created and managed outside of the SQL Server system.
 

Column Encryption Key (CEK)

The CEK resides inside the user database and is protected and encrypted by the CMK. The CEK metadata holds only the encrypted CEK value; no plaintext key material is stored.

As the certificate resides outside of the SQL Server system, the CMK certificate is distributed only to those users who are required to decrypt the data; given security and data best practices, this list should be kept short.


There are a set of system catalog views we can use to track the keys, and an example is shown below from a test system;

 

As we can see from the output above, the CMK metadata holds very limited detail; the key path includes the thumbprint, which is required to create the CEK. The CEK value is encrypted by the CMK and stored in the "encrypted_value" column of the CEK key values catalog view.
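For reference, a minimal sketch of the kind of query behind the output above, using the standard key metadata catalog views (run it in the user database holding the keys):

-- CMK metadata (name, provider, key path) joined to the encrypted CEK values
SELECT cmk.name AS cmk_name,
       cmk.key_store_provider_name,
       cmk.key_path,
       cek.name AS cek_name,
       cekv.encrypted_value,
       cekv.encryption_algorithm_name
  FROM sys.column_master_keys AS cmk
  JOIN sys.column_encryption_key_values AS cekv
    ON cekv.column_master_key_id = cmk.column_master_key_id
  JOIN sys.column_encryption_keys AS cek
    ON cek.column_encryption_key_id = cekv.column_encryption_key_id;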

Table Schemas

The base column definitions stay the same in that their data types do not change; there is no need to create the columns as varbinary, as was necessary with traditional column level encryption. However, once encryption has been applied, the table definition does change slightly, as shown below;

 

When scripting out the schema we can see that the CardNumber, ExpMonth and ExpYear columns now have an extra clause applied to their definition. The type of encryption selected for each chosen column during configuration applies on a per column basis. The 2 types of encryption which may be applied are defined as one of the following;

  • Randomised
  • Deterministic 

Randomised

The Randomised encryption type produces a different value each time even when the data encrypted is the same. This is more secure but also prevents the usage of these columns in indexes, grouping and join criteria.

Deterministic

The Deterministic encryption type always produces the same encrypted value for a given plaintext value. Whilst this allows the columns to be used in equality joins, grouping and indexing, it could be possible, when analysing the encrypted values, to work out patterns and so expose information about the protected data.
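For illustration, a scripted column definition carrying the extra clause looks broadly like the sketch below. The table and key names are assumptions (CEK_Auto1 is simply the default name the wizard generates); the algorithm name is the one Always Encrypted uses:

-- Illustrative column definitions showing the ENCRYPTED WITH clause
CREATE TABLE dbo.CreditCardDemo
(
    CreditCardID int IDENTITY(1,1) PRIMARY KEY,
    CardNumber nvarchar(25) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH ( COLUMN_ENCRYPTION_KEY = [CEK_Auto1],
                         ENCRYPTION_TYPE = RANDOMIZED,
                         ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256' ) NOT NULL,
    ExpMonth tinyint
        ENCRYPTED WITH ( COLUMN_ENCRYPTION_KEY = [CEK_Auto1],
                         ENCRYPTION_TYPE = DETERMINISTIC,
                         ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256' ) NOT NULL
);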

Always Encrypted With Secure Enclaves

New in SQL Server 2019 is the feature Always Encrypted with Secure Enclaves. This feature was added to overcome some of the limitations with the way encrypted data can be used. Traditionally, the data encryption operations occur on the client side via the client driver. The client works with the plaintext and sends only encrypted value(s) back to the server, so the plaintext data and the encryption keys are never exposed to the server. This does restrict the way the data can be used, and is one reason why the choice between Randomised and Deterministic encryption should be made carefully, for example where columns are used in joins and grouping.

With the feature enhancement, some computations over the unencrypted clear text data are available on the server within secure enclaves. These are protected regions of memory assigned to the SQL Server process, within which further computation operations are possible beyond the standard Always Encrypted functionality. Of course, this involves extra design and configuration tasks, and development routes would need to change too, but the feature is there for those who wish to consume it. This link details it further;

https://learn.microsoft.com/en-us/sql/relational-databases/security/encryption/always-encrypted-enclaves?view=sql-server-ver16


Always Encrypted Deployment Routes

Whilst it is possible for a DBA to manage the whole process, this may not fit all organisations, especially where role separation is enforced. The way in which the encryption is applied therefore depends on whether role separation is employed and whether you fully trust the DBA team, or whether they are supplied by a 3rd party and not authorised to view the sensitive data.

Of course, you may have directly employed DBAs who are trusted, while a separate security team handles all encryption. The answers to these questions immediately drive a particular route; let's look at some scenarios;

Scenario A

The DBA is a direct employee of the organisation and trusted fully with the database systems. They are either authorised to view the data or do not exercise any elevated privilege to view data they are not required to have access to. They would have access to an Azure Key Vault or Windows Certificate Stores or HSM modules.

Scenario B

The DBA is a resource supplied by a 3rd party support vendor to the organisation. The organisation also has a separate security admin team that manage security keys. Neither team are authorised to have access to the encrypted data but are required to implement encryption and provide BAU management of the database system.

The scenarios above map to the following;

Scenario A (Singularly Managed)

Management via SQL Server Management Studio will be suitable for those who entrust their DBAs with the sensitive data held in the database, or where the DBA has a wide range of permissions for their role outside of SQL Server. For instance, when selecting the local computer certificate store, the DBA would require local admin permissions on their machine, otherwise an error will be raised when attempting to create the column master key.

Allowing the DBAs to create and manage the keys and view the data is fairly straightforward and can be achieved via SQL Server Management Studio. Using the [Sales].[CreditCard].[CardNumber] column in the AdventureWorks database as an example, a typical deployment would be as follows;

First, right click the table in SSMS and select "Encrypt Columns" as shown below;

Note: The CMKs and CEKs can be created and managed from SQL Server Management Studio; this requires v17.0 and above.

  

Click "Next" through the Introduction screen and you will see the "Column Selection" dialog, select the columns and the encryption type. For this I have selected the CardNumber column and Randomised encryption. A new CEK has been automatically selected since i do not have any existing keys in the database, I also do not have the option of changing the CEK name. Select your options and click "Next";

 

At the CMK configuration dialog we see the following; since no keys exist, the system will auto-generate one with a default name. Click "Next" to continue;

 

At this point you may script the configuration; leave it as "Proceed to Finish Now" and click "Next";

 

Clicking "Finish" will apply the Always Encrypted configuration.

 

At this point you are left with a configuration that may not meet naming standards; this is one of the downsides of using the Always Encrypted wizard to deploy your configuration.
Ideally, you would deploy your CMK certificate and create your CEK before you invoke the "Encrypt Columns" wizard. The CMK would be created on a secure asset within the organisation.
In this scenario I have used a certificate store on the same computer as the SQL Server instance; this is for demonstration purposes only, would not meet best practice and would likely breach any security restrictions imposed by your organisation too.

For the purposes of this demonstration, you should have a basic idea of what is required for a single role deployment.

 

Scenario B (Role Separated)

For scenarios where role separation is employed, the process will need to be broken down and actioned by the appropriate agent. The process employs PowerShell cmdlets in the first steps, which are used by the Security or Infrastructure admins. The DBAs are then passed the relevant thumbprint values and may complete the process via PowerShell or T-SQL.

As an example scenario, this could be where an organisation has a separate security administration team and a team of Database Administrators provided by a 3rd party support vendor.

Neither party is permitted to view encrypted data in the salary column of the employee table. The security operative manages the keys but has no permission to select from the column; the DBA manages the SQL Server configuration but has no ability to read the table. Without access to the security key itself, selecting the column returns only the encrypted data.

The configuration steps for "role separated" organisations are different and are broken down into subsets for each role to perform. The key values provided by the security team to the DBA are the CMK type and path\location and the CEK encrypted value. In this example we will use a user based certificate in the local Windows Certificate store. The Always Encrypted deployment via a role separated route is as follows;

Security Admin

  • Generate a user based certificate on a secure asset within the organisation. Typically the PowerShell cmdlet "New-SelfSignedCertificate" could be used.
  • Create a CMK setting using the PowerShell cmdlet "New-SqlCertificateStoreColumnMasterKeySettings". This cmdlet connects to the SQL PowerShell provider but performs no actual database connectivity.
  • Create a CEK setting using the PowerShell cmdlet "New-SqlColumnEncryptionKeyEncryptedValue". Again, this cmdlet connects to the SQL PowerShell provider but performs no actual database connectivity.
  • Provide the DBA with the CMK path\location and the CEK encrypted value, this can be done via secure sharing of a file generated by the previous steps.

 DBA

  • Connect to the SQL Server instance and database via PowerShell.
  • Create a column master key setting from the settings previously provided by the Security Admin. This uses the PowerShell cmdlet "New-SqlColumnMasterKeySettings". No actual database connectivity is performed at this point.
  • Create a CMK in the user database by using the PowerShell cmdlet "New-SqlColumnMasterKey". This performs database connectivity and creates the new CMK. This key would be available during the column encryption wizard steps as previously seen in the single role deployment.
  • Create a CEK in the user database by using the PowerShell cmdlet "New-SqlColumnEncryptionKey". This also performs database connectivity and creates the new CEK based on the CMK details. Again, this key would be available during the column encryption wizard steps as previously seen in the single role deployment.
  • Complete the column encryption using the newly created keys (a T-SQL alternative to the key creation steps is sketched below)
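Since the DBA may finish via T-SQL instead of the cmdlets, the key metadata can also be created directly in the user database. This is only a sketch: the key names are assumptions, and the key path and ENCRYPTED_VALUE placeholders must be replaced with the values handed over by the Security Admin.

-- Create the CMK metadata; KEY_PATH is the certificate location\thumbprint supplied by the Security Admin
CREATE COLUMN MASTER KEY [CMK_Demo]
WITH ( KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
       KEY_PATH = N'CurrentUser/My/<thumbprint supplied by the Security Admin>' );
GO

-- Create the CEK metadata; ENCRYPTED_VALUE is the CEK already encrypted by the CMK,
-- as produced by New-SqlColumnEncryptionKeyEncryptedValue (placeholder value shown)
CREATE COLUMN ENCRYPTION KEY [CEK_Demo]
WITH VALUES ( COLUMN_MASTER_KEY = [CMK_Demo],
              ALGORITHM = 'RSA_OAEP',
              ENCRYPTED_VALUE = 0x016E000001630075 /* placeholder only */ );
GO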

After completing the steps above, when first querying the data the encrypted version is returned.

 

After ensuring access to the certificate in my current user Windows Certificate store and reconfiguring the SSMS connection to enable Always Encrypted ("Column Encryption Setting=Enabled") as shown below;



Re-running the query now shows the unencrypted data.

 

To remove the encryption, simply re-run the "Encrypt Columns" wizard and set the columns to "PlainText", then complete and finish the wizard.

It should be understood from the outset that the configuration and implementation of Always Encrypted requires the skills of an experienced Security Admin and an experienced DBA. Where necessary, Proof of Concept environments can be used to drive the High Level and Low Level Designs. Configurations beyond the basic Windows Certificate Store will require deep knowledge of all the associated subsystems and of PowerShell. Data security must remain paramount at all times, and all options should be fully evaluated to ensure the correct configuration is applied, observing any rules your organisation follows or is regulated to follow.



Fixing Orphaned Users-The new way

clock May 20, 2022 11:03 by author TopDog

We've all been used to using the stored procedure "sp_change_users_login" for some time now. This SP is deprecated and no longer the route to take when fixing orphaned users. Now we have the following T-SQL command:

ALTER USER xxxx WITH LOGIN = xxxx

 

Admittedly this does not resolve the issue of a missing login. In this situation you simply need a check to ensure that the login exists before attempting to alter the user's SID; the following can be used:

IF SUSER_SID('someuser') IS NULL
BEGIN
    -- The login is missing; create it first using ONE of the following, depending on the login type
    CREATE LOGIN [someuser] WITH PASSWORD = 'xxxx';    -- SQL authenticated login
    --CREATE LOGIN [domain\someuser] FROM WINDOWS;     -- Windows authenticated login
END

-- Re-map the database user to the login, correcting the orphaned SID
ALTER USER [someuser] WITH LOGIN = [someuser];
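To find which users are orphaned in the first place, a quick check against the catalog views can be used; a minimal sketch (the principal_id filter simply excludes the built-in database principals, and users created WITHOUT LOGIN will also appear):

-- List database users whose SID has no matching server-level login
SELECT dp.name AS orphaned_user,
       dp.type_desc
  FROM sys.database_principals AS dp
  LEFT JOIN sys.server_principals AS sp
    ON dp.sid = sp.sid
 WHERE sp.sid IS NULL
   AND dp.type IN ('S', 'U')   -- SQL and Windows users
   AND dp.principal_id > 4;    -- exclude dbo, guest, INFORMATION_SCHEMA, sys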

 

 



Moving Database Files in SQL Server

clock January 25, 2013 05:33 by author TopDog

In this article I will be discussing the moving of database files within a SQL Server instance. We'll also work through a typical move scenario, looking at the scripts we should use and the meta data available to help us.

Let's first begin with what we do know.

Under SQL Server 2000, altering database paths was all but limited to the TempDB database. You had to detach the user database and then re-attach it, or you could use backup, then restore. Hmm, messy to say the least. 

Under normal operation, once SQL Server has a handle on a database you will not be able to manipulate the files at the OS level. Any attempt to copy the files, etc., will result in the dialog box below. 

 

To address the first point: thankfully, from SQL Server 2005 onwards this is no longer necessary, and in fact SP_ATTACH_DB has been deprecated and will be removed in a future version of SQL Server. You should now use:

CREATE DATABASE ... FOR ATTACH
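As a minimal sketch (the database name, paths and file names below are illustrative only), an attach now looks like this:

-- Attach an existing set of database files
CREATE DATABASE [yourDB]
    ON ( FILENAME = N'C:\MSSQL\DATA\yourDB.mdf' ),
       ( FILENAME = N'C:\MSSQL\DATA\yourDB_log.ldf' )
    FOR ATTACH;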

 

To address the second point, in order to release the handle the database engine has on the user database files we merely need to Offline the database. We do not need to stop the SQL Server services.

Let's just re-cap that; we do not need to stop the SQL Server services.

 

You may issue the following command to Offline the database;

ALTER DATABASE [yourDB] SET OFFLINE

 

If you have active connections and wish to roll them back and take the database offline you may do so using;

ALTER DATABASE [yourDB] SET OFFLINE WITH ROLLBACK IMMEDIATE

 

Once the database is Offline you may move and\or rename your database files. Just remember that if you delete the files, or move or rename them without updating the system catalogs, the database will not come back online again. When attempting to Online the database you will usually receive an error along the lines of:

 

You must first amend the system catalogs to provide the new paths\filenames. This is done using the ALTER DATABASE command, passing in the MODIFY FILE parameters as shown in the following query construct:

ALTER DATABASE [yourDB]
  MODIFY FILE ( name=logicalfilename, 
                filename=N'c:\folder1\folder2\adbfile.mdf'
              )

Important Note: When using the T-SQL command above, SQL Server will accept whatever you type and issue in the ALTER DATABASE statement so be careful and check your typing!


For example this would be valid

ALTER DATABASE [yourDB] MODIFY FILE(name=logicalfilename, 
filename=N'c:\MSSQL\DATA\gobbeldygook.dat')

 

If the path\filename does not exist when the database tries to start you will receive an error!!

 

Moving the Files


 

With the above in mind, let's look at how we would achieve the goal of moving a databases disk files to new locations.

Our Scenario

The server drives are filling up quickly and you have been asked by the manager to move the disk files to a new set of drives provided by the Windows administrator. The engineer has created your new file paths for you and retained all NTFS ACLs required for the SQL Server services. The drives\paths supplied are as follows (I am using my C drive but this could easily be G or F or some other drive letter);

C:\Program Files\Microsoft SQL Server\MSSQL10_50.DBA\MSSQL\Data

Before making any changes to the OS files and their locations first check the metadata available to you in the following system catalog master.sys.master_files. The important metadata to collect consists of the Logical Filenames and the Physical Names. You may obtain this information using the following query;

SELECT database_id,
       name, 
       physical_name 
 FROM sys.master_files 
 WHERE database_id = DB_ID('SampleServiceCompany2009')

 

For my database I have the following: 

 

 

I need to amend these paths from the "C:\Program Files (x86)\Microsoft SQL Server\MSSQL.1\MSSQL\Data" directory to match the new location provided to me. The first task is to construct a set of T-SQL move commands. The ALTER DATABASE ... MODIFY FILE command really only needs 2 parameters for the file move: the file's logical name and its new physical_name. Both of these are retrieved in the query shown above. The move commands are extremely simple and, as shown earlier, take the following form;
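For my database, the move commands would look something like the sketch below. The logical file names here are assumptions for illustration; use the values returned by the sys.master_files query above:

-- Point each file at its new location (logical names are illustrative)
ALTER DATABASE [SampleServiceCompany2009]
  MODIFY FILE ( NAME = SampleServiceCompany2009_Data,
                FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.DBA\MSSQL\Data\SampleServiceCompany2009.mdf' );

ALTER DATABASE [SampleServiceCompany2009]
  MODIFY FILE ( NAME = SampleServiceCompany2009_Log,
                FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.DBA\MSSQL\Data\SampleServiceCompany2009_log.ldf' );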

 

Issuing these commands now will update the system catalogs; however, the new paths will not take effect until the database has been restarted (or taken Offline then brought back Online). The following message informs us of this:

 

I'll now take the database Offline using the command highlighted below. 

 

With the database Offline, I may now move the files (and even rename them if I really wanted to).

A word of caution here. It is advisable to copy and paste the files to the new locations. Only when the database comes online successfully would I then remove the old files.

Once you have copied the files to the new locations you would then bring the database Online, this is done using:

ALTER DATABASE [yourDB] SET ONLINE

 

Shown below are the typical screenshots you will see if the database fails to start. From the information dialog below click the message column and you will see details of the issue. 

 

The error details shown below provide an indication of the issue; the files probably do not exist (in fact, that's generally exactly what an OS Error 2 means).

 

If the database starts successfully you will see the following 

 

Help, My Database Won't Start

In every scenario I have encountered where a database file move has failed, the issue has been down to a mistyped path and\or filename, resulting in the DBA then getting into a vicious loop trying to correct themselves. Should your database fail to start, don't panic. Perform the following tasks;

  • Check the script you used to modify the database file locations, have you got the paths and filenames correct?
  • Have your admin check the permissions to the new path for the SQL Server Database Engine service account.
  • Query the catalog master.sys.master_files, do the paths\filenames here match what you're expecting?
  • If you are unable to complete the move successfully, revert back to the original file paths\names. As you left the files in the source directory simply issue the appropriate ALTER DATABASE ... MODIFY FILE statements and bring the database back online.


Querying master.sys.master_files to obtain the current database file paths 

 

This is a very easy task to complete providing you pay full attention to the task in hand. If you encounter an error, stop and review what you currently have in place both from the system catalog metadata and the physical OS locations\filenames.

As always, enjoy and if you're still stuck post back and I'll help all I can



AlwaysOn Availability Groups

clock April 17, 2012 20:53 by author pezzar

 

Anybody who has worked with high availability for SQL Server over the years will be fully aware of Microsoft Cluster Services (Windows 2003) and Windows Server Failover Clustering (Windows 2008). With a clustered environment we are able to fully mitigate failures by utilising 

  • multiple network cards for network redundancy
  • new Majority node set quorum models (available in Windows server 2003 SP1 onwards) to remove the disk dependency and increase support for multi site clusters.
  • multiple computer nodes to negate core node hardware failure (i.e. motherboard, etc)

 

The only real weak link in the chain is the shared storage. There are many ways to achieve redundancy here, but it usually comes at significant cost and is often difficult to set up and maintain. Of course, as previously mentioned, a failover cluster only mitigates the server hardware; it does not provide a single or even multiple secondary databases. We have database mirroring in SQL Server versions prior to SQL Server 2012, but this only provides scope for a single, nonreadable (with the exception of snapshots) secondary database. You can combine other SQL Server availability technologies with failover clusters, but think of the increased complexity. 

Well, AlwaysOn is the new high availability feature from Microsoft in SQL Server 2012. AlwaysOn takes the old database mirroring concept and creates a whole new level of HA, encompassing the following;

  • No shared storage, each server\instance has localised storage and removes the storage Single Point Of Failure.
  • AlwaysOn listener service to accept centralised requests to HA database groups.
  • Multiple availability databases instead of the traditional Principal\Mirror scenario.
  • Better failover functionality leveraging Microsoft Windows Server Failover Clustering.
  • The ability to suspend data movements at the primary level or individual secondary level.
  • Support for multiple IP subnets.
  • Offload backup and maintenance operations to secondary databases.

 

By offloading backup operations to read only replicas, you can reduce the I\O requirements on your production systems. Multiple secondaries can also provide DR and reporting replicas.

AlwaysOn uses the concept of Availability Groups, and these may contain one or more databases that you wish to enable for HA. There is a set of strict prerequisites that must be satisfied before a database may participate in an Availability Group; these are fully detailed at the following link.

http://msdn.microsoft.com/en-us/library/ff878487

 

AlwaysOn still uses the familiar SQL Server endpoints for instance to instance communication. You may also create a highly available listener service which you will use to accept incoming connections to the availability group. The listener consists of a unique IP address and a unique virtual network name, and this is by far one of the most significant changes in making the group's databases highly available. By providing a centralised access point into the availability group, clients are shielded from the issues that are generated during failover of a database. Availability group replicas may also be configured for read-only routing; this allows online readable secondaries to handle read only requests, removing concurrency issues from the primary replicas.

 

With AlwaysOn you still have the traditional synchronous and asynchronous modes that were used in database mirroring. Asynchronous replicas support forced manual failover only, while synchronous replicas support automatic or planned manual failover.

 

When attempting to set up AO groups there is a range of options for troubleshooting any errors. You have the AlwaysOn dashboard and also the SQL Server and Windows logs. The wizard driven deployment offers the easiest deployment route, whereas manual deployment requires a lot of manual interaction. Despite this, AO groups are still extremely easy to deploy and configure and offer a level of HA that was previously not available without resorting to complicated levels of feature integration. 

 

Creating Your Own AlwaysOn Group 

The availability group may be created and configured via the wizard or manually. Manual deployment is not as streamlined as the wizard, so you may want to use the wizard as your main deployment method. When using the wizard you have the option of scripting the process, much in the same way as you can when performing other actions within SQL Server.

 

To set up and configure a complete virtualised SQL Server 2012 AlwaysOn system you will need the following

  • A physical host machine with your favourite Hypervisor installed (I use VMWare Server 2.0.2).
  • Windows 2008 R2 Enterprise
  • SQL Server 2012 Enterprise

Optional: You may use Windows 2008 R2 Standard edition for your Domain Controller instead of Enterprise edition.

 

The first tasks are to create the required virtual machines, these are 

1 x Windows 2008 R2 domain controller (Standard or Enterprise)

2 x Windows 2008 R2 cluster nodes

 

If you require any help in creating these, please see my previous guide starting at the following link for help with VMWare server and creating the VMs, etc.

http://www.sqlservercentral.com/articles/virtual/72682/

 

For my VMs I have used the following network configurations 

Node                          Network   IP Address
Windows 2008 R2 DC            Public    192.168.0.40
Windows 2008 R2 SQLHANode1    Public    192.168.0.42
                              Private   192.168.93.42
Windows 2008 R2 SQLHANode2    Public    192.168.0.43
                              Private   192.168.93.43

 

 

 

 

 

 

 

All of the VMs will use local virtual disks and no shared storage; this replicates the scenario required for AlwaysOn high availability. Once you have all the VMs created, networked and updated, you may install your stand-alone SQL Server instances on each of the 2 nodes. SQLHANODE1 has a named instance "INST1" and SQLHANODE2 has a named instance "INST2". Once the instances have been installed and are operational, proceed to the instructions below.

Create the Windows Server Cluster.

Before going any further it is necessary to install the Failover Clustering feature. Add the feature by opening the Server Manager console and selecting "Features" and then "Add Features" as shown below. Step through the wizard to complete the installation, do this for each of the 2 nodes and close Server Manager once you have completed.

  

 

Once the feature has been installed to both nodes the cluster installation is performed as follows;

Open the failover cluster manager console and click "Create a cluster". 

 

Click "Next" at the first screen and then at the following screen below, select the servers that will participate in the new cluster and click "Next"

 

Select the option to run the tests and return to the create cluster process, then click "Next"

 

Step through the wizard until you see the screen below, select "Run all tests (recommended)" and click "Next".

The wizard will find all local drives and mark them as non shared disks.

 

You should now be asked to confirm and start the validation; click "Next". 

 

Once the tests have completed successfully, click "Finish" as shown below. Review the validation report if necessary. 

 

You will then be taken to the "Create Cluster" wizard; supply the virtual network name and IP and click "Next".

 

Confirm the cluster entry point details and click "Next" to continue at the confirmation screen. 

 

When the wizard completes click "Finish" to create the cluster. 

 

You have now created the base Windows Server Failover Cluster. 

 

Any attempt to access the AlwaysOn features in Management Studio without first completing the configuration will result in the error below. 

 

Open the SQL Server Configuration Manager tool and under "SQL Server Services", right click the service you wish to configure for AlwaysOn. You will see the dialog below; check the "Enable AlwaysOn Availability Groups" checkbox and click "OK". You will need to restart the service. 

Deploy A New Availability Group Using the Wizard

Now we have AlwaysOn enabled we can start to configure a new group; right click "Availability Groups" and select the option to create a new group. The new availability group wizard starts as shown below; supply your group name and click "Next".

 

 

Select the database(s) to participate in the AlwaysOn group and click "Next". 

 

 

Specify the Availability Replicas and move to the "Endpoints" tab. Check the endpoint configuration and move to the "Backup Preferences" tab. 

 

 

Specify the backup preferences and move to the listener tab. 

 

 

Provide details of the Listener IP, TCP port and network name, then click "Next". 

 

Select your synchronisation preference and click "Next" 

 

Check the validation screen and click "Next". 

 

Review the summary and click "Finish". 

 

 Once the wizard finishes, click close.

 

New Availability Group deployed, configured and synchronised. 

 

 

Manually Failover The Availability Group 

To manually failover the new group, right click the group and select "Failover".

 

Click "Next" at the failover wizard welcome screen. 

 

Select the new Primary replica and click "Next". 

 

Connect to the new Primary replica. 

 

Connected so click "Next". 

 

Review the summary screen and click "Finish". 

 

 

When the wizard finishes click close.
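For reference, the same failover can be issued in T-SQL while connected to the target secondary replica; a minimal sketch, assuming an availability group named MyAG for illustration:

-- Run on the synchronous secondary that is to become the new primary (planned failover)
ALTER AVAILABILITY GROUP [MyAG] FAILOVER;
GO

-- For an asynchronous secondary a forced failover (with possible data loss) is required instead:
-- ALTER AVAILABILITY GROUP [MyAG] FORCE_FAILOVER_ALLOW_DATA_LOSS;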

 

 

 Manually Deploy A New Availability Group

If this is the first time you are creating an AlwaysOn availability group on your instance and you are performing the action manually you need to perform the following first.

CREATE ENDPOINT MyEndpoint
    STATE = STARTED
    AS TCP ( LISTENER_PORT = 5022 )
    FOR DATABASE_MIRRORING (AUTHENTICATION = WINDOWS NEGOTIATE,
       ENCRYPTION = SUPPORTED, ROLE=PARTNER);
GO

GRANT CONNECT ON ENDPOINT::MyEndpoint TO [domain\account]
GO


Failure to create the endpoints first will stop the specified replicas from joining to the availability group.


To manually deploy an Availability Group without using the wizard, right click "Availability Groups" and select "New availability group" then use the following steps. 

Provide a group name and also add in any databases you wish to include. Alongside your Primary replica (the instance you are adding the group on), enter the details of one or more secondary replicas (SQL Server instances you wish to act as secondaries), including the mirroring endpoint URL. Once you have populated these details switch to the "Backup Preferences" page.

 

 Select your backup preferences and any replica backup priorities and click "OK". 
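The dialog above can also be scripted out; as a rough T-SQL sketch only (the group name, database name, domain suffix and endpoint URLs are assumptions), creating the group on the primary might look like this:

-- Run on the primary instance (SQLHANODE1\INST1); names and URLs are illustrative
CREATE AVAILABILITY GROUP [MyAG]
FOR DATABASE [MyDatabase]
REPLICA ON
    N'SQLHANODE1\INST1' WITH ( ENDPOINT_URL = N'TCP://SQLHANODE1.mydomain.local:5022',
                               AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                               FAILOVER_MODE = AUTOMATIC ),
    N'SQLHANODE2\INST2' WITH ( ENDPOINT_URL = N'TCP://SQLHANODE2.mydomain.local:5022',
                               AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                               FAILOVER_MODE = AUTOMATIC );
GO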

  

Expand the new AO group and right click availability group listeners and select "Add Listener". 
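The listener can likewise be added with T-SQL; a sketch assuming an unused IP address on the public subnet and the default SQL Server port:

-- The listener IP address here is an assumption and must be free on your network
ALTER AVAILABILITY GROUP [MyAG]
ADD LISTENER N'MYAGLISTENER' ( WITH IP ( (N'192.168.0.45', N'255.255.255.0') ),
                               PORT = 1433 );
GO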

  

Right click the partner node (identified by the red stop icon) and select "Join to availability group". 

  

Connect to the instance 

  

Click "OK" to join the group 

  

The secondary replica has been joined to the AO group. 

  

Open the secondary instance and move to the availability group details. Right click the secondary database (with yellow warning triangle icon) and select "Join to availability group". 

  

Click "OK" to continue, 

 

If successful, the secondary database should change to the familiar Microsoft green facing right arrow icon. 
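For reference, both joins can also be performed in T-SQL on the secondary instance, assuming the group and database names used when the group was created:

-- Run on the secondary instance: join the replica to the availability group
ALTER AVAILABILITY GROUP [MyAG] JOIN;
GO

-- Then join the secondary copy of the database (it must already be restored WITH NORECOVERY)
ALTER DATABASE [MyDatabase] SET HADR AVAILABILITY GROUP = [MyAG];
GO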

 

Checking Failover Cluster Manager, we now see the service and its resources created to support the AlwaysOn Availability Group. Upon failover of the primary, the cluster group\application moves to the new primary instance's node.

 

 

AlwaysOn availability groups offer a range of possibilities not only to provide redundancy but for offloading backup and reporting I\O too. As always, plan carefully for your requirements and implement the correct solution first time. Building out to test systems not only verifies your configuration but helps you to prepare for your Live deployment.