SQL_Guy

This is the SQL Server Blog for Perry Whittle

Listing server logins holding roles and server permissions

June 17, 2024 23:26 by author TopDog

The following script will list all server principals and any elevated roles or permissions they hold.
The script is valid for SQL Server instances and Azure SQL Managed Instances.

 

SELECT 	sp.name										AS ServerPrincipal, 
	sp.type_desc									AS LoginType,
	sp.type,
	CASE sp.is_disabled
	WHEN 0 THEN 'No'
	WHEN 1 THEN 'Yes'
	END												AS UserDisabled, 
	sp.create_date									AS DateCreated, 
	sp.modify_date									AS DateModified,
	sp.default_database_name						AS DefaultDB, 
	sp.default_language_name						AS DefaultLang,
	ISNULL(STUFF((SELECT ',' + 
		CASE ssp22.name
		WHEN 'sysadmin' THEN ssp22.name + ' "Danger Will Robinson"'
		ELSE ssp22.name
		END		   
		FROM sys.server_principals ssp2 
		INNER JOIN sys.server_role_members ssrm2 
		ON ssp2.principal_id = ssrm2.member_principal_id
		INNER JOIN sys.server_principals ssp22
		ON ssrm2.role_principal_id = ssp22.principal_id
		WHERE ssp2.principal_id = sp.principal_id
		ORDER BY ssp22.name	 
		FOR XML PATH(N''), TYPE).value(N'.[1]', 
		N'nvarchar(max)'), 1, 1, N''), 'NoRolesHeld')			AS ListofServerRoles,		
	ISNULL(STUFF((SELECT ';' + ' Permission [' + sspm3.permission_name + '] is [' +
		CASE 
		WHEN sspm3.state_desc LIKE 'GRANT%' THEN 'Granted]'
		WHEN sspm3.state_desc = 'DENY' THEN 'Denied]'
		END
		FROM sys.server_principals ssp3 
		INNER JOIN sys.server_permissions sspm3 
		ON ssp3.principal_id = sspm3.grantee_principal_id		
		WHERE sspm3.class = 100 AND 
		sspm3.grantee_principal_id = sp.principal_id AND
		sspm3.permission_name <> 'CONNECT SQL'
		FOR XML PATH(N''), TYPE).value(N'.[1]', 
		N'nvarchar(max)'), 1, 1, N''), 'NoServerPermissions') 
		+ ' in Server::' + @@ServerName + ''					AS PermGrants	
						
FROM sys.server_principals sp 
WHERE sp.type <> 'R' AND sp.name NOT LIKE '##%##' 
ORDER BY ServerPrincipal


Repairing Log Shipping Plan Using Primary Differential Backup

May 22, 2024 11:12 by author TopDog

Have you ever had the scenario where your Log Shipping plan was interrupted?

If so, how did you repair the broken Log Shipping Plan?

These issues are common and regularly seen on the SSC forum, whereby Log Shipping plans have been interrupted and the recommendation has been to completely re-initialise Log Shipping from full and log backups. The suggestion to resume the Log Shipping plan with a differential backup from the primary database was met with some scepticism, so this article seeks to address this and explain why it works. Having to re-initialise a Log Shipping plan can be a massive task, especially when your database is a little on the large side, so any action that can minimise the effort and downtime has to be considered.

Ok, so you know the Scenario;

The junior DBA took a log backup outside of the Log Shipping plan and this backup was not restored to the secondary database; instead it was deleted from its disk location and has been lost. Your pager starts bleeping and already the boss wants to know why no new data is being sent to the secondary reporting database!

You're about to re-initialise the whole Log Shipping plan, which is a real killer as this database is a monster! Then you stop, you think and you remember: didn't I read somewhere that a differential backup will bridge that LSN gap? Well, you're right, and here's why it works.

 

Why Does the Differential Restore Work?

So, why does the differential restore work even when there are continual full backups occurring on the primary database?

The key here is the Log Shipping plan's 3 SQL Server Agent jobs that back up, copy and restore the transaction log backups and constantly replicate data from the primary to the secondary. Embedded within this 'replication' is the Differential Base LSN, which is incremented in line with the primary database. Let's look at the following steps involved in the typical Log Shipping process and the Differential Base LSN points that are created.
                                                       

Step No | Step Name                                        | Primary Diff Base LSN | Secondary Diff Base LSN
1       | Log shipping implemented via backup and restore  | 62000000061100036     | 62000000061100036
2       | First log backup runs, is copied and restored    | 62000000061100036     | 62000000061100036
3       | Subsequent log backups copied and restored       | 62000000061100036     | 62000000061100036
4       | Full backup occurs on primary                    | 62000000064400043     | 62000000061100036
5       | Log backup taken, copied and restored            | 62000000064400043     | 62000000064400043

 

Now, what this means is that, as long as log backups are restored to the secondary, the Differential Base LSN remains in step, allowing you to bridge LSN gaps between the primary and the secondary databases using a differential backup.
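To illustrate, here's a minimal sketch of the check and the repair; the database name YourLSDatabase and the backup path are placeholders for your own values. Run the SELECT on both the primary and the secondary instances first and confirm the differential_base_lsn values match.

-- Check the Differential Base LSN for the data file(s) of the log shipped database
SELECT DB_NAME(database_id) AS DatabaseName, file_id, differential_base_lsn
FROM sys.master_files
WHERE database_id = DB_ID('YourLSDatabase') AND type_desc = 'ROWS';

-- On the primary: take a differential backup to bridge the LSN gap
BACKUP DATABASE YourLSDatabase
TO DISK = N'\\BackupShare\YourLSDatabase_Bridge.dif'
WITH DIFFERENTIAL, INIT;

-- On the secondary: restore the differential, leaving the database ready for further log restores
-- (use STANDBY instead of NORECOVERY if the secondary is used for reporting)
RESTORE DATABASE YourLSDatabase
FROM DISK = N'\\BackupShare\YourLSDatabase_Bridge.dif'
WITH NORECOVERY;

Once the differential has been restored, allow the Log Shipping copy and restore jobs to run again and the plan should pick up from the next log backup.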

Looking at the typical backup configuration on your log shipped database, you could well have the following in place;

Backup Type  | Frequency
Full         | Every Sunday at 20:00
Differential | Every night, Monday - Saturday, at 20:00
Log          | Via the Log Shipping plan, every 15 minutes during the day

 

Given the scenario above, regular log restores will keep the secondary in step with the primary, and any break in the Log Shipping plan may be bridged by restoring a differential backup. The only caveat here would be the following scenario;

  • Sunday 19:30 the Log Shipping plan breaks, a log backup is missing and the restore job is now skipping backup files. No new data is being sent to the secondary
  • Sunday 20:00 the full backup occurs and completes successfully
  • Monday 09:00 the DBA comes into work and wishes they'd stayed at home
  • Monday 09:15 the DBA takes a differential backup of the primary and attempts to restore it to the secondary, but receives the following error message dialog.

 

Why is this?

If the full backup occurs after Log Shipping has broken but before you take the differential backup, no further logs have been restored and the Differential Base LSNs are no longer synchronised; you will need to re-initialise Log Shipping!

So, as we have already discovered, it's the transaction logs shipped and restored to the secondary that keep the Differential Base LSNs in step.

The situation above can be avoided to a certain extent by having suitable monitoring and notification in place to alert you immediately when a Log Shipping plan breaks. BAU DBAs can then respond more quickly to the alert and effect an immediate repair; you now just need to educate them on the process to be used.

As always, test thoroughly in your sandbox\test environment and if you're still stuck, post back and I'll help all I can.



Always Encrypted - The Definitive Guide

May 18, 2024 19:46 by author TopDog

Hello and welcome to my latest post detailing the Always Encrypted feature within SQL Server. Much has happened within SQL Server these last few years and Always Encrypted is another big feature in the SQL Server encryption arena.

But what is it, and how can it help my organisation?

What is "Always Encrypted"?

For a full and detailed explanation, check the Microsoft documentation at the following link;

Always Encrypted (Database Engine)

Always Encrypted was initially made available in the Enterprise Edition of SQL Server 2016 RTM; it was later introduced into the Standard, Web and Express Editions with Service Pack 1 for SQL Server 2016.
Always Encrypted uses a series of steps utilising symmetric and asymmetric keys; the steps are dependent upon who will create and manage the keys and the tools used to create them. Different organisations have different levels of support and role separation. We'll look at these later, but for now let's look at exactly what it does and why.

What does Always Encrypted Do and Why?

Traditionally, if you used Transparent Data Encryption, you would have a Certificate and Database Master Key (DMK) in the master database and then a Database Encryption Key (DEK) in the user database. As the Certificate and DMK are SQL Server instance level objects, they are available to pretty much anyone with sufficient access to the instance. Obviously, this cannot be used to truly encrypt the data and prevent SQL Server users (including DBAs) from reading it.

This is where the Column Master Key (CMK) and Column Encryption Key (CEK) come into play. Always Encrypted uses 2 different types of key and an external encryptor to achieve the secure data encryption. The external key provisioning options are; 

  • Hardware Security Module using CNG/KSP (Cryptographic Next Generation using Key Storage Provider)
  • Hardware Security Module using CAPI/CSP (Cryptographic API using Cryptography Service Provider)
  • Azure Key Vault
  • Windows Certificate Store (Computer)
  • Windows Certificate Store (User)

The first 2 options generate a column master key from an external secure source; the 3rd option uses a certificate in an Azure Key Vault.
The last 2 options use the Windows Certificate store. There are vast differences between the above options and it all depends on your organisation's preferred method of key storage.
Of the first 2 options above, CNG/KSP should be preferred over the legacy CAPI/CSP. Both options require the installation of a client side driver to provide interaction between the key store and the Always Encrypted system.
A typical example of a Windows Certificate key store cert would look similar to the screenshot below;

 

Column Master Key (CMK)

The CMK is an asymmetric key which is stored externally to the SQL Server instance, usually in the form of a certificate. If required, DBAs need have no access to the CMK at all.
A reference to the CMK is held inside the user database, but it holds only the path (including the thumbprint) to the external key, not the key itself; the key is created and managed outside of the SQL Server system.
 

Column Encryption Key (CEK)

The CEK resides inside the user database and is protected and encrypted by the CMK. The CEK metadata holds only the encrypted CEK value; no plaintext key material is stored.

With the certificate residing outside of the SQL Server system, the CMK certificate is distributed only to those users who are required to decrypt the data; given security and data best practices, this list should be limited.


There are a set of system catalog views we can use to track the keys, and an example is shown below from a test system;

 

As we can see from the output above, the CMK metadata holds very limited detail; the key path includes the thumbprint, which is required to create the CEK. The CEK value is encrypted and then stored in the "encrypted_value" column of the CEK key values catalog view.
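For reference, a minimal sketch of the queries behind that output (run in the user database) would be;

-- Column master key metadata: provider and key path only, no key material
SELECT name, key_store_provider_name, key_path
FROM sys.column_master_keys;

-- Column encryption keys and their encrypted values (protected by the CMK)
SELECT cek.name, cekv.encryption_algorithm_name, cekv.encrypted_value
FROM sys.column_encryption_keys cek
INNER JOIN sys.column_encryption_key_values cekv
ON cek.column_encryption_key_id = cekv.column_encryption_key_id;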

Table Schemas

The base column definitions stay the same in that their data types do not change; there's no creating the columns as varbinary, as was necessary with traditional column level encryption. However, once encryption has been applied, the table definition does change slightly as shown below;

 

When scripting out the schema we can see that the CardNumber, ExpMonth and ExpYear columns now have an extra clause applied to their definition. The type of encryption selected for each chosen column during configuration now applies on a per column basis. The 2 types of encryption which may be applied are;

  • Randomised
  • Deterministic 

Randomised

The Randomised encryption type produces a different value each time even when the data encrypted is the same. This is more secure but also prevents the usage of these columns in indexes, grouping and join criteria.

Deterministic

The Deterministic encryption type produces the same encrypted value for the same plaintext value. Whilst this aids the use of the columns in join criteria, indexing, etc., it could be possible, when analysing the encrypted strings, to work out patterns in the encrypted values, so exposing the protected data.
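To give an idea of what that extra clause looks like, below is a minimal sketch of a demo table with one column of each encryption type. The table and key names are assumptions for illustration only; note that encrypted string columns must use a _BIN2 collation.

CREATE TABLE dbo.CreditCardDemo
(
	CreditCardID INT IDENTITY(1,1) PRIMARY KEY,
	CardNumber NVARCHAR(25) COLLATE Latin1_General_BIN2
		ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
			ENCRYPTION_TYPE = RANDOMIZED,
			ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL,
	ExpMonth TINYINT
		ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
			ENCRYPTION_TYPE = DETERMINISTIC,
			ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
);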

Always Encrypted With Secure Enclaves

New in SQL Server 2019 is the feature Always Encrypted with Secure Enclaves. This feature was added to overcome some of the limitations in the way encrypted data can be used. Traditionally, the data encryption operations occur on the client side via the client driver; the client modifies the data unencrypted and then sends the encrypted value(s) back to the server. The data and the encryption keys are never exposed over the link. This affects the way the data is used, though, and is one reason why the choice of Randomised or Deterministic encryption should be carefully made, as in the case of column joins and grouping, etc.

With the feature enhancement, some computations over the unencrypted clear text data are available on the server within secure enclaves. These are protected regions of memory assigned to the SQL Server process, so that further computation operations are possible over the standard Always Encrypted functionality. Of course, this would involve extra design and configuration tasks, and development routes would need to change too, but the feature is there for those who wish to consume it. This link details further;

https://learn.microsoft.com/en-us/sql/relational-databases/security/encryption/always-encrypted-enclaves?view=sql-server-ver16


Always Encrypted Deployment Routes

Whilst it is possible for a DBA to manage the whole process, this may not fit all organisations, especially where role separation is enforced. So the way in which the encryption is applied depends on whether role separation is employed and whether you fully trust the DBA team, or whether they are supplied by a 3rd party and not authorised to view the sensitive data.

Of course, you may have core employee DBAs who are trusted but a Security team that handles all encryption. The answers to these questions immediately drive a particular route; let's look at some scenarios;

Scenario A

The DBA is a direct employee of the organisation and trusted fully with the database systems. They are either authorised to view the data or do not exercise any elevated privilege to view data they are not required to have access to. They would have access to an Azure Key Vault, the Windows Certificate stores or HSM modules.

Scenario B

The DBA is a resource supplied by a 3rd party support vendor to the organisation. The organisation also has a separate security admin team that manages the security keys. Neither team is authorised to have access to the encrypted data, but both are required to implement encryption and provide BAU management of the database system.

The scenarios above map to the following;

Scenario A (Singularly Managed)

Management via SQL Server Management Studio will be suitable for those who entrust their DBAs with the sensitive data held in the database, or where the DBA has a wide range of permissions for their role outside of SQL Server. For instance, if selecting a Computer Certificate Store, the DBA would require local admin permissions on their machine; otherwise, when attempting to create a column master key against the local computer certificate store, an error will be raised.

Allowing the DBAs to create and manage the keys and view the data is fairly straightforward and can be achieved via SQL Server Management Studio. Using the [Sales].[CreditCard].[CardNumber] column in the AdventureWorks database as an example, a typical deployment would be as follows;

First, right click the table in SSMS and select "Encrypt Columns" as shown below;

Note: The CMKs and CEKs can be created and managed from SQL Server Management Studio; this requires v17.0 and above.

  

Click "Next" through the Introduction screen and you will see the "Column Selection" dialog, select the columns and the encryption type. For this I have selected the CardNumber column and Randomised encryption. A new CEK has been automatically selected since i do not have any existing keys in the database, I also do not have the option of changing the CEK name. Select your options and click "Next";

 

Landing at the CMK configuration dialog we see the following; since no keys exist, the system will auto generate them with default names. Click "Next" to continue;

 

At this point you may script the configuration; leave it as "Proceed to Finish Now" and click "Next";

 

Clicking "Finish" will apply the Always Encrypted configuration.

 

At this point you are left with a configuration that may not meet naming standards; this is one of the downsides of using the Always Encrypted wizard to deploy your configuration.
Ideally, you would deploy your CMK certificate and create your CEK before you invoke the "Encrypt Columns" wizard. The CMK would be created on a secure asset within the organisation.
In this scenario I have used a certificate store on the same computer as the SQL Server instance; this is for demonstration purposes only and would not meet best practice, and it would likely breach any security restrictions imposed by your organisation too.
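For example, the pre-created keys could be deployed with T-SQL similar to the sketch below. The key names, the certificate thumbprint in the key path and the encrypted value are all placeholders; the real values would come from your own certificate and the SSMS or PowerShell tooling.

-- CMK metadata: points at a certificate in the current user's Windows Certificate store
CREATE COLUMN MASTER KEY [CMK_AdventureWorks]
WITH (
	KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
	KEY_PATH = N'CurrentUser/My/A1B2C3D4E5F6...'   -- placeholder certificate thumbprint
);

-- CEK: holds only the CEK value encrypted by the CMK
CREATE COLUMN ENCRYPTION KEY [CEK_AdventureWorks]
WITH VALUES (
	COLUMN_MASTER_KEY = [CMK_AdventureWorks],
	ALGORITHM = 'RSA_OAEP',
	ENCRYPTED_VALUE = 0x016E000001630075...        -- placeholder encrypted value generated by the tooling
);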

For the purposes of this demonstration, you should have a basic idea of what is required for a single role deployment.

 

Scenario B (Role Separated)

For scenarios where role separation is employed, the process will need to be broken down and actioned by the appropriate agent. The process employs PowerShell cmdlets in the first steps, which are used by the Security or Infrastructure admins. The DBAs are then passed the relevant thumbprint values and may complete the process via PowerShell or T-SQL.

To use a scenario, this could be where an organisation has a separate security administration team and a team of Database Administrators provided by a 3rd party support vendor.

Neither party is permitted to view the encrypted data in the salary column of the employee table. The security operative is to manage the key but has no permission to select the column; the DBA is to manage the SQL Server configuration but has no ability to read the table. Without access to the security key itself, selecting the column returns the encrypted data.

The configuration steps for "role separated" organisations are different and are broken down into subsets for each role to perform. The key values provided by the security team to the DBA are the CMK type and path\location and the CEK encrypted value. In this example we will use a user based certificate in the local Windows Certificate store. The Always Encrypted deployment via a role separated route is as follows;

Security Admin

  • Generate a user based certificate on a secure asset within the organisation. Typically the PowerShell cmdlet "New-SelfSignedCertificate" could be used.
  • Create a CMK setting using the PowerShell cmdlet "New-SqlCertificateStoreColumnMasterKeySettings". This cmdlet connects to the SQL PowerShell provider but performs no actual database connectivity.
  • Create a CEK setting using the PowerShell cmdlet "New-SqlColumnEncryptionKeyEncryptedValue". Again, this cmdlet connects to the SQL PowerShell provider but performs no actual database connectivity.
  • Provide the DBA with the CMK path\location and the CEK encrypted value, this can be done via secure sharing of a file generated by the previous steps.

 DBA

  • Connect to the SQL Server instance and database via PowerShell.
  • Create a column master key setting from the settings previously provided by the Security Admin. This uses the PowerShell cmdlet "New-SqlColumnMasterKeySettings". No actual database connectivity is performed at this point.
  • Create a CMK in the user database by using the PowerShell cmdlet "New-SqlColumnMasterKey". This performs database connectivity and creates the new CMK. This key would be available during the column encryption wizard steps as previously seen in the single role deployment.
  • Create a CEK in the user database by using the PowerShell cmdlet "New-SqlColumnEncryptionKey". This also performs database connectivity and creates the new CEK based on the CMK details. Again, this key would be available during the column encryption wizard steps as previously seen in the single role deployment.
  • Complete the column encryption using the newly created keys

After completing the steps above, when first querying the data the encrypted version is returned.

 

After ensuring access to the certificate in my current user Windows Certificate store and reconfiguring the SSMS connection to allow Always Encrypted (the "Column Encryption Setting=Enabled" additional connection parameter) as shown below;



Re running the query now shows the unencrypted data

 

To remove the encryption simply re run the "Encrypt Columns" wizard and set the columns to "PlainText", then complete and Finish the wizard.

It should be understood from the outset that the configuration and implementation of Always Encrypted requires the skills of an experienced Security Admin and an experienced DBA. Where necessary, Proof Of Concept environments can be used to drive the High Level Designs and Low Level Designs. Configurations outside the basics of the Windows Certificate Store will require deep knowledge of all associated subsystems and PowerShell. It's important that the data security be paramount at all times and that all options are fully evaluated to ensure the correct configurations are applied, observing any rules the organisation follows or are regulated to follow.



Fixing Orphaned Users - The New Way

May 20, 2022 11:03 by author TopDog

We've all been used to using the stored procedure "sp_change_users_login" for some time now. This SP is no longer the route to take when fixing orphaned users; now we have the following T-SQL command:

ALTER USER xxxx WITH LOGIN = xxxx

 

Admittedly this does not resolve the issue of a missing login. In this situation you simply need a check to ensure that the login exists before attempting to alter the user's SID; the following can be used:

IF SUSER_SID('someuser') IS NULL
BEGIN
	-- create the missing login first; choose the statement that matches the login type
	CREATE LOGIN [xxxx] WITH PASSWORD = 'xxxx'	-- SQL authentication login
	CREATE LOGIN [xxx\xxxx] FROM WINDOWS		-- or a Windows login
END

ALTER USER xxxx WITH LOGIN = xxxx
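As an aside, a quick sketch for listing the orphaned users in a database before fixing them (run it in the user database in question) is shown below;

-- Database users whose SID has no matching server login are orphaned
SELECT dp.name AS OrphanedUser, dp.type_desc, dp.sid
FROM sys.database_principals dp
LEFT JOIN sys.server_principals sp ON dp.sid = sp.sid
WHERE sp.sid IS NULL
AND dp.authentication_type_desc = 'INSTANCE';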

 

 



AlwaysOn Availability Groups

April 17, 2012 20:53 by author pezzar

 

Anybody who has worked with high availability for SQL Server over the years will be fully aware of Microsoft Cluster Services (Windows 2003) and Windows Server Failover Clusters (Windows 2008). With a clustered environment we are able to fully mitigate failures by utilising

  • multiple network cards for network redundancy
  • the new Majority Node Set quorum models (available from Windows Server 2003 SP1 onwards) to remove the disk dependency and increase support for multi site clusters
  • multiple computer nodes to negate core node hardware failure (i.e. motherboard, etc.)

 

The only real weak link in the chain is the shared storage. There are many ways to achieve redundancy here, but it usually comes at significant cost and is often difficult to set up and maintain. Of course, as previously mentioned, a failover cluster only mitigates server hardware failure; it does not provide a single, or even multiple, secondary databases. We have database mirroring in SQL Server versions prior to SQL Server 2012, but this only provides scope for a single non-readable (with the exception of snapshots) secondary database. You can combine other SQL Server availability technologies with failover clusters, but think of the increased complexity.

Well, AlwaysOn is the new high availability feature from Microsoft in SQL Server 2012. AlwaysOn takes the old database mirroring concept and creates a whole new level of HA, encompassing the following;

  • No shared storage, each server\instance has localised storage and removes the storage Single Point Of Failure.
  • AlwaysOn listener service to accept centralised requests to HA database groups.
  • Multiple availability databases instead of the traditional Principal\Mirror scenario.
  • Better failover functionality leveraging Microsoft Windows Server Failover Clustering.
  • The ability to suspend data movements at the primary level or individual secondary level.
  • Support for multiple IP subnets.
  • Offload backup and maintenance operations to secondary databases.

 

By offloading backup operations to read only replicas, you can reduce the I\O requirements on your production systems. Multiple secondaries can also provide DR and reporting replicas.

AlwaysOn uses the concept of Availability Groups and these may contain one or more databases that you wish to enable for HA. There is a set of strict prerequisites that must be satisfied before a database may participate in an Availability Group; these are fully detailed at the following link.

http://msdn.microsoft.com/en-us/library/ff878487

 

AlwaysOn still uses the familiar SQL Server endpoints for instance to instance communication. You may also create a highly available listener which will accept incoming connections to the availability group. The listener consists of a unique IP address and a unique virtual network name; this is by far one of the most significant changes in making the group's databases highly available. By providing a centralised access point into the availability group, clients are shielded from the issues that arise during failover of a database. Availability group replicas may also be configured for read-only routing, which allows readable secondaries to handle read only requests, removing concurrency issues from the primary replica.

 

With AlwaysOn you still have the traditional synchronous and asynchronous modes that were used in database mirroring. Asynchronous replicas support manual failover only, while synchronous replicas support automatic or manual failover.

 

When attempting to set up AlwaysOn groups there is a range of options for troubleshooting any errors: you have the AlwaysOn dashboard and also the SQL Server and Windows logs. The wizard driven deployment offers the easiest route, whereas manual deployment requires a lot of manual interaction. Despite this, AlwaysOn groups are still extremely easy to deploy and configure and offer a level of HA that was previously not available without resorting to complicated levels of feature integration.

 

Creating Your Own AlwaysOn Group 

The availability group may be created and configured via the wizard or manually. Manual interaction is not as streamlined as the wizard, so you may want to use the wizard as your main deployment method. When using the wizard you have the option of scripting the process, much in the same way as you can when performing other actions within SQL Server.

 

To set up and configure a complete virtualised SQL Server 2012 AlwaysOn system you will need the following

  • A physical host machine with your favourite Hypervisor installed (I use VMWare Server 2.0.2).
  • Windows 2008 R2 Enterprise
  • SQL Server 2012 Enterprise

Optional: You may use Windows 2008 R2 Standard edition for your Domain Controller instead of Enterprise edition.

 

The first tasks are to create the required virtual machines, these are 

1 x Windows 2008 R2 domain controller (Standard or Enterprise)

2 x Windows 2008 R2 cluster nodes

 

If you require any help in creating these, please see my previous guide starting at the following link for help with VMWare server and creating the VMs, etc.

http://www.sqlservercentral.com/articles/virtual/72682/

 

For my VMs I have used the following network configurations 

Node                       | Network | IP Address
Windows 2008 R2 DC         | Public  | 192.168.0.40
Windows 2008 R2 SQLHANode1 | Public  | 192.168.0.42
Windows 2008 R2 SQLHANode1 | Private | 192.168.93.42
Windows 2008 R2 SQLHANode2 | Public  | 192.168.0.43
Windows 2008 R2 SQLHANode2 | Private | 192.168.93.43

 

 

 

 

 

 

 

All of the VMs will use local virtual disks, no shared storage; this replicates the scenario required for AlwaysOn high availability. Once you have all the VMs created, networked and updated, you may install your stand alone SQL Server instances on each of the 2 nodes. SQLHANODE1 has a named instance "INST1" and SQLHANODE2 has a named instance "INST2". Once the instances have been installed and are operational, proceed to the instructions below.

Create the Windows Server Cluster.

Before going any further it is necessary to install the Failover Clustering feature. Add the feature by opening the Server Manager console and selecting "Features" and then "Add Features" as shown below. Step through the wizard to complete the installation, do this for each of the 2 nodes and close Server Manager once you have completed.

  

 

Once the feature has been installed to both nodes the cluster installation is performed as follows;

Open the failover cluster manager console and click "Create a cluster". 

 

Click "Next" at the first screen and then at the following screen below, select the servers that will participate in the new cluster and click "Next"

 

Select the option to run the tests and return to the create cluster process, then click "Next"

 

Step through the wizard until you see the screen below, select "Run all tests (recommended)" and click "Next".

The wizard will find all local drives and mark them as non shared disks.

 

You should now be asked to confirm and start the validation, Click "Next" 

 

Once the tests have completed successfully, click "Finish" as shown below. Review the validation report if necessary. 

 

You will then be taken to the "Create Cluster" wizard; supply the virtual network name and IP and click "Next"

 

Confirm the cluster entry point details and click "Next" to continue at the confirmation screen. 

 

When the wizard completes click "Finish" to create the cluster. 

 

You have now created the base Windows Server Failover Cluster. 

 

Any attempt to access the AlwaysOn features in Management Studio without first completing the configuration will result in the error below.

 

Open the SQL Server Configuration Manager tool and, under "SQL Server Services", right click the service you wish to configure for AlwaysOn. You will see the dialog below; check the "Enable AlwaysOn Availability Groups" checkbox and click "OK". You will need to restart the service.
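Once the service has restarted, a quick way to confirm the feature is active is to query the server property, for example;

-- Returns 1 when AlwaysOn Availability Groups is enabled for the instance
SELECT SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled;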

Deploy A New Availability Group Using the Wizard

Now we have AlwaysOn enabled we can start to configure a new group. Right click "Availability Groups" and select the option to create a new group. The new availability group wizard starts as shown below; supply your group name and click "Next".

 

 

Select the database(s) to participate in the AlwaysOn group and click "Next". 

 

 

Specify the Availability Replicas and move to the "Endpoints" tab. Check the endpoint configuration and move to the "Backup Preferences" tab. 

 

 

Specify the backup preferences and move to the listener tab. 

 

 

Provide details of the Listener IP, TCP port and network name, then click "Next". 

 

Select your synchronisation preference and click "Next" 

 

Check the validation screen and click "Next". 

 

Review the summary and click "Finish". 

 

 Once the wizard finishes, click close.

 

New Availability Group deployed, configured and synchronised. 

 

 

Manually Failover The Availability Group 

To manually failover the new group, right click the group and select "Failover".

 

Click "Next" at the failover wizard welcome screen. 

 

Select the new Primary replica and click "Next". 

 

Connect to the new Primary replica. 

 

Connected so click "Next". 

 

Review the summary screen and click "Finish". 

 

 

When the wizard finishes click close.
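For reference, the same failover can be issued in T-SQL from the secondary replica that is to become the new primary; a minimal sketch, assuming an availability group named MyAG;

-- Run on the target secondary replica (it must be synchronised for a planned failover)
ALTER AVAILABILITY GROUP [MyAG] FAILOVER;

-- If the replica is not synchronised and data loss is acceptable, a forced failover can be used instead
-- ALTER AVAILABILITY GROUP [MyAG] FORCE_FAILOVER_ALLOW_DATA_LOSS;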

 

 

 Manually Deploy A New Availability Group

If this is the first time you are creating an AlwaysOn availability group on your instance and you are performing the action manually, you need to perform the following first.

CREATE ENDPOINT MyEndpoint
    STATE = STARTED
    AS TCP ( LISTENER_PORT = 5022 )
    FOR DATABASE_MIRRORING (AUTHENTICATION = WINDOWS NEGOTIATE,
       ENCRYPTION = SUPPORTED, ROLE = PARTNER);
GO

GRANT CONNECT ON ENDPOINT::MyEndpoint TO [domain\account]
GO


Failure to create the endpoints first will stop the specified replicas from joining to the availability group.


To manually deploy an Availability Group without using the wizard, right click "Availability Groups" and select "New availability group" then use the following steps. 

Provide a group name and also add in any databases you wish to include. Alongside your Primary replica (the instance you are adding the group on), enter the details of one or more secondary replicas (SQL Server instances you wish to act as secondaries), including the mirroring endpoint URL. Once you have populated these details switch to the "Backup Preferences" page.

 

 Select your backup preferences and any replica backup priorities and click "OK". 

  

Expand the new AO group and right click availability group listeners and select "Add Listener". 

  

Right click the partner node (identified by the red stop icon) and select "Join to availability group". 

  

Connect to the instance 

  

Click "OK" to join the group 

  

The secondary replica has been joined to the AO group. 

  

Open the secondary instance and move to the availability group details. Right click the secondary database (with yellow warning triangle icon) and select "Join to availability group". 

  

Click "OK" to continue, 

 

If successful, the secondary database should change to the familiar Microsoft green facing right arrow icon. 

 

Checking Failover Cluster Manager, we now see the service and its resources created to support the AlwaysOn Availability Group. Upon failover of the primary, the cluster group\application moves to the new primary instance's node.
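For completeness, the manual deployment can also be scripted end to end in T-SQL rather than driven through the dialogs. The sketch below is illustrative only; it assumes the instance names used earlier, an availability group named MyAG, a database named MyDatabase that has already been restored WITH NORECOVERY on the secondary, and that the mirroring endpoints from the previous step exist on both instances.

-- On the primary replica (SQLHANODE1\INST1)
CREATE AVAILABILITY GROUP [MyAG]
WITH (AUTOMATED_BACKUP_PREFERENCE = SECONDARY)
FOR DATABASE [MyDatabase]
REPLICA ON
	N'SQLHANODE1\INST1' WITH (
		ENDPOINT_URL = N'TCP://SQLHANODE1.mydomain.local:5022',
		AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
		FAILOVER_MODE = AUTOMATIC),
	N'SQLHANODE2\INST2' WITH (
		ENDPOINT_URL = N'TCP://SQLHANODE2.mydomain.local:5022',
		AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
		FAILOVER_MODE = AUTOMATIC);

-- On the secondary replica (SQLHANODE2\INST2)
ALTER AVAILABILITY GROUP [MyAG] JOIN;
ALTER DATABASE [MyDatabase] SET HADR AVAILABILITY GROUP = [MyAG];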

 

 

AlwaysOn availability groups offer a range of possibilities, not only to provide redundancy but for offloading backup and reporting I\O too. As always, plan carefully for your requirements and implement the correct solution first time. Building out test systems not only verifies your configuration but helps you to prepare for your live deployment.



How To Create a Corrupt SQL Server Database for Test Purposes

April 16, 2012 13:52 by author TopDog

It's quite possible that at some point you may want to have the use of a corrupted SQL Server database for test or DR practice purposes. In fact, this can also aid in the recovery steps for a corrupted database once the file and object architectures are fully understood.
Don't ever be tempted to just run the REPAIR_ALLOW_DATA_LOSS option of DBCC CHECKDB; it does what it says on the tin. I was recently involved in a forum post where a user had done exactly that, without understanding the ramifications or the output of DBCC CHECKDB.

To understand more, we will create and corrupt our own test database; this is very easy to achieve, as I will detail below.

For this exercise we merely need a Hex editor and the use of a SQL Server instance.

Note: do not use a Production SQL Server instance!

I have chosen XVI32 as this editor is free of charge and requires no installation; simply place the files into a folder and create a shortcut to the program.

The core database will be created using the following simple script. We’ll go through the process in stages with diagrams to see exactly what’s happening. Start with the code below;

 

Don't forget to modify any drive letters and paths before executing the script ;-)

USE [master]

CREATE DATABASE [Corrupt2K8] ON PRIMARY
( NAME = N'Corrupt2K8', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Corrupt2K8.mdf',
SIZE = 524288KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'Corrupt2K8_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Corrupt2K8_log.ldf',
SIZE = 262144KB , MAXSIZE = 2048GB , FILEGROWTH = 1024KB)
GO

USE [Corrupt2K8]
GO

CREATE TABLE dbo.NoddyTable(
NoddyID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID()
, NoddyName VARCHAR(128) NULL
, NoddyInt BIGINT NULL
, NoddyDate DATETIME NULL
)
GO

INSERT INTO dbo.NoddyTable
SELECT NEWID(), name, ROUND(RAND(object_id) * 856542, 0), GETDATE() FROM sys.columns
UNION ALL
SELECT NEWID(), name, ROUND(RAND(object_id) * 1048576, 0), GETDATE() FROM sys.columns

ALTER TABLE dbo.NoddyTable ADD CONSTRAINT PK_NoddyID
PRIMARY KEY CLUSTERED (NoddyID)
WITH (IGNORE_DUP_KEY = OFF)

CREATE NONCLUSTERED INDEX IDX_NoddyName_NoddyDate
ON dbo.NoddyTable(NoddyName, NoddyDate)
WHERE NoddyName IN ('password','length','created','crtype','offset','intprop')

CREATE NONCLUSTERED INDEX IDX_NoddyDate
ON dbo.NoddyTable(NoddyDate)
 

Once you have the database created, take a full backup followed by a differential and then a transaction log backup; you may then use these in future testing scenarios.

We now want to choose an object as the target of our corruption exercise. I am going to choose a non clustered index on the table 'dbo.NoddyTable'; to get a list of the indexes on this table, use the following query;

SELECT OBJECT_NAME(object_id), name, index_id, type_desc
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.NoddyTable')
ORDER BY 1

I will be using the non-clustered index 'IDX_NoddyDate', which has an index id of 3. To find details of the pages in use by this index we now need to turn to an undocumented DBCC command, 'DBCC IND'. Full details of how to use this command may be found at the links below, but basically it is used as follows;

DBCC IND (DatabaseName,'tablename', index_id)

So, I have

DBCC IND (Corrupt2K8,'dbo.NoddyTable', 3)

Below is the output from the command above

 

 

I'm going to pick a page of page type 2 (an index page); my chosen page number here is 174.

Next I need to view a dump of this page, just to have a look at the records it contains. This requires the use of another undocumented DBCC command called DBCC PAGE. Again, full details of this are in the links below, but basically it's used as follows;

DBCC PAGE (DatabaseName, filenumber, pagenumber, printoption)

So, I have the following code

--switch on client output first
DBCC TRACEON(3604)

--now read the page
DBCC PAGE (Corrupt2K8, 1, 174, 1)

This is the page header;

 

I'm going to home in on slot 7, or record 7. I'll use the hex editor to modify this record in the page, which will then generate an error when DBCC CHECKDB is run. The detail for slot 7 looks as follows;

 

So, to hack the record at slot 7 on page 174, I first need to work out some figures to find the address location within the file. Convert the record offset (indicated in the screenshot above) from hex to decimal first; the address for slot 7 is then calculated as follows

page number x num of bytes per page + record offset

This equates to 174 x 8192 + 292 = 1425700

Take the database offline and open the primary data file using XVI32. From the File menu select "Open" and then browse to the MDF file.

 

Now, from the menu bar select "Address" > "Goto". In the dialog box which appears, ensure you select the decimal radio button and enter the address which was calculated above, in my case 1425700. As you can see from the screenshot below, the editor has placed me at the start of my chosen record in page 174.

 

This record has a length of 28 bytes, which was detailed in the page dump we did earlier; now to modify the record. First switch the editor to Text and Overwrite mode if it isn't already: ensure "Tools" > "Text Mode" and "Tools" > "Overwrite" are selected. Now I'll mark the block of bytes I wish to mangle. To do this, select "Edit" > "Block <n> chars", switch to decimal and enter the record length of 28. The block has now been marked in red as shown below

 

Now, to overwrite the record in slot 7 as shown below, in text mode type a simple string

 

Now click "File" > "Exit" and save the file when prompted to do so. In SQL Server you may now bring the database back online. We'll re run the page dump and check the results which are shown below;

 

Well, as we can see above, the record was modified in the anticipated location. Of course, the only part hosed here is the non clustered index, which is easily fixed by dropping and re-creating it. What does DBCC CHECKDB show us?

 

Now you have a corrupt database which you may use for your DR and script tests. Give this a go in your test systems and by all means post back if you're stuck. 
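If you want to run the check and the subsequent fix yourself, a minimal sketch using the object names from the script above would be;

-- Report the corruption in full
DBCC CHECKDB (Corrupt2K8) WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- As only a non clustered index is damaged, drop and re-create it rather than running a repair option
DROP INDEX IDX_NoddyDate ON dbo.NoddyTable;

CREATE NONCLUSTERED INDEX IDX_NoddyDate
ON dbo.NoddyTable(NoddyDate);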

Credits

Information on these undocumented procedures was digested from Paul Randal's blogs at the following links

http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/12/13/more-undocumented-fun_3a00_-dbcc-ind_2c00_-dbcc-page_2c00_-and-off_2d00_row-columns.aspx

http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/08/09/692806.aspx

http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/06/10/625659.aspx

Warnings

Obviously you should not perform this on your production databases\servers; complete all tests in your offline environments. I will not be held responsible for those who wish to apply this in online environments ;-)



Using sp_change_users_login to fix orphaned logins

April 14, 2012 13:25 by author pezzar

Picture the scenario: you've moved the enterprise's mission critical database from one SQL Server instance to another. You bring the database online and all looks good. Alas, however, you have not created the server logins that provide the necessary database access. The boss wants to know why the application server cannot connect to the database; she wants heads to roll unless the application is back up and available in the next 2 minutes!!

 

Luckily, up to and including SQL Server 2012 you have a system stored procedure which handles the mapping of new and existing logins to database users. This procedure is deprecated in future versions of SQL Server but is still useful in older versions. How does it work?

Firstly, the stored procedure sp_change_users_login is only used for resolving orphaned SQL Server logins. It takes a number of parameters which differ based upon the action to be performed. The syntax is as follows;

sp_change_users_login [ @Action = ] 'action'
[ , [ @UserNamePattern = ] 'user' ]
[ , [ @LoginName = ] 'login' ]
[ , [ @Password = ] 'password' ]

Valid actions are;

Auto_Fix - this will take the username and password parameters only. If the procedure finds an existing login with the same name the password will be ignored and the user account mapped. If the login does not exist it will be created with the specified parameters.

Report - this action takes no parameters; execute the procedure under your chosen database context and it will report any orphaned users within that database. The accounts are determined as orphaned if the unique SIDs do not match between the server level login and the database user; the name is not used here. You can check this by comparing the catalog views sys.server_principals in the master database for the server login and sys.database_principals inside each user database for the database user. The report returns the orphaned user name and SID.
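A minimal sketch of that comparison (run in the user database) is shown below; it lists SQL users whose name matches a server login but whose SIDs differ, along with users that have no matching login at all;

SELECT dp.name AS DatabaseUser, dp.sid AS DatabaseSID, sp.sid AS LoginSID
FROM sys.database_principals dp
LEFT JOIN sys.server_principals sp ON sp.name = dp.name
WHERE dp.type = 'S'                     -- SQL users
AND dp.principal_id > 4                 -- skip dbo, guest, INFORMATION_SCHEMA, sys
AND (sp.sid IS NULL OR sp.sid <> dp.sid);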

Update_One - this action is used to specifically map a database user back to an existing server level login, the login must exist.

  

Back to our scenario: you've contacted the application administrator and he's given you the account password from the web.config file. How do you proceed? We have the password and we know the login name, which is the same as our database user, so we issue the following query;

exec sp_change_users_login 'Auto_Fix', 'Jon', NULL, 'P@ssw0rd1'

 

The login is supplied as a NULL parameter as this will be created from the user name parameter; if the new login already existed, the password parameter would have been ignored. Now, the more astute amongst you will have noticed that the server login and the database user don't have to have the same name, although possibly they should do for clarity. The default for the procedure using Auto_Fix will create a login of the same name; to map the user to a login of a different name, create the login manually using CREATE LOGIN and then map the account. For example, to map the database user Jon to the new server login Bob, use the following query;

 exec sp_change_users_login 'Update_One', 'Jon', 'Bob'

 

As an alternative to our scenario, the junior DBA, in a panic, has manually created a new server login and is having trouble mapping the login to the database as the user already exists. Fortunately you catch him right before he drops the database user and avoid losing all the granted permissions. You do this using the following query;

exec sp_change_users_login 'Update_One', 'App_User', 'App_User'

 

Great, the application is back online, the boss declares you a hero and your peers are holding you high on a pedestal. And to think, you very nearly panicked and started dropping users from the database. Next time, you'll script the accounts from the source server and ensure they exist on the target server, hopefully!