
February 21, 2025 15:57 by
TopDog
Welcome to this post on file growths and shrinking in SQL Server. In this post I would like to discuss file growths, sizes,
and shrinking, and how they impact daily use.
First the elephant in the room
File Shrinking
Not much to say here but don't, just don't!
It's very well documented that shrinking files, whether data or log, is a bad practice and totally unnecessary.
Typically, what I have seen over the years in my career is an administrator or DBA (junior or cross-platform) seeing
free space in a file and shrinking it, because the free space is viewed as an unnecessary use of storage.
We all know that for data files, shrinking will introduce heavy index fragmentation, which is not ideal, especially
when your maintenance jobs spend a lot of time keeping those indexes healthy. When it comes to log files, I've seen
scenarios where administrators have set up daily jobs to shrink the log (it will only grow again!).
If proper capacity planning has been performed, you will naturally have a set amount of free space within your files
to prevent constant growth placing undue stress on the storage subsystems. The files should be right-sized and grown
out to your expected 3-year size to provide sufficient storage for the incoming data.
For the transaction log, testing will show you the sweet-spot size required to prevent constant growth
and an unnecessary increase in the number of VLFs (virtual log files). If you know the space will be used, why remove it?
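As a quick sanity check on your log sizing, you can count the VLFs in the current database. The sketch below assumes SQL Server 2016 SP2 or later, where the sys.dm_db_log_info DMV is available:

```sql
-- Count the VLFs in the current database's transaction log.
-- A log that has grown in many small increments will show a
-- high VLF count, which can slow recovery and log operations.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());
```

If the count runs into the thousands, that is usually a sign the log grew in many small autogrowth events rather than being right-sized up front.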
This leads us nicely onto file growths and sizes.
File Sizes and Growths
Where large transaction logs are encountered, it's important to realise that the transaction log has grown to that size
as that is the space it requires at busy periods to handle the transaction rate between log backups. To help maintain a
more controlled log size consider increasing the log backup frequency during busy times.
For example, you may have your log backups run every 20 minutes throughout the day, but if you know that from
11pm to 1am you have a large daily import, it would make sense to add a new schedule that takes log backups every
5 minutes or so during that time window.
Your mileage will vary on this and you should test to find the sweet spot, but the flexibility is there if required.
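In practice this is just a second SQL Server Agent schedule attached to your existing log backup job. The statement it runs would look something like the sketch below; the database name and backup path are placeholders for illustration:

```sql
-- Log backup statement the busy-window schedule would execute.
-- [SalesDB] and the disk path are assumed example values.
BACKUP LOG [SalesDB]
TO DISK = N'X:\Backups\SalesDB_log.trn'
WITH COMPRESSION, CHECKSUM;
```

More frequent log backups during the import window mean the log is truncated more often, so it needs less space to absorb the same transaction rate.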
There is no need to continually shrink log files only for them to grow rapidly again when an import job or
index maintenance job launches. Furthermore, your storage admins will likely thank you for reducing the stress on the
storage subsystem, and may even be impressed that you have taken steps to protect it.
What's the way forward?
A positive start can be achieved by right sizing the storage and the database files by understanding the possible sizes
required for each. When implementing any application deployment, engaging with the vendor to obtain expected database
size information is the best route available. Microsoft also provide a database sizing tool to help with size estimations.
The Microsoft sizing calculations can be found at the following link:
Microsoft Database Size Calculations
For in-house developed applications, the sizing calculations are even more valuable and provide an opportunity to ratify
the database design by spotting columns that may benefit from a reduced/different data type to that which is proposed.
As an example, if a column is spotted as AddressLine1 and its data type is defined as varchar(MAX), there's a valid
reason to challenge this data type usage and push for something a little more appropriate.
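If you want to hunt for such candidates in an existing database, a query like the following sketch against the catalog views will list every (n)varchar(MAX) column; max_length = -1 is how SQL Server marks the MAX length internally:

```sql
-- List (n)varchar(MAX) columns that may warrant a tighter data type.
SELECT t.name  AS table_name,
       c.name  AS column_name,
       ty.name AS type_name
FROM sys.columns AS c
JOIN sys.tables  AS t  ON t.object_id      = c.object_id
JOIN sys.types   AS ty ON ty.user_type_id  = c.user_type_id
WHERE ty.name IN (N'varchar', N'nvarchar')
  AND c.max_length = -1   -- -1 indicates MAX
ORDER BY t.name, c.name;
```

Each row returned is a conversation to have with the developers: does this column genuinely need unlimited length, or would something like varchar(100) do?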
Be mindful of the database file sizes and track usage over time, and where autogrowth is used, set realistic fixed
growth increments to prevent regular unwanted growth events.
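Putting that into practice, you can set a fixed growth increment per file and review current size, space used, and growth settings from the catalog views. The database and logical file names below are assumed example values, and 512MB is just an illustrative increment; pick a value based on your own growth tracking:

```sql
-- Set a fixed growth increment instead of the default percentage growth.
-- [SalesDB] and the logical file name 'SalesDB_Data' are placeholders.
ALTER DATABASE [SalesDB]
MODIFY FILE (NAME = N'SalesDB_Data', FILEGROWTH = 512MB);

-- Review size, space used, and growth settings for each file
-- in the current database (size/growth are stored in 8 KB pages).
SELECT name,
       size * 8 / 1024 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;
</imports></imports>
```

Running the review query on a schedule and keeping the results gives you the usage trend needed to right-size files before autogrowth ever has to fire.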