The disk is the slowest part of the server.
Proper storage configuration is critical to the optimal performance and operation of SQL Server.
Below are some of the most common best practices for SQL Server disk configuration that Microsoft recommends to improve database performance, reliability, and security.
Disk hardware
Over the past five decades, Hard Disk Drives (HDD) have made regular improvements in reliability, capacity, and speed.
Solid-state drives (SSD) are faster and use less energy than HDDs.
Over the last decade, they have become available at capacities and prices suitable for server use.
To improve reliability, use enterprise-class SSDs, which provide significantly better I/O performance and are far less prone to errors.
File system
SQL Server performs most of its I/O in 8 KB or 64 KB chunks and performs best when the storage that holds the database files is formatted with a 64 KB allocation unit (cluster) size.
You can check a volume's cluster size by running the command below from an elevated command prompt (run as administrator); just change the letter E to your drive letter.
fsutil fsinfo ntfsinfo E:
In the output, look at the Bytes Per Cluster value; 65536 means the volume is using a 64 KB allocation unit size.
Changing the allocation unit size is not easy after the fact, but if you can get a maintenance window, it's worth making the change.

A simple way to do this is to add a new drive, format it correctly (64 KB allocation unit size), move the SQL Server files to it, and then retire the old drive; repeat for the next drive, and so on.
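If you move the files with T-SQL rather than detach/attach, the general pattern looks like the rough sketch below; the database name (SalesDB), logical file names, and paths are placeholders, not a prescription.

-- Point SQL Server at the new locations on the correctly formatted drive (placeholder names/paths).
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Data, FILENAME = 'F:\SQLData\SalesDB.mdf');
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Log,  FILENAME = 'F:\SQLLogs\SalesDB_log.ldf');

-- Take the database offline, copy the physical files to the new paths at the OS level,
-- then bring it back online so it opens the files from the new drive.
ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
-- (copy the .mdf/.ldf files here)
ALTER DATABASE SalesDB SET ONLINE;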
If you are using a SAN, you should also check partition alignment against the LUN boundaries. In the case of a virtual machine, consider this file-system layer as well.
Data storage
Separate the data (MDF), log (LDF), and tempdb files onto different physical drives.
Make sure they are all kept off the operating system drive (C:), even if that means putting them into another partition on the same hardware.
Also, never put backups on the same disk as the live databases: if that disk fails, you lose both the databases and the backups.
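To confirm where every data, log, and tempdb file currently lives, you can list them from sys.master_files; for example:

-- List every database file, its type, and the drive/path it sits on.
SELECT DB_NAME(database_id) AS database_name,
       type_desc,                -- ROWS (data) or LOG
       name AS logical_name,
       physical_name
FROM sys.master_files
ORDER BY database_name, type_desc;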
Data files and tempdb
Data files and tempdb usually have more random access than log files, though that depends on the workload. So, when possible, use RAID 1+0 to improve write performance.
Filegroups and data files
For starters, you want multiple filegroups (FGs), each containing multiple files.
Ideally, those files are all on separate drives, which makes reading, writing, and indexing data faster, since read/write operations can be multithreaded and multiple physical disks can share the work.
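As a rough sketch (the database, filegroup, file names, sizes, and drive letters below are placeholders), adding a filegroup with files spread across two drives looks like this:

-- Create a new filegroup and spread its files across two physical drives.
ALTER DATABASE SalesDB ADD FILEGROUP FG_Current;

ALTER DATABASE SalesDB ADD FILE
    (NAME = SalesDB_Current_1, FILENAME = 'F:\SQLData\SalesDB_Current_1.ndf', SIZE = 10GB)
TO FILEGROUP FG_Current;

ALTER DATABASE SalesDB ADD FILE
    (NAME = SalesDB_Current_2, FILENAME = 'G:\SQLData\SalesDB_Current_2.ndf', SIZE = 10GB)
TO FILEGROUP FG_Current;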
If some data is stale and rarely changes (for example, older historical data), it can be moved into a separate filegroup, set to READ_ONLY, and backed up once.
Then you only have to back up the latest data, which reduces backup duration, backup sizes, restore times, and so on.
This strategy also lets you place older data on slower, cheaper storage.
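A minimal sketch of that archiving step, assuming the old data has already been moved into a filegroup called FG_Archive (a placeholder name):

-- Mark the archive filegroup read-only (requires exclusive access to the database),
-- then take one final filegroup backup of it.
ALTER DATABASE SalesDB MODIFY FILEGROUP FG_Archive READ_ONLY;

BACKUP DATABASE SalesDB
    FILEGROUP = 'FG_Archive'
    TO DISK = 'X:\Backups\SalesDB_FG_Archive.bak';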
In case of disaster, you want the least amount of data possible inside the primary filegroup, because you can't bring the database online until the primary is recovered.
Therefore, it’s a good idea to keep the primary as small as possible.
That way, during a disaster, you can bring the primary filegroup online first, then the current year's data, and at that point your database is live. Meanwhile, older data can be restored in the background without affecting availability or users even knowing about it.
This also makes restores to lower environments such as DEV, QA, and UAT smaller and faster, since you no longer have to restore the full data set.
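This is what SQL Server calls a piecemeal restore. A rough sketch, assuming the full recovery model and that FG_Archive was backed up after it became read-only (all names and paths are placeholders):

-- Bring the database online with only the critical filegroups first.
RESTORE DATABASE SalesDB
    FILEGROUP = 'PRIMARY', FILEGROUP = 'FG_Current'
    FROM DISK = 'X:\Backups\SalesDB_Full.bak'
    WITH PARTIAL, NORECOVERY;
RESTORE LOG SalesDB FROM DISK = 'X:\Backups\SalesDB_Log.trn' WITH RECOVERY;
-- PRIMARY and FG_Current are now online and usable.

-- Later, in the background, restore the older data without taking the database down.
RESTORE DATABASE SalesDB
    FILEGROUP = 'FG_Archive'
    FROM DISK = 'X:\Backups\SalesDB_FG_Archive.bak'
    WITH RECOVERY;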
Other considerations
- Review the database’s autogrowth settings and the internal fragmentation of the transaction log (the VLF count); a quick check is sketched below.
- Consider enabling instant file initialization (also checked below) and using trace flags to control specific behaviors in SQL Server.
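A few quick checks for these items; the VLF and instant file initialization queries assume a reasonably recent version (roughly SQL Server 2016 SP2 and later), where sys.dm_db_log_info and the instant_file_initialization_enabled column exist.

-- Autogrowth settings for every file; percent growth or tiny fixed increments are red flags.
SELECT DB_NAME(database_id) AS database_name, name, growth, is_percent_growth
FROM sys.master_files;

-- VLF count for the current database's transaction log.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());

-- Is instant file initialization enabled for the database engine service?
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;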