How To Use SQL Server FILESTREAM Feature For Large Databases

Written by
Mark Varnas

Do’s and don’ts

What is the SQL Server FILESTREAM feature?

This feature allows you to add unstructured documents to the database. You can store images, PDFs, videos, or whatever files you want in SQL Server. The files live on the file system, but SQL Server manages them, so you get transactional consistency between the files and your relational data.
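For context, a FILESTREAM column is just a varbinary(max) column backed by the file system. A minimal setup sketch (all names and paths here are illustrative, and FILESTREAM must already be enabled at the instance level):

```sql
-- Database with a dedicated FILESTREAM filegroup.
-- The FILESTREAM FILENAME is a directory; its parent folder must exist.
CREATE DATABASE DocStore
ON PRIMARY
    (NAME = DocStore_data, FILENAME = 'D:\Data\DocStore.mdf'),
FILEGROUP DocStoreFS CONTAINS FILESTREAM
    (NAME = DocStore_fs, FILENAME = 'D:\FSData\DocStore')
LOG ON
    (NAME = DocStore_log, FILENAME = 'D:\Data\DocStore.ldf');
GO

-- A FILESTREAM table needs a uniquely constrained ROWGUIDCOL column.
CREATE TABLE dbo.Documents
(
    DocId    UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL
             CONSTRAINT PK_Documents PRIMARY KEY
             DEFAULT NEWSEQUENTIALID(),
    FileName NVARCHAR(260) NOT NULL,
    Content  VARBINARY(MAX) FILESTREAM NULL
);
```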

We have worked with a large FILESTREAM SQL Server implementation, and I will show how this feature can cause a lot of problems – and what you can do about them.

Let’s start with a description of the environment, so you have a bit of context:

  • Data size – 50TB+
  • Brick-and-mortar business
  • Operating hours M-SAT 8AM-6PM, across all US time zones. Closed on SUN
  • What are we storing in FILESTREAM DB: documents (PDF, MS Word files), pictures (jpg, jpeg, gif, bmp, etc.)
  • What did SQL Server installation look like?
    1. 4-node geo-distributed SQL Server failover cluster
      • Another server for Disaster Recovery (DR)
      • About 15 user databases. Largest one under 2TB
      • All FILESTREAM data resided in a separate database, outside the user databases
    2. Storage: SAN with three levels of disk speeds
      • Fast – SSD
      • Medium – 15,000 RPM drives
      • Slow – 7,500 RPM drives

When is MS SQL FILESTREAM a good solution?

  1. If the data size 3-5 years out will still be under 1TB – FILESTREAM can probably work.
  2. If you need POINT IN TIME restore of the stored documents or images.
  3. If you store small files (under 1MB each). Check the FILESTREAM whitepaper to confirm – there is a file size threshold mentioned there.

If all true – go for it!

Not all the things you will read below can be blamed on SQL Server alone.

Some were terrible database administration.

Some were a business problem – the business kept doubling every 9 months, and no one planned for that kind of data explosion.

Some of the coding could have been better.

The SAN and some of the networking left things tangled up, too.

Some people tried their best and screwed things up more.

The point is – this solution has sharp edges, and if you’re not careful, you will cut yourself.

Problems and some solutions with SQL Server FILESTREAM

Backups

  1. DB backup duration.
    As the FILESTREAM database grew, we could no longer take a full database backup fast enough. Backup duration started running longer than 24 hours; soon even 48 hours was not enough. In the end, the FULL backup would start Friday at 6 PM and still be running on Monday morning.
    • Fix: speed up the SQL backup. Splitting the backup into 32 files helped. Changing BUFFERCOUNT, BLOCKSIZE, and MAXTRANSFERSIZE helped too. Check more info here.
    • How do you tune your backups for speed, and how do you find the best numbers for backup file count, BUFFERCOUNT, BLOCKSIZE, and MAXTRANSFERSIZE in your situation? Testing. We ran many tests.
      We started one parameter (BUFFERCOUNT) at a very low value and worked our way up, measuring and recording the results until we found the optimal value.
      Then we moved on to the next parameter (BLOCKSIZE), found its optimal value, and moved on to the next. You get the point.
      When every parameter had an optimal value, we combined them all. I can’t remember the exact numbers, but it cut backup duration significantly – by 30-40% or so.
  2. Backup size.
    Backup size became hard to manage. Because of the way things were designed, we had to use FULL backup.
  3. Dumb backups.
    That’s my “technical term”. It means each full backup included data going 10 years back. It’s dumb to keep backing up data that never changes, right? But getting to a place where we could back up only recent data took some work – a little more about that later.
  4. SAN storage.
    Since this was a cluster with shared storage and limited SAN space, we started running into SAN capacity problems.
    You not only have to store the large DB – you also need room for at least two FULL backups. And the data doesn’t compress well, because what’s being stored is all PDFs, JPGs, etc., which are already compressed. So now we need the DB size times three… at least.
    Add differential and transaction log backups, and you can see that a 50TB DB easily requires 150-200TB of SAN storage. Not a luxury everyone has floating around.
  5. Lower environments.
    Restoring the FILESTREAM database to DEV, QA, and UAT environments became impossible. Not only because the time it takes is close to “forever”, but also because lower environments usually don’t have that many terabytes just waiting around for you.
    • Fix – instead of backup and restore to refresh DEV, QA, and UAT, we would deploy a DB built from T-SQL scripts.
    • Then we would run another script to populate each table with 100,000 records of documents. That way, instead of 50TB, we needed only 1GB.
  6. SAN speed.
    This is not a SQL Server issue, but the SAN was grossly misconfigured. The throughput numbers were terrible. At a few points, transfer speed was slower than USB 2.0 (less than 45MB/s). Yeah…
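For illustration, a striped, tuned backup along the lines described above looks like this. The parameter values are hypothetical starting points, not the ones we ended up with – find your own by testing, as described:

```sql
BACKUP DATABASE DocStore
TO  DISK = 'X:\Backup\DocStore_01.bak',
    DISK = 'X:\Backup\DocStore_02.bak',
    -- ...continue the stripe set up to 32 files...
    DISK = 'X:\Backup\DocStore_32.bak'
WITH
    BUFFERCOUNT     = 64,       -- number of I/O buffers; tune by testing
    BLOCKSIZE       = 65536,    -- physical block size, in bytes
    MAXTRANSFERSIZE = 4194304,  -- largest unit of transfer (4MB is the max)
    STATS = 5;                  -- progress message every 5%
```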

If you are still here, you may still be considering FILESTREAM for your project. Keep reading – by the end, you may not like this feature much.

Partitioning

  1. Backing up data 10y old.
    At some point, we realized that the only data that gets modified is documents stored in the last 90 days. If we could partition the data into separate filegroups (FG), we could potentially back up ONLY the last 90 days – not the 10 years of data. Now we’re talking!
  2. Hello SQL Partitioning!
    • We partitioned the data, splitting it into one filegroup (FG) per year. We set those filegroups to READ_ONLY and backed them up once. This covered the majority of the storage.
    • We left a live partition for the last 90 days – and could now move that FG onto a faster SAN tier, because it no longer needed an insane amount of space.
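A simplified sketch of that layout (all names are illustrative, and a real FILESTREAM table also needs a matching FILESTREAM partition scheme via the FILESTREAM_ON clause):

```sql
-- One filegroup per year; the last one holds the live data.
CREATE PARTITION FUNCTION pfDocsByYear (DATETIME2)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2016-01-01', '2017-01-01');

CREATE PARTITION SCHEME psDocsByYear
AS PARTITION pfDocsByYear TO (FG2014, FG2015, FG2016, FG_Current);

-- Freeze a finished year and back it up once:
ALTER DATABASE DocStore MODIFY FILEGROUP FG2015 READ_ONLY;
BACKUP DATABASE DocStore
    FILEGROUP = 'FG2015'
    TO DISK = 'X:\Backup\DocStore_FG2015.bak';

-- Day to day, back up only what can still change:
BACKUP DATABASE DocStore READ_WRITE_FILEGROUPS
    TO DISK = 'X:\Backup\DocStore_rw.bak';
```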

A few partitioning minuses and pluses:

  • Minus: More complex.
  • Minus: You now need to pay double attention that the READ_ONLY backups don’t get corrupted, because if they do, we may lose our full backup.
  • Minus: You have to use SQL Server Enterprise edition. But that’s the edition we were already using, so no problem there.
  • Plus: You can use SQL Server database partial availability (ignore that the doc is about SQL 2005 – most of it is still relevant and shows how things work today). Partial availability is awesome: if we ever have to restore PROD from backup, we can restore the PRIMARY filegroup (FG) plus the FG with the last 90 days of data, and the DB is available for transactions. Restoring the older data into those READ_ONLY FGs can continue in the background while PROD hums along.

But before we could implement partitioning, we had to UNDO someone else’s failed partitioning attempt.

Antivirus and DB corruption

When we started working on this FILESTREAM DB, one of the first things we did was run DBCC CHECKDB.

It found about 40 corruption errors – errors that could not be fixed with SQL commands.
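For reference, the check itself is the standard one (database name illustrative); ALL_ERRORMSGS makes sure you see every corruption error, not just the first few:

```sql
DBCC CHECKDB (N'DocStore') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```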

We ended up calling Microsoft Support, and their suggestion was to export all the data into a new DB and throw the old one out afterward.

– Umhhh… Yeah… that would take about, umhh… forever… Thanks, but let’s leave that as the final option.

I restored this corrupted DB onto another server and tested a couple of things. And I was able to fix the corruption!! HA!

Basically, the issue was that FILESTREAM stores the files on disk, in weird directory structures. When records were deleted in the DB, the corresponding files on disk were not – which made SQL Server think the DB was corrupt.

All I had to do was figure out how to convert a corrupted record into a directory path (no, I don’t remember how to do it now), then go to the disk and delete that path. I re-ran DBCC CHECKDB, and bam! One less error.

After doing that another 39 times, they were all gone!

There was a strong suspicion that antivirus was involved, so we reconfigured the antivirus exclusions.

RAM usage

Another problem with FILESTREAM was how the application retrieved the data.

If the app runs a SELECT statement to get a document and the file size is 5MB, all 5MB goes into the SQL Server buffer pool – which means RAM.

Do that for a ton of queries (5MB times 500 requests) and you’re talking about a ton of RAM wasted. Not only that, each 5MB document kicks genuinely useful data out of RAM.

Recommendation: DO NOT USE SELECT to get your files. Use the Win32 streaming APIs from the app instead. That bypasses the buffer pool, and it’s faster!
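The T-SQL side of that streaming access looks roughly like this (table and column names are illustrative). The query returns a file path and a transaction token, and the app hands both to OpenSqlFilestream (or SqlFileStream in .NET) to read the file directly from the file system:

```sql
-- Must run inside a transaction; outside one,
-- GET_FILESTREAM_TRANSACTION_CONTEXT() returns NULL.
BEGIN TRANSACTION;

SELECT  Content.PathName()                    AS FilePath,
        GET_FILESTREAM_TRANSACTION_CONTEXT()  AS TxContext
FROM    dbo.Documents
WHERE   DocId = @DocId;   -- @DocId supplied by the app

-- The app opens FilePath via the Win32 streaming API using TxContext,
-- reads the bytes, then the transaction is committed.
COMMIT;
```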

Some of the data in FILESTREAM DB was pure junk

This is not a SQL problem. It’s poor app development.

It turns out the app did not restrict users to uploading only allowed data types, like images or PDFs. And the app had no size limit.

So we found EXE files, 300MB files, MP3s, and other junk uploaded.
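Validation like this belongs in the app, but even a database-side backstop would have helped. A hypothetical sketch (table, column, and constraint names are illustrative):

```sql
-- Reject anything that isn't one of the expected document/image types.
ALTER TABLE dbo.Documents WITH CHECK
ADD CONSTRAINT CK_Documents_AllowedExt
CHECK (FileName LIKE '%.pdf'  OR FileName LIKE '%.docx'
    OR FileName LIKE '%.jpg'  OR FileName LIKE '%.jpeg'
    OR FileName LIKE '%.gif'  OR FileName LIKE '%.bmp');
```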

AlwaysOn

Midway through all these changes, we had to migrate to a new data center. A few things were done so as not to waste the opportunity.

  • We moved away from a pure Windows Failover Cluster with shared storage to AlwaysOn.
  • Upgraded the SQL Server version.
  • Applied all the best practices for hardware, OS, and SQL Server setup.

For AlwaysOn, we could not put this FILESTREAM DB into the same availability group as the user databases.

It had to be its own AG. Which means the user DBs will not always be on the same server. This creates a problem, because we had queries reaching from the user DBs into the FILESTREAM DB.

I’m not going to get into how we solved this, but we did. The point is – it required a workaround. Because of the damn FILESTREAM monster.

AlwaysOn can use local storage. That’s cool. What’s not cool: I now have to have disk space for this large sucker on every server. Four replicas, plus a DR site – that’s 5X the FILESTREAM DB space. Plus the backups. Plus the disks are RAID’ed, which multiplies everything again. A TON of space is required.

Popularity of FILESTREAM

  1. This is not a well-adopted SQL Server feature. Finding answers online was challenging.
  2. There is only a single FILESTREAM whitepaper.
  3. There is only a single book about FILESTREAM. Look how many reviews the book has: one! And it’s a good book. That should tell you how few companies ever adopted this feature!

DB growth projections

At some point, we decided to look at growth projections. The DB was doubling in size every 9-11 months.

Umhhh…. Wait… what? You’re saying that next year this problem will be twice as big?

Damn… I’d better leave this ticking bomb for someone else. Yes, that thought crossed my mind! 😊

The only way to deal with this is to re-architect.

Conclusion

There are very few use cases for the SQL Server FILESTREAM feature.

There are a lot of ways that storing files inside SQL Server can hurt.

The only way to get unlimited growth is NOT to store files inside SQL Server.

In our case, the only way out was to re-architect.

Article by
Mark Varnas
Founder | CEO | SQL Veteran
Hey, I'm Mark, one of the guys behind Red9. I make a living performance tuning SQL Servers and making them more stable.
