It's a bad thing to do. Brent Ozar says so, and I believe him. Really, it's a bad idea.
Imagine the stock control database.
You buy widgets, you sell widgets
100 widgets into stock
80 widgets out
No need to free up the disk space, because the day after, you are going to add another hundred widgets.
And SQL Server doesn't free up the space. It just sits there waiting for more widgets.
There's a but coming.
But suppose you need to free up space? In my situation, I purged the database of two million old records into an archive DB. Now I have an archive DB taking up 150 GB. The original DB contains only 10,000 records (the last 15 days) but it still takes up 150GB, with 140GB free. There's no room on the disk, and the Infrastructure guys aren't keen on giving me any more because it is an old physical box and it would mean them actually doing some work for a change.
No problem, I can do this
I set up a maintenance plan:
Doing it as a maintenance plan lets me specify a timeout for each step, which saves me the task of coding it.
Step 1 - Shrink it in manageable chunks
Rather than try to shift all that 140GB of empty space in one go, I want to shift a bit at a time, rather like a batch process. Anil Kumar sets out a way to do it, and I unashamedly stole his code.
How big a chunk? Doesn't really matter - 512MB seemed like a good figure. The point is that it shrinks, then shrinks, then shrinks again. If it fails for some reason, e.g. hits the timeout, then you have at least done a few of the chunks.
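The chunked-shrink idea looks roughly like this. This is a sketch based on the approach described above, not Anil Kumar's exact code; the file name N'FileName' and the target size are placeholders you would swap for your own:

```sql
-- Shrink a data file towards a target size in 512MB chunks.
-- Assumptions: run in the database being shrunk; 'FileName' is the
-- logical file name from sys.database_files.
DECLARE @CurrentSizeMB INT,
        @TargetSizeMB  INT = 10240,   -- final size we want, in MB
        @ChunkMB       INT = 512;     -- how much to shrink per pass

SELECT @CurrentSizeMB = size / 128    -- size is in 8KB pages, so /128 = MB
FROM   sys.database_files
WHERE  name = N'FileName';

WHILE @CurrentSizeMB > @TargetSizeMB
BEGIN
    SET @CurrentSizeMB = @CurrentSizeMB - @ChunkMB;
    IF @CurrentSizeMB < @TargetSizeMB
        SET @CurrentSizeMB = @TargetSizeMB;

    -- Each pass only has to move 512MB worth of pages,
    -- so a timeout part-way through still leaves progress behind.
    DBCC SHRINKFILE (N'FileName', @CurrentSizeMB);
END
```

The point of the loop is exactly what the text says: if the step times out, the chunks already done stay done.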
Step 2 - Release Space
Just in case the space hasn't gone already, belt and braces:
DBCC SHRINKFILE (N'FileName' , 0, TRUNCATEONLY)
Step 3 - Smart Rebuild
All that shrinking is going to screw up your fragmentation levels something 'orrible.
Bad, really bad, for performance.
If Brent Ozar catches you, he will not be a happy bunny
So we need to Rebuild or Reorganize the indexes.
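The "smart" part is choosing between the two based on how fragmented each index actually is. A sketch of that logic, using the usual rule-of-thumb thresholds (reorganize above 5% fragmentation, rebuild above 30% - these numbers are convention, not gospel):

```sql
-- Reorganize moderately fragmented indexes, rebuild badly fragmented ones.
-- Assumption: run in the database whose indexes you want to fix.
DECLARE @sql NVARCHAR(MAX);

DECLARE fragcur CURSOR LOCAL FAST_FORWARD FOR
    SELECT N'ALTER INDEX ' + QUOTENAME(i.name)
         + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
         + CASE WHEN ips.avg_fragmentation_in_percent > 30
                THEN N' REBUILD;'       -- heavy fragmentation: rebuild
                ELSE N' REORGANIZE;'    -- moderate: cheaper reorganize
           END
    FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
    JOIN   sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    JOIN   sys.objects o ON o.object_id = ips.object_id
    JOIN   sys.schemas s ON s.schema_id = o.schema_id
    WHERE  ips.avg_fragmentation_in_percent > 5
    AND    ips.index_id > 0;            -- skip heaps

OPEN fragcur;
FETCH NEXT FROM fragcur INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM fragcur INTO @sql;
END
CLOSE fragcur;
DEALLOCATE fragcur;
```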
Right then. So I fire it off, it runs, step 1 times out as expected, the other two steps run, all well and good.
Except that the database hasn't shrunk. Not one MB less.
I spent a lot of time with my friend Google, and I see lots of people are having similar issues, hence this blog, which I hope will help others.
I set up a bare minimum job to shrink the database a little bit:
DBCC SHRINKFILE (N'FileName' , 180176)
I set it running, and went home. Came in next day, and it had succeeded - two hours 45 minutes, and it shaved 6.5GB off the database! (That IS a little bit, when you have a 190GB database.) And that is a much longer time than I allowed for in my timeout period. I reduced the target and ran it again - again it succeeded, this time removing 10GB in 03:37:40.
I'm not sure why the shrink-in-manageable-chunks code didn't work; it's on my list to troubleshoot. But in the meantime I can live with a rough and ready shrink that works.
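While one of these multi-hour shrinks is ticking away, a couple of queries I find useful for keeping an eye on it. A sketch, assuming you run it in the database being shrunk:

```sql
-- 1. How much free space is left in each file?
SELECT name,
       size / 128 AS size_mb,
       size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS INT) / 128 AS free_mb
FROM   sys.database_files;

-- 2. Is the shrink actually doing anything? A running DBCC SHRINKFILE
--    shows up in sys.dm_exec_requests with a percent_complete figure.
SELECT session_id, command, percent_complete, start_time
FROM   sys.dm_exec_requests
WHERE  command LIKE 'Dbcc%';
```

Watching percent_complete at least tells you whether it is worth leaving the job to run overnight again.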