Tuesday, 23 October 2018

In Search of a Smarter Maintenance Plan

I wrote a blog post about Implementing Maintenance Plans a while back.  In it I suggested that if you have time to rebuild the indexes once a week, you should go ahead and do it. 

But what if you DON'T have time?

Indexes

I suggested rebuilding the indexes once a week, and the bog standard maintenance plan rebuilds every index, whether it needs it or not.  But if things are heavily fragmented, or there are a lot of indexes with millions of rows each, that can take an awfully long time.  Heavily fragmented indexes do hurt performance, though, so rather than skipping the job entirely, set up a Smart Rebuild / Reorganise job to run as part of the weekly maintenance plan.

Instead of the usual Rebuild Index task, grab the Execute T-SQL Statement task and add it to the maintenance plan.  You can restrict the run time with the Execution Time Out parameter - specifying 3600 seconds means that after an hour the task gives up, so it does not over-run the maintenance window.  Note that if it hits that time out the job fails (and will send an error message email if you have that set up), so I have put this step last.

Here's the script - I swiped it from Artemakis Artemiou - poli efcharisto (many thanks)!
The job identifies all the indexes with fragmentation levels above 15%
It starts with the worst and works down
Above 30% it rebuilds them
Between 15% and 30% it reorganises them
(There's a minimal sketch of this kind of script just below.)
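
To give you the flavour, here's a minimal sketch of that approach.  This is not Artemakis's actual script - the 15% and 30% thresholds come from the list above, and everything else (the cursor, the dynamic SQL, skipping heaps) is just illustration:

-- Minimal sketch of a smart rebuild/reorganise job (illustrative only).
DECLARE @SchemaName sysname, @TableName sysname, @IndexName sysname,
        @Frag float, @Sql nvarchar(max);

DECLARE IndexCursor CURSOR FAST_FORWARD FOR
    SELECT s.name, t.name, i.name, ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
    JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    JOIN sys.tables t  ON t.object_id = i.object_id
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    WHERE ips.avg_fragmentation_in_percent > 15
      AND i.name IS NOT NULL                           -- skip heaps
    ORDER BY ips.avg_fragmentation_in_percent DESC;    -- worst first, work down

OPEN IndexCursor;
FETCH NEXT FROM IndexCursor INTO @SchemaName, @TableName, @IndexName, @Frag;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Above 30% rebuild; between 15% and 30% reorganise
    SET @Sql = N'ALTER INDEX ' + QUOTENAME(@IndexName)
             + N' ON ' + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName)
             + CASE WHEN @Frag > 30 THEN N' REBUILD;' ELSE N' REORGANIZE;' END;
    EXEC sys.sp_executesql @Sql;
    FETCH NEXT FROM IndexCursor INTO @SchemaName, @TableName, @IndexName, @Frag;
END
CLOSE IndexCursor;
DEALLOCATE IndexCursor;

The real script has more bells and whistles (a page count filter is a common extra, so you don't churn through tiny indexes), but that's the shape of it.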

If the indexes are badly fragmented, you might want to run this daily, at least until they are under control.


Statistics
I also set up a job to update statistics.  Statistics can get out of date, and queries may be less efficient because of this. 

Updating statistics is a bit of a funny one.  If you rebuild an index, you get a nice new set of statistics for it.  But if we are not rebuilding the indexes, you have to rely on SQL Server to update them as it thinks fit, according to its own algorithm.  Sod's Law says that it won't update the ones you want it to update.  The query optimiser can ignore a perfectly good index because its statistics are out of date, and a query may take far longer to run because of this.
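
If you want to see how stale things actually are before setting up a job, sys.dm_db_stats_properties will tell you (assuming SQL Server 2008 R2 SP2 / 2012 SP1 or later) - this query is just an illustration:

-- How stale are my statistics?  Most-churned first.
SELECT OBJECT_NAME(s.object_id) AS TableName,
       s.name                   AS StatName,
       sp.last_updated,
       sp.modification_counter  -- rows changed since the last update
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY sp.modification_counter DESC;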

As before, I added a T-SQL task to the Daily tasks, last in the line.
I've given it a time limit of 5400 seconds (an hour and a half, to save you getting your calculator out), and it runs a stored procedure which I found here...
Thanks to Bill Graziano!

Note that Bill's script sets a time limit too, so I don't really need to set an execution timeout.
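
For flavour, here's a minimal sketch of a time-boxed statistics update in the same spirit.  To be clear, this is not Bill's script - it's an illustration that again assumes sys.dm_db_stats_properties is available, updating the most-churned statistics first and stopping at the 90 minute mark:

-- Time-boxed statistics update (illustrative sketch, not Bill's script).
DECLARE @Stop datetime2 = DATEADD(MINUTE, 90, SYSDATETIME());
DECLARE @Sql nvarchar(max);

DECLARE StatsCursor CURSOR FAST_FORWARD FOR
    SELECT N'UPDATE STATISTICS '
         + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + N'.'
         + QUOTENAME(OBJECT_NAME(s.object_id)) + N' '
         + QUOTENAME(s.name) + N';'
    FROM sys.stats s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
    WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
      AND sp.modification_counter > 0          -- only stale statistics
    ORDER BY sp.modification_counter DESC;     -- most churned first

OPEN StatsCursor;
FETCH NEXT FROM StatsCursor INTO @Sql;
WHILE @@FETCH_STATUS = 0 AND SYSDATETIME() < @Stop   -- stop when time is up
BEGIN
    EXEC sys.sp_executesql @Sql;
    FETCH NEXT FROM StatsCursor INTO @Sql;
END
CLOSE StatsCursor;
DEALLOCATE StatsCursor;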

There's an alternative to all of the above - the well-regarded maintenance scripts from Ola Hallengren.
A lot of people swear by them.
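
If you go down that road, the index and statistics work boils down to a single call to his IndexOptimize procedure, something like the example below.  The parameter names are from Ola's documentation, but do check the current version rather than trusting my memory:

-- Ola Hallengren's IndexOptimize: reorganise at 15-30%, rebuild above 30%,
-- stop after an hour, and update statistics as well.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLevel1 = 15,
    @FragmentationLevel2 = 30,
    @FragmentationMedium = 'INDEX_REORGANIZE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @TimeLimit = 3600,
    @UpdateStatistics = 'ALL';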

Wednesday, 17 October 2018

The Madness Continues


I have a script which identifies MDF files which do not appear in my list of databases.

The purpose of this is to find databases, usually on development boxes, which have been detached and forgotten but are still taking up space on the disk.  A detached database is of course invisible to SQL Server, unless you go poking around in the file system looking for it.
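
A minimal sketch of the idea looks like this - it assumes xp_cmdshell is enabled (often true on a development box), and the folder name is made up:

-- Find .mdf files on disk that SQL Server doesn't know about.
-- Assumes xp_cmdshell is enabled; the folder path is a made-up example.
DECLARE @Files table (FileName nvarchar(512));

INSERT INTO @Files (FileName)
EXEC master.sys.xp_cmdshell 'dir /b "F:\User Databases\*.mdf"';

SELECT f.FileName
FROM @Files f
WHERE f.FileName IS NOT NULL
  AND NOT EXISTS (SELECT 1
                  FROM sys.master_files mf
                  WHERE mf.physical_name LIKE '%' + f.FileName);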

What I usually do is (see the sketch after this list):
i)   attach them again
ii)  take a final backup, just in case they actually are needed
     (not very likely, but sod's law says if you don't do this, they will suddenly become critical)
iii) drop the thing
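
For a single orphan, that tidy-up boils down to three statements.  The database name and paths here are made up, and ATTACH_REBUILD_LOG is handy because the .ldf often wasn't kept:

-- i) attach (made-up names and paths; rebuilds the log if the .ldf is gone)
CREATE DATABASE OldDev
    ON (FILENAME = 'F:\User Databases\OldDev.mdf')
    FOR ATTACH_REBUILD_LOG;

-- ii) final backup, just in case
BACKUP DATABASE OldDev
    TO DISK = 'H:\Backups\OldDev_final.bak' WITH INIT;

-- iii) drop the thing
DROP DATABASE OldDev;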

But today, I failed at the first hurdle:

CREATE FILE encountered operating system error 32
(The process cannot access the file because it is being used by another process)

SQL Server error 5123 tells me it's a problem attaching the database.

Huh?  It's a detached database file, isn't it?  
What could possibly be using the detached file?

Some utter maniac had renamed the .ldf file as .mdf:

name          fileid  filename                                  filegroup  size      maxsize        growth   usage
GenDraft      1       F:\User Databases\GenDraft_20180925.mdf  PRIMARY    40960 KB  Unlimited      8192 KB  data only
GenDraft_log  2       G:\Logs\GenDraft_20180925.mdf            NULL       12352 KB  2147483648 KB  4096 KB  log only