I've been asked to write a few words about this thrillingly sexy topic. Why? Because I've been looking at some existing servers and finding quite a few issues which are Bill's Fault.
Microsoft thinks that a good initial size for a database is 1MB, and that it should grow in 1MB increments. Logs should grow by 10%, which isn't utterly mad, but isn't a good idea either. Since Microsoft sets these defaults, it's easy to set up a database. Which is good. But since they are lousy defaults, it's bad.
I suppose these figures are there because there is no right answer to the question "How big should these settings be then, Mr Smartypants DBA?".
So what's the problem?
Well, I recently found a database which had grown from 500MB to 11071MB in 1MB increments. That's over ten thousand growth events. Each time it grows, it has to stop and think about it, and whatever needed the space has to hang around, twiddling its thumbs, waiting for it to finish. Ten thousand pauses add up.
Hindsight is a wonderful thing. I set that database to grow in 512MB increments. Because I now know how big it is, and how big it was when it started.
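If you want to do the same, it's a one-line fix. The database and logical file names below are placeholders for illustration - check your actual logical file name with `sp_helpfile` first:

```sql
-- Assumed names; find the real logical file name with: EXEC sp_helpfile
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 512MB);
```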
In a perfect world, you would carefully calculate how big the database is likely to be after a year or two of adding data, and create it at that size, plus maybe 10% for luck. In the real world, you rely on SQL Server's capacity to grow when it needs to.
So what's wrong with growing by a percentage?
First answer: for a small database, growing by 10% each time is not a big issue. For a big database, 10% might mean 10GB every time it grows, and that is going to take a long time!
Second answer: Log files are composed of Virtual Log Files (VLFs) and ideally you don't want huge numbers of them, all different sizes - which they will be if they keep growing by 10% at a time.
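If you want to see how bad a log has got, you can count the VLFs yourself. On SQL Server 2016 SP2 and later there's a DMV for it; on older versions, `DBCC LOGINFO` returns one row per VLF. The database name here is a placeholder:

```sql
-- One row per VLF; a count in the thousands is a warning sign
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'MyDatabase'));

-- On versions before 2016 SP2, use this instead (one result row per VLF):
-- DBCC LOGINFO (N'MyDatabase');
```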
And if you forced me to set a default growth rate without knowing anything about the database, I would choose 16MB for the data, 8MB for the logs. Better than Microsoft's default values, anyway. That's what I put in the MODEL database.
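Putting those values into MODEL is another pair of one-liners, and every new database inherits them. The logical file names `modeldev` and `modellog` are the out-of-the-box defaults for the model database, but it's worth confirming them with `sp_helpfile` before running this:

```sql
-- Default logical names for model; verify with: EXEC model..sp_helpfile
ALTER DATABASE model MODIFY FILE (NAME = N'modeldev', FILEGROWTH = 16MB);
ALTER DATABASE model MODIFY FILE (NAME = N'modellog', FILEGROWTH = 8MB);
```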