Thursday 16 August 2018

Database Sizes and Growth Rates

I've been asked to write a few words about this thrillingly sexy topic.  Why?  Because I've been looking at some existing servers and finding quite a few issues which are Bill's Fault.

Microsoft thinks that a good initial size for a database is 1MB, and that it should grow in 1MB increments.  Logs should grow by 10%, which isn't utterly mad, but isn't a good idea either.  Since Microsoft sets these defaults, it's easy to set up a database.  Which is good.  But since they are lousy defaults, it's bad.
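
If you want to see which databases on a server are still sitting on those defaults, a quick look at sys.master_files will tell you.  This is only a sketch - growth is stored in 8KB pages unless it's a percentage, hence the arithmetic:

-- Show current size and growth setting for every database file on the instance
SELECT  DB_NAME(mf.database_id)  AS database_name,
        mf.name                  AS logical_name,
        mf.type_desc,
        mf.size * 8 / 1024       AS size_mb,
        CASE WHEN mf.is_percent_growth = 1
             THEN CAST(mf.growth AS varchar(10)) + ' percent'
             ELSE CAST(mf.growth * 8 / 1024 AS varchar(10)) + ' MB'
        END                      AS growth_setting
FROM    sys.master_files AS mf
ORDER BY database_name, mf.type_desc;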

I suppose these figures are there because there is no right answer to the question  "How big should these settings be then, Mr Smartypants DBA?". 

So what's the problem?

Well, I recently found a database which had grown from 500MB to 11071MB in 1MB increments.  That's a LOT of growth events - over ten thousand of them.  Each time it grows, it has to stop and think about it.  And if it has to grow ten thousand times, that's a lot of hanging around, twiddling your thumbs, waiting for it to think. 
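
You can actually see the recent growth events, and how long they took, in the default trace.  It rolls over, so it only holds recent history, and the database name below is made up:

-- Count recent autogrow events from the default trace (Duration is in microseconds)
DECLARE @tracepath nvarchar(260);
SELECT @tracepath = path FROM sys.traces WHERE is_default = 1;

SELECT  te.name                     AS event_name,
        COUNT(*)                    AS growth_events,
        SUM(t.Duration) / 1000000.0 AS total_seconds
FROM    sys.fn_trace_gettable(@tracepath, DEFAULT) AS t
JOIN    sys.trace_events AS te ON te.trace_event_id = t.EventClass
WHERE   te.name IN ('Data File Auto Grow', 'Log File Auto Grow')
AND     t.DatabaseName = N'MyBigDatabase'   -- made-up name
GROUP BY te.name;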

Hindsight is a wonderful thing.  I set that database to grow in 512MB increments.  Because I now know how big it is, and how big it was when it started. 
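
Changing the growth increment is a one-liner, by the way.  The names below are made up, so check sp_helpfile for the real logical file name first:

-- Made-up database and file names: set the data file to grow in 512MB chunks
ALTER DATABASE [MyBigDatabase]
MODIFY FILE (NAME = N'MyBigDatabase_Data', FILEGROWTH = 512MB);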

In a perfect world, you would carefully calculate how big the database is likely to be after a year or two of adding data, and size it to that size, plus maybe 10% for luck.  In the real world, you rely on SQL Server's capacity to grow when it needs to. 

So what's wrong with growing by a percentage? 

First answer:  Small database, growing by 10% each time, not a big issue.  Big database, growing by 10% each time, maybe 10GB every time it grows, is going to take a long time!
Second answer: Log files are composed of Virtual Log Files (VLFs) and ideally you don't want huge numbers of them, all different sizes - which they will be if they keep growing by 10% at a time. 
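
If you want to see how many VLFs a log has already collected, DBCC LOGINFO returns one row per VLF.  It's undocumented, but it has been around forever; the database name is made up:

USE [MyBigDatabase];   -- made-up name
GO
DBCC LOGINFO;          -- one row per VLF; thousands of rows means the log grew in tiny steps
GO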

And if you forced me to set a default growth rate without knowing anything about the database, I would choose 16MB for the data, 8MB for the logs.  Better than Microsoft's default values, anyway.  That's what I put in the MODEL database. 
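
Setting that in MODEL is just a couple of ALTER DATABASE statements.  The logical file names modeldev and modellog are the standard ones, but run sp_helpfile against model first if you want to be sure:

-- New databases inherit their file settings from model
ALTER DATABASE [model] MODIFY FILE (NAME = N'modeldev', FILEGROWTH = 16MB);
ALTER DATABASE [model] MODIFY FILE (NAME = N'modellog', FILEGROWTH = 8MB);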

Friday 10 August 2018

Login Failed on Named Instance



I have a SQL Server 2016 server – call it Gandalf

There are three instances – Default, Frodo, and Sam

Every day I check the error logs.  Every day I see that someone or something is trying to log in at 0200. 

Error: 18456, Severity: 14, State: 5. 
Login failed for user 'agent_default'. Reason: Could not find a login matching the name provided. [CLIENT: <local machine>]

And on Sam, Login failed for user 'agent_default' and Login failed for user 'agent_Frodo'.
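
If you don't fancy scrolling through the log looking for these, xp_readerrorlog will filter it for you.  Something like this pulls the 18456 errors out of the current log:

-- 0 = current log, 1 = SQL Server error log (2 would be the Agent log)
EXEC master.dbo.xp_readerrorlog 0, 1, N'Login failed', N'18456';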

Microsoft's syspolicy_purge_history job runs every night at 0200 on every instance.  It always succeeds.  But it is the only SQL Server Agent job that runs at 0200.  So it must be that, surely?  I change it to run at 0215.  Next day, the error log says that our mysterious friend attempted to log in at 0200. 
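
(For what it's worth, the 0215 change was just a nudge to the job's schedule with msdb.dbo.sp_update_schedule rather than clicking through SSMS.  The schedule name below is the default one, so adjust it if yours has been renamed:)

EXEC msdb.dbo.sp_update_schedule
     @name = N'syspolicy_purge_history_schedule',   -- default name; yours may differ
     @active_start_time = 21500;                    -- HHMMSS, i.e. 02:15:00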

Hmm, not that then.  Is it an automated scan from our white hat penetration testing server?  I wasted a lot of time trying to pin this down before I found confirmation that nope - it is not set up to look at Gandalf. 

And then suddenly I noticed that the login attempt had happened at 0215, not 0200.  It IS syspolicy_purge_history!  But it is the job on the OTHER instance which is causing the problem. 



1)      When the job runs on the Frodo instance, no issues.
2)      When the job runs on the Default instance, the error log on Frodo says the Default agent account failed to log in:
Error: 18456, Severity: 14, State: 5. 
Login failed for user 'agent_default'. Reason: Could not find a login matching the name provided. [CLIENT: <local machine>]
3)      When the job runs on the Sam instance, the error logs on Default and Frodo say
Login failed for user 'agent_default' and
Login failed for user 'agent_Frodo'


If you check out that job, Step 3 runs a chunk of PowerShell.  It gets the instance name and runs a task to erase records older than the specified number of days.

if ('$(ESCAPE_SQUOTE(INST))' -eq 'MSSQLSERVER') {$a = '\DEFAULT'} ELSE {$a = ''};
(Get-Item SQLSERVER:\SQLPolicy\$(ESCAPE_NONE(SRVR))$a).EraseSystemHealthPhantomRecords()

Except that it also looks on all the other instances.  And because I have a different service account for each instance, it fails to find a login - which is why the job succeeds on each instance, but the other instances record a failed login attempt!
It's a known bug with the PowerShell step, apparently.  And that link gives me the PowerShell code to fix it!

# Connect to this instance only, instead of letting the provider enumerate every instance on the machine
$applicationName = "SQLPS ($env:USERNAME@$env:COMPUTERNAME)"
$SQLServerConnection = New-Object System.Data.SqlClient.SqlConnection
$SQLServerConnection.ConnectionString = "Data Source=$(ESCAPE_NONE(SRVR));Initial Catalog=master;Integrated Security=SSPI;Application Name=$applicationName"
$PolicyStoreConnection = New-Object Microsoft.SqlServer.Management.Sdk.Sfc.SqlStoreConnection($SQLServerConnection)
$PolicyStore = New-Object Microsoft.SqlServer.Management.Dmf.PolicyStore ($PolicyStoreConnection)
# Purge the phantom system health records on the local instance only
$PolicyStore.EraseSystemHealthPhantomRecords()

Just cut and paste that into Step 3 of the syspolicy_purge_history job, and the mysterious errors stop.

How long does it take to create a database?


How long does it take to create a database? 
Well, it depends on how long your piece of string is, of course – how big the database is, how zippy your server is, and so on. 
But ideally, you want to create it ASAP. 

Does the database ever need to grow? 
Well, if you have sized it perfectly, it will never need to grow.  <Howls of derisive laughter>
OK, seriously, of course it does, and you want it to grow as quickly as possible too. 


By default, before it creates or grows a file, SQL Server pauses for a moment, maybe quite a few moments, to wipe out any existing data by writing zeroes all over the space it needs.  Which is a sensible precaution if you want to make absolutely sure that nobody can hack into your server and read stuff that you had previously deleted.  But if you are writing data over it anyway, that might be a bit excessively paranoid.  Or maybe you ARE excessively paranoid?  Perhaps you have enough security in place that you would like to consider trading that precaution for speed and performance.  Only you can decide that.  



Find a development server, and check the file location of an existing database, like this:

use dbadmin
go
sp_helpfile


Now create a database.  See how long it takes.  In my Dev environment, this ran in 3 minutes, 17 seconds:

USE [master]
GO


CREATE DATABASE [Fred] ON  PRIMARY
( NAME = N'Fred_Data', FILENAME = N'G:\data\Fred_Data.mdf' , SIZE = 102596608KB , MAXSIZE = UNLIMITED, FILEGROWTH = 0)
LOG ON
( NAME = N'Fred_Log', FILENAME = N'H:\logs\Fred_Log.ldf' , SIZE = 2621440KB , MAXSIZE = 2048GB , FILEGROWTH = 0)
GO


Okay,  three minutes for 100GB, not totally shabby. 

Let’s try it without writing the zeroes.  It's called Instant File Initialization, if you want to do any further reading - I like Brent Ozar's write-up.


It isn’t a box to tick or a script to run, I’m afraid.  You need to grant a permission and then restart the SQL Server service, so it will have to be done out of hours on Production.


First thing you need is the name of your service account:


-- 2008r2 onwards

SELECT service_account, * FROM sys.dm_server_services;


For example: GANDALF\sqldev_frodo


Open a Remote Desktop Connection to the server
Run SECPOL.MSC (Local Security Policy)
Open up the Local Policies
Click on User Rights Assignment
Scroll right down to Perform Volume Maintenance Tasks
Open it up and add the service account
Click on OK


Now you need to restart the SQL Server Service
Run SERVICES.MSC
Scroll down to the SQL Server services – there are probably a few of them
Stop the SQL Server Agent (MSSQLSERVER) service first
Then stop SQL Server (MSSQLSERVER)

Then start SQL Server (MSSQLSERVER)
And finally start SQL Server Agent (MSSQLSERVER)


Why in that order?  If you stop SQL Server (MSSQLSERVER), the Agent service should stop automatically.
If you restart SQL Server (MSSQLSERVER), it should stop the Agent, stop itself, and start again (though you have to remember to start the Agent yourself).


So why bother with the extra steps?
Because that is what SHOULD happen.  Sometimes the restart thing gets stuck.  This may be me remembering something that used to happen and has long since been fixed - senile old fool, stuck in his ways.  So feel free to try a restart.  Just don’t come whingeing to me if it doesn’t do what it should do, because I shall say I Told You So.
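
One more thing before re-running the test: if you want to confirm that Instant File Initialization has actually taken effect, recent builds report it in sys.dm_server_services (the column arrived in a service pack, so your patch level matters):

-- On patched builds, this shows whether IFI is enabled per service
SELECT servicename, service_account, instant_file_initialization_enabled
FROM sys.dm_server_services;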



Back to that script.  Drop the database, and run the CREATE again:


USE [master]
GO


DROP DATABASE [Fred]
GO



CREATE DATABASE [Fred] ON  PRIMARY
( NAME = N'Fred_Data', FILENAME = N'G:\data\Fred_Data.mdf' , SIZE = 102596608KB , MAXSIZE = UNLIMITED, FILEGROWTH = 0)
LOG ON
( NAME = N'Fred_Log', FILENAME = N'H:\logs\Fred_Log.ldf' , SIZE = 2621440KB , MAXSIZE = 2048GB , FILEGROWTH = 0)
GO


Before, this ran in 3 minutes, 17 seconds.

Now?  18 seconds. 

That’s a pretty good improvement.


Tuesday 7 August 2018

Tables with Spaces in the Name

Someone has created a table called Complaint Log
Not ComplaintLog or even Complaint_Log

I want to drop an index, and enclosing the table name in square brackets does the trick:

DROP INDEX [IX_Complaint_Log_SNP_Number] ON [dbo].[Complaint Log];
And of course I could do it with the GUI, just by pointing and clicking, all that stuff

But, I want to do it with a script and run it at 0300, and I want to check whether the index actually exists before my script tries to drop it. 

The normal thing would be to do something like this:

IF EXISTS(SELECT * FROM sys.indexes
          WHERE object_id = OBJECT_ID('dbo.Complaint Log')
          AND name = 'IX_Complaint_Log_SNP_Number')
DROP INDEX [IX_Complaint_Log_SNP_Number] ON [dbo].[Complaint Log];

But because of that pesky space, it doesn't work.  Oh, it says it does - it says
Command(s) completed successfully.
But it is lying - the index is not found, and it is not deleted
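
The reason, as far as I can tell, is that OBJECT_ID can't resolve a name containing a space unless the name is delimited, so it returns NULL and the EXISTS test quietly matches nothing.  You can see it for yourself:

SELECT OBJECT_ID('dbo.Complaint Log');     -- NULL: undelimited name with a space
SELECT OBJECT_ID('dbo.[Complaint Log]');   -- returns the object_id once the name is bracketed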

It would be very satisfying to hunt down the guy who created Complaint Log instead of Complaint_Log, but in these namby-pamby days of political correctness gone mad, torturing and killing Developers is frowned upon.

Stefan Spraakman gave me the answer - thanks Stefan!  I need to join sys.indexes with sys.objects and test on the name, like this:


IF EXISTS(SELECT * FROM sys.indexes AS i
          INNER JOIN sys.objects AS o ON o.object_id = i.object_id
          WHERE o.name = 'Complaint Log'
          AND i.name = 'IX_Complaint_Log_SNP_Number')
DROP INDEX [IX_Complaint_Log_SNP_Number] ON [dbo].[Complaint Log];