Tuesday, 19 July 2022

Cycling The Error Logs



Error Logs? Surely you don't have errors? :)

Well, SQL Server records pretty much everything, so the logs can get a bit unmanageable. If there are a million entries in your log file, it takes a long time to open it and see what is happening. So I have a two-step Agent job that runs each week - DBAdmin Cycle Error Logs:

Step 1: execute dbo.sp_cycle_errorlog
Step 2: execute dbo.sp_cycle_agent_errorlog

SQL Server retains six error logs by default, and although you can configure it to retain more, I can't imagine a reason for wanting to look at an error from more than six weeks ago.
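If you do want more than six, the retained-log count can be changed in T-SQL as well as through SSMS. A sketch, using xp_instance_regwrite to set the same NumErrorLogs value that SSMS writes when you right-click SQL Server Logs and choose Configure (this is an undocumented procedure, so treat with the usual caution):

```sql
-- keep 12 error logs instead of the default 6
-- this writes the instance's NumErrorLogs registry value -
-- the same setting SSMS changes under SQL Server Logs > Configure
use master;
go
exec xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs',
     REG_DWORD,
     12;
go
```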

However, on some of my servers, my job fails - Step 1 works, Step 2 fails with an access denied message:

Executed as user: Domain\Server_Agent. SQLServerAgent Error: Access is denied. [SQLSTATE 42000] (Error 22022). The step failed.

I noticed that the error log is actually being cycled; the error message relates to the system attempting to write to the new log to record what it has done.
In my case, SQL Server logs its messages to:
'E:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Log\ERRORLOG'.

So, as so often with weird stuff in SQL Server, it's a permissions issue.

I granted the SQL Agent account List folder contents permission all the way down to:
E:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL
then set Modify on the Log folder.

I ran the job again, it failed again, argh!

And here’s the crucial bit that I didn’t do at first:
restart the SQL Server Agent service.

I ran the job again, and it worked.

Friday, 15 May 2020

How to Kill a Database

It's not very likely that you will need to get rid of databases on the production side, but certainly development boxes can be full of junk - test databases, experiments that didn't work out, projects that got cancelled... So you just drop the junk DB, right?

Wrong...

Get the business owner to confirm in writing before you do this stuff, and be prepared to retrieve it in case they change their mind.  This is pretty much the opposite of business as usual, so beware.

1) You can make a database read-only.  It is no longer in use, but the users might want to look at historical data sometimes.  The users can still see their data, but they can't change anything.  Bear in mind that if an application tries to record something like date_last_accessed, it won't work.  Consider running optimization processes, such as a full UPDATE STATISTICS, before changing the status to read-only.




-- update statistics before going read-only
use Vanilla
go

exec sp_updatestats



-- to make a database read only
alter database Vanilla set read_only with rollback immediate;
go

-- and back to normal
alter database Vanilla set read_write;
go

The rollback immediate option will cope with anyone who is in the middle of working on the database (the one they told you was no longer in use).  Of course, it may be a process rather than a person.  You may also need to make the database single-user:

-- if other people are using it, take control
alter database Vanilla set single_user with rollback immediate;

--and give it back
alter database Vanilla set multi_user;





2) You can take a database offline.  This is a step beyond read only; the users can't see it, but you can.  If it turns out that they need it after all, it's usually the work of a moment to bring it back online.  If they don't, you're one step closer to getting rid of it.

Taking a database offline is a wise precaution to take before deleting it.  Tell the helpdesk to let you know if anyone screams.  Wait for a month or two, maybe three, depending.  Put a note in Outlook to remind you in three months, otherwise it will stay offline forever.  If you are using SQL Server 2005, you may need to tinker with your maintenance plans - the option to ignore offline databases only came in with 2008.

-- Take the Database Offline
alter database Vanilla set offline with rollback immediate;
go

-- Bring the Database back Online
alter database Vanilla set online;
go




3) You can detach a database.  It's still there, but you can't see it or get it back without a bit of effort.   If the file naming isn't routine, it might be very fiddly indeed.

Out of sight, out of mind - once you have detached a database "temporarily" you can't see it unless you go on the box and browse to the folder(s) where the files live.  So before you detach the Vanilla database, run this:

execute sp_helpdb Vanilla

 This will show you where the files live; copy the results into an Outlook task and mark it for attention a couple of weeks hence. Something like this:


name         fileid  filename                           filegroup  size     maxsize    growth  usage
Vanilla      1       E:\MSSQL.1\MSSQL\DATA\Vanilla.MDF  PRIMARY    7808 KB  Unlimited  10%     data only
Vanilla_Log  2       F:\MSSQL\Logs\Vanilla_Log.LDF      NULL       1024 KB  Unlimited  10%     log only
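On later versions, sys.master_files gives the same information in a shape that's easier to copy out; a sketch (size is stored in 8 KB pages, hence the arithmetic):

```sql
-- where do the Vanilla files live? run this before detaching
select name,
       type_desc,                    -- ROWS or LOG
       physical_name,                -- full path to the file
       size * 8 / 1024 as size_mb    -- size is in 8 KB pages
from sys.master_files
where database_id = db_id('Vanilla');
```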

There's a fair chance that you or your predecessor forgot to take this precaution some time in the last 10 years.  This might mean that the server (especially, as I say, if it's a dev box) is cluttered with long forgotten detached databases.

The script below looks for databases which are not attached to the current instance.  Unfortunately if you have other instances on the box, it will detect those databases too.  (If you can tell me how to fix that, I'll be very grateful). But apart from that, it's very good at finding potentially forgotten files which are taking up space that you can use.

It uses xp_cmdshell, which is probably disabled by default; enable it beforehand and disable it again afterwards for security reasons.  (Be careful, though - if it was already enabled, you don't want to disable something that other processes depend on.)  Then the script creates a temporary table, populates it with a list of .mdf files, and compares the files it finds with the list of those that are attached.


-- enable xp_cmdshell
-- first, show advanced options
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO

---- enable xp_cmdshell
sp_configure 'xp_cmdshell', 1;
GO
RECONFIGURE;
GO

-- based on http://www.sqlservercentral.com/Forums/Topic6166-5-1.aspx
-- create temporary table
create table #temp_mdf_files
(
      full_filename varchar(200)
)

-- populate the temp table with any MDF files found
insert #temp_mdf_files
       exec xp_cmdshell 'dir c:\*.mdf /s/b'

insert #temp_mdf_files
       exec xp_cmdshell 'dir d:\*.mdf /s/b'

insert #temp_mdf_files
       exec xp_cmdshell 'dir e:\*.mdf /s/b'


select
       -- exclude the subdirectory name
       upper(reverse(substring(reverse(full_filename ), 1,charindex('\', reverse(full_filename ) )-1) )) As MDF_FileName,
       full_filename
from
       #temp_mdf_files
where
       --exclude rows which contain system messages or nulls
       full_filename like '%\%'
      
      
and --exclude system databases
       upper(reverse(substring(reverse(full_filename ), 1,charindex('\', reverse(full_filename ) )-1) ))
not in
       ('DISTMDL.MDF', 'MASTER.MDF', 'MODEL.MDF', 'MSDBDATA.MDF' , 'MSSQLSYSTEMRESOURCE.MDF', 'TEMPDB.MDF' )


and    -- exclude files that belong to databases attached to this instance
       upper(full_filename)
not in
       (select upper(filename) from sys.sysdatabases)

-- -- SQL Server 2000
--     not in (select Upper(FILEname) from sysdatabases) 
order by MDF_FileName



-- Housekeeping
drop table #temp_mdf_files



-- tidy up: disable xp_cmdshell again
-- show advanced options
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO

-- disable xp_cmdshell
sp_configure 'xp_cmdshell', 0;
GO
RECONFIGURE;
GO

-- hide advanced options
sp_configure 'show advanced options', 0;
GO
RECONFIGURE;
GO



4) You can drop a database.

You're sure?   Really sure?  


Not if it's offline, you can't - or rather, you can, but it isn't a clean kill.  The database disappears from SSMS, but the physical files are still lurking on disk; dropping an offline database removes it from the catalog without deleting the files.  In fact, I suspect that's why I keep finding detached databases to housekeep: someone dropped a database while it was offline, so the files never got deleted and it lived on as a detached database.  Bring it back online before you drop it.




And take a backup before you drop it.  Zip up the backup and save it in your Final Backups folder. 
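Putting the send-off together - a sketch, assuming the database is called Vanilla and that X:\FinalBackups (a made-up path; substitute your own) is where final backups go:

```sql
-- bring it back online first, otherwise the files are left behind
alter database Vanilla set online;
go

-- one last backup, with a checksum so we know it is readable
backup database Vanilla
to disk = N'X:\FinalBackups\Vanilla_final.bak'
with checksum, init;
go

-- kick everyone out, then drop it
alter database Vanilla set single_user with rollback immediate;
go
drop database Vanilla;
go
```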

Monitoring Mirroring

Since Mirroring is now deprecated in favour of Always On, I think it is a minority sport these days.

But it works perfectly well, especially if you haven't got Always On.  In a perfect world, we would all upgrade to the latest version as soon as it comes out and take advantage of all the whizzy new features.  But we don't, of course.

Anyway, monitoring mirroring.  A bit of a tongue twister, that.

Assuming you have set up mirroring already, right-click on the database you want to monitor, select Tasks, and choose Launch Database Mirroring Monitor.

Click on the Warnings tab.  Do this for each of the mirrored databases.

Initially, it doesn't monitor anything.  Tucked away in the bottom right of the screen is a button marked Set Thresholds.  Click on it and tick all the boxes.

Microsoft sets these defaults:
Unsent log exceeds 1 KB
Unrestored log exceeds 1 KB
Oldest unsent transaction exceeds 1 minute
Mirror commit overhead exceeds 1 millisecond

I would recommend starting with these and seeing how it goes.  You will almost certainly end up upping those defaults, but it probably isn't my place to tell you what to do.
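The threshold tick-boxes can be scripted too, which is handy when you have a lot of mirrored databases. A sketch using msdb's sp_dbmmonitorchangealert, assuming a mirrored database called Vanilla - the alert IDs and units below are as I understand them, so check them against the documentation for your version:

```sql
use msdb;
go
-- parameters: database name, alert id, threshold, enabled (1 = on)
exec sp_dbmmonitorchangealert Vanilla, 1, 1, 1;  -- oldest unsent transaction (minutes)
exec sp_dbmmonitorchangealert Vanilla, 2, 1, 1;  -- unsent log (KB)
exec sp_dbmmonitorchangealert Vanilla, 3, 1, 1;  -- unrestored log (KB)
exec sp_dbmmonitorchangealert Vanilla, 4, 1, 1;  -- mirror commit overhead (milliseconds)
go
```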

Remember to go to the SQL Agent jobs - find the Database Mirroring Monitor Job - and set it to notify you if it finds a problem.

Be sure to set up the alerts and the Agent Job at both servers, so that it continues to alert you in the event of a failover.