Thursday, 12 February 2015

Modern Jobs - the Process Development Coordinator

Our company Intranet had a job post today.  I have not the faintest idea what a Process Development Coordinator does. 

So I read the Job Description

This is a varied role, working on a premium brand which is still in growth phase. The account is open 7 days per week between the hours of 0800-2100 and you will be rostered rotationally across those hours, however the majority of your hours will be worked 9-5:30 Monday to Friday. This is a high profile role with lots of client contact and requires the personal qualities of self-motivation, learning orientation, analytical thinking, process orientation and patience.

Candidates must have a demonstrable understanding of training.

The main purpose of this role is:
 
·  Undertake ongoing root cause analysis to identify opportunities for process improvement
·  To work with the clients to ensure consistency of process and approach across both sites
·  To provide recommendations around achieving a demonstrable improvement in challenging external quality measures
·  Manage the Training Associate team workflow
·  Support operations to achieve high standards to enhance the customer experience and increase productivity
·  Share in the operational workflow and customer facing duties.
 
 
Required abilities
·  Committed to the delivery of an exceptional level of customer service
·  Excellent communication skills
·  Ability to pay close attention to detail
·  Natural ability to inspire, motivate and energise others
·  Shows respect to others in a positive manner and builds strong working relationships
·  Strong team player and role model
·  Enthusiastic, positive, resourceful and resilient
·  PC literate


I'm still none the wiser.  Am I getting old?

Tuesday, 10 February 2015

The Missing Sundays Mystery

One of our key customers sends us a datafile every day, and it automatically gets loaded.  Except sometimes - it doesn't...

<Cue theme from "The Twilight Zone">

So I wrote some code to see when data was being loaded.  Sure enough, it works every day of the week, but on the Sunday between Christmas and New Year - it failed to load.  The following Sunday - it worked fine!  But since then, it failed every Sunday.  Only on Sunday.  Then on Monday, a double dose of data gets loaded. 
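The check itself was nothing fancy - something along these lines, though the table and column names here are invented for illustration:

```sql
-- Hypothetical load-audit table: one row per successful file load
SELECT DATENAME(weekday, LoadDate) AS LoadDay,
       COUNT(*)                    AS FilesLoaded
FROM   dbo.FileLoadLog             -- illustrative name, not the real table
WHERE  LoadDate >= '2014-12-01'
GROUP  BY DATENAME(weekday, LoadDate)
ORDER  BY MIN(DATEPART(weekday, LoadDate));
```

Grouping by weekday is what made the pattern jump out - every day accounted for except Sunday, with Monday counting double.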

Now - I didn't tinker with the database - it's an Oracle system, loaded by a cron job under Unix.  So a long way over to the Dark Side.  I don't even have access to the box it runs on - even if I wasn't afraid to touch it, I couldn't tinker.

And the developers swear blind that they haven't mucked about with the application for months...

I looked in the FTP site - there was a litter of .tmp files, apparently some sort of by-product of the loading, renaming and moving process.  No clues there though, and removing them made no difference. 

Luckily the client was able to send us a log showing us what files were sent and at what date and time.  Helpfully, he highlighted the missing Sundays in yellow, and all became clear.  The ones that worked were sent to us at various times ranging from about 0500 to 0700.  The ones that failed were sent to us at various times between 0700 and 0800.
Guess when the Cron job runs?

The job to send us the data is automated - but runs when other things finish, hence the variety of times.  And Sundays?  "That's the day we bounce our servers..."
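I never saw the crontab on the Unix box, but the behaviour is exactly what you'd get from an entry like this (the time fits the evidence; the path is invented):

```
# min  hour  dom  mon  dow  command
0      7     *    *    *    /opt/loads/process_customer_file.sh
```

A file that lands at 06:55 gets picked up at 07:00; a file that lands at 07:05 sits there until the next day's run - which is why Monday got a double dose.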

Wednesday, 15 January 2014

Untangle Database Names



Generally, when you set up a database, you call it 'SALESDB' or whatever, and that's the end of it.  The logical file names will default to SALESDB and SALESDB_log, and the physical names on disk to SALESDB.mdf and SALESDB_log.ldf.  But over the years those names can get changed.  In my case I am in the middle of migrating databases from one system to another, and part of that involves changing the names which were created by a company which was taken over - you could consider it as airbrushing traces of the old system out of existence, I couldn't possibly comment.

I found a good explanation in this MSSQLTip.    In their case, the project had changed and the old name was no longer relevant.  It could be a typo that irritates you enough to do something about it.  It could be that you are anally retentive enough that, like me, you like your database names to be consistent.  

Consistent?  Well, consider - the database name that you see every day isn't necessarily the underlying logical name of the database files.  Still less can you be sure that it is the name of the physical .mdf and .ldf files on disk.   Here's the Official word.  But what this means is that there can be three names - the name of the database, the logical name of the database files, and the physical name of the files on the disk.  They can all be different if you want them to be.

Caution - that way lies madness...

What I'll try to do in this post is untangle these various names and explain how to change them to be consistent.  SQL Server is reasonably easygoing as to what you call your database - but be sensible about it.  

First of all, get details of the database. The stored procedure sp_helpdb provides quite a lot of information about the database, (more than is shown here) but the results below are what we are interested in.  The name in square brackets is the name of the database.  Column 1 contains the logical names of the two database files.  Column 3 contains the physical names of the .mdf and .ldf files as they are found on the disk. Save this somewhere handy like an Outlook Task - you'll want to refer to it later. 


exec sp_helpdb [DQTHL]


DQTHL                 1              E:\MSSQL\Data\DQTHL.mdf
DQTHL_log          2              F:\MSSQL\Log\DQTHL_log.ldf


Let's all dance around singing hallelujah! Everything is consistent - the logical and physical names all match the database name.

If it was always so, there would be no point to this blog.  But it ain't like that all the time.  In a sense, it doesn't really matter - SQL Server would tolerate my next example, even though the database name doesn't match the logical file names, and the physical file names are something entirely unrelated.   It keeps track of what things are called and where they are.  The problem comes when a human being gets involved - maybe you want to do some housekeeping, and discover that the names don't tie up.  Humans are not good at telling which file belongs where, especially if there are lots of them, all with similar names. 

exec sp_helpdb [DQTHL]
  
DQ_Data                1              E:\MSSQL\Data\OldSales.mdf
DQ_log                  2              E:\MSSQL\Log\OldSales_log.ldf



OK, first step is to make sure that nobody else is doing something with the database.
You might get away without bothering, but you know what Sod's Law is like...

-- get exclusive access
alter database  [BEC_BASES]
SET SINGLE_USER WITH ROLLBACK IMMEDIATE
Go


It's easy to change the database name - just right-click on it and Rename is one of the options.  There's also a stored procedure you can use:


EXEC sp_renamedb 'old_name' , 'new_name'
 

However, I have a feeling that this is deprecated (meaning that Microsoft will wait until the most inconvenient time for you and then drop it).  Use Alter Database instead:

--Change DATABASE name
ALTER DATABASE  [BEC_BASES]
       MODIFY NAME = [BASES]
Go


Then we need to change the logical name of the files.  Remember we did sp_helpdb a few minutes ago?  This will give you the current names.  Almost always you will have an MDF and an LDF file, but you could also have one or more NDF files as well, and in fact you can use any suffix you want. I came across a database where one of my predecessors had renamed the LDF file as MDF, so that there were two MDF files on the disk and no LDF files.

Caution - that way lies madness...

-- Change Logical name
ALTER DATABASE [BASES] MODIFY FILE (NAME=N'BEC_BASES', NEWNAME=N'BASES')
GO
ALTER DATABASE [BASES] MODIFY FILE (NAME=N'BEC_BASES_log', NEWNAME=N'BASES_log')
go


Then we want to change the names of the physical files.  I used to Detach the database, change the file names, and then re-attach, and that's fine, but I think this way is actually easier.

Let go of the database (or you won't be able to do the next bit):
use master
go


Take the database offline
ALTER DATABASE BASES SET OFFLINE
GO



Tell it the file names that you are going to use:
ALTER DATABASE BASES MODIFY FILE (NAME = BASES, FILENAME = 'E:\MSSQL\Data\BASES.mdf')
GO
ALTER DATABASE BASES MODIFY FILE (NAME = BASES_log, FILENAME = 'F:\MSSQL\Log\BASES_log.ldf')
GO

The files are modified in the system catalog. The new path will be used the next time the database is started.

Now go to File Explorer or however you like to carry out operating system tasks like Rename.  I suppose in theory that you could do this with xp_cmdshell, but you would have to remember to enable it and then disable it again afterwards - more trouble than it's worth.
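If you're curious, the xp_cmdshell version would go something like this (the old file names are illustrative), which I think rather proves the point about it being more trouble than it's worth:

```sql
-- Enable xp_cmdshell (it is off by default, for good reason)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- Rename the files (the database must already be offline)
EXEC xp_cmdshell 'ren "E:\MSSQL\Data\BEC_BASES.mdf" BASES.mdf';
EXEC xp_cmdshell 'ren "F:\MSSQL\Log\BEC_BASES_log.ldf" BASES_log.ldf';

-- And remember to lock the door behind you
EXEC sp_configure 'xp_cmdshell', 0;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
```

Doing it by hand in File Explorer is quicker and leaves nothing switched on by mistake.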

Navigate to the data folder and rename the MDF file, then to the log folder and rename the LDF file. 


Put the database back online
ALTER DATABASE BASES SET ONLINE
GO

Open up the access to everyone (remember you made it single user?)
ALTER DATABASE [BASES]
       SET MULTI_USER

And just to be sure that you have done everything right, run sp_helpdb again and check the results against your original:
exec sp_helpdb [BASES]


BASES                 1              E:\MSSQL\Data\BASES.mdf
BASES_log          2              F:\MSSQL\Log\BASES_log.ldf


We've untangled it!  Let's all dance around singing hallelujah!



Monday, 16 December 2013

The Curse of SQL Server Embedded Edition


Help!  The database is writing a log file which has filled up drive C of server XYZMGT02!

Huh?  That isn’t one of our database servers – in fact I’ve never even heard of it!  Not only that, I don’t even have permission to log onto it!  Nuffink to do with me, guv!

It turned out that there was a database involved, sure enough, which is why the DBA team got called.  But it wasn't something that we had ever set up.  Windows Server Update Services or WSUS downloads updates from Microsoft and sends them out to the computers in the corporate network.  It runs under a freebie cut-down version of SQL Server called Embedded Edition - SSEE for short - and not unlike Express Edition, when you want to manage it, the things you need have more often than not been disabled.

The underlying problem in this case was that normally, updates get distributed to the network and can then be purged from the WSUS system.  But if for some reason a computer on the network is unavailable, that update cannot be delivered, and therefore it is not purged.  Drive F:\ which contains the WSUS data had filled up.  And then the software writes a message in the log on Drive C to say something like:
“Could not allocate space for object 'dbo.tbXml'.'PK__tbXml__0E6E26BF' in database 'SUSDB' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup."

53,881 error messages - all but a dozen say that. Keep on writing that message for long enough, and you fill up 10 GB of Drive C, which then grinds to a halt, bringing the whole server down.
Now in an ideal world I would have configured that log so that it gets located somewhere else - drive D has twice the space on it, and even if it filled right up, it wouldn't give the server heart failure.  But as far as I can tell, there is no way to change the destination drive - the edit option has been disabled.  Alternatively I might get SQL Server to send an email message to the WSUS administrator - but email has been disabled too. 

Hmm, tricky.  Let's think about those error logs for a minute.  By default, SQL Server carries on writing an error log until it gets restarted - which might mean forever.   This can mean that the error log gets very large indeed, and slow to open if you ever want to have a look at the contents.   So on most of the servers I work with, I like to create a new log every month, by setting up an agent job to run this:
exec master..sp_cycle_errorlog

exec msdb..sp_cycle_agent_errorlog

That's one for the error log, and one for the agent error log - which of course doesn't exist in SSEE (duh, because it has been disabled).

Again by default, SQL Server keeps the current log, plus the six previous logs.  This seems very sensible  - you are probably never going to want to check further back than six months.  And you can change that default if you do.  
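On a normal instance, changing that default is a right-click on SQL Server Logs in SSMS and choosing Configure.  If you'd rather script it, this writes the same setting to the registry - try it on a test box first:

```sql
-- Keep 12 archived error logs instead of the default 6
USE master;
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs', REG_DWORD, 12;
GO
```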

But in this case we don't have room on the disk to save all that stuff, and since every error message is in effect identical, we don't really care.  So what I did was set up a scheduled task to cycle the error logs daily.   So it retains the error messages for the past seven days, and then slings them.  

A scheduled task is a Windows option, and not nearly as flexible as SQL Server Agent - but if you can't use Agent , it can come in handy.  

So - I created a folder called scripts on drive C.  
I created a text file called Cycle_Errorlog.sql which contains exec master..sp_cycle_errorlog
 
I created a text file called Cycle_Errorlogs.bat which changes to Drive C, goes to the correct directory, and runs SQLCMD with the SQL script above.  Notice that the connection string to the embedded edition is a bit weird - full details here

C:
cd\Program Files\Microsoft SQL Server\90\Tools\binn\
sqlcmd -E -S \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query -i "c:\scripts\Cycle_Errorlog.sql"


And I set up a scheduled task to run the batch file daily.
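You can click through the Task Scheduler GUI for that, or create it from an administrator command prompt; something like this (the task name and run time are my choice):

```bat
schtasks /create /tn "Cycle SSEE Error Log" /tr "c:\scripts\Cycle_Errorlogs.bat" /sc daily /st 02:00 /ru SYSTEM
```

Running it as SYSTEM saves worrying about password expiry on a service account.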

Three months on, WSUS is still filling up Drive F with updates that can't be deployed, the WSUS Administrator is tearing his hair out, but drive C has plenty of room, and the server isn't crashing.