Creating log tables to track running processes

In any SQL Server system, you will have jobs that run on a schedule or at specified intervals. For these jobs, it's worth tracking certain metrics over time, so when things go wrong you can compare runs and see how performance has changed. In my experience these logs are indispensable for troubleshooting and for running delta jobs. Here we'll show a small example of the log tables you can create to facilitate this.

First, let's look at the DDL for two tables:

[cc lang="sql"]
CREATE TABLE [dbo].[ProcessLogMaster](
    [process_log_master_id] [int] IDENTITY(1,1) CONSTRAINT [PK_process_log_master] PRIMARY KEY CLUSTERED NOT NULL,
    [process_master_name] [varchar](100) NOT NULL,
    [datetime_start] [datetime] NULL DEFAULT (GETDATE()),
    [datetime_end] [datetime] NULL,
    [elapsed_ms] [int] NULL,
    [rows_updated] [int] NULL,
    [rows_inserted] [int] NULL,
    [rows_deleted] [int] NULL,
    [complete] [tinyint] NULL DEFAULT ((0)),
    [success] [tinyint] NULL,
    [error_description] [varchar](max) NULL
);

CREATE TABLE [dbo].[ProcessLogDetail](
    [process_log_detail_id] [int] IDENTITY(1,1) NOT NULL CONSTRAINT [PK_process_log_detail] PRIMARY KEY,
    [process_log_master_id] [int] NOT NULL,
    [process_detail_name] [varchar](100) NOT NULL,
    [datetime_start] [datetime] NULL,
    [datetime_end] [datetime] NULL,
    [elapsed_ms] [int] NULL,
    [rows_updated] [int] NULL,
    [rows_inserted] [int] NULL,
    [rows_deleted] [int] NULL,
    [complete] [tinyint] NULL,
    [success] [tinyint] NULL,
    [error_description] [varchar](max) NULL
);
[/cc]

What we have here are two tables that describe job steps. The ProcessLogMaster table holds the master, or top-level, entry for the job. If the job has multiple steps that report back to the master, each step gets its own entry in the ProcessLogDetail table. We can then sum the rows updated / inserted / deleted from the detail rows up into the ProcessLogMaster table. You'll also notice that ProcessLogDetail carries process_log_master_id, which is logically a foreign key to ProcessLogMaster; we simply did not declare it as a constraint here, for performance reasons. If you did want to enforce it, the constraint would look like the sketch below.
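Here's what that optional constraint would look like (the constraint name itself is just an illustration):

[cc lang="sql"]
-- Optional: enforce the master/detail relationship.
-- Intentionally omitted above for performance reasons.
ALTER TABLE [dbo].[ProcessLogDetail]
    ADD CONSTRAINT [FK_process_log_detail_master]
    FOREIGN KEY ([process_log_master_id])
    REFERENCES [dbo].[ProcessLogMaster] ([process_log_master_id]);
[/cc]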

An example consists of creating an entry in these tables, usually from within your stored procedure, and then updating the row whose id corresponds to that portion of the job:

[cc lang="sql"]

CREATE PROCEDURE pr_example_proc_job
AS

/********************************************************************
* declarations
********************************************************************/
DECLARE @process_master_name varchar(100) = 'refresh_all_clients';
DECLARE @process_log_master_id int;
DECLARE @now datetime = GETDATE();

-- running totals gathered from the detail rows; initialized so the
-- CATCH block never writes NULLs if we fail before they are summed
DECLARE @total_rows_updated int = 0;
DECLARE @total_rows_inserted int = 0;
DECLARE @total_rows_deleted int = 0;

-- open the master log entry for this run
INSERT INTO ProcessLogMaster
(
    process_master_name,
    datetime_start
)
SELECT
    @process_master_name,
    GETDATE();

SET @process_log_master_id = SCOPE_IDENTITY();

BEGIN TRY
    -- this will create an entry in the ProcessLogDetail table
    EXEC [pr_example_detail_proc_1]
        @process_log_master_id = @process_log_master_id;

    -- this will create an entry in the ProcessLogDetail table
    EXEC [pr_example_detail_proc_2]
        @process_log_master_id = @process_log_master_id;

    -- gather detail logging
    SELECT
        @total_rows_updated  = ISNULL(SUM(rows_updated), 0),
        @total_rows_inserted = ISNULL(SUM(rows_inserted), 0),
        @total_rows_deleted  = ISNULL(SUM(rows_deleted), 0)
    FROM ProcessLogDetail
    WHERE process_log_master_id = @process_log_master_id;
END TRY
BEGIN CATCH
    -- close the master entry as complete but unsuccessful
    UPDATE ProcessLogMaster
    SET datetime_end      = GETDATE(),
        elapsed_ms        = DATEDIFF(ms, @now, GETDATE()),
        rows_updated      = @total_rows_updated,
        rows_inserted     = @total_rows_inserted,
        rows_deleted      = @total_rows_deleted,
        complete          = 1,
        success           = 0,
        error_description = ERROR_MESSAGE()
    WHERE process_log_master_id = @process_log_master_id;

    RETURN;
END CATCH

-- close the master entry as complete and successful
UPDATE ProcessLogMaster
SET datetime_end  = GETDATE(),
    elapsed_ms    = DATEDIFF(ms, @now, GETDATE()),
    rows_updated  = @total_rows_updated,
    rows_inserted = @total_rows_inserted,
    rows_deleted  = @total_rows_deleted,
    complete      = 1,
    success       = 1
WHERE process_log_master_id = @process_log_master_id;

[/cc]

As you can see, we sum the updates, inserts, and deletes from the ProcessLogDetail table and write the totals to the ProcessLogMaster row. Also, if there is an error, the job never gets marked success = 1, so if we are doing a delta process we can always restart from the last successful timestamp.
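The detail procedures themselves aren't shown above, but each one mirrors the master pattern: open a ProcessLogDetail row tied to the master id, do the work, then close the row. Here is a minimal sketch of what pr_example_detail_proc_1 might look like; the step name and the elided work are placeholders, not part of the original example:

[cc lang="sql"]
-- Hypothetical sketch of a detail step; only the logging
-- pattern is the point, the actual work is elided.
CREATE PROCEDURE pr_example_detail_proc_1
    @process_log_master_id int
AS

DECLARE @process_log_detail_id int;
DECLARE @now datetime = GETDATE();
DECLARE @rows_updated int = 0;

-- open the detail log entry, tied back to the master run
INSERT INTO ProcessLogDetail
(
    process_log_master_id,
    process_detail_name,
    datetime_start
)
SELECT
    @process_log_master_id,
    'refresh_client_addresses',  -- hypothetical step name
    GETDATE();

SET @process_log_detail_id = SCOPE_IDENTITY();

BEGIN TRY
    -- ... the actual work of this step goes here; capture
    -- @@ROWCOUNT immediately after each DML statement, e.g.:
    -- UPDATE ...; SET @rows_updated = @rows_updated + @@ROWCOUNT;

    -- close the detail entry as complete and successful
    UPDATE ProcessLogDetail
    SET datetime_end = GETDATE(),
        elapsed_ms   = DATEDIFF(ms, @now, GETDATE()),
        rows_updated = @rows_updated,
        complete     = 1,
        success      = 1
    WHERE process_log_detail_id = @process_log_detail_id;
END TRY
BEGIN CATCH
    -- close the detail entry as complete but unsuccessful
    UPDATE ProcessLogDetail
    SET datetime_end      = GETDATE(),
        elapsed_ms        = DATEDIFF(ms, @now, GETDATE()),
        complete          = 1,
        success           = 0,
        error_description = ERROR_MESSAGE()
    WHERE process_log_detail_id = @process_log_detail_id;

    -- rethrow so the master procedure's CATCH block fires
    -- (on SQL Server 2012+ you could use THROW instead)
    DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH
[/cc]

Rethrowing at the end of the CATCH is what lets the master procedure mark the whole run as failed rather than silently continuing to the next step.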

Over the years I've found these tables to be a necessary part of keeping track of jobs. Not only do they help you view the duration of a job; for delta processes, you can easily get the last time a job succeeded and pick up from there.
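For example, a delta job can anchor its starting point on the last successful run (the job name here matches the example above; the source table and column in the final comment are hypothetical):

[cc lang="sql"]
-- find the high-water mark: when the last successful run started
DECLARE @last_success datetime;

SELECT @last_success = MAX(datetime_start)
FROM ProcessLogMaster
WHERE process_master_name = 'refresh_all_clients'
  AND success = 1;

-- then pull only the rows changed since that point, e.g.:
-- SELECT ... FROM SourceTable WHERE last_modified > @last_success;
[/cc]

Using datetime_start rather than datetime_end as the cutoff is the safer choice: rows modified while the job was running will still be picked up on the next pass.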

