We had one or more of our log files increase in size over night, causing our SQL logs disk to almost fill up. I've been asked to look at what could have caused this.

Can anyone suggest where to start please? I have rough times when logs increased.

  • Were any jobs running overnight (either with business logic or maintenance, like an index rebuild)? Was there some unusual, one-off activity (e.g. synchronizing a lot of data from elsewhere)? Commented Aug 21 at 11:16
  • Are the databases in FULL or SIMPLE recovery mode? If FULL, has the log backup schedule worked fine during the period?
    – Ronaldo
    Commented Aug 21 at 11:27
  • Hi - thanks for your replies. We have several jobs that run overnight, but no new jobs. These include: Backups (Azure), DBATools_Stat_Collection (runs during the day also), IndexOptimise for User_databases, and Syspolicy_purge_history. Everything else is either run during the day only or not scheduled. The databases are in FULL recovery mode and full/log backups have been running successfully.
    – AngryDog
    Commented Aug 22 at 14:07
  • Were you able to find the cause of the increased file size?
    – Ronaldo
    Commented Oct 28 at 13:31

5 Answers

Check log_reuse_wait_desc in sys.databases:

SELECT name, log_reuse_wait_desc FROM sys.databases;

The appropriate fix will depend on the reason reported.

https://www.brentozar.com/archive/2016/03/my-favorite-system-column-log_reuse_wait_desc/

https://techcommunity.microsoft.com/t5/azure-database-support-blog/troubleshooting-high-log-utilization-due-to-active-transaction/ba-p/3955837
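If log_reuse_wait_desc reports NOTHING because the growth already happened, it can also help to sample current log size and usage while you investigate. A minimal sketch using sys.dm_db_log_space_usage (available from SQL Server 2012; it reports on the database you are connected to):

```sql
-- Current log size and usage for the database you are connected to
SELECT DB_NAME() AS database_name,
       total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS used_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```

Running this periodically (e.g. from an Agent job that logs the results to a table) lets you correlate the growth with a time window.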

  • SELECT name, log_reuse_wait_desc FROM sys.databases; — all databases (except model, which is on a different disk) display NOTHING.
    – AngryDog
    Commented Aug 22 at 14:15
  • You should query it while the log is growing, not while it is reusing existing space.
    – SergeyA
    Commented Aug 22 at 14:44

You can use fn_dblog() to hunt it down. Group by [Transaction ID] and take the first few with the highest counts; those transactions are the ones most likely filling up your transaction log disk.
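The grouping step described above might look like this (a sketch only; fn_dblog is undocumented, so column names can vary between versions, and scanning a large log can be slow):

```sql
-- Transactions generating the most log records (and bytes)
SELECT TOP (10)
       [Transaction ID],
       COUNT(*)                 AS log_record_count,
       SUM([Log Record Length]) AS log_bytes
FROM fn_dblog(NULL, NULL)
GROUP BY [Transaction ID]
ORDER BY log_bytes DESC;
```

Plug one of the top [Transaction ID] values into the lookup query below to see when and where each transaction began.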

SELECT *
FROM fn_dblog(NULL, NULL)
WHERE [Transaction ID] = ''   -- one of the IDs with the top count
  AND [Operation] = 'LOP_BEGIN_XACT';

If you are lucky, you may find the query itself in the [Description] column, and you can also make use of the timestamp columns.

NB: this function is undocumented.

There are too many unknowns to answer specifically, but the "overnight" timing suggests the probable source of the problem is maintenance jobs.

Preventing the problem:

If you have many indexes, especially on huge ledger/transaction tables, rebuilding and reorganising the indexes for the whole database (all tables) causes the transaction logs to grow.

1.) Don't configure index rebuilds and reorganises in the same job. Split them into separate, dedicated jobs.

For example, configure rebuilds once a week and reorganises daily.

2.) Schedule full backup or transaction log backup jobs before the rebuild and reorganise jobs. It doesn't need to be the same night; the preceding day can be enough.

3.) If the recovery model is FULL, configure a transaction log backup job before any intense insert/update/delete or bulk-load jobs, if they exist.

4.) Split job processes into modules: configure the system databases' rebuild/reorganise schedule separately, and group the customer databases on the server into a few sets, each with its own schedule.
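Before tuning schedules, it is worth confirming which databases' logs are actually consuming the space. A quick instance-wide check (this DBCC command is documented and works on all supported versions):

```sql
-- Log size and percentage used for every database on the instance
DBCC SQLPERF (LOGSPACE);
```

The databases with a large log size but a low "Log Space Used (%)" are the ones whose logs grew and then emptied again, which points at a burst of activity rather than a stuck transaction.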

Investigation for log files:

1.) SQL Profiler: you can find which SQL commands are running on your SQL Server by using SQL Profiler. Connect to your server (remotely or on site) with SQL Profiler and trace the processes while they run during the night.

If the timing of the jobs is not suitable for a live trace, you can schedule a trace instead.

2.) Transaction log file content

2.1.) You can use DBCC LOG (HR, 2) to get the content of the HR database's transaction log.

2.2.) sys.fn_dblog

SELECT
    [Current LSN],
    [Transaction ID],
    [Operation],
    [Transaction Name],
    [Context],
    [AllocUnitName],
    [Page ID],
    [Slot ID],
    [Begin Time],
    [End Time],
    [Number of Locks],
    [Lock Information]
FROM sys.fn_dblog(NULL, NULL)
WHERE [Operation] IN
    ('LOP_INSERT_ROWS', 'LOP_MODIFY_ROW',
     'LOP_DELETE_ROWS', 'LOP_BEGIN_XACT', 'LOP_COMMIT_XACT');

You can look up further usage details of this built-in (but undocumented) system function.
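To see which tables and indexes the logged activity is hitting, you can also aggregate the same output per allocation unit. A sketch (again, fn_dblog is undocumented and can be slow on a large log):

```sql
-- Log volume per allocation unit (roughly, per table/index)
SELECT [AllocUnitName],
       COUNT(*)                 AS log_records,
       SUM([Log Record Length]) AS log_bytes
FROM sys.fn_dblog(NULL, NULL)
GROUP BY [AllocUnitName]
ORDER BY log_bytes DESC;
```

A single table or index dominating this output is a strong hint at which job or query drove the growth.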

There are also many third-party tools for reading the transaction log and recovering from accidental operations.

Please execute the following query and see what is causing the transaction log to grow:

DECLARE @current_tracefilename VARCHAR(500); 
DECLARE @0_tracefilename VARCHAR(500); 
DECLARE @indx INT; 

SELECT @current_tracefilename = path 
  FROM sys.traces 
 WHERE is_default = 1; 

SET @current_tracefilename = REVERSE(@current_tracefilename); 
SELECT @indx = PATINDEX('%\%', @current_tracefilename); 
SET @current_tracefilename = REVERSE(@current_tracefilename); 
SET @0_tracefilename = 
    LEFT(@current_tracefilename, 
         LEN(@current_tracefilename) - @indx)
    + '\log.trc'; 

SELECT DatabaseName
     , te.name, Filename
     , CONVERT(DECIMAL(10, 3)
     , Duration / 1000000e0) AS TimeTakenSeconds
     , StartTime
     , EndTime
     , (IntegerData * 8.0 / 1024) AS 'ChangeInSize MB'
     , ApplicationName
     , HostName
     , LoginName 
     , *
      FROM ::fn_trace_gettable(@0_tracefilename, DEFAULT) t 
INNER JOIN sys.trace_events AS te 
        ON t.EventClass = te.trace_event_id 
     WHERE(trace_event_id >= 92 AND trace_event_id <= 95) 
        --and name like '%Log%' and StartTime > GETDATE() -30 
ORDER BY t.StartTime desc;

The article Finding File Growths with Extended Events might help you find the reason for the unexpected file growth you reported. Here is one of the queries it provides, which might give you the answer right away:

DECLARE @df bit
SELECT @df = is_default FROM sys.traces WHERE id = 1
IF @df = 0 OR @df IS NULL
BEGIN
  RAISERROR('No default trace running!', 16, 1)
  RETURN
END
SELECT te.name as EventName
, t.DatabaseName
, t.FileName
, t.StartTime
, t.ApplicationName
, HostName
, LoginName
, Duration
, TextData
FROM fn_trace_gettable(
 (SELECT REVERSE(SUBSTRING(REVERSE(path), CHARINDEX('\', REVERSE(path)),256)) + 'log.trc'
 FROM    sys.traces
 WHERE   is_default = 1
 ), DEFAULT) AS t
INNER JOIN sys.trace_events AS te
ON t.EventClass = te.trace_event_id
WHERE 1=1
and te.name LIKE '%Auto Grow'
--and DatabaseName='tempdb'
--and StartTime>'05/27/2014'
ORDER BY StartTime

--

SELECT TOP 1 'Oldest StartTime' as Label, t.StartTime
FROM fn_trace_gettable(
 (SELECT REVERSE(SUBSTRING(REVERSE(path), CHARINDEX('\', REVERSE(path)),256)) + 'log.trc'
 FROM    sys.traces
 WHERE   is_default = 1
 ), DEFAULT) AS t
INNER JOIN sys.trace_events AS te
ON t.EventClass = te.trace_event_id
ORDER BY StartTime

If this query is not enough to give you the answer, the article also walks through configuring an XE session that should help you catch the culprit the next time the problem occurs.
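For reference, a minimal Extended Events session along those lines might look like this (the session name and target file name are placeholders; the database_file_size_change event requires SQL Server 2012 or later):

```sql
-- Capture every data/log file size change, plus who and what caused it
CREATE EVENT SESSION [TrackFileGrowth] ON SERVER
ADD EVENT sqlserver.database_file_size_change (
    ACTION (sqlserver.client_app_name,
            sqlserver.client_hostname,
            sqlserver.username,
            sqlserver.sql_text))
ADD TARGET package0.event_file (SET filename = N'TrackFileGrowth.xel')
WITH (STARTUP_STATE = ON);

ALTER EVENT SESSION [TrackFileGrowth] ON SERVER STATE = START;
```

Unlike the default trace, this session survives until you drop it and captures the full statement text, so it is better suited to catching a recurring overnight growth.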
