One or more of our log files grew overnight, causing our SQL logs disk to almost fill up. I've been asked to look at what could have caused this.
Can anyone suggest where to start, please? I have rough times for when the logs increased.
Check log_reuse_wait_desc in sys.databases:
SELECT name, log_reuse_wait_desc FROM sys.databases;
The solution will depend on the reason reported.
https://www.brentozar.com/archive/2016/03/my-favorite-system-column-log_reuse_wait_desc/
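For example, if the reported reason is LOG_BACKUP under the FULL recovery model, taking a transaction log backup lets the inactive portion of the log be reused. A minimal sketch, where the database name and backup path are hypothetical placeholders:

```sql
-- Hypothetical example: if log_reuse_wait_desc = 'LOG_BACKUP',
-- a log backup marks the inactive portion of the log as reusable.
-- 'HR' and the disk path are placeholders for your own database and backup location.
BACKUP LOG HR TO DISK = N'X:\SQLBackups\HR_log.trn';
```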
You can use fn_dblog() to hunt it down: group by [Transaction ID] and take the first few with the highest counts. The transactions with the highest counts are the ones most likely filling up your transaction log disk.
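The grouping step described above could be sketched like this (an assumption of the general approach; fn_dblog is undocumented, so column behaviour may vary between versions):

```sql
-- Find the transactions that produced the most log records.
SELECT TOP (5)
       [Transaction ID],
       COUNT(*) AS LogRecordCount
FROM fn_dblog(NULL, NULL)
GROUP BY [Transaction ID]
ORDER BY COUNT(*) DESC;
```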
select *
from fn_dblog(NULL, NULL)
WHERE [Transaction ID] = '' -- one of the IDs with the top count
and [Operation] = 'LOP_BEGIN_XACT'
If you are lucky, you may find the query itself in the [Description] column; you can also make use of the timestamp columns.
NB: this function is undocumented.
There are too many unknowns to answer specifically, but "overnight" suggests the probable source of the problem is maintenance jobs.
Preventing the problem:
If you have many indexes, especially on huge ledger/transaction tables, rebuilding or reorganising the indexes for the whole database (all tables) causes the transaction logs to grow.
1.) Don't configure index rebuilds and reorganisations in the same job. Split them into dedicated jobs; for example, rebuild once a week and reorganise daily.
2.) Schedule full backup or transaction log backup jobs before the rebuild and reorganise jobs. They don't need to run the same night; the preceding day can be enough.
3.) If the recovery model is FULL, configure a transaction log backup job before any intensive insert, update, delete, or bulk-load jobs, if such jobs exist.
4.) Split job processes into modules: schedule the system databases' rebuild/reorganise separately, and group the user databases on the server into a few sets, each with its own schedule.
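As a sketch of point 1, the two dedicated jobs might run statements like the following (dbo.Ledger is a hypothetical table name; real maintenance jobs usually iterate over all tables, often filtering by fragmentation level):

```sql
-- Weekly job: rebuild (fully logged under the FULL recovery model;
-- generates the most transaction log volume).
ALTER INDEX ALL ON dbo.Ledger REBUILD;

-- Daily job: reorganise (lighter-weight, always online, logs much less).
ALTER INDEX ALL ON dbo.Ledger REORGANIZE;
```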
Investigating the log files:
1.) SQL Profiler:
You can find which SQL commands are running on your SQL Server by using SQL Profiler. Connect to your server (remotely or on site) and trace the processes while they are running during the night.
If the job times are not suitable for live tracing, you can schedule a trace.
2.) Transaction log file content:
2.1.) You can use DBCC LOG (HR, 2)
to get the content of the HR database's transaction log.
2.2.) sys.fn_dblog
SELECT
[Current LSN],
[Transaction ID],
[Operation],
[Transaction Name],
[CONTEXT],
[AllocUnitName],
[Page ID],
[Slot ID],
[Begin Time],
[End Time],
[Number of Locks],
[Lock Information]
FROM sys.fn_dblog(NULL,NULL)
WHERE Operation IN
('LOP_INSERT_ROWS','LOP_MODIFY_ROW',
'LOP_DELETE_ROWS','LOP_BEGIN_XACT','LOP_COMMIT_XACT')
You can look up further usage details of this built-in system function (note that it is undocumented).
There are also many third-party tools for reading the transaction log and for recovery from accidental operations.
Please execute the following query and see what is causing the transaction log to grow:
DECLARE @current_tracefilename VARCHAR(500);
DECLARE @0_tracefilename VARCHAR(500);
DECLARE @indx INT;
SELECT @current_tracefilename = path
FROM sys.traces
WHERE is_default = 1;
SET @current_tracefilename = REVERSE(@current_tracefilename);
SELECT @indx = PATINDEX('%\%', @current_tracefilename);
SET @current_tracefilename = REVERSE(@current_tracefilename);
SET @0_tracefilename =
LEFT(@current_tracefilename,
LEN(@current_tracefilename) - @indx)
+ '\log.trc';
SELECT DatabaseName
, te.name, Filename
, CONVERT(DECIMAL(10, 3)
, Duration / 1000000e0) AS TimeTakenSeconds
, StartTime
, EndTime
, (IntegerData * 8.0 / 1024) AS 'ChangeInSize MB'
, ApplicationName
, HostName
, LoginName
, *
FROM ::fn_trace_gettable(@0_tracefilename, DEFAULT) t
INNER JOIN sys.trace_events AS te
ON t.EventClass = te.trace_event_id
WHERE(trace_event_id >= 92 AND trace_event_id <= 95)
--and name like '%Log%' and StartTime > GETDATE() -30
ORDER BY t.StartTime desc;
This Finding File Growths with Extended Events article might help you find the reason for the unexpected file growth you reported. Here is one of the queries it provides, which might give you the answer right away:
DECLARE @df bit
SELECT @df = is_default FROM sys.traces WHERE id = 1
IF @df = 0 OR @df IS NULL
BEGIN
RAISERROR('No default trace running!', 16, 1)
RETURN
END
SELECT te.name as EventName
, t.DatabaseName
, t.FileName
, t.StartTime
, t.ApplicationName
, HostName
, LoginName
, Duration
, TextData
FROM fn_trace_gettable(
(SELECT REVERSE(SUBSTRING(REVERSE(path), CHARINDEX('\', REVERSE(path)),256)) + 'log.trc'
FROM sys.traces
WHERE is_default = 1
), DEFAULT) AS t
INNER JOIN sys.trace_events AS te
ON t.EventClass = te.trace_event_id
WHERE 1=1
and te.name LIKE '%Auto Grow'
--and DatabaseName='tempdb'
--and StartTime>'05/27/2014'
ORDER BY StartTime
--
SELECT TOP 1 'Oldest StartTime' as Label, t.StartTime
FROM fn_trace_gettable(
(SELECT REVERSE(SUBSTRING(REVERSE(path), CHARINDEX('\', REVERSE(path)),256)) + 'log.trc'
FROM sys.traces
WHERE is_default = 1
), DEFAULT) AS t
INNER JOIN sys.trace_events AS te
ON t.EventClass = te.trace_event_id
ORDER BY StartTime
If this query is not enough to give you the answer, the article also includes the configuration of an XE session that should help you catch the culprit the next time the problem occurs.
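If you want to set up such a session yourself, a minimal sketch could look like the one below. The session name and target file name are assumptions; the sqlserver.database_file_size_change event is available from SQL Server 2012 onward.

```sql
-- Capture every data/log file size change, with the statement that caused it.
CREATE EVENT SESSION [TrackFileGrowth] ON SERVER
ADD EVENT sqlserver.database_file_size_change
    (ACTION (sqlserver.sql_text,
             sqlserver.client_hostname,
             sqlserver.database_name))
ADD TARGET package0.event_file (SET filename = N'TrackFileGrowth.xel');

ALTER EVENT SESSION [TrackFileGrowth] ON SERVER STATE = START;
```

Once the session has run overnight, the .xel file can be opened in SSMS or queried with sys.fn_xe_file_target_read_file to see which statements triggered the growth.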