WHAT’S A VLF AND WHY DO I CARE?

Some Background

SQL Server ensures that all changes to the database, whether to data or to structures such as indexes, are preserved by first writing them to the transaction log file, then writing them to the physical database.

SQL Server uses only one transaction log per database at any point in time. It writes records of the changes at the “front” of the log file, and as changes are committed (written to the physical database) it releases that space in the transaction log for reuse. This circular reuse is called a “round robin” architecture.

SQL Server manages space in the transaction log via VLFs – virtual log files. Inside the one tlog file are a number of VLFs – smaller chunks of space that are easier for SQL Server to manage. Regardless of the size of the database, there’s no fixed size for a VLF, and no fixed number of VLFs per physical tlog file, although it’s not uncommon for a tlog file to start life with 4 or 8 VLFs.

[Figure: SQL Server tlog with VLFs]

The space in a VLF can be reused once every transaction whose records it stores has been committed (a COMMIT record issued) and the corresponding changes have been written to the physical database.
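To see the VLFs inside a log file, run dbcc loginfo in the database in question – one row comes back per VLF. On SQL Server 2016 SP2 and later, sys.dm_db_log_info is a documented alternative; a minimal sketch:

-- one row per VLF in the current database; Status = 2 means the VLF is still active
dbcc loginfo

-- documented alternative, available from SQL Server 2016 SP2 onward
SELECT file_id, vlf_size_mb, vlf_active
FROM sys.dm_db_log_info(DB_ID());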

What causes the number of VLFs to grow?

Lack of tlog backups and open transactions, mostly.

If the recovery model is set to FULL or BULK-LOGGED and transaction log backups are never taken, or are taken too infrequently, then SQL Server never gets a chance to mark the VLFs holding committed changes as reusable, so it must keep adding new VLFs to the stack.

If there is an open transaction, i.e. no COMMIT statement has been issued to “close out” the transaction, then SQL Server has no option but to leave the VLFs that contain portions of the open transaction (“active” VLFs) untouched. And since it must continue writing records of the other changes taking place in the database, it must add new VLFs to the stack.
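A contrived sketch of the effect (dbo.SomeTable and SomeColumn are hypothetical; any logged change will do): while the transaction stays open, its log records pin their VLFs, and sys.databases reports why the log can’t clear.

BEGIN TRANSACTION;
UPDATE dbo.SomeTable SET SomeColumn = SomeColumn;   -- hypothetical table and column

-- run in the same database: shows ACTIVE_TRANSACTION until a COMMIT or ROLLBACK is issued
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();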

The only way SQL Server can add new VLFs to the stack is by growing the log file, which requires auto-grow to be enabled (or a manual file growth). Otherwise, SQL Server will report an “out of space in the log” error (error 9002) and stop processing transactions.
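A quick way to check how the log file is currently set to grow – sys.database_files stores the growth setting either as a percentage or as a count of 8KB pages:

-- run in the database in question
SELECT name, growth,
       CASE WHEN is_percent_growth = 1 THEN 'percent' ELSE '8KB pages' END AS growth_unit
FROM sys.database_files
WHERE type_desc = 'LOG';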

Trivia Fact: when the SQL Server tlog file (auto-)grows by up to 64MB, it adds 4 VLFs to the stack. When it grows by more than 64MB and up to 1GB, it adds 8 VLFs to the stack. When it grows by more than 1GB, it adds 16 VLFs to the stack. The default SQL Server auto-grow setting is 10%, which means that, as the tlog continues to grow, each set of VLFs created will be slightly larger than the previous set.
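A throwaway demo of the rule for a test server (the VlfDemo name is made up; the log file name follows SQL Server’s default naming):

CREATE DATABASE VlfDemo;       -- starts with a small tlog and a handful of VLFs
GO
USE VlfDemo;
dbcc loginfo                   -- note the starting VLF count
GO
-- one growth in the 64MB-1GB range should add 8 VLFs
ALTER DATABASE VlfDemo MODIFY FILE (NAME = N'VlfDemo_log', SIZE = 200MB);
dbcc loginfo                   -- the count should rise by 8
GO
USE master;
DROP DATABASE VlfDemo;         -- clean up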

Trivia Question: if a SQL Server database is created with the default values (tlog file = 10MB), by the time the tlog file has auto-grown to 500MB (1/2 GB), how many VLFs will there be?

Trivia Answer: 200 VLFs

Why is this such a big deal?

Today’s databases are usually large, measured in GBs rather than MBs. It’s not unusual to have a 50GB tlog file, and if it’s auto-growing at 10MB per growth, the VLF count will be on the order of 20,000 (51,200 MB ÷ 10 MB per growth = 5,120 growths, at 4 VLFs each ≈ 20,480 VLFs).
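To find the worst offenders across an instance, the sp_MSforeachdb + dbcc loginfo snippet in the CODE section below works on any version; from SQL Server 2016 SP2 onward, this sketch gives the same answer in one query:

SELECT d.name, COUNT(*) AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY vlf_count DESC;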

Too many VLFs impact database operations:

  • Inserts, updates and deletes take longer;
  • Database startup takes longer;
  • Database restores take longer;
  • Database backups are larger because all “active” VLFs (from open transactions) must be backed up;
  • Tlog file fragmentation: SQL Server must search through the long chain of VLFs whenever it writes to the log file or clears a VLF for reuse.

Is there a problem if VLFs get too big?

Yes. Overly large VLFs (4GB, for instance) can also lead to performance problems. Clearing a very large VLF takes a long time and takes resources away from transaction processing. Very large VLFs also take longer to be released from “active” status, since they store many more changes than a regularly-sized VLF, which may force SQL Server to create even more very large VLFs.
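On SQL Server 2016 SP2 and later, a sketch for spotting oversized VLFs in the current database:

SELECT file_id, vlf_size_mb, vlf_active
FROM sys.dm_db_log_info(DB_ID())
ORDER BY vlf_size_mb DESC;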

What’s the Solution?

There is no hard and fast rule as to how many VLFs any database should have.

  • Too many VLFs result in tlog fragmentation and the impacts listed above;
  • Too few and very large VLFs cause slow performance.

The goal: have the tlog be as large as necessary, with a reasonable fixed autogrow increment (example: 128MB, or a similar size that splits cleanly into the set of 4, 8 or 16 VLFs each growth creates).
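For example, to replace the 10% default with a fixed increment (the database and file names here are placeholders):

ALTER DATABASE [YourDb]
MODIFY FILE (NAME = N'YourDb_log', FILEGROWTH = 128MB);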

How to “Fix” a Tlog File With Lots of VLFs…

  1. Figure out how large the log size should be for the database in question:
    1. How much data is being added to the database in one year’s time?
    2. How often are index rebuilds being done?
    3. Are there excessively long transactions running against the database? Can they be shortened?
    4. Is data being bulk-loaded into the database on a regular basis? How much and when?
    5. What’s the disk usage history? (applies only to mature databases…)
  2. Shrink the tlog to the smallest size possible.
    1. Use the dbcc shrinkfile command to shrink the log file, and do it in increments, within a maintenance window.
  3. Re-grow the tlog file to an appropriate size, again in increments, within the same maintenance window.
    1. Growing in jumps that are too large can create oversized VLFs.
    2. Use the alter database modify file command to incrementally grow the existing log file
  4. Check the auto-grow settings and adjust to the new database tlog sizing.

 

CODE:

-- get the file layout, the amount of space used, and the number of VLFs for each database ------

EXECUTE master.dbo.sp_MSforeachdb 'USE [?];
exec sp_helpfile; exec sp_spaceused; dbcc loginfo;';

-- get a full report on file conditions for a database ----------

SELECT DB_NAME() AS [DatabaseName], name, physical_name,
       CAST(CAST(ROUND(CAST(size AS decimal) * 8.0/1024.0, 2) AS decimal(18,2)) AS nvarchar) AS Size,
       CAST(CAST(ROUND(CAST(size AS decimal) * 8.0/1024.0, 2) AS decimal(18,2))
            - CAST(FILEPROPERTY(name, 'SpaceUsed') * 8.0/1024.0 AS decimal(18,2)) AS nvarchar) AS FreeSpace
FROM sys.database_files;

-- check the number of VLFs and force a checkpoint ----------

dbcc loginfo

checkpoint

/*
NOTE: if the database is small enough to be backed up twice during the maintenance window,
then switching the recovery model from FULL to SIMPLE and back to FULL will work. Otherwise,
don't switch; leave the database in FULL or BULK-LOGGED and shrink the log file incrementally,
running the dbcc shrinkfile command multiple times.
*/

-- take full db backup ----------------------------

BACKUP DATABASE model
TO DISK = N'Z:\SSRSbkups\model\model_full_bkup_20151116_before.BAK'
WITH NOFORMAT, INIT, NAME = N'model Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 25

-- if the recovery model is not SIMPLE then back up the tlog ----

BACKUP LOG model
TO DISK = N'Z:\SSRSbkups\model\model_log_bkup.TRN'
WITH NOFORMAT, INIT, NAME = N'model Tlog Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 25

-- determine the recovery model ---------------

select databasepropertyex('model', 'recovery')

-- set db to simple recovery model ----------------------------

ALTER DATABASE model
SET RECOVERY SIMPLE

-- shrink the log file ----------------------------
-- VLFs are created in sets of 4, 8 or 16 ---------

DBCC SHRINKFILE (N'modellog', 4)
GO
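If the log won’t shrink all the way down in one pass, run the shrink in steps (a sketch; the intermediate target size is an assumption, and in FULL recovery a log backup between passes lets more of the log clear):

-- incremental shrink: step the target size down, repeating as needed
DBCC SHRINKFILE (N'modellog', 1000)
GO
-- ...take a log backup here if the recovery model is FULL...
DBCC SHRINKFILE (N'modellog', 4)
GO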

 

 

-- check the number of VLFs

dbcc loginfo                -- 8

-- set db back to its normal recovery model ----------------------------

ALTER DATABASE model
SET RECOVERY FULL

-- check the recovery model ---------------

select databasepropertyex('model', 'recovery')

Increase the size of the transaction log file:

USE [master]

GO

ALTER DATABASE [AdventureWorks2014] MODIFY FILE (NAME = N'AdventureWorks2014_Log', SIZE = 20480KB)

GO
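To grow in increments rather than one jump (a sketch; the step sizes are assumptions, chosen so each growth stays over 1GB and creates a reasonably-sized set of 16 VLFs of roughly 256MB each):

ALTER DATABASE [AdventureWorks2014] MODIFY FILE (NAME = N'AdventureWorks2014_Log', SIZE = 4096MB)
GO
ALTER DATABASE [AdventureWorks2014] MODIFY FILE (NAME = N'AdventureWorks2014_Log', SIZE = 8192MB)
GO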

 

-- take full db backup ----------------------------
-- since the recovery model was switched from FULL to SIMPLE and back to FULL,
-- it's highly probable that any transaction log backup taken from this point forward
-- would not work with the full backup taken prior to the start of this process...
--------------------------------------------------------------------------------------

BACKUP DATABASE model
TO DISK = N'Z:\SSRSbkups\model\model_full_bkup_20151116_after.BAK'
WITH NOFORMAT, INIT, NAME = N'model Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 25

-- if the recovery model is not SIMPLE then back up the tlog ----

BACKUP LOG model
TO DISK = N'Z:\SSRSbkups\model\model_log_bkup.TRN'
WITH NOFORMAT, INIT, NAME = N'model Tlog Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 25

/*  an extra bit of code  */
---------------------------------------------------------
-- if you suspect an open transaction, identify it
---------------------------------------------------------
USE master
GO
SELECT * FROM sysprocesses WHERE open_tran = 1 ORDER BY last_batch DESC

USE Model
GO
DBCC OPENTRAN('Model')
GO

To Learn More About SQL Server Transaction Log Files:

Brent Ozar / Jes Borland video (20 min): http://brentozar.com/go/vlf

Thomas LaRock: http://thomaslarock.com/2012/08/the-trouble-with-transaction-logs/

Sebastian Meine: http://sqlity.net/en/1805/eight-reasons-transaction-log-files-keep-growing/

Michelle Gutzait: http://searchsqlserver.techtarget.com/tip/Testing-transaction-log-autogrowth-behavior-in-SQL-Server

Greg Larsen: https://www.simple-talk.com/sql/database-administration/sql-server-transaction-log-fragmentation-a-primer/

Paul S. Randal: https://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx

Ivan Stankovic: http://www.sqlshack.com/top-10-articles-sql-server-transaction-log/