
Hitting the 16 Terabytes Data File Limit
by BF (Senior MSSQL Operations - Enterprise, Cloud, Strategy)

Msg 1105, Level 17, State 2, Line 9
Could not allocate space for object 'dbo.Config'.'PK_Config' in database 'SALES' because the 'PRIMARY' filegroup is full.
Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup,
or setting autogrowth on for existing files in the filegroup.

As per Maximum Capacity Specifications for SQL Server:

File size (data): 16 terabytes


This error occurred on a Clustered Server with a SQL Server 2014 Database with a single Data File in a single PRIMARY Filegroup. As Transactions were occurring in this SALES database, the Applications hit the 'PRIMARY filegroup is full' error and went down. Since we needed to fix this quickly, and we did not have approval to delete any of the older data, we implemented the steps below to resolve the issue:

NOTE: The 16 TB limit is on the Data File and not on the Filegroup!!!

1. Enable Instant File Initialization, as per the Database File Initialization documentation

- In SQL Server, data files can be initialized instantaneously.
- Instant file initialization is available only for data files. Log files are always zeroed when they are created or grown.
- Instant file initialization is only available if the SQL Server service startup account has been granted SE_MANAGE_VOLUME_NAME. Members of the Windows Administrator group have this right and can grant it to other users by adding them to the Perform Volume Maintenance Tasks security policy.
- Some feature usage, such as Transparent Data Encryption (TDE), can prevent Instant File Initialization.
- Starting with SQL Server 2016, this permission can be granted to the service account at install time, during setup.
- Starting with SQL Server 2012 SP4 and SQL Server 2016 SP1, the column instant_file_initialization_enabled in the sys.dm_server_services DMV can be used to identify whether instant file initialization is enabled.
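
Before adding the new files, the setting can be verified with the DMV mentioned above (the column is available from SQL Server 2012 SP4 and SQL Server 2016 SP1 onwards):

```sql
-- Check whether Instant File Initialization is enabled
-- for the Database Engine service account.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```

A value of Y means the new 6 TB data files will be created without zeroing, which is what made step 2 below fast.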

2. Add 3 additional Data Files (6 TB each) in the same PRIMARY Filegroup to take advantage of the SQL Server Storage Engine's Proportional Fill. Each new Data File was created on a separate LUN, and each LUN was on a separate Storage Controller. With this configuration, SQL Server can balance the data across all the Data Files with its Proportional Fill algorithm and execute Parallel IO operations against all these files, giving better performance for both WRITE and READ operations.

- Because of Instant File Initialization, we were able to create these three 6 TB Data Files in under 5 minutes.
- Proportional Fill only works for Data Files in the same Filegroup.
- There is no direct relationship between a Table and a Data File, only between a Table and a Filegroup. A Filegroup can contain 1-n Data Files, which means one table can be spread over several Data Files by SQL Server. (Table Data = the Clustered Index, for example.)
- Using Files and Filegroups (Parallel IO)
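
The file additions above can be sketched as follows; the file names, drive letters, and sizes are illustrative, not the actual production values:

```sql
-- Add three 6 TB data files to the PRIMARY filegroup, one per LUN.
-- Logical names and paths below are examples only.
ALTER DATABASE [SALES]
ADD FILE
    (NAME = N'SALES_Data2', FILENAME = N'E:\MSSQL\Data\SALES_Data2.ndf', SIZE = 6144GB),
    (NAME = N'SALES_Data3', FILENAME = N'F:\MSSQL\Data\SALES_Data3.ndf', SIZE = 6144GB),
    (NAME = N'SALES_Data4', FILENAME = N'G:\MSSQL\Data\SALES_Data4.ndf', SIZE = 6144GB)
TO FILEGROUP [PRIMARY];
```

With Instant File Initialization enabled, this statement completes in minutes rather than the hours it would take to zero out 18 TB.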

3. Rebuild Table Indexes. With the SQL Server Proportional Fill algorithm, as soon as we start rebuilding a table's Primary Key (Clustered Index), SQL Server spreads that table/index data across all the Data Files in a balanced manner. Data is moved off the full 16 TB Data File and balanced across all 4 Data Files. This is a great way to avoid Hot Spots and allow for Parallel IO as mentioned above.

- If you rebuild the PK Clustered Index on a table, say a 7 TB table, that rebuild is a fully logged operation and your Transaction Log will grow.
- It is best to issue a CHECKPOINT afterwards to flush all dirty pages to disk.
- Reorganize and Rebuild Indexes
- AdaptiveIndexDefrag
- Transaction Log Disk Space for Index Operations:
...Large-scale index operations can generate large data loads that can cause the transaction log to fill quickly. To make sure that the index operation can be rolled back, the transaction log cannot be truncated until the index operation has completed.
...Therefore, the transaction log must have sufficient room to store both the index operation transactions and any concurrent user transactions for the duration of the index operation. This is true for both offline and online index operations.
...Make sure the transaction log has been backed up and truncated before running large-scale index operations online, and that the log has sufficient space to store the projected index and user transactions.
...Consider setting the SORT_IN_TEMPDB option to ON for the index operation. This separates the index transactions from the concurrent user transactions. The index transactions will be stored in the tempdb transaction log, and the concurrent user transactions will be stored in the transaction log of the user database. If the tempdb log is not on the same disk as the user database log, the two logs are not competing for the same disk space.
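
A rebuild along these lines (the table and index names are hypothetical) redistributes the data and keeps the index sort out of the user database's transaction log:

```sql
-- Hypothetical object names; rebuild the clustered index (the PK) so the
-- pages are re-allocated across all four data files by Proportional Fill.
ALTER INDEX [PK_Orders] ON [dbo].[Orders]
    REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = ON);
GO
-- Flush all dirty pages to disk once the rebuild completes.
CHECKPOINT;
```

ONLINE = ON keeps the table available during the rebuild (Enterprise Edition), and SORT_IN_TEMPDB = ON moves the sort work into tempdb as described above.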


4. Query the 4 Data Files' used and free space to monitor how well SQL Server's Proportional Fill is working.

Now what we have is a much more evenly balanced allocation across the 4 data files. This allows SQL Server to distribute the write I/O evenly across all the disks. By rebuilding the indexes after adding the new data files, you help prevent a write hotspot on one disk and help SQL Server improve its I/O performance. Keep in mind that Proportional Fill only applies to files in the same filegroup. If we had added the new files to a new filegroup, we would have had to manually move tables to the new filegroup, since tables are allocated per Filegroup.

Problem Solved! The Database is now operational and we have plenty of space to grow. The plan for this Database going forward is to delete older data, archive some of it, and look into Table Partitioning for faster Indexing and much better Maintenance, since we can back up Filegroups individually. We may put older data on a specific Filegroup that we do not back up daily, as that data is not changing, and save on backup time/space/resources.

Additionally, you can enable Trace Flags 1165 and 3605 to see the Skip Target value on each Data File. A brief explanation is below.

SQL Server's Storage Engine uses two Algorithms when allocating Extents for Files in a Filegroup: (1) Round Robin (2) Proportional Fill.

(1) Round Robin means that the Storage Engine allocates from each Data File in a Filegroup one after the other, in a loop.

(2) Proportional Fill means the Storage Engine allocates based on the amount of free space available in each of the Data Files in the Filegroup, and it allocates more extents from the file(s) that have more free space. So it is important to keep the free space similar across all the Data Files to get data stored in a balanced way. Technically, Proportional Fill works by assigning a number to each file in the filegroup, called a 'skip target'. The higher the value is above 1, the more times that Data File will be skipped during the round robin. During the round robin process, the skip target for a Data File is examined, and if it is equal to 1, an allocation takes place. If the skip target is higher than 1, the Storage Engine skips that Data File and decrements its skip target by 1; no allocation takes place and the engine moves on to the next file in the Filegroup.

- Skip Target for each file is calculated as (# of free extents in the file with the most free space) / (# of free extents in this file). For example, if the emptiest file has 400 free extents and this file has 100, this file's skip target is 4, so it receives roughly one allocation for every four rounds.
- Skip Targets are recalculated whenever a file is added to or removed from a filegroup, or after 8192 extent allocations take place in the filegroup.
- The Data Files in the Filegroup with the least amount of free space will have the highest skip targets.
- At least one file in the Filegroup will have a skip target value = 1 - thus each time the round robin occurs at least one extent allocation takes place.

To view the Skip Target values for each Data File, enable the Trace Flags below and check the Error Log for the Skip Target message (Trace Flag 3605 redirects the trace output to the Error Log).

DBCC TRACEON (1165, 3605);

To View Data File used and free space:

SELECT [FileGroup_Name] = ISNULL(fg.name, 'LOG')
      ,[FILE_Name] = A.name
      ,[File_Location] = A.physical_name
      ,[FileSize(MB)] = CONVERT(DECIMAL(10,2), A.size/128.0)
      ,[UsedSpace(MB)] = CONVERT(DECIMAL(10,2), FILEPROPERTY(A.name, 'SpaceUsed')/128.0)
      ,[FreeSpace(MB)] = CONVERT(DECIMAL(10,2), (A.size - FILEPROPERTY(A.name, 'SpaceUsed'))/128.0)
      ,[AutoGrow] = 'By ' + CASE is_percent_growth WHEN 0 THEN CAST(growth/128 AS VARCHAR(10)) + ' MB -'
                                                   WHEN 1 THEN CAST(growth AS VARCHAR(10)) + '% -' ELSE '' END
                  + CASE max_size WHEN 0 THEN 'DISABLED' WHEN -1 THEN ' Unrestricted'
                                  ELSE ' Restricted to ' + CAST(max_size/(128*1024) AS VARCHAR(10)) + ' GB' END
                  + CASE is_percent_growth WHEN 1 THEN ' [autogrowth by percent, BAD setting!]' ELSE '' END
FROM sys.database_files A LEFT JOIN sys.filegroups fg ON A.data_space_id = fg.data_space_id
ORDER BY A.type DESC, A.name;

To View Filegroups, Files, TotalExtents, UsedExtents:

DBCC showfilestats

To View Index Details:

-- Parameters: database_id, object_id, index_id, partition_number, mode
SELECT * FROM sys.dm_db_index_physical_stats(9, 277576027, 1, NULL, 'DETAILED');

- Returns size and fragmentation information for the data and indexes of the specified table or view in SQL Server.
- For an index, one row is returned for each level of the B-tree in each partition.
- For a heap, one row is returned for the IN_ROW_DATA allocation unit of each partition.
- For large object (LOB) data, one row is returned for the LOB_DATA allocation unit of each partition.
- If row-overflow data exists in the table, one row is returned for the ROW_OVERFLOW_DATA allocation unit in each partition. Does not return information about xVelocity memory optimized columnstore indexes.

To View Index size, fragmentation, and Statistics last-updated date:

SELECT [DatabaseName]
      ,[ObjectName]
      ,[IndexDescription]
      ,CONVERT(DECIMAL(16, 1), (SUM([avg_record_size_in_bytes] * [record_count]) / (1024.0 * 1024))) AS [IndexSize(MB)]
      ,[lastupdated] AS [StatisticLastUpdated]
      ,[AvgFragmentationInPercent]
FROM (
    SELECT DISTINCT DB_NAME(database_id) AS [DatabaseName]
        ,OBJECT_NAME(object_id) AS [ObjectName]
        ,index_id AS [IndexId]
        ,index_type_desc AS [IndexDescription]
        ,avg_record_size_in_bytes
        ,record_count
        ,STATS_DATE(object_id, index_id) AS [lastupdated]
        ,CONVERT(VARCHAR(512), ROUND(avg_fragmentation_in_percent, 3)) AS [AvgFragmentationInPercent]
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED')
    --WHERE object_id IS NOT NULL AND avg_fragmentation_in_percent <> 0
) T
GROUP BY [DatabaseName], [ObjectName], [IndexDescription], [lastupdated], [AvgFragmentationInPercent]

SELECT tn.[name] AS [Table name], ix.[name] AS [Index name],
SUM(sz.[used_page_count]) * 8 AS [Index size (KB)]
FROM sys.dm_db_partition_stats AS sz
INNER JOIN sys.indexes AS ix ON sz.[object_id] = ix.[object_id]
AND sz.[index_id] = ix.[index_id]
INNER JOIN sys.tables tn ON tn.OBJECT_ID = ix.object_id
GROUP BY tn.[name], ix.[name]
ORDER BY tn.[name]

SELECT CONVERT(CHAR(8), OBJECT_NAME(i.object_id)) AS table_name, i.name AS index_name,
       i.index_id, i.type_desc AS index_type,
       p.partition_id, p.partition_number AS pnum, p.rows,
       allocation_unit_id AS au_id, a.type_desc AS page_type_desc, total_pages AS pages
FROM sys.indexes i JOIN sys.partitions p
    ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.allocation_units a
    ON p.partition_id = a.container_id
ORDER BY pages DESC;

-- List all Objects and Indexes per Filegroup / Partition and Allocation Type
SELECT DS.name AS DataSpaceName
      ,AU.type_desc AS AllocationDesc
      ,AU.total_pages / 128 AS TotalSizeMB
      ,AU.used_pages / 128 AS UsedSizeMB
      ,AU.data_pages / 128 AS DataSizeMB
      ,SCH.name AS SchemaName
      ,OBJ.type_desc AS ObjectType
      ,OBJ.name AS ObjectName
      ,IDX.type_desc AS IndexType
      ,IDX.name AS IndexName
FROM sys.data_spaces AS DS
INNER JOIN sys.allocation_units AS AU
    ON DS.data_space_id = AU.data_space_id
INNER JOIN sys.partitions AS PA
    ON (AU.type IN (1, 3)
        AND AU.container_id = PA.hobt_id)
    OR (AU.type = 2
        AND AU.container_id = PA.partition_id)
INNER JOIN sys.objects AS OBJ
    ON PA.object_id = OBJ.object_id
INNER JOIN sys.schemas AS SCH
    ON OBJ.schema_id = SCH.schema_id
LEFT JOIN sys.indexes AS IDX
    ON PA.object_id = IDX.object_id
    AND PA.index_id = IDX.index_id