
Fix SQL Server Error 1919 During Index Create

While working on a Microsoft SQL Server database tuning project for a client, I encountered a puzzling error message. Based on performance metrics and an analysis of query execution plans, I had recommended a couple of new indexes. The index creation scripts looked similar to the following (actual object names redacted):

    CREATE NONCLUSTERED INDEX [IX_tblA_ColX]
        ON [dbo].[tblA] ([ColX] ASC) ON [PRIMARY]

The first index went in just fine, while the second failed with an error message similar to this one:

    Msg 1919, Level 16, State 1, Line 1
    Column 'ColX' in table 'dbo.tblA' is of a type that is invalid for use as a key column in an index.

This error commonly occurs when the key column for the index is of a type such as NVARCHAR(MAX), which is not supported, or when the index key exceeds the maximum key size (900 bytes; 1,700 bytes for a nonclustered index on SQL Server 2016 and later). Neither of these conditions was true: the column was defined as something on the order of NVARCHAR(150), well within the constraints for a nonclustered index key.
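One quick way to rule out both causes is to check the column's stored definition in the catalog views. A minimal sketch, using the redacted table and column names from the examples above:

    SELECT c.name, t.name AS type_name, c.max_length
    FROM sys.columns AS c
    JOIN sys.types AS t ON t.user_type_id = c.user_type_id
    WHERE c.object_id = OBJECT_ID(N'dbo.tblA')
      AND c.name = N'ColX';
    -- max_length = -1 indicates a (MAX) type; NVARCHAR(150) reports 300 (bytes, not characters)

In this case the query confirmed the column was no longer a (MAX) type, which only deepened the mystery.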

One fact I was already aware of: this column had actually started out as NVARCHAR(MAX). In fact, there were many columns like this across the entire database, which I suspect was a result of using Microsoft Entity Framework to develop the application. On my advice, the client had converted all of them because of the performance issues they were experiencing.

Converting the column type was straightforward using this SQL statement:

    ALTER TABLE [dbo].[tblA]
        ALTER COLUMN [ColX] [nvarchar](150) NOT NULL

So how was it that we hit this error message and could not create the index? This one nearly had me stumped. I was fairly certain it had something to do with the data type conversion somehow appearing incomplete from the viewpoint of the index creation statement.

I suggested to my client that they run two T-SQL maintenance commands similar to the following:

    UPDATE STATISTICS [dbo].[tblA]

Followed by:

    DBCC UPDATEUSAGE (mydb, 'dbo.tblA')

Once these commands were finished, creating the index succeeded.

Evidently, either the table's column statistics or its space-allocation information had not been updated to reflect the column data type change. By default SQL Server maintains this information automatically, but on large, busy databases it should be handled proactively, typically with a weekly or even nightly SQL Server Agent job. I subsequently learned that my client had no such maintenance job in place up to this point. They were in startup mode on a project that had gone live fairly recently, and, as is often the case, they had no in-house DBA and relied on application developers to run the database.
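A minimal sketch of what such a nightly Agent job step might run against the whole database (as opposed to the single-table commands above):

    -- Refresh out-of-date statistics for every table in the current database
    EXEC sp_updatestats;

    -- Correct inaccurate page and row counts in the catalog views
    DBCC UPDATEUSAGE (0);  -- 0 = current database

Production maintenance jobs are usually more selective than this, but even a blanket step like the above would have prevented the situation described here.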

Typically, out-of-date statistics result in less-than-optimal query performance, while inaccurate space allocation usually shows up when reporting the size of your tables. Getting an error when creating an index because of these conditions is extremely rare, and it did not turn up in my online search. Fortunately, I was able to draw on my many years of experience as a SQL Server DBA to make an educated guess and root out the problem.

The best defense against this type of issue is to always implement a good scheduled maintenance regimen in your database environment. Automatic backups go without saying regardless of the kind of database you use (SQL or NoSQL). In the case of MS SQL Server this regularly scheduled maintenance also includes index defragmentation and statistics updates. An excellent script for doing this can be found here.
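For reference, a bare-bones version of such an index maintenance step looks like the following; the fragmentation check and the choice between REORGANIZE and REBUILD are illustrative, and a dedicated maintenance script handles the thresholds far more thoroughly:

    -- Check fragmentation for the table's indexes (table name follows the redacted examples above)
    SELECT ips.index_id, ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.tblA'), NULL, NULL, 'LIMITED') AS ips;

    -- Reorganize when fragmentation is moderate; heavily fragmented indexes are usually rebuilt instead
    ALTER INDEX ALL ON [dbo].[tblA] REORGANIZE;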

Rick Ciesla owns and operates VenaData, a provider of database administration solutions, services and consulting. Contact Rick for more information.
