EF Core 3: Getting model metadata from dynamically loaded assembly with IL Emit

Yet another Stack Overflow question sparked a heated discussion and got us thinking: could we do better?

In a nutshell, the question was about finding a way to query EF Core model metadata without directly referencing the assembly that defines it. Think of an MSBuild task that needs to check whether your model follows your company standards, or a test of some sort.

First stab at it

We were able to help the OP by quickly whipping up the following loader code:

var assembly = Assembly.LoadFrom(@"C:\OnlineShoppingStore\bin\Debug\netcoreapp2.2\OnlineShoppingStore.dll");
var contextType = assembly.GetTypes().First(d => d.Name == "OnlineStoreDbContext");
var ctx = Activator.CreateInstance(contextType) as DbContext; // instantiate your context. this will effectively build your model, so you must have all required EF references in your project
var p = ctx.Model.FindEntityType(assembly.GetTypes().First(d => d.Name == "Product")); // get the type from loaded assembly
//var p = ctx.Model.FindEntityType("OnlineStoreDbContext.Product"); // querying model by type name also works, but you'd need to correctly qualify your type names
var pk = p.FindPrimaryKey().Properties.First().Name; // your PK property name as built by EF model

The answer ended up being accepted, but the OP hit a bit of an issue when instantiating the context:

System.InvalidOperationException: 'No database provider has been configured for this DbContext. 
A provider can be configured by overriding the DbContext.OnConfiguring method or by using AddDbContext on the application service provider. 
If AddDbContext is used, then also ensure that your DbContext type accepts a DbContextOptions object in its constructor and passes it to the base constructor for DbContext.

This is kind of expected: when EF creates the context it will invoke the OnConfiguring override and set up the DB provider, connection strings and so on. All of that is necessary for the real application to run, but for the OP it meant having to drag every DB provider into the test harness. Not ideal.
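To illustrate why the providers get dragged in, the context under test presumably configures itself along these lines (an illustrative sketch only – the class name comes from the question, the connection string is a placeholder):

using Microsoft.EntityFrameworkCore;

public class OnlineStoreDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // this only compiles and runs if Microsoft.EntityFrameworkCore.SqlServer is referenced -
        // exactly the dependency we'd rather not pull into a test harness or MSBuild task
        optionsBuilder.UseSqlServer("Server=.;Database=OnlineShoppingStore;Trusted_Connection=True;");
    }
}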

The idea

After a bit of back and forth I got an idea: what if we subclass the context yet again and override OnConfiguring with a predefined provider (say, InMemory)?
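If we could reference the assembly at compile time, that subclass would be trivial to write by hand – roughly the following sketch (in our scenario OnlineStoreDbContext is only known at runtime, which is exactly why we can't do this and have to emit the type instead):

using System;
using Microsoft.EntityFrameworkCore;

public class InheritedDbContext : OnlineStoreDbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // any provider that can build the model without a real database will do;
        // InMemory (Microsoft.EntityFrameworkCore.InMemory) is the lightest option
        optionsBuilder.UseInMemoryDatabase(Guid.NewGuid().ToString());
    }
}

The rest of the post is about generating exactly this class at runtime with Reflection.Emit.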

IL Emit all things

We don’t get to use IL Emit often – it’s meant for pretty specific use cases, and I think this is one of them. The key to getting it right in our case was finding the correct overload of UseInMemoryDatabase. There’s a chance, however, that you might need to tweak it to suit your needs. It’s pretty trivial once you know what you’re looking for.

public static MethodBuilder OverrideOnConfiguring(this TypeBuilder tb)
        {
            MethodBuilder onConfiguringMethod = tb.DefineMethod("OnConfiguring",
                MethodAttributes.Public
                | MethodAttributes.HideBySig
                | MethodAttributes.NewSlot
                | MethodAttributes.Virtual,
                CallingConventions.HasThis,
                null,
                new[] { typeof(DbContextOptionsBuilder) });

            // the easiest method to pick will be .UseInMemoryDatabase(this DbContextOptionsBuilder optionsBuilder, string databaseName, Action<InMemoryDbContextOptionsBuilder> inMemoryOptionsAction = null)
            // but since constructing generic delegate seems a bit too much effort we'd rather filter everything else out
            var useInMemoryDatabaseMethodSignature = typeof(InMemoryDbContextOptionsExtensions)
                .GetMethods()
                .Where(m => m.Name == "UseInMemoryDatabase")
                .Where(m => m.GetParameters().Length == 3)
                .Where(m => m.GetParameters().Select(p => p.ParameterType).Contains(typeof(DbContextOptionsBuilder)))
                .Where(m => m.GetParameters().Select(p => p.ParameterType).Contains(typeof(string)))
                .Single();
            
            // emits the equivalent of optionsBuilder.UseInMemoryDatabase("test");
            var gen = onConfiguringMethod.GetILGenerator();
            gen.Emit(OpCodes.Ldarg_1);
            gen.Emit(OpCodes.Ldstr, Guid.NewGuid().ToString());
            gen.Emit(OpCodes.Ldnull);
            gen.Emit(OpCodes.Call, useInMemoryDatabaseMethodSignature);
            gen.Emit(OpCodes.Pop);
            gen.Emit(OpCodes.Ret);

            return onConfiguringMethod;
        }

With the above out of the way we can now build our dynamic type and plug it into our test harness!

class Program
    {
        static void Main(string[] args)
        {
            // load assembly under test
            var assembly = Assembly.LoadFrom(@"..\ef-metadata-query\OnlineShoppingStore\bin\Debug\netcoreapp3.1\OnlineShoppingStore.dll");
            var contextType = assembly.GetTypes().First(d => d.Name == "OnlineStoreDbContext");

            // create yet another assembly that will hold our dynamically generated type
            var typeBuilder = AssemblyBuilder
                                .DefineDynamicAssembly(new AssemblyName(Guid.NewGuid().ToString()), AssemblyBuilderAccess.RunAndCollect)
                                .DefineDynamicModule(Guid.NewGuid() + ".dll")
                                .DefineType("InheritedDbContext", TypeAttributes.Public, contextType); // make new type inherit from DbContext under test!

            // this is the key here! now our dummy implementation will kick in!
            var onConfiguringMethod = typeBuilder.OverrideOnConfiguring();
            typeBuilder.DefineMethodOverride(onConfiguringMethod, typeof(DbContext).GetMethod("OnConfiguring", BindingFlags.Instance | BindingFlags.NonPublic));
            
            var inheritedDbContext = typeBuilder.CreateType(); // enough config, let's get the type and roll with it

            // instantiate inheritedDbContext with default OnConfiguring implementation
            var context = Activator.CreateInstance(inheritedDbContext) as DbContext; // instantiate your context. this will effectively build your model, so you must have all required EF references in your project
            var p = context?.Model.FindEntityType(assembly.GetTypes().First(d => d.Name == "Product")); // get the type from loaded assembly
            
            //query the as-built model
            //var p = ctx.Model.FindEntityType("OnlineStoreDbContext.Product"); // querying model by type name also works, but you'd need to correctly qualify your type names
            var pk = p.FindPrimaryKey().Properties.First().Name; // your PK property name as built by EF model
            
            Console.WriteLine(pk);
        }
    }

This is runnable

Source code is available on GitHub in case you want to check it out and play with it a bit.
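Coming back to the use case that started it all – checking that a model follows in-house conventions – once the context has been materialised this way, the same metadata can drive simple rule checks. A minimal sketch that could be dropped into the Program class above (the rules and the helper name are made up for illustration; it assumes using System.Linq and using Microsoft.EntityFrameworkCore directives, the latter providing GetMaxLength()):

static void CheckModelConventions(DbContext context)
{
    foreach (var entity in context.Model.GetEntityTypes())
    {
        // rule 1: every entity must have a primary key
        if (entity.FindPrimaryKey() == null)
            Console.WriteLine($"{entity.Name}: no primary key defined");

        // rule 2: every string property must have an explicit max length
        foreach (var property in entity.GetProperties()
                     .Where(p => p.ClrType == typeof(string) && p.GetMaxLength() == null))
        {
            Console.WriteLine($"{entity.Name}.{property.Name}: string property without a max length");
        }
    }
}

Calling CheckModelConventions(context) at the end of Main would then print any violations found in the loaded model.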

Monitoring SQL Server: index maintenance

Now that we’ve got basic SQL queries going, we want to collect some actionable intel. A good measure of any SQL database’s performance is how well its indexes are utilised.

This isn’t new

The queries we’re about to share are hardly a revelation. What we want to achieve, however, is an idea of how that information changes over time.

Index utilisation

input {
	############################################################# Index Maintenance ############################################################################
	jdbc {
		id => "master_index_maintenance"
		jdbc_driver_library => "path\to\jdbc\lib\mssql-jdbc-7.2.1.jre8.jar"
		jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
		jdbc_connection_string => "jdbc:sqlserver://<your connection string>"
		jdbc_user => nil
		schedule => "*/15 * * * *"
		statement => "	DECLARE @db_id INT
						DECLARE @db_name NVARCHAR(120)
						DECLARE @index_data TABLE(
												[database_name] NVARCHAR(128) NOT NULL,
												table_name sysname NOT NULL, 
												index_name sysname NULL, 
												index_type TINYINT NOT NULL, 
												user_seeks bigint NOT NULL, 
												user_scans bigint NOT NULL, 
												user_lookups bigint NOT NULL, 
												user_updates bigint NOT NULL
											   );

						SET NOCOUNT ON
						DECLARE @dbs table ([db_name] sysname)
						DECLARE @db_query_sql nvarchar(4000)
						SET @db_query_sql='select ''?'' as [db_name] from [?].sys.tables t WHERE t.name = ''Users''';

						INSERT INTO @dbs ([db_name]) EXEC sp_msforeachdb @db_query_sql
						SET NOCOUNT OFF

						DECLARE db_id_cursor CURSOR FOR SELECT DB_ID([db_name]), [db_name] FROM @dbs FOR READ ONLY 
						OPEN db_id_cursor  
						FETCH NEXT FROM db_id_cursor INTO @db_id, @db_name;  
						WHILE @@FETCH_STATUS = 0  
						BEGIN
							DECLARE @sql NVARCHAR(MAX) = CAST(N'USE [' + @db_name + '];
							
							SELECT ''' + @db_name + ''', t.[name], ix.[name], ix.[type], us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
							FROM sys.dm_db_index_usage_stats us
							INNER JOIN sys.indexes ix ON us.object_id = ix.object_id and ix.index_id = us.index_id
							INNER JOIN sys.tables t ON ix.object_id = t.object_id	
							WHERE us.database_id = ' + CAST(@db_id AS NVARCHAR(10)) AS NVARCHAR(MAX));

							INSERT INTO @index_data EXEC sys.sp_executesql @sql;
							FETCH NEXT FROM db_id_cursor INTO @db_id, @db_name;  
						END
						CLOSE db_id_cursor
						DEALLOCATE db_id_cursor
						SELECT * FROM @index_data"
		add_field => {
						"sql_instance" => "SQL2"
					}
	}
	################################################################################################################################################################	
}
output {
	elasticsearch {
		hosts => "elasticsearch:9200"
		index => "sql-index-stats-%{+YYYY.MM}"            
	}
}

Missing indexes

This one is a bit more interesting in the sense that we get actual index hints. It is not a silver bullet though – the output still needs to be analysed by a human for best results.

input {
	############################################################# Missing Indexes ############################################################################
	jdbc {
		id => "master_missing_indexes"
		jdbc_driver_library => "path\to\jdbc\lib\mssql-jdbc-7.2.1.jre8.jar"
		jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
		jdbc_connection_string => "jdbc:sqlserver://<your connection string>"
		jdbc_user => nil
		schedule => "1 */2 * * *"
		statement => "	DECLARE @db_id INT
						DECLARE @db_name NVARCHAR(120)
						DECLARE @index_data TABLE(
												[database_name] NVARCHAR(128) NOT NULL,						
												table_name sysname NOT NULL, 
												unique_compiles bigint NOT NULL,
												user_seeks bigint NOT NULL, 
												user_scans bigint NOT NULL, 
												avg_total_user_cost float NULL, 
												avg_user_impact float NULL, 
												overall_impact float NOT NULL,
												sql_code NVARCHAR(MAX) NOT NULL
											   );

						SET NOCOUNT ON
						DECLARE @dbs table ([db_name] sysname)
						DECLARE @db_query_sql nvarchar(4000)
						SET @db_query_sql='select ''?'' as [db_name] from [?].sys.tables t WHERE t.name = ''Users''';

						INSERT INTO @dbs ([db_name]) EXEC sp_msforeachdb @db_query_sql
						SET NOCOUNT OFF

						DECLARE db_id_cursor CURSOR FOR SELECT DB_ID([db_name]), [db_name] FROM @dbs FOR READ ONLY 
						OPEN db_id_cursor  
						FETCH NEXT FROM db_id_cursor INTO @db_id, @db_name;  
						WHILE @@FETCH_STATUS = 0  
						BEGIN
							DECLARE @sql NVARCHAR(MAX) = CAST(N'USE [' + @db_name + '];
							
							WITH index_definitions (table_name, unique_compiles, user_seeks, users_scans, avg_total_user_cost, 
							avg_user_impact, [overall_impact], column_names, included_columns) 
							AS (
								SELECT TOP(600) object_name(c.object_id) AS table_name, 
								a.unique_compiles, a.user_seeks, a.user_scans, a.avg_total_user_cost, a.avg_user_impact, 
								a.avg_total_user_cost * a.avg_user_impact * (a.user_seeks + a.user_scans) AS [overall_impact],
								case when c.equality_columns is not null and c.inequality_columns is not null then c.equality_columns + '', '' + c.inequality_columns
								when c.equality_columns is not null and c.inequality_columns is null then c.equality_columns
								when c.inequality_columns is not null then c.inequality_columns
								END AS column_names,
								c.included_columns
								FROM sys.dm_db_missing_index_group_stats a
								inner join sys.dm_db_missing_index_groups b ON a.group_handle = b.index_group_handle
								inner join sys.dm_db_missing_index_details c on c.index_handle = b.index_handle
								where database_id = ' + CAST(@db_id AS NVARCHAR(10)) + '
								and equality_columns is not null
							)
							SELECT '''+ @db_name +''' AS [database], table_name, unique_compiles, user_seeks, users_scans, avg_total_user_cost, avg_user_impact, overall_impact, ''CREATE NONCLUSTERED INDEX IX_'' + REPLACE(REPLACE(REPLACE(index_definitions.column_names, ''], ['', ''_''), ''['', ''''), '']'', '''') 
									+ '' on '' + index_definitions.table_name + '' ('' + index_definitions.column_names + '') INCLUDE ('' + index_definitions.included_columns + '')''  AS [sql_code]
							FROM index_definitions WHERE index_definitions.included_columns IS NOT NULL
							UNION 
							SELECT '''+ @db_name +''' AS [database], table_name, unique_compiles, user_seeks, users_scans, avg_total_user_cost, avg_user_impact, overall_impact, ''CREATE NONCLUSTERED INDEX IX_'' + REPLACE(REPLACE(REPLACE(index_definitions.column_names, ''], ['', ''_''), ''['', ''''), '']'', '''') 
									+ '' on '' + index_definitions.table_name + '' ('' + index_definitions.column_names + '')'' AS [sql_code]
							FROM index_definitions WHERE index_definitions.included_columns IS NULL' AS NVARCHAR(MAX));

							INSERT INTO @index_data EXEC sys.sp_executesql @sql;
							FETCH NEXT FROM db_id_cursor INTO @db_id, @db_name;  
						END
						CLOSE db_id_cursor
						DEALLOCATE db_id_cursor
						SELECT * FROM @index_data;"
		add_field => {
						"sql_instance" => "SQL2"
					}
	}
	################################################################################################################################################################	
}
output {
	elasticsearch {
		hosts => "elasticsearch:9200"
		index => "sql-missing-indexes-%{+YYYY.MM}"            
	}
}

With data collection out of the way it’s time to move on to plotting graphs. Grafana is quite easy to pick up, so instead of repeating the official documentation we’ll share a few dashboards we have in place:

{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": "-- Grafana --",
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "id": 33,
  "iteration": 1584410507264,
  "links": [],
  "panels": [
    {
      "collapsed": false,
      "datasource": null,
      "gridPos": {
        "h": 1,
        "w": 24,
        "x": 0,
        "y": 0
      },
      "id": 74,
      "panels": [],
      "title": "Index usage",
      "type": "row"
    },
    {
      "columns": [],
      "datasource": "Elasticsearch [index-stats]",
      "fontSize": "100%",
      "gridPos": {
        "h": 5,
        "w": 24,
        "x": 0,
        "y": 1
      },
      "id": 72,
      "interval": "1h",
      "links": [],
      "maxPerRow": null,
      "options": {},
      "pageSize": null,
      "repeat": "Database",
      "repeatDirection": "v",
      "scopedVars": {
        "Database": {
          "selected": true,
          "text": "All",
          "value": "All"
        }
      },
      "scroll": true,
      "showHeader": true,
      "sort": {
        "col": 4,
        "desc": true
      },
      "styles": [
        {
          "alias": "Table",
          "align": "auto",
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "pattern": "table_name.keyword",
          "type": "string"
        },
        {
          "alias": "Index",
          "align": "auto",
          "colorMode": "value",
          "colors": [
            "rgba(50, 172, 45, 0.97)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(245, 54, 54, 0.9)"
          ],
          "decimals": 2,
          "pattern": "index_name.keyword",
          "thresholds": [
            "60",
            "80",
            "90"
          ],
          "type": "string",
          "unit": "percent"
        },
        {
          "alias": "Scans",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_scans",
          "sanitize": false,
          "thresholds": [],
          "type": "number",
          "unit": "none"
        },
        {
          "alias": "Seeks",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_seeks",
          "thresholds": [],
          "type": "number",
          "unit": "none"
        },
        {
          "alias": "Lookups",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_lookups",
          "thresholds": [],
          "type": "number",
          "unit": "none"
        },
        {
          "alias": "Updates",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_updates",
          "thresholds": [],
          "type": "number",
          "unit": "none"
        },
        {
          "alias": "Total index usages",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "bucket",
          "thresholds": [],
          "type": "number",
          "unit": "none"
        }
      ],
      "targets": [
        {
          "alias": "",
          "bucketAggs": [
            {
              "fake": true,
              "field": "table_name.keyword",
              "id": "4",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "9",
                "size": "0"
              },
              "type": "terms"
            },
            {
              "fake": true,
              "field": "index_name.keyword",
              "id": "3",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "9",
                "size": "0"
              },
              "type": "terms"
            }
          ],
          "hide": true,
          "metrics": [
            {
              "field": "user_scans",
              "id": "5",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_seeks",
              "id": "6",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_lookups",
              "id": "7",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_updates",
              "id": "8",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "select field",
              "id": "9",
              "meta": {},
              "pipelineVariables": [
                {
                  "name": "var1",
                  "pipelineAgg": "5"
                },
                {
                  "name": "var2",
                  "pipelineAgg": "6"
                },
                {
                  "name": "var3",
                  "pipelineAgg": "7"
                }
              ],
              "settings": {
                "script": "params.var1+params.var2+params.var3"
              },
              "type": "bucket_script"
            }
          ],
          "query": "database_name.keyword:$Database",
          "refId": "A",
          "timeField": "@timestamp"
        },
        {
          "bucketAggs": [
            {
              "fake": true,
              "field": "table_name.keyword",
              "id": "3",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "4",
                "size": "0"
              },
              "type": "terms"
            },
            {
              "fake": true,
              "field": "index_name.keyword",
              "id": "9",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "4",
                "size": "10"
              },
              "type": "terms"
            }
          ],
          "metrics": [
            {
              "field": "user_scans",
              "id": "4",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_seeks",
              "id": "5",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_lookups",
              "id": "6",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_updates",
              "id": "7",
              "meta": {},
              "settings": {},
              "type": "avg"
            }
          ],
          "query": "database_name.keyword:$Database",
          "refId": "B",
          "timeField": "@timestamp"
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Index usage - $Database",
      "transform": "table",
      "type": "table"
    },
    {
      "collapsed": false,
      "datasource": null,
      "gridPos": {
        "h": 1,
        "w": 24,
        "x": 0,
        "y": 6
      },
      "id": 76,
      "panels": [],
      "title": "Underutilised indexes",
      "type": "row"
    },
    {
      "cacheTimeout": null,
      "columns": [],
      "datasource": "Elasticsearch [index-stats]",
      "fontSize": "100%",
      "gridPos": {
        "h": 7,
        "w": 24,
        "x": 0,
        "y": 7
      },
      "id": 58,
      "links": [],
      "options": {},
      "pageSize": null,
      "repeat": "Database",
      "repeatDirection": "h",
      "scopedVars": {
        "Database": {
          "selected": true,
          "text": "",
          "value": ""
        }
      },
      "showHeader": true,
      "sort": {
        "col": 3,
        "desc": true
      },
      "styles": [
        {
          "alias": "Table",
          "align": "auto",
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "pattern": "table_name.keyword",
          "type": "string"
        },
        {
          "alias": "Index",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "decimals": 2,
          "pattern": "index_name.keyword",
          "thresholds": [],
          "type": "string",
          "unit": "short"
        },
        {
          "alias": "Lookups",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_lookups",
          "thresholds": [],
          "type": "number",
          "unit": "short"
        },
        {
          "alias": "Scans",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_scans",
          "thresholds": [],
          "type": "number",
          "unit": "short"
        },
        {
          "alias": "Seeks",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_seeks",
          "thresholds": [],
          "type": "number",
          "unit": "short"
        },
        {
          "alias": "Total Usage",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 3,
          "mappingType": 1,
          "pattern": "Bucket Script",
          "thresholds": [],
          "type": "number",
          "unit": "short"
        },
        {
          "alias": "Updates",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_updates",
          "thresholds": [],
          "type": "number",
          "unit": "short"
        }
      ],
      "targets": [
        {
          "alias": "{{table_name.keyword}}.{{index_name.keyword}}",
          "bucketAggs": [
            {
              "fake": true,
              "field": "table_name.keyword",
              "id": "5",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "6",
                "size": "5"
              },
              "type": "terms"
            },
            {
              "fake": true,
              "field": "index_name.keyword",
              "id": "3",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "6",
                "size": "10"
              },
              "type": "terms"
            }
          ],
          "metrics": [
            {
              "field": "user_lookups",
              "hide": true,
              "id": "1",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_scans",
              "hide": true,
              "id": "6",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_seeks",
              "hide": true,
              "id": "7",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "select field",
              "id": "8",
              "meta": {},
              "pipelineVariables": [
                {
                  "name": "var1",
                  "pipelineAgg": "1"
                },
                {
                  "name": "var2",
                  "pipelineAgg": "6"
                },
                {
                  "name": "var3",
                  "pipelineAgg": "7"
                },
                {
                  "name": "var4",
                  "pipelineAgg": "9"
                }
              ],
              "settings": {
                "script": "params.var4/(params.var1+params.var2+params.var3)"
              },
              "type": "bucket_script"
            },
            {
              "field": "user_updates",
              "hide": true,
              "id": "9",
              "meta": {},
              "settings": {},
              "type": "avg"
            }
          ],
          "query": "database_name.keyword:$Database AND user_lookups:[0 TO 100] AND user_scans:[0 TO 100] AND user_seeks:[0 TO 100]",
          "refId": "A",
          "timeField": "@timestamp"
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "$Database - Underutilised indexes",
      "transform": "table",
      "type": "table"
    },
    {
      "collapsed": false,
      "datasource": null,
      "gridPos": {
        "h": 1,
        "w": 24,
        "x": 0,
        "y": 14
      },
      "id": 91,
      "panels": [],
      "title": "Missing Indexes",
      "type": "row"
    },
    {
      "columns": [],
      "datasource": "Elasticsearch [missing-indexes]",
      "fontSize": "100%",
      "gridPos": {
        "h": 5,
        "w": 24,
        "x": 0,
        "y": 15
      },
      "id": 89,
      "interval": "30m",
      "links": [],
      "maxPerRow": 2,
      "options": {},
      "pageSize": null,
      "repeat": "Database",
      "repeatDirection": "v",
      "scopedVars": {
        "Database": {
          "selected": true,
          "text": "",
          "value": ""
        }
      },
      "scroll": true,
      "showHeader": true,
      "sort": {
        "col": 6,
        "desc": true
      },
      "styles": [
        {
          "alias": "Table",
          "align": "auto",
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "pattern": "table_name.keyword",
          "type": "string"
        },
        {
          "alias": "Index",
          "align": "auto",
          "colorMode": "value",
          "colors": [
            "rgba(50, 172, 45, 0.97)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(245, 54, 54, 0.9)"
          ],
          "decimals": 2,
          "pattern": "sql_code.keyword",
          "thresholds": [
            ""
          ],
          "type": "string",
          "unit": "percent"
        },
        {
          "alias": "Impact",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 2,
          "mappingType": 1,
          "pattern": "Average",
          "sanitize": false,
          "thresholds": [],
          "type": "number",
          "unit": "none"
        },
        {
          "alias": "Seeks",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_seeks",
          "thresholds": [],
          "type": "number",
          "unit": "none"
        }
      ],
      "targets": [
        {
          "alias": "",
          "bucketAggs": [
            {
              "fake": true,
              "field": "table_name.keyword",
              "id": "11",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "10",
                "size": "0"
              },
              "type": "terms"
            },
            {
              "fake": true,
              "field": "sql_code.keyword",
              "id": "4",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "10",
                "size": "0"
              },
              "type": "terms"
            }
          ],
          "metrics": [
            {
              "field": "overall_impact",
              "id": "10",
              "meta": {},
              "settings": {},
              "type": "avg"
            }
          ],
          "query": "database_name.keyword:$Database",
          "refId": "A",
          "timeField": "@timestamp"
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Missing indexes - $Database",
      "transform": "table",
      "type": "table"
    }
  ],
  "refresh": false,
  "schemaVersion": 22,
  "style": "dark",
  "tags": [],
  "templating": {
    "list": [
      {
        "allValue": null,
        "current": {
          "text": "",
          "value": [
            ""
          ]
        },
        "datasource": "Elasticsearch [index-stats]",
        "definition": "{\"find\": \"terms\",\"field\": \"database_name.keyword\"}",
        "hide": 0,
        "includeAll": true,
        "label": null,
        "multi": true,
        "name": "Database",
        "options": [],
        "query": "{\"find\": \"terms\",\"field\": \"database_name.keyword\"}",
        "refresh": 2,
        "regex": "",
        "skipUrlSync": false,
        "sort": 0,
        "tagValuesQuery": "",
        "tags": [],
        "tagsQuery": "",
        "type": "query",
        "useTags": false
      }
    ]
  },
  "time": {
    "from": "now-24h",
    "to": "now"
  },
  "timepicker": {
    "refresh_intervals": [
      "5s",
      "10s",
      "30s",
      "1m",
      "5m",
      "15m",
      "30m",
      "1h",
      "2h",
      "1d"
    ],
    "time_options": [
      "5m",
      "15m",
      "1h",
      "6h",
      "12h",
      "24h",
      "2d",
      "7d",
      "30d"
    ]
  },
  "timezone": "",
  "title": "Index Maintenance",
  "uid": "OQVK9BSWk",
  "version": 27
}

Monitoring SQL Server: fiddling with sp_whoisactive output

Building up from basic Telegraf inputs, we now want some actual SQL data. And this time around it’s not just numeric telemetry – we’d like to gain some insight into the actual queries that get run.

Our choice of Elastic

Up to this point we haven’t really justified our choice of data back-end. Telegraf is intended to work with time-series DBs like Influx, which are heavily optimised around assumptions about the kind of data they store and query (i.e. timed series of events). Elasticsearch, however, is a full-text search engine. So it’s about time we started using it for what it was designed for, right?

Finding active sessions with queries and wait reasons

Taking a step back, we must acknowledge the work of Adam Machanic and call out sp_whoisactive as probably the most useful tool every DBA should be aware of. This script provides a great overview of what the system is doing right now, how long the sessions have been running for and what they are waiting on. It even allows dumping the data into predefined tables out of the box! Again, if we were looking to use SQL Server itself for monitoring we could have easily gone this way, and with a bit of SQL Agent scheduling we’d be done in no time.

There’s one slight issue with sp_whoisactive – it’s not meant to work with anything other than a SQL table as its data store. For our case that’s not ideal. It also doesn’t know anything about SQL Server Query Store, which we think is a nice-to-have.

It’s a known fact that we can’t join the results of a stored procedure with other tables unless we dump those results into an interim temp table first. So we use sp_whoisactive‘s own temp-table scripting facility to store the results while we loop through the Query Stores on all our databases to grab some extra bits of data we think might be useful:

DECLARE @table_schema NVARCHAR(MAX);
DECLARE @temp_table NVARCHAR(MAX);
DECLARE @result_table NVARCHAR(MAX);
SET NOCOUNT ON;
/*EXEC sp_WhoIsActive @get_outer_command = 1,
					@find_block_leaders = 1,
					@return_schema = 1,
					@format_output = 0,
					@schema = @table_schema OUTPUT;*/
SET @table_schema = N'CREATE TABLE <table_name> ( [session_id] smallint NOT NULL,[sql_text] nvarchar(max) NULL,[sql_command] nvarchar(max) NULL,[login_name] nvarchar(128) NOT NULL,[wait_info] nvarchar(4000) NULL,[CPU] int NULL,[tempdb_allocations] bigint NULL,[tempdb_current] bigint NULL,[blocking_session_id] smallint NULL,[blocked_session_count] smallint NULL,[reads] bigint NULL,[writes] bigint NULL,[physical_reads] bigint NULL,[used_memory] bigint NOT NULL,[status] varchar(30) NOT NULL,[open_tran_count] smallint NULL,[percent_complete] real NULL,[host_name] nvarchar(128) NULL,[database_name] nvarchar(128) NULL,[program_name] nvarchar(128) NULL,[start_time] datetime NOT NULL,[login_time] datetime NULL,[request_id] int NULL,[collection_time] datetime NOT NULL)';
SET @temp_table = REPLACE(@table_schema, '<table_name>', '#whoisactive_monitoring');
SET @result_table = REPLACE(@table_schema, '<table_name>', '#result');
SET @result_table = REPLACE(@result_table, 'NOT NULL)', 'NOT NULL,[duration] bigint NOT NULL DEFAULT 0)'); -- add placeholder for duration column 
SET @result_table = REPLACE(@result_table, 'DEFAULT 0)', 'DEFAULT 0,[query_text_id] bigint NULL)'); -- add placeholder for query_text_id from Query Store
---
DECLARE @db_name NVARCHAR(120);
DECLARE @enhance_sql NVARCHAR(MAX);

SET NOCOUNT ON
-- here we just build a list of databases on the server using known table Users as an anchor. Your databases will have other well known tables
DECLARE @dbs table ([db_name] sysname)
DECLARE @db_query_sql nvarchar(4000)
SET @db_query_sql='select ''?'' as [db_name] from [?].sys.tables t WHERE t.name = ''Users'''; 
INSERT INTO @dbs ([db_name]) EXEC sp_msforeachdb @db_query_sql
SET NOCOUNT OFF

DECLARE db_name_cursor CURSOR FOR
SELECT [db_name]
FROM @dbs
FOR READ ONLY;
OPEN db_name_cursor;
FETCH NEXT FROM db_name_cursor
INTO @db_name;
WHILE @@FETCH_STATUS = 0
BEGIN
	SET @enhance_sql
		= CONCAT(
					@enhance_sql,
					CHAR(13),
					CHAR(10),
					CAST(N'USE [' + @db_name + ']
						IF EXISTS(SELECT 1 FROM sys.database_query_store_options WHERE actual_state_desc <> ''OFF'')
						UPDATE #result 
						SET query_text_id = qsqt.query_text_id 
						FROM #result wm 
						LEFT JOIN sys.query_store_query_text qsqt ON wm.sql_text COLLATE DATABASE_DEFAULT = qsqt.query_sql_text COLLATE DATABASE_DEFAULT 
						WHERE wm.database_name = ''' + @db_name + ''';' AS NVARCHAR(MAX))
				);
	FETCH NEXT FROM db_name_cursor
	INTO @db_name;
END;
CLOSE db_name_cursor;
DEALLOCATE db_name_cursor;
---
DECLARE @main_script NVARCHAR(MAX);
SET @main_script
	= CAST(CONCAT(
						@temp_table, N';',
						CHAR(13),
						CHAR(10),
						'USE [master]; EXEC [master].[dbo].sp_WhoIsActive @get_outer_command=1, @find_block_leaders=1,@format_output=0, @destination_table=''#whoisactive_monitoring'';',
						CHAR(13),
						CHAR(10),
						@result_table, ';',
						CHAR(13),
						CHAR(10),
						N'INSERT INTO #result SELECT	wm.*, 
													CASE
														WHEN DATEDIFF(hour, wm.start_time, wm.collection_time) > 576 THEN
															DATEDIFF(second, wm.start_time, wm.collection_time)
														ELSE DATEDIFF(ms, wm.start_time, wm.collection_time)
													END AS duration, 
													NULL 
											FROM #whoisactive_monitoring wm;',
						CHAR(13),
						CHAR(10),
						@enhance_sql,
						CHAR(13),
						CHAR(10),
						N'DROP TABLE #whoisactive_monitoring;',
						CHAR(13),
						CHAR(10),
						N'SELECT * FROM #result;DROP TABLE #result;'
					) AS NVARCHAR(MAX));
--PRINT @main_script
EXEC (@main_script);

Now that we’ve got the data

We want Logstash to push the output of this query into Elastic so that we always have some idea of what’s been going on on the DB server. One way to run queries against SQL Server is the JDBC driver – download the .jre8.jar, unpack it and remember the location.

Defining pipeline

After we’ve got the driver, everything else is easy: the pipeline is literally one input and one output:

input {
	############################################################# Master sp_WhoIsActive ############################################################################
	jdbc {
		id => "master_spWhoIsActive"
		jdbc_driver_library => "path\to\jdbc\lib\mssql-jdbc-7.2.1.jre8.jar"
		jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
		jdbc_connection_string => "jdbc:sqlserver://<your connection string>"
		jdbc_user => nil
		schedule => "* * * * *" ## runs each minute
		statement => "-- literally paste sql code from above inside quotes here"
		add_field => {
						"database" => "master" ## optionally add more fields to distinguish where data comes from
						"instance" => "MY-SQL-Server"
					}
	}
	################################################################################################################################################################	
}
output {
	elasticsearch {
		hosts => "elasticsearch:9200"
		index => "whoisactive-%{+YYYY.MM}"            
	}
}

Restart Logstash and watch Kibana update with new events!