Running HTTPS with Docker

It’s no secret that we love Docker. And with recent changes to how Chrome treats SameSite cookies, it has become a bit of a pain to develop any sort of OAuth solution with containers: traffic has to go over SSL or the browser will reject the cookies.

Tools like dotnet dev-certs do provide some relief by generating self-signed certs and adding them to the trusted store on the host machine. In short – most of the time, host-to-container development is not an issue.

What if we need more than one domain?

Sometimes there are cases where we’d like to access the same service by two domain names. It might be useful when a specific Host header is required:

We can opt for what’s known as a SAN certificate. It’s an extension to X.509 that allows us to reuse the same cert for multiple domain names. We can then trust the certificate on our dev machine and make Docker use the same cert for HTTPS:

#create a SAN cert for both server.docker.local and localhost
$cert = New-SelfSignedCertificate -DnsName "server.docker.local", "localhost" -CertStoreLocation cert:\localmachine\my

#export it for docker container to pick up later
$password = ConvertTo-SecureString -String "123123" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath C:\https\aspnetapp.pfx -Password $password

# trust it on our host machine (a self-signed cert needs to go into the Trusted Root store to be accepted)
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store "Root","LocalMachine"
$store.Open("ReadWrite")
$store.Add($cert)
$store.Close()

More containers?

Sometimes we want one container to talk to another while retaining the ability to check up on things from localhost. Consider the following docker-compose:

version: '3'
services:
  client: # client process that needs to talk to server
    depends_on:
      - server
  server: # server that we'd also like to access from the outside
    image:     
    ports:
      - "8443:443"

This would roughly translate to the following network layout:

Problems start

When one container needs to talk to another container it’s a slightly different story: dev tools don’t have control over containers and cannot magically trust certificates inside them. We could opt for properly signed certificates (from Let’s Encrypt, for example), but that’s a whole different story and is likely not worth it for development machines.

The above PowerShell script is also going to fall short as it only adds the cert onto the development machine – containers will keep failing to validate it. The solution builds on the same principles: generate a self-signed cert and trust it everywhere. Since most Docker containers run Linux, we might have better luck going the opposite direction and generating certs in PEM format using the well-known OpenSSL tool. Then we’d use Dockerfiles to inject this cert into all our containers, and finally we’d convert it into a format suitable for our host Windows machine (PKCS#12).

$certPass = "password_here"
$certSubj = "host.docker.internal"
$certAltNames = "DNS:localhost,DNS:host.docker.internal,DNS:identity_server" # we can also add individual IP addresses here like so: IP:127.0.0.1
$opensslPath="path\to\openssl\binaries" # OpenSSL needs to be present on the host, no installation is necessary though
$workDir="path\to\your\project"
$dockerDir=Join-Path $workDir "ProjectApi"

#generate a self-signed cert with multiple domains
Start-Process -NoNewWindow -Wait -FilePath (Join-Path $opensslPath "openssl.exe") -ArgumentList "req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ",
                                          (Join-Path $workDir aspnetapp.key),
                                          "-out", (Join-Path $dockerDir aspnetapp.crt),
                                          "-subj `"/CN=$certSubj`" -addext `"subjectAltName=$certAltNames`""

# this time round we convert PEM format into PKCS#12 (aka PFX) so .net core app picks it up
Start-Process -NoNewWindow -Wait -FilePath (Join-Path $opensslPath "openssl.exe") -ArgumentList "pkcs12 -export -in ", 
                                           (Join-Path $dockerDir aspnetapp.crt),
                                           "-inkey ", (Join-Path $workDir aspnetapp.key),
                                           "-out ", (Join-Path $workDir aspnetapp.pfx),
                                           "-passout pass:$certPass"

$password = ConvertTo-SecureString -String $certPass -Force -AsPlainText
$cert = Get-PfxCertificate -FilePath (Join-Path $workDir "aspnetapp.pfx") -Password $password

# and still, trust it on our host machine
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store "Root","LocalMachine"
$store.Open("ReadWrite")
$store.Add($cert)
$store.Close()

Example: Running Identity Server

Now we have our certs (for example, located in %USERPROFILE%\.aspnet\https). Here’s a quick how-to for telling ASP.NET Core-based containers to pick them up:

docker pull your_docker_image
docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="123123" -e ASPNETCORE_Kestrel__Certificates__Default__Path=\https\aspnetapp.pfx -v %USERPROFILE%\.aspnet\https:C:\https\ your_docker_image

docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="123123" -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx <your image>

Or in docker-compose format:

version: '3'
services:
  identity_server:
    image: mcr.microsoft.com/dotnet/core/samples:aspnetapp    
    environment: 
      - ASPNETCORE_URLS=https://+:443;http://+:80
      - ASPNETCORE_Kestrel__Certificates__Default__Password=password_here
      - ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
    volumes:
      - ~/.aspnet/https/:/https/:ro 
    container_name: identity_server
    ports:
      - "8443:443"
      - "8080:80"

Data visualisation with Vega

We love nice dashboards. And if you see a chart somewhere on a webpage – chances are it runs on D3.js. D3.js is a JavaScript library for manipulating documents based on data. It allows you to do a great deal of visualisation but comes with a bit of a learning curve. Even though the data can come in any shape and form, the plotting and transformations themselves are JavaScript code.

Declarative approach

This is where Vega comes in. Everything is now JSON, so we can literally build and ship visualisations without touching JavaScript at all!

Step by step

Suppose we’ve got hierarchical animal data represented by the following JSON:

"values": [
        {"id": "1", "parent": null, "title": "Animal"},
        {"id": "2", "parent": "1", "title": "Duck"},
        {"id": "3", "parent": "1", "title": "Fish"},
        {"id": "4", "parent": "1", "title": "Zebra"}
      ]

What we can then do is lay the nodes out in a tree-like shape (the stratify transform does the job):

"transform": [
        {
          "type": "stratify",
          "key": "id",
          "parentKey": "parent"
        },
        {
          "type": "tree",
          "method": "tidy",
          "separation": true,
          "size": [{"signal": "width"}, {"signal": "height"}]
        }
      ]

Having laid out the nodes, we need to generate the connecting lines; the treelinks + linkpath combo does exactly that:

{
      "name": "links",
      "source": "tree", // take datasource "tree" as input
      "transform": [
        { "type": "treelinks" }, // apply transform 1
        { "type": "linkpath", // follow up with next transform
          "shape": "diagonal"
          }
      ]
    }

Now that we’ve got our data sources, we want to draw actual objects. In Vega these are called marks. For simplicity I’m only drawing a rectangle with a title for each data point and some basic lines to connect them:

"marks": [
    {
      "type": "path",
      "from": {"data": "links"}, // dataset we defined above
      "encode": {
        "enter": {
          "path": {"field": "path"} // linkpath generated a dataset with "path" field in it - we just grab it here
        }
      }
    },
    {
      "type": "rect",
      "from": {"data": "tree"},
      "encode": {
        "enter": {
          "stroke": {"value": "black"},
          "width": {"value": 100},
          "height": {"value": 20},
          "x": {"field": "x"},
          "y": {"field": "y"}
        }
      }
    },
    {
      "type": "text",
      "from": {"data": "tree"}, // use data set we defined earlier
      "encode": {
        "enter": {
          "stroke": {"value": "black"},
          "text": {"field": "title"}, // we can use data fields to display actual values
          "x": {"field": "x"}, // use data fields to draw values from
          "y": {"field": "y"},
          "dx": {"value":50}, // offset the mark to appear in rectangle center
          "dy": {"value":13},
          "align": {"value": "center"}
        }
      }
    }
  ]

All in all we’ve arrived at a very basic hierarchical chart. It looks kinda plain and can definitely be improved: the rectangles should probably be replaced with groups, and the connection paths will need some work too. Here’s the full spec:

{
  "$schema": "https://vega.github.io/schema/vega/v5.json",
  "width": 800,
  "height": 300,
  "padding": 5,

  "data": [
    {
      "name": "tree",
      "values": [
        {"id": "1", "parent": null, "title": "Animal"},
        {"id": "2", "parent": "1", "title": "Duck"},
        {"id": "3", "parent": "1", "title": "Fish"},
        {"id": "4", "parent": "1", "title": "Zebra"}
      ],
      "transform": [
        {
          "type": "stratify",
          "key": "id",
          "parentKey": "parent"
        },
        {
          "type": "tree",
          "method": "tidy",
          "separation": true,
          "size": [{"signal": "width"}, {"signal": "height"}]
        }
      ]      
    },
    {
      "name": "links",
      "source": "tree",
      "transform": [
        { "type": "treelinks" },
        { "type": "linkpath",
          "shape": "diagonal"
          }
      ]
    }, 
    {
      "name": "tree-boxes",
      "source": "tree",
      "transform": [
          { 
            "type": "filter",
            "expr": "datum.parent == null"
          }
        ]
    },
    {
      "name": "tree-circles",
      "source": "tree",
      "transform": [
        {
          "type": "filter",
          "expr": "datum.parent != null"
        }
      ]
    }
  ],
  "marks": [
    {
      "type": "path",
      "from": {"data": "links"},
      "encode": {
        "enter": {
          "path": {"field": "path"}
        }
      }
    },
    {
      "type": "rect",
      "from": {"data": "tree-boxes"},
      "encode": {
        "enter": {
          "stroke": {"value": "black"},
          "width": {"value": 100},
          "height": {"value": 20},
          "x": {"field": "x"},
          "y": {"field": "y"}
        }
      }
    },
    {
      "type": "symbol",
      "from": {"data": "tree-circles"},
      "encode": {
        "enter": {
          "stroke": {"value": "black"},
          "width": {"value": 100},
          "height": {"value": 20},
          "x": {"field": "x"},
          "y": {"field": "y"}
        }
      }
    },
    {
      "type": "rect",
      "from": {"data": "tree"},
      "encode": {
        "enter": {
          "stroke": {"value": "black"},
          "width": {"value": 100},
          "height": {"value": 20},
          "x": {"field": "x"},
          "y": {"field": "y"}
        }
      }
    },
    {
      "type": "text",
      "from": {"data": "tree"},
      "encode": {
        "enter": {
          "stroke": {"value": "black"},
          "text": {"field": "title"},
          "x": {"field": "x"},
          "y": {"field": "y"},
          "dx": {"value":50},
          "dy": {"value":13},
          "align": {"value": "center"}
        }
      }
    }
  ]
}

Getting a bit fancier

Suppose we would like to render different shapes for the root and leaf nodes of our chart. One way to achieve this is to add two datasets based on the existing tree dataset and filter them accordingly:

    {
      "name": "tree-boxes",
      "source": "tree", // grab the existing data
      "transform": [
          { 
            "type": "filter",
            "expr": "datum.parent == null" // run it through a filter defined by expression
          }
        ]
    },
    {
      "name": "tree-circles",
      "source": "tree",
      "transform": [
        {
          "type": "filter",
          "expr": "datum.parent != null"
        }
      ]
    }

Then, instead of rendering all marks as rect, we’d want two different shapes for the respective filtered datasets:

{
      "type": "rect",
      "from": {"data": "tree-boxes"},
      "encode": {
        "enter": {
          "stroke": {"value": "black"},
          "width": {"value": 100},
          "height": {"value": 20},
          "x": {"field": "x"},
          "y": {"field": "y"}
        }
      }
    },
    {
      "type": "symbol",
      "from": {"data": "tree-circles"},
      "encode": {
        "enter": {
          "stroke": {"value": "black"},
          "width": {"value": 100},
          "height": {"value": 20},
          "x": {"field": "x"},
          "y": {"field": "y"}
        }
      }
    }

Demo time

Play with Vega in the live editor here.

T-SQL syntax analysis

It is nice to be able to analyse code, whether for code review automation or rule enforcement. While Roslyn is an extremely useful platform for C# code analysis, more often than not .NET applications come backed by SQL Server. And this means one thing…

We need Roslyn for SQL!

We are probably not getting a full-blown SQL dependency tracking system just yet. But Microsoft does offer a SqlParser library that is quite capable of handling most static code analysis on SQL. Let us run through a reasonably common scenario. Developers rely on feature flags for all new functionality. The Product Owner explicitly enables features as they become ready. Flag states are kept in the database and managed via simple scripts. Since we release code multiple times a day, unfinished features can ship to production as long as the respective flags are safely turned off. Our task is to ensure that change scripts only create feature flag definitions but never turn them on by default.

A bit of theory

Parsing a language (usually) happens in stages:

  1. Lexical analysis (convert character stream to basic building blocks – tokens)
  2. Syntactic analysis (group tokens into constructs according to selected grammar – parse tree)
  3. Semantic analysis (convert parse tree into abstract syntax tree)

Once we’ve got the Abstract Syntax Tree (AST), we can interact with it and perform certain operations depending on node type and context. Probably the easiest way to traverse such a tree is to implement a Visitor. Both Roslyn and SQL Parser offer stubs to aid in implementing this pattern.

Sample scripts

This particular example revolves around two workflows:

-- case 1 - define a new feature flag
INSERT INTO dbo.FeatureFlag ( [Key], Value ) VALUES ( @featureFlagId, N'true')
-- case 2 - update existing flag
UPDATE dbo.FeatureFlag SET Value = 'true' WHERE [Key] = @featureFlagId -- enable/disable flag state

Let’s imagine we’ve got a directory where developers put new scripts. We need to scan it and figure out whether any script attempts to update the Value column in our table:

using Microsoft.SqlServer.Management.SqlParser.Parser;
......	
	foreach (var file in Directory.EnumerateFiles(@"C:\ChangeScriptsFolder"))
	{
		var result = Parser.Parse(File.ReadAllText(file)); // parse SQL. That's all code we need to get an Abstract Syntax Tree
		var visitor = new FeatureFlagStateVisitor(file); // prep our visitor
		result.Script.Accept(visitor); // run
		if (!visitor.Result.IsSuccess)
		{
			string errorList = visitor.Result.Errors.Aggregate(new StringBuilder(), (sb, error) => sb.AppendLine(error)).ToString();
			throw new Exception($"FeatureFlag state must be disabled on build. Sql text analysis indicates following issues: {errorList}");
		}
	}
.......

And the visitor itself can look something like this:

    public class SqlAnalysisResult
    {
        public bool IsSuccess { get; set; }
        public IList<string> Errors { get; set; }
    }
    public class FeatureFlagStateVisitorContext
    {
        public bool IsFtUpdating { get; set; }
        public bool IsFtInserting { get; set; }
        public string StatementLocation { get; set; }
        public int InsertColumnRef { get; set; }
    }
    public class FeatureFlagStateVisitor: SqlCodeObjectRecursiveVisitor
    {
        public SqlAnalysisResult Result { get; set; }
        private readonly FeatureFlagStateVisitorContext _context;
        private readonly string _fileName;

        public FeatureFlagStateVisitor(string fileName)
        {
            Result = new SqlAnalysisResult
            {
                Errors = new List<string>(),
                IsSuccess = true
            };
            _context = new FeatureFlagStateVisitorContext();
            _fileName = fileName;
        }

        public override void Visit(SqlInsertSpecification codeObject)
        {
            if (codeObject.Target.Sql.CaseInsensitiveContains("FeatureFlag"))
            {
                _context.InsertColumnRef = codeObject.TargetColumns.IndexOf(codeObject.TargetColumns.FirstOrDefault(c => c.ColumnName.Sql.Contains("Value")));
                if (_context.InsertColumnRef >= 0)
                {
                    _context.IsFtInserting = true;
                    _context.StatementLocation = $"L{codeObject.StartLocation.LineNumber}:{codeObject.StartLocation.Offset} - L{codeObject.EndLocation.LineNumber}:{codeObject.EndLocation.Offset}";
                }
            }
            base.Visit(codeObject);
            _context.IsFtInserting = false;
        }

        public override void Visit(SqlRowConstructorExpression codeObject)
        {
            if (_context.IsFtInserting && codeObject.Values[_context.InsertColumnRef].Sql.CaseInsensitiveContains("true"))
            {
                Result.IsSuccess = false;
                Result.Errors.Add($"INSERT {_fileName} - {_context.StatementLocation}");
            }
            base.Visit(codeObject);
        }

        public override void Visit(SqlUpdateSpecification codeObject)
        {
            if (codeObject.Target.Sql.CaseInsensitiveContains("FeatureFlag"))
            {
                _context.IsFtUpdating = true;
                _context.StatementLocation = $"L{codeObject.StartLocation.LineNumber}:{codeObject.StartLocation.Offset} - L{codeObject.EndLocation.LineNumber}:{codeObject.EndLocation.Offset}";
            }
            base.Visit(codeObject);
            _context.IsFtUpdating = false;
        }

        public override void Visit(SqlColumnAssignment codeObject)
        {
            if (_context.IsFtUpdating && codeObject.Column.Sql.CaseInsensitiveContains("Value") && codeObject.Value.Sql.CaseInsensitiveContains("true"))
            {
                Result.IsSuccess = false;
                Result.Errors.Add($"UPDATE {_fileName} - {_context.StatementLocation}");
            }
            base.Visit(codeObject);
        }
    }

The idea is pretty simple – we keep track of where we are at the moment and refer back to this context when making the final decision at the SqlColumnAssignment/SqlRowConstructorExpression level.
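
The snippets above also lean on a CaseInsensitiveContains string extension, which is not part of the BCL – presumably a small custom helper. A minimal sketch of it could look like this:

    using System;

    public static class StringExtensions
    {
        // hypothetical helper assumed by the visitor above: a case-insensitive Contains
        public static bool CaseInsensitiveContains(this string text, string value) =>
            text?.IndexOf(value, StringComparison.OrdinalIgnoreCase) >= 0;
    }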

Walking code with Roslyn

It was going to happen sooner or later – our research on C# dynamic features eventually ended up with an attempt to parse bits of source code. There are quite a few solutions on the market, with NRefactory being our preferred tool over the years. There are however a few limitations: it does not support .NET Core or C# 6.

It is a big deal

It might seem that support for a newer language spec is not critical. But in fact, it gets problematic very quickly even in more established projects. Luckily for us, Microsoft has chosen to open source Roslyn – the very engine that powers their compiler services. The official documentation covers the platform pretty well and goes into great detail on writing Visual Studio code analysers. We, however, often have to deal with writing MSBuild tasks that load the whole solution and run analysis on class hierarchies (for example, to detect whether a single select statement is being called inside a foreach loop – we would fail the build and suggest replacing it with a bulk select).

Installing

Roslyn is available via NuGet as a number of Microsoft.CodeAnalysis.* packages. We normally include these four:

Install-Package Microsoft.CodeAnalysis.Workspaces.MSBuild
Install-Package Microsoft.CodeAnalysis
Install-Package Microsoft.CodeAnalysis.CSharp
Install-Package Microsoft.Build.Locator # this is a helper to locate correct MSBuild toolchain (in case the machine has more than one installed)

Sometimes the environment gets confused as to which version of MSBuild to use, which is why starting a project with something like this is pretty much a must since VS2015:

if (!MSBuildLocator.IsRegistered) MSBuildLocator.RegisterDefaults(); // ensures correct version is loaded up
var _ = typeof(Microsoft.CodeAnalysis.CSharp.Formatting.CSharpFormattingOptions); // this ensures the library is referenced so the compiler does not optimise the reference away (useful when dynamically loading assemblies or doing other voodoo that can throw the compiler off) - probably less important than the above, but we prefer to follow cargo cult here and leave it be

After the initial steps, a simplistic solution traversal would look something along these lines:

async Task AnalyseSolution()
{
	using (var w = MSBuildWorkspace.Create())
	{
		var solution = await w.OpenSolutionAsync(@"MySolution.sln");		
		foreach (var project in solution.Projects)
		{			
			var docs = project.Documents; // allows for file-level document filtering
			var compilation = await project.GetCompilationAsync(); // allows for assembly-level analysis as well as SemanticModel 
			foreach (var doc in docs)
			{
				var walker = new CSharpSyntaxWalker(); // CSharpSyntaxWalker is an abstract class - we will need to define our own implementation for this to actually work
				walker.Visit(await doc.GetSyntaxRootAsync()); // traverse the syntax tree
			}
		}
	}
}

Syntax Tree Visitor

As with pretty much every mainstream syntax analyser, the easiest way to traverse syntax trees is by using the Visitor pattern. It allows us to decouple tree nodes from processing logic, which leaves room for expansion on either side (easy to add new logic, easy to add new tree node types). Roslyn ships a stub CSharpSyntaxWalker that allows us to override only the nodes we need to process; everything else is then taken care of.

With the basics out of the way, let’s look at the classes that make up the platform. At the top of the hierarchy is the MSBuild Workspace, followed by Solution, Project and Document. Roslyn makes a distinction between parsing code and compiling it, meaning some analysis is only available through the Compilation class, which can be obtained for a project as well as for individual documents down the track.

Traversing the tree

Just loading the solution is kinda pointless though. We’d need to come up with processing logic – and the best place for it is a CSharpSyntaxWalker subclass. Suppose we’d like to determine whether a class constructor contains if statements driven by its parameters. This might mean we’ve got overly complicated classes that could benefit from some refactoring:

public class ConstructorSyntaxWalker : CSharpSyntaxWalker
{
	public List<ISymbol> Parameters { get; set; }
	public int IfConditions { get; set; }

	SemanticModel sm;

	public ConstructorSyntaxWalker(SemanticModel sm)
	{
		this.sm = sm;
		Parameters = new List<ISymbol>();
	}

	public override void VisitIfStatement(IfStatementSyntax node)
	{
		Parameters.AddRange(sm.AnalyzeDataFlow(node).DataFlowsIn); // .AnalyzeDataFlow() is one of the most commonly used parts of the platform: it requires a compilation to work off and allows tracking dependencies. We could then check if these parameters are supplied to constructor and make a call whether this is allowed 
		IfConditions++; // just count for now, nothing fancy
		base.VisitIfStatement(node);
	}
}
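
Wiring this walker into the traversal loop from earlier could look roughly like the sketch below (names and the reporting line are illustrative; the SemanticModel comes from the project Compilation):

var compilation = await project.GetCompilationAsync();
foreach (var doc in project.Documents)
{
	var tree = await doc.GetSyntaxTreeAsync();
	var walker = new ConstructorSyntaxWalker(compilation.GetSemanticModel(tree)); // the walker needs a SemanticModel for data flow analysis
	walker.Visit(await tree.GetRootAsync());

	if (walker.IfConditions > 0 && walker.Parameters.Count > 0)
		Console.WriteLine($"{doc.Name}: {walker.IfConditions} if statement(s) depend on {walker.Parameters.Count} parameter(s)");
}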

Monitoring Mikrotik with InfluxDb

With unlimited fibre plans it is less important to keep track of who is the biggest content consumer in the office, but it is still interesting to find out.

Mikrotik

We’ve got a Mikrotik box serving as our gateway. It might be total overkill, but it is rock solid and offers a ton of features with an easy-to-follow configuration UI (they call it WinBox). One feature we’ll be particularly interested in is IP accounting. Despite the intimidating wiki page, the setup is actually pretty simple:

  1. tick a couple of checkboxes
  2. enable web access
  3. and optionally supply the IP of the host that will be allowed to collect data from the router

With the data source out of the way, it’s time to look at the receiving side.

InfluxDB

The end goal was to feed the data into nice Grafana dashboards (we touched on those here). Luckily we found this dashboard, which promised to be pretty much a turnkey solution. Unfortunately the exporter didn’t want to play nicely. The issue was that the exporter assumed the local network would be 192.168.0.0/16 (and had a hardcoded InfluxDB address). Our 192.168.1.0/24 network falls inside that range, yet we still couldn’t get the exporter to parse it (a lack of Go knowledge might have been a contributing factor here too). What can we do then?

Build one!

Ladies and gentlemen, I give you mikrotik-acct! This collector is written with .NET Core 3.1 so it should run on Docker with no real issues. It offers more knobs to play with through a config file (which is hopefully self-explanatory). One thing of interest here is a helper method that tells us whether an address is in our subnet or not: it appears .NET Core doesn’t offer this functionality out of the box, so we had to resort to the mighty Stack Overflow:

public static bool IsInSubnet(this IPAddress address, string subnetMask)
        {
            var slashIdx = subnetMask.IndexOf("/");
            if (slashIdx == -1)
            {
                // We only handle netmasks in format "IP/PrefixLength".
                throw new NotSupportedException("Only SubNetMasks with a given prefix length are supported.");
            }

            // First parse the address of the netmask before the prefix length.
            var maskAddress = IPAddress.Parse(subnetMask.Substring(0, slashIdx));

            if (maskAddress.AddressFamily != address.AddressFamily)
            {
                // We got something like an IPV4-Address for an IPv6-Mask. This is not valid.
                return false;
            }

            // Now find out how long the prefix is.
            int maskLength = int.Parse(subnetMask.Substring(slashIdx + 1));

            if (maskAddress.AddressFamily == AddressFamily.InterNetwork)
            {
                // Convert the mask address to an unsigned integer.
                var maskAddressBits = BitConverter.ToUInt32(maskAddress.GetAddressBytes().Reverse().ToArray(), 0);

                // And convert the IpAddress to an unsigned integer.
                var ipAddressBits = BitConverter.ToUInt32(address.GetAddressBytes().Reverse().ToArray(), 0);

                // Get the mask/network address as unsigned integer.
                uint mask = uint.MaxValue << (32 - maskLength);

                // https://stackoverflow.com/a/1499284/3085985
                // Bitwise AND mask and MaskAddress, this should be the same as mask and IpAddress
                // as the end of the mask is 0000 which leads to both addresses to end with 0000
                // and to start with the prefix.
                return (maskAddressBits & mask) == (ipAddressBits & mask);
            }

            if (maskAddress.AddressFamily == AddressFamily.InterNetworkV6)
            {
                // Convert the mask address to a BitArray.
                var maskAddressBits = new BitArray(maskAddress.GetAddressBytes());

                // And convert the IpAddress to a BitArray.
                var ipAddressBits = new BitArray(address.GetAddressBytes());

                if (maskAddressBits.Length != ipAddressBits.Length)
                {
                    throw new ArgumentException("Length of IP Address and Subnet Mask do not match.");
                }

                // Compare the prefix bits.
                for (int i = 0; i < maskLength; i++)
                {
                    if (ipAddressBits[i] != maskAddressBits[i])
                    {
                        return false;
                    }
                }

                return true;
            }

            throw new NotSupportedException("Only InterNetworkV6 or InterNetwork address families are supported.");
        }
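
A quick usage sketch of the helper (the addresses below are purely illustrative):

var isLocal   = IPAddress.Parse("192.168.1.42").IsInSubnet("192.168.1.0/24"); // true
var isForeign = IPAddress.Parse("8.8.8.8").IsInSubnet("192.168.1.0/24");      // false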

Grafana

After having gone through all that trouble, we’ve finally got ourselves some nice dashboards:

SQL Server computed columns: not just mathematics

Computed columns in SQL Server are very handy when we’ve got a formula we can rely on and some CPU cycles to spare. There are heaps of articles online about what these are, so we won’t repeat them here. Let us rather look at one peculiar case…

What if I’ve got two bit columns?

A very plausible scenario indeed. Let’s create a table like so:

CREATE TABLE Test (
Id INT PRIMARY KEY,
Check1 BIT NOT NULL,
Check2 BIT NOT NULL,
CheckAll AS (Check1 = 1 OR Check2 = 1) -- Incorrect syntax near '='.
)

That didn’t seem to work, and the error message is a bit unhelpful: we know it’s wrong, but why?

Reading the documentation once again

computed_column_expression
Is an expression that defines the value of a computed column. A computed column is a virtual column that is not physically stored in the table but is computed from an expression that uses other columns in the same table. For example, a computed column could have the definition: cost AS price * qty. The expression can be a noncomputed column name, constant, function, variable, and any combination of these connected by one or more operators. The expression cannot be a subquery or include an alias data type.

Okay, this kinda hints at the fact that we need to return a value, so a simple search condition does not cut it. Let us try one more time:

CREATE TABLE Test (
Id INT PRIMARY KEY,
Check1 BIT NOT NULL,
Check2 BIT NOT NULL,
CheckAll AS (CASE WHEN Check1 = 1 OR Check2 = 1 THEN 1 ELSE 0 END) -- this time it works! we can roll with it
)

Take it up another notch

The above example works but looks a bit too verbose and explicit. Is there a better way? Since we are dealing with bit columns, we might be in luck and opt for T-SQL bitwise operators:

CREATE TABLE Test (
Id INT PRIMARY KEY,
Check1 BIT NOT NULL,
Check2 BIT NOT NULL,
CheckAll AS (Check1|Check2) 
)

Splitting strings with MySQL

Suppose we’ve got a string that we’d like to parse into a table and run DISTINCT over:

A613; A613; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A613;

MySQL will not automatically convert a string into a table, and there’s no library function to do the job for us. But we can leverage the classic SUBSTRING_INDEX(SUBSTRING_INDEX(newString, ';', N), ';', -1) expression to extract the Nth item from the string:

N    SUBSTRING_INDEX(SUBSTRING_INDEX(newString, ';', N), ';', -1)
1    A613
2    A613
3    A095

Generating numeric sequence

For the SUBSTRING_INDEX trick to work we need to generate a long enough sequence of integers so we can cross join it onto our source string. Check out the following sequence generator:

select (th*1000+h*100+t*10+u+1) x from
(select 0 th union select 1 union select 2 union select 3 union select 4 union
select 5 union select 6 union select 7 union select 8 union select 9) A,
(select 0 h union select 1 union select 2 union select 3 union select 4 union
select 5 union select 6 union select 7 union select 8 union select 9) B,
(select 0 t union select 1 union select 2 union select 3 union select 4 union
select 5 union select 6 union select 7 union select 8 union select 9) C,
(select 0 u union select 1 union select 2 union select 3 union select 4 union
select 5 union select 6 union select 7 union select 8 union select 9) D
order by x;

Yes, this particular example is limited to 10k items, but it can be expanded upon. It comes down to knowing your data and ensuring you’ve got enough numbers to feed to the splitter. Or use alternatives.

Stopping enumeration

After we’ve got the sequence going, we need to find the number of meaningful elements in the source string. We basically need to count the delimiters. The problem is, MySQL doesn’t have a built-in function for that. Mathematics to the rescue though:

SELECT  (
            CHAR_LENGTH(val)
            - CHAR_LENGTH( REPLACE ( val, ";", "") ) 
        ) / CHAR_LENGTH(";")        
   AS count    
FROM "source string"

Putting it together

SELECT distinct SUBSTRING_INDEX(SUBSTRING_INDEX(val, '; ', x), '; ', -1) FROM
(
  select (t*10+u+1) x, val from
  (select 0 t union select 1 union select 2 union select 3 union select 4 union
  select 5 union select 6 union select 7 union select 8 union select 9) A,
  (select 0 u union select 1 union select 2 union select 3 union select 4 union
  select 5 union select 6 union select 7 union select 8 union select 9) B
	CROSS JOIN (SELECT 'A613; A613; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A095; A613' as val
               ) as seq
	ORDER BY x
) as raw_data
WHERE x <= (
            CHAR_LENGTH(val)
            - CHAR_LENGTH( REPLACE ( val, ";", "") ) 
        ) / CHAR_LENGTH(";")        
    

EF Core 3: Getting model metadata from dynamically loaded assembly with IL Emit

Yet another Stack Overflow question has sparked a heated discussion and got us thinking whether we can do better.

In a nutshell, the question was about finding a way to query EF Core model metadata without directly referencing the assembly that defines it. Think MsBuild Task that needs to check if your model is following your company standards. Or a test of some sort.

First stab at it

We were able to help the OP by quickly whipping up the following loader code:

var assembly = Assembly.LoadFrom(@"C:\OnlineShoppingStore\bin\Debug\netcoreapp2.2\OnlineShoppingStore.dll");
var contextType = assembly.GetTypes().First(d => d.Name == "OnlineStoreDbContext");
var ctx = Activator.CreateInstance(contextType) as DbContext; // instantiate your context. this will effectively build your model, so you must have all required EF references in your project
var p = ctx.Model.FindEntityType(assembly.GetTypes().First(d => d.Name == "Product")); // get the type from loaded assembly
//var p = ctx.Model.FindEntityType("OnlineStoreDbContext.Product"); // querying model by type name also works, but you'd need to correctly qualify your type names
var pk = p.FindPrimaryKey().Properties.First().Name; // your PK property name as built by EF model

The answer ended up being accepted, but the OP had a bit of an issue with instantiating the Context:

System.InvalidOperationException: 'No database provider has been configured for this DbContext. 
A provider can be configured by overriding the DbContext.OnConfiguring method or by using AddDbContext on the application service provider. 
If AddDbContext is used, then also ensure that your DbContext type accepts a DbContextOptions object in its constructor and passes it to the base constructor for DbContext.

This is kind of expected: when EF creates the context it invokes the OnConfiguring override and sets up the DB provider with connection strings and so on. All of this is necessary for the real application to run, but for the OP it meant having to drag all DB providers into the test harness. Not ideal.

The idea

After a bit of back and forth I got an idea: what if we subclass the Context yet again and override OnConfiguring with a predefined provider (say, InMemory)?
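
If we could reference the target assembly at compile time, the hand-written equivalent would look roughly like the sketch below (illustration only – OnlineStoreDbContext is only known at runtime, which is exactly why we have to emit this type dynamically instead):

// hypothetical hand-written equivalent of the type we are about to emit
public class InheritedDbContext : OnlineStoreDbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseInMemoryDatabase(Guid.NewGuid().ToString()); // no real DB provider required
}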

IL Emit all things

We don’t get to use IL Emit often – it’s meant for pretty specific use cases and I think this is one of them. The key to getting it right in our case was finding the correct overload of UseInMemoryDatabase. There’s a chance, however, that you might need to tweak this to suit your needs. It is pretty trivial once you know what you’re looking for.

public static MethodBuilder OverrideOnConfiguring(this TypeBuilder tb)
        {
            MethodBuilder onConfiguringMethod = tb.DefineMethod("OnConfiguring",
                MethodAttributes.Public
                | MethodAttributes.HideBySig
                | MethodAttributes.NewSlot
                | MethodAttributes.Virtual,
                CallingConventions.HasThis,
                null,
                new[] { typeof(DbContextOptionsBuilder) });

            // the easiest method to pick will be .UseInMemoryDatabase(this DbContextOptionsBuilder optionsBuilder, string databaseName, Action<InMemoryDbContextOptionsBuilder> inMemoryOptionsAction = null)
            // but since constructing generic delegate seems a bit too much effort we'd rather filter everything else out
            var useInMemoryDatabaseMethodSignature = typeof(InMemoryDbContextOptionsExtensions)
                .GetMethods()
                .Where(m => m.Name == "UseInMemoryDatabase")
                .Where(m => m.GetParameters().Length == 3)
                .Where(m => m.GetParameters().Select(p => p.ParameterType).Contains(typeof(DbContextOptionsBuilder)))
                .Where(m => m.GetParameters().Select(p => p.ParameterType).Contains(typeof(string)))
                .Single();
            
            // emits the equivalent of optionsBuilder.UseInMemoryDatabase("test");
            var gen = onConfiguringMethod.GetILGenerator();
            gen.Emit(OpCodes.Ldarg_1);
            gen.Emit(OpCodes.Ldstr, Guid.NewGuid().ToString());
            gen.Emit(OpCodes.Ldnull);
            gen.Emit(OpCodes.Call, useInMemoryDatabaseMethodSignature);
            gen.Emit(OpCodes.Pop);
            gen.Emit(OpCodes.Ret);

            return onConfiguringMethod;
        }

With the above out of the way, we can now build our dynamic type and plug it into our test harness!

class Program
    {
        static void Main(string[] args)
        {
            // load assembly under test
            var assembly = Assembly.LoadFrom(@"..\ef-metadata-query\OnlineShoppingStore\bin\Debug\netcoreapp3.1\OnlineShoppingStore.dll");
            var contextType = assembly.GetTypes().First(d => d.Name == "OnlineStoreDbContext");

            // create yet another assembly that will hold our dynamically generated type
            var typeBuilder = AssemblyBuilder
                                .DefineDynamicAssembly(new AssemblyName(Guid.NewGuid().ToString()), AssemblyBuilderAccess.RunAndCollect)
                                .DefineDynamicModule(Guid.NewGuid() + ".dll")
                                .DefineType("InheritedDbContext", TypeAttributes.Public, contextType); // make new type inherit from DbContext under test!

            // this is the key here! now our dummy implementation will kick in!
            var onConfiguringMethod = typeBuilder.OverrideOnConfiguring();
            typeBuilder.DefineMethodOverride(onConfiguringMethod, typeof(DbContext).GetMethod("OnConfiguring", BindingFlags.Instance | BindingFlags.NonPublic));
            
            var inheritedDbContext = typeBuilder.CreateType(); // enough config, let's get the type and roll with it

            // instantiate inheritedDbContext with default OnConfiguring implementation
            var context = Activator.CreateInstance(inheritedDbContext) as DbContext; // instantiate your context. this will effectively build your model, so you must have all required EF references in your project
            var p = context?.Model.FindEntityType(assembly.GetTypes().First(d => d.Name == "Product")); // get the type from loaded assembly
            
            //query the as-built model
            //var p = ctx.Model.FindEntityType("OnlineStoreDbContext.Product"); // querying model by type name also works, but you'd need to correctly qualify your type names
            var pk = p.FindPrimaryKey().Properties.First().Name; // your PK property name as built by EF model
            
            Console.WriteLine(pk);
        }
    }

This is runnable

Source code is available on GitHub in case you want to check it out and play with it a bit.

Monitoring SQL Server: index maintenance

Now that we’ve got basic SQL queries going, we want to collect some actionable intel. A good measure of any SQL database performance is how well indexes are utilised.

This isn’t new

The queries we’re about to share are hardly a revelation. What we want to achieve, however, is an idea of how this information changes over time.

Index utilisation

input {
	############################################################# Index Maintenance ############################################################################
	jdbc {
		id => "master_index_maintenance"
		jdbc_driver_library => "path\to\jdbc\lib\mssql-jdbc-7.2.1.jre8.jar"
		jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
		jdbc_connection_string => "jdbc:sqlserver://<your connection string>"
		jdbc_user => nil
		schedule => "*/15 * * * *"
		statement => "	DECLARE @db_id INT
						DECLARE @db_name NVARCHAR(120)
						DECLARE @index_data TABLE(
												[database_name] NVARCHAR(128) NOT NULL,
												table_name sysname NOT NULL, 
												index_name sysname NULL, 
												index_type TINYINT NOT NULL, 
												user_seeks bigint NOT NULL, 
												user_scans bigint NOT NULL, 
												user_lookups bigint NOT NULL, 
												user_updates bigint NOT NULL
											   );

						SET NOCOUNT ON
						DECLARE @dbs table ([db_name] sysname)
						DECLARE @db_query_sql nvarchar(4000)
						SET @db_query_sql='select ''?'' as [db_name] from [?].sys.tables t WHERE t.name = ''Users''';

						INSERT INTO @dbs ([db_name]) EXEC sp_msforeachdb @db_query_sql
						SET NOCOUNT OFF

						DECLARE db_id_cursor CURSOR FOR SELECT DB_ID([db_name]), [db_name] FROM @dbs FOR READ ONLY 
						OPEN db_id_cursor  
						FETCH NEXT FROM db_id_cursor INTO @db_id, @db_name;  
						WHILE @@FETCH_STATUS = 0  
						BEGIN
							DECLARE @sql NVARCHAR(MAX) = CAST(N'USE [' + @db_name + '];
							
							SELECT ''' + @db_name + ''', t.[name], ix.[name], ix.[type], us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
							FROM sys.dm_db_index_usage_stats us
							INNER JOIN sys.indexes ix ON us.object_id = ix.object_id and ix.index_id = us.index_id
							INNER JOIN sys.tables t ON ix.object_id = t.object_id	
							WHERE us.database_id = ' + CAST(@db_id AS NVARCHAR(10)) AS NVARCHAR(MAX));

							INSERT INTO @index_data EXEC sys.sp_executesql @sql;
							FETCH NEXT FROM db_id_cursor INTO @db_id, @db_name;  
						END
						CLOSE db_id_cursor
						DEALLOCATE db_id_cursor
						SELECT * FROM @index_data"
		add_field => {
						"sql_instance" => "SQL2"
					}
	}
	################################################################################################################################################################	
}
output {
	elasticsearch {
		hosts => "elasticsearch:9200"
		index => "sql-index-stats-%{+YYYY.MM}"            
	}
}

Missing indexes

This one is a bit more interesting in the sense that we get actual index hints. It is not a silver bullet though – the output still needs to be analysed by humans for best results.

input {
	############################################################# Missing indexes ############################################################################
	jdbc {
		id => "master_missing_indexes"
		jdbc_driver_library => "path\to\jdbc\lib\mssql-jdbc-7.2.1.jre8.jar"
		jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
		jdbc_connection_string => "jdbc:sqlserver://<your connection string>"
		jdbc_user => nil
		schedule => "1 */2 * * *"
		statement => "	DECLARE @db_id INT
						DECLARE @db_name NVARCHAR(120)
						DECLARE @index_data TABLE(
												[database_name] NVARCHAR(128) NOT NULL,						
												table_name sysname NOT NULL, 
												unique_compiles bigint NOT NULL,
												user_seeks bigint NOT NULL, 
												user_scans bigint NOT NULL, 
												avg_total_user_cost float NULL, 
												avg_user_impact float NULL, 
												overall_impact float NOT NULL,
												sql_code NVARCHAR(MAX) NOT NULL
											   );

						SET NOCOUNT ON
						DECLARE @dbs table ([db_name] sysname)
						DECLARE @db_query_sql nvarchar(4000)
						SET @db_query_sql='select ''?'' as [db_name] from [?].sys.tables t WHERE t.name = ''Users''';

						INSERT INTO @dbs ([db_name]) EXEC sp_msforeachdb @db_query_sql
						SET NOCOUNT OFF

						DECLARE db_id_cursor CURSOR FOR SELECT DB_ID([db_name]), [db_name] FROM @dbs FOR READ ONLY 
						OPEN db_id_cursor  
						FETCH NEXT FROM db_id_cursor INTO @db_id, @db_name;  
						WHILE @@FETCH_STATUS = 0  
						BEGIN
							DECLARE @sql NVARCHAR(MAX) = CAST(N'USE [' + @db_name + '];
							
							WITH index_definitions (table_name, unique_compiles, user_seeks, users_scans, avg_total_user_cost, 
							avg_user_impact, [overall_impact], column_names, included_columns) 
							AS (
								SELECT TOP(600) object_name(c.object_id) AS table_name, 
								a.unique_compiles, a.user_seeks, a.user_scans, a.avg_total_user_cost, a.avg_user_impact, 
								a.avg_total_user_cost * a.avg_user_impact * (a.user_seeks + a.user_scans) AS [overall_impact],
								case when c.equality_columns is not null and c.inequality_columns is not null then c.equality_columns + '', '' + c.inequality_columns
								when c.equality_columns is not null and c.inequality_columns is null then c.equality_columns
								when c.inequality_columns is not null then c.inequality_columns
								END AS column_names,
								c.included_columns
								FROM sys.dm_db_missing_index_group_stats a
								inner join sys.dm_db_missing_index_groups b ON a.group_handle = b.index_group_handle
								inner join sys.dm_db_missing_index_details c on c.index_handle = b.index_handle
								where database_id = ' + CAST(@db_id AS NVARCHAR(10)) + '
								and equality_columns is not null
							)
							SELECT '''+ @db_name +''' AS [database], table_name, unique_compiles, user_seeks, users_scans, avg_total_user_cost, avg_user_impact, overall_impact, ''CREATE NONCLUSTERED INDEX IX_'' + REPLACE(REPLACE(REPLACE(index_definitions.column_names, ''], ['', ''_''), ''['', ''''), '']'', '''') 
									+ '' on '' + index_definitions.table_name + '' ('' + index_definitions.column_names + '') INCLUDE ('' + index_definitions.included_columns + '')''  AS [sql_code]
							FROM index_definitions WHERE index_definitions.included_columns IS NOT NULL
							UNION 
							SELECT '''+ @db_name +''' AS [database], table_name, unique_compiles, user_seeks, users_scans, avg_total_user_cost, avg_user_impact, overall_impact, ''CREATE NONCLUSTERED INDEX IX_'' + REPLACE(REPLACE(REPLACE(index_definitions.column_names, ''], ['', ''_''), ''['', ''''), '']'', '''') 
									+ '' on '' + index_definitions.table_name + '' ('' + index_definitions.column_names + '')'' AS [sql_code]
							FROM index_definitions WHERE index_definitions.included_columns IS NULL' AS NVARCHAR(MAX));

							INSERT INTO @index_data EXEC sys.sp_executesql @sql;
							FETCH NEXT FROM db_id_cursor INTO @db_id, @db_name;  
						END
						CLOSE db_id_cursor
						DEALLOCATE db_id_cursor
						SELECT * FROM @index_data;"
		add_field => {
						"sql_instance" => "SQL2"
					}
	}
	################################################################################################################################################################	
}
output {
	elasticsearch {
		hosts => "elasticsearch:9200"
		index => "sql-missing-indexes-%{+YYYY.MM}"            
	}
}

With data collection out of the way, it’s time to move on to plotting graphs. Grafana is quite easy to pick up, so instead of repeating the official documentation we’ll share a few dashboards we have in place:

{
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": "-- Grafana --",
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "id": 33,
  "iteration": 1584410507264,
  "links": [],
  "panels": [
    {
      "collapsed": false,
      "datasource": null,
      "gridPos": {
        "h": 1,
        "w": 24,
        "x": 0,
        "y": 0
      },
      "id": 74,
      "panels": [],
      "title": "Index usage",
      "type": "row"
    },
    {
      "columns": [],
      "datasource": "Elasticsearch [index-stats]",
      "fontSize": "100%",
      "gridPos": {
        "h": 5,
        "w": 24,
        "x": 0,
        "y": 1
      },
      "id": 72,
      "interval": "1h",
      "links": [],
      "maxPerRow": null,
      "options": {},
      "pageSize": null,
      "repeat": "Database",
      "repeatDirection": "v",
      "scopedVars": {
        "Database": {
          "selected": true,
          "text": "All",
          "value": "All"
        }
      },
      "scroll": true,
      "showHeader": true,
      "sort": {
        "col": 4,
        "desc": true
      },
      "styles": [
        {
          "alias": "Table",
          "align": "auto",
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "pattern": "table_name.keyword",
          "type": "string"
        },
        {
          "alias": "Index",
          "align": "auto",
          "colorMode": "value",
          "colors": [
            "rgba(50, 172, 45, 0.97)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(245, 54, 54, 0.9)"
          ],
          "decimals": 2,
          "pattern": "index_name.keyword",
          "thresholds": [
            "60",
            "80",
            "90"
          ],
          "type": "string",
          "unit": "percent"
        },
        {
          "alias": "Scans",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_scans",
          "sanitize": false,
          "thresholds": [],
          "type": "number",
          "unit": "none"
        },
        {
          "alias": "Seeks",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_seeks",
          "thresholds": [],
          "type": "number",
          "unit": "none"
        },
        {
          "alias": "Lookups",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_lookups",
          "thresholds": [],
          "type": "number",
          "unit": "none"
        },
        {
          "alias": "Updates",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_updates",
          "thresholds": [],
          "type": "number",
          "unit": "none"
        },
        {
          "alias": "Total index usages",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "bucket",
          "thresholds": [],
          "type": "number",
          "unit": "none"
        }
      ],
      "targets": [
        {
          "alias": "",
          "bucketAggs": [
            {
              "fake": true,
              "field": "table_name.keyword",
              "id": "4",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "9",
                "size": "0"
              },
              "type": "terms"
            },
            {
              "fake": true,
              "field": "index_name.keyword",
              "id": "3",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "9",
                "size": "0"
              },
              "type": "terms"
            }
          ],
          "hide": true,
          "metrics": [
            {
              "field": "user_scans",
              "id": "5",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_seeks",
              "id": "6",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_lookups",
              "id": "7",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_updates",
              "id": "8",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "select field",
              "id": "9",
              "meta": {},
              "pipelineVariables": [
                {
                  "name": "var1",
                  "pipelineAgg": "5"
                },
                {
                  "name": "var2",
                  "pipelineAgg": "6"
                },
                {
                  "name": "var3",
                  "pipelineAgg": "7"
                }
              ],
              "settings": {
                "script": "params.var1+params.var2+params.var3"
              },
              "type": "bucket_script"
            }
          ],
          "query": "database_name.keyword:$Database",
          "refId": "A",
          "timeField": "@timestamp"
        },
        {
          "bucketAggs": [
            {
              "fake": true,
              "field": "table_name.keyword",
              "id": "3",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "4",
                "size": "0"
              },
              "type": "terms"
            },
            {
              "fake": true,
              "field": "index_name.keyword",
              "id": "9",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "4",
                "size": "10"
              },
              "type": "terms"
            }
          ],
          "metrics": [
            {
              "field": "user_scans",
              "id": "4",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_seeks",
              "id": "5",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_lookups",
              "id": "6",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_updates",
              "id": "7",
              "meta": {},
              "settings": {},
              "type": "avg"
            }
          ],
          "query": "database_name.keyword:$Database",
          "refId": "B",
          "timeField": "@timestamp"
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Index usage - $Database",
      "transform": "table",
      "type": "table"
    },
    {
      "collapsed": false,
      "datasource": null,
      "gridPos": {
        "h": 1,
        "w": 24,
        "x": 0,
        "y": 6
      },
      "id": 76,
      "panels": [],
      "title": "Underutilised indexes",
      "type": "row"
    },
    {
      "cacheTimeout": null,
      "columns": [],
      "datasource": "Elasticsearch [index-stats]",
      "fontSize": "100%",
      "gridPos": {
        "h": 7,
        "w": 24,
        "x": 0,
        "y": 7
      },
      "id": 58,
      "links": [],
      "options": {},
      "pageSize": null,
      "repeat": "Database",
      "repeatDirection": "h",
      "scopedVars": {
        "Database": {
          "selected": true,
          "text": "",
          "value": ""
        }
      },
      "showHeader": true,
      "sort": {
        "col": 3,
        "desc": true
      },
      "styles": [
        {
          "alias": "Table",
          "align": "auto",
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "pattern": "table_name.keyword",
          "type": "string"
        },
        {
          "alias": "Index",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "decimals": 2,
          "pattern": "index_name.keyword",
          "thresholds": [],
          "type": "string",
          "unit": "short"
        },
        {
          "alias": "Lookups",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_lookups",
          "thresholds": [],
          "type": "number",
          "unit": "short"
        },
        {
          "alias": "Scans",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_scans",
          "thresholds": [],
          "type": "number",
          "unit": "short"
        },
        {
          "alias": "Seeks",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_seeks",
          "thresholds": [],
          "type": "number",
          "unit": "short"
        },
        {
          "alias": "Total Usage",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 3,
          "mappingType": 1,
          "pattern": "Bucket Script",
          "thresholds": [],
          "type": "number",
          "unit": "short"
        },
        {
          "alias": "Updates",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_updates",
          "thresholds": [],
          "type": "number",
          "unit": "short"
        }
      ],
      "targets": [
        {
          "alias": "{{table_name.keyword}}.{{index_name.keyword}}",
          "bucketAggs": [
            {
              "fake": true,
              "field": "table_name.keyword",
              "id": "5",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "6",
                "size": "5"
              },
              "type": "terms"
            },
            {
              "fake": true,
              "field": "index_name.keyword",
              "id": "3",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "6",
                "size": "10"
              },
              "type": "terms"
            }
          ],
          "metrics": [
            {
              "field": "user_lookups",
              "hide": true,
              "id": "1",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_scans",
              "hide": true,
              "id": "6",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "user_seeks",
              "hide": true,
              "id": "7",
              "meta": {},
              "settings": {},
              "type": "avg"
            },
            {
              "field": "select field",
              "id": "8",
              "meta": {},
              "pipelineVariables": [
                {
                  "name": "var1",
                  "pipelineAgg": "1"
                },
                {
                  "name": "var2",
                  "pipelineAgg": "6"
                },
                {
                  "name": "var3",
                  "pipelineAgg": "7"
                },
                {
                  "name": "var4",
                  "pipelineAgg": "9"
                }
              ],
              "settings": {
                "script": "params.var4/(params.var1+params.var2+params.var3)"
              },
              "type": "bucket_script"
            },
            {
              "field": "user_updates",
              "hide": true,
              "id": "9",
              "meta": {},
              "settings": {},
              "type": "avg"
            }
          ],
          "query": "database_name.keyword:$Database AND user_lookups:[0 TO 100] AND user_scans:[0 TO 100] AND user_seeks:[0 TO 100]",
          "refId": "A",
          "timeField": "@timestamp"
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "$Database - Underutilised indexes",
      "transform": "table",
      "type": "table"
    },
    {
      "collapsed": false,
      "datasource": null,
      "gridPos": {
        "h": 1,
        "w": 24,
        "x": 0,
        "y": 14
      },
      "id": 91,
      "panels": [],
      "title": "Missing Indexes",
      "type": "row"
    },
    {
      "columns": [],
      "datasource": "Elasticsearch [missing-indexes]",
      "fontSize": "100%",
      "gridPos": {
        "h": 5,
        "w": 24,
        "x": 0,
        "y": 15
      },
      "id": 89,
      "interval": "30m",
      "links": [],
      "maxPerRow": 2,
      "options": {},
      "pageSize": null,
      "repeat": "Database",
      "repeatDirection": "v",
      "scopedVars": {
        "Database": {
          "selected": true,
          "text": "",
          "value": ""
        }
      },
      "scroll": true,
      "showHeader": true,
      "sort": {
        "col": 6,
        "desc": true
      },
      "styles": [
        {
          "alias": "Table",
          "align": "auto",
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "pattern": "table_name.keyword",
          "type": "string"
        },
        {
          "alias": "Index",
          "align": "auto",
          "colorMode": "value",
          "colors": [
            "rgba(50, 172, 45, 0.97)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(245, 54, 54, 0.9)"
          ],
          "decimals": 2,
          "pattern": "sql_code.keyword",
          "thresholds": [
            ""
          ],
          "type": "string",
          "unit": "percent"
        },
        {
          "alias": "Impact",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 2,
          "mappingType": 1,
          "pattern": "Average",
          "sanitize": false,
          "thresholds": [],
          "type": "number",
          "unit": "none"
        },
        {
          "alias": "Seeks",
          "align": "auto",
          "colorMode": null,
          "colors": [
            "rgba(245, 54, 54, 0.9)",
            "rgba(237, 129, 40, 0.89)",
            "rgba(50, 172, 45, 0.97)"
          ],
          "dateFormat": "YYYY-MM-DD HH:mm:ss",
          "decimals": 0,
          "mappingType": 1,
          "pattern": "Average user_seeks",
          "thresholds": [],
          "type": "number",
          "unit": "none"
        }
      ],
      "targets": [
        {
          "alias": "",
          "bucketAggs": [
            {
              "fake": true,
              "field": "table_name.keyword",
              "id": "11",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "10",
                "size": "0"
              },
              "type": "terms"
            },
            {
              "fake": true,
              "field": "sql_code.keyword",
              "id": "4",
              "settings": {
                "min_doc_count": 1,
                "order": "desc",
                "orderBy": "10",
                "size": "0"
              },
              "type": "terms"
            }
          ],
          "metrics": [
            {
              "field": "overall_impact",
              "id": "10",
              "meta": {},
              "settings": {},
              "type": "avg"
            }
          ],
          "query": "database_name.keyword:$Database",
          "refId": "A",
          "timeField": "@timestamp"
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Missing indexes - $Database",
      "transform": "table",
      "type": "table"
    }
  ],
  "refresh": false,
  "schemaVersion": 22,
  "style": "dark",
  "tags": [],
  "templating": {
    "list": [
      {
        "allValue": null,
        "current": {
          "text": "",
          "value": [
            ""
          ]
        },
        "datasource": "Elasticsearch [index-stats]",
        "definition": "{\"find\": \"terms\",\"field\": \"database_name.keyword\"}",
        "hide": 0,
        "includeAll": true,
        "label": null,
        "multi": true,
        "name": "Database",
        "options": [],
        "query": "{\"find\": \"terms\",\"field\": \"database_name.keyword\"}",
        "refresh": 2,
        "regex": "",
        "skipUrlSync": false,
        "sort": 0,
        "tagValuesQuery": "",
        "tags": [],
        "tagsQuery": "",
        "type": "query",
        "useTags": false
      }
    ]
  },
  "time": {
    "from": "now-24h",
    "to": "now"
  },
  "timepicker": {
    "refresh_intervals": [
      "5s",
      "10s",
      "30s",
      "1m",
      "5m",
      "15m",
      "30m",
      "1h",
      "2h",
      "1d"
    ],
    "time_options": [
      "5m",
      "15m",
      "1h",
      "6h",
      "12h",
      "24h",
      "2d",
      "7d",
      "30d"
    ]
  },
  "timezone": "",
  "title": "Index Maintenance",
  "uid": "OQVK9BSWk",
  "version": 27
}
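
To get this dashboard into Grafana you can paste the JSON above into Dashboards → Import, or push it through Grafana's HTTP API (POST /api/dashboards/db). Below is a minimal PowerShell sketch, assuming the JSON is saved as index-maintenance.json and you have an API key with editor rights – the URL, key and file name are placeholders to adjust for your setup:

# import the exported dashboard via Grafana's HTTP API
$grafanaUrl = "http://localhost:3000"   # assumed Grafana address
$apiKey     = "<your-api-key>"          # assumed API key with editor rights

# wrap the exported dashboard in the payload shape the API expects
$dashboard = Get-Content -Raw -Path ".\index-maintenance.json" | ConvertFrom-Json
$payload = @{
    dashboard = $dashboard
    overwrite = $true   # replace an existing dashboard with the same uid, if any
} | ConvertTo-Json -Depth 50

Invoke-RestMethod -Method Post `
    -Uri "$grafanaUrl/api/dashboards/db" `
    -Headers @{ Authorization = "Bearer $apiKey" } `
    -ContentType "application/json" `
    -Body $payload

Remember that the Elasticsearch datasources referenced by the panels (Elasticsearch [index-stats] and Elasticsearch [missing-indexes]) must already exist under those names, otherwise the imported panels will show datasource errors.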