EF Core 6 – does our IMethodCallTranslator still work?

A year and a half ago we posted an article on how we were able to plug into the EF Core pipeline and inject our own IMethodCallTranslator. That let us leverage SQL-native encryption functionality with EF Core 3.1 and was ultimately a win. A lot has changed in the ecosystem since: we've got .NET 5 out and .NET 6 coming up soon. So, we could not help but wonder…

Will it work with EF Core 6?

Apparently, EF Core 6 is mostly an evolutionary step over EF Core 5. That said, we totally missed the previous version, so it was unclear to what extent the EF team had reworked their internal APIs. Most of the extensibility points we used were internal and clearly marked as “not for public consumption”. With that in mind, our concerns seemed valid.

Turns out, the code needs changes…

The first issue we needed to rectify was implementing ShouldUseSameServiceProvider: from what I can tell, it is needed to cache internal service providers more efficiently, but in our case returning a default value seems to make sense.
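For reference, here is a minimal sketch of what that looks like on a custom options extension (the type names are illustrative, not from our actual codebase):

private sealed class ExtensionInfo : DbContextOptionsExtensionInfo
{
    public ExtensionInfo(IDbContextOptionsExtension extension) : base(extension) { }

    public override bool IsDatabaseProvider => false;
    public override string LogFragment => "using MyTranslatorExtension";
    public override int GetServiceProviderHashCode() => 0;
    public override void PopulateDebugInfo(IDictionary<string, string> debugInfo) { }

    // new in EF Core 6: returning true for any instance of our own extension keeps the default caching behaviour
    public override bool ShouldUseSameServiceProvider(DbContextOptionsExtensionInfo other) => other is ExtensionInfo;
}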

But that’s where things really went sideways.

Apparently, adding our custom IDbContextOptionsExtension resets the cache, and by the time EF arrives at model initialisation the DI container instance gets wiped, leaving us with a bunch of null references.

One-line fix

I am still unsure why EF gets so upset when we add a new extension. Stepping through the code would likely provide the answer, but I feel it’s not worth the effort. Playing around with service scopes, however, I noticed that many built-in services get registered using a different extension method with a Scoped lifecycle. This prompted me to change my registration method signature and voila:
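The gist of the change, as a sketch (MyTranslatorPlugin stands in for our actual plugin class):

public void ApplyServices(IServiceCollection services)
{
    // previously registered as a singleton; a Scoped lifetime - matching how many
    // built-in EF Core services are registered - made the null references go away
    services.AddScoped<IMethodCallTranslatorPlugin, MyTranslatorPlugin>();
}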

And as usual, fully functioning code sits on GitHub.

Setting up Basic Auth for Swashbuckle

Let us put the keywords up front: this is a Swashbuckle Basic authentication setup that works for Swashbuckle.WebApi installed on Web API projects (.NET 4.x). Needing Basic auth is much less common nowadays, when proper mechanisms are readily available in all shapes and sizes, but it may still come in handy for internal tools or existing integrations.

Why?

Most tutorials online cover the next-gen library aimed specifically at ASP.NET Core, which is great (we’re all for tech advancement), but sometimes legacy needs a prop-up. If you are looking to add Basic auth to an ASP.NET Core project – look up AddSecurityDefinition and go from there. If legacy is indeed the one in trouble – read on.

Setting it up

The prescribed solution takes two steps:
1. add authentication scheme on SwaggerDocsConfig class
2. attach authentication to endpoints as required with IDocumentFilter or IOperationFilter

Assuming the installer left us with an App_Start/SwaggerConfig.cs file, we’d arrive at:

public class SwaggerConfig
{
	public static void Register()
	{
		var thisAssembly = typeof(SwaggerConfig).Assembly;

		GlobalConfiguration.Configuration
			.EnableSwagger(c =>
				{
					...
					c.BasicAuth("basic").Description("Basic HTTP Authentication");
					c.OperationFilter<BasicAuthOpFilter>();
					...
				});
	}
}

and the most basic IOperationFilter would look like so:

public class BasicAuthOpFilter : IOperationFilter
{
	public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
	{
		operation.security = operation.security ?? new List<IDictionary<string, IEnumerable<string>>>();
		operation.security.Add(new Dictionary<string, IEnumerable<string>>()
							   {
								   { "basic", new List<string>() }
							   });
	}
}
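The filter above marks every operation as requiring Basic auth. If we only wanted to attach the requirement to selected endpoints, one possible refinement (a sketch, assuming the endpoints of interest are marked with [Authorize]) is to inspect the action’s attributes first:

public class SelectiveBasicAuthOpFilter : IOperationFilter
{
	public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
	{
		// only flag operations whose controller or action carries [Authorize]
		if (!apiDescription.GetControllerAndActionAttributes<AuthorizeAttribute>().Any()) return;

		operation.security = operation.security ?? new List<IDictionary<string, IEnumerable<string>>>();
		operation.security.Add(new Dictionary<string, IEnumerable<string>>()
							   {
								   { "basic", new List<string>() }
							   });
	}
}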

Approaches to handling simple expressions in C#

Every now and then we get asked if there’s an easy way to parse user input into filter conditions. Say, for example, we have a viewmodel of type DataThing:

public class DataThing
 {
     public string Name;
     public float Value;
     public int Count;
 }

From here we’d like to check whether a given property of this class satisfies a certain condition. For example, we’ll look at “Value is greater than 15”. But of course we’d like to be flexible.

The issue

The main issue here is that we don’t know the type of the property beforehand, so generics won’t save us even if we try to be smart:

public class DataThing
{
    public string Name;
    public float Value;
    public int Count;
}
public static void Main()
{
    var data = new DataThing() { Value = 10, Name = "test", Count = 1 };
    var values = new List<ValueGetter> {
        new ValueGetter<float>(x => x.Value),
        new ValueGetter<string>(x => x.Name)
    };
    (values[0].Run<float>(data) > 15).Dump(); // .Dump() comes from LINQPad
}
public abstract class ValueGetter
{
    public abstract T Run<T>(DataThing d);
}
public class ValueGetter<T> : ValueGetter
{
    public Func<DataThing, T> TestFunc;
    public ValueGetter(Func<DataThing, T> blah)
    {
        TestFunc = blah;
    }
    public override T Run<T>(DataThing d) => TestFunc.Invoke(d); // CS0029 Cannot implicitly convert type: the class-level T and the method-level T are two different type parameters
}

Even if we figured it out, this approach is obviously way too dependent on the DataThing layout to be reusable.

LINQ Expression trees

One way to solve this is with LINQ expression trees: we wrap everything into one delegate with a predictable signature and figure out the types at runtime:

bool BuildComparer<T>(DataThing data, string field, string op, T value)
{
    var p1 = Expression.Parameter(typeof(DataThing));
    var p2 = Expression.Parameter(typeof(T));
    if (op == ">")
    {
        var expr = Expression.Lambda<Func<DataThing, T, bool>>(
            Expression.MakeBinary(ExpressionType.GreaterThan
                                , Expression.PropertyOrField(p1, field)
                                , Expression.Convert(p2, typeof(T))), p1, p2);
        var f = expr.Compile();
        return f(data, value);
    }
    return false;
}
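A quick sanity check against the same data instance (Value is 10):

var data = new DataThing() { Value = 10, Name = "test", Count = 1 };
BuildComparer(data, "Value", ">", 15f).Dump(); // false - T is inferred as float
BuildComparer(data, "Value", ">", 5f).Dump();  // true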

CSharpScript

Another way to approach the same problem is to generate C# code that we can compile and run. We’d need the Microsoft.CodeAnalysis.CSharp.Scripting package for this to work:

bool BuildScript<T>(DataThing data, string field, string op, T value)
{
    var code = $"return {field} {op} {value};";
    var script = CSharpScript.Create<bool>(code, globalsType: typeof(DataThing), options: ScriptOptions.Default);
    var scriptRunner = script.CreateDelegate();
    return scriptRunner(data).Result;
}
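Usage is the same as before, with one caveat: since we are generating source text, string operands need their quotes baked into the code string:

var data = new DataThing() { Value = 10, Name = "test", Count = 1 };
BuildScript(data, "Value", ">", 15f).Dump();        // false - evaluates "return Value > 15;" against the globals object
BuildScript(data, "Name", "==", "\"test\"").Dump(); // true - note the manually quoted string literal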

.NET 5 Source Generator

Source generators are a new .NET 5 feature that allows us to plug into the compilation process and generate classes as we see fit. For example, we could generate extension methods that return correctly typed values from DataThing:

[Generator] // see https://github.com/dotnet/roslyn/blob/main/docs/features/source-generators.cookbook.md for even more cool stuff
class AccessorGenerator : ISourceGenerator
{
    public void Execute(GeneratorExecutionContext context)
    {
        var syntaxReceiver = (CustomSyntaxReceiver)context.SyntaxReceiver;
        ClassDeclarationSyntax userClass = syntaxReceiver.ClassToAugment;
        SourceText sourceText = SourceText.From(@"
            public static class DataThingExtensions
            {
                // This is where we'd reflect over type members and generate code dynamically.
                // The following is an oversimplification to illustrate the idea:
                public static string GetValue<string>(this DataThing d) => d.Name;
                public static float GetValue<float>(this DataThing d) => d.Value;
                public static int GetValue<int>(this DataThing d) => d.Count;
            }
            ", Encoding.UTF8);
        context.AddSource("DataThingExtensions.cs", sourceText);
    }

    public void Initialize(GeneratorInitializationContext context)
    {
        context.RegisterForSyntaxNotifications(() => new CustomSyntaxReceiver());
    }

    class CustomSyntaxReceiver : ISyntaxReceiver
    {
        public ClassDeclarationSyntax ClassToAugment { get; private set; }

        public void OnVisitSyntaxNode(SyntaxNode syntaxNode)
        {
            // Business logic to decide what we're interested in goes here
            if (syntaxNode is ClassDeclarationSyntax cds &&
                cds.Identifier.ValueText == "DataThing")
            {
                ClassToAugment = cds;
            }
        }
    }
}

Running this should be as easy as calling the generated extension methods on the class instance: (data.GetValue<float>() > 15).Dump();

LINQ: Dynamic Join

Suppose we’ve got two lists that we would like to join based on, say, a common Id property. With LINQ, the code would look something along these lines:

var list1 = new List<MyItem> {};
var list2 = new List<MyItem> {};
var joined = list1.Join(list2, i => i.Id, j => j.Id, (k,l) => new {List1Item=k, List2Item=l});

The resulting list of anonymous objects has a property for each source object in the join. This is nothing new and has been documented pretty well.

But what if

We don’t know how many lists we’d have to join? Now we’ve got a list of lists of our entities (List Inception?!): List<List<MyItem>>. It becomes pretty obvious that we’d need to generate the join code dynamically. We’ll run with LINQ expression trees – surely there’s a way. Generally speaking, we’ll have to build an object (an anonymous type would be ideal) with fields like so:

{
  i0: items[0], // added on the first pass - we need at least two lists for the join to make sense
  i1: items[1], // added on the first pass as well
  ...
  iN: items[N]  // added on each subsequent pass, joined onto the accumulated object
}

It is safe to assume that we need at least two lists for a join to make sense, so we’d build the object above in two stages: first join two MyItem instances to get the structure going, then have each subsequent join append one more MyItem instance to the resulting object until we get our result.

Picking types for the result

Now the problem is how best to define this object. The way anonymous types are declared requires a type initialiser and the new keyword. We don’t have either of these at design time, so this method unfortunately will not work for us.

ExpandoObject

Another way to achieve a decent developer experience with named object properties would be to use the dynamic keyword. It is less than ideal, as it effectively disables the compiler’s static type checks, but it keeps us moving, so it’s an option here. To allow adding properties at run time, we will use ExpandoObject:

static List<ExpandoObject> Join<TSource, TDest>(List<List<TSource>> items, Expression<Func<TSource, int>> srcAccessor, Expression<Func<ExpandoObject, int>> intermediaryAccessor, Expression<Func<TSource, TSource, ExpandoObject>> outerResultSelector)
{
	var joinLambdaType = typeof(ExpandoObject);            
	Expression<Func<ExpandoObject, TSource, ExpandoObject>> innerResultSelector = (expando, item) => expando.AddValue(item);
	
	var joinMethod = typeof(Enumerable).GetMethods().Where(m => m.Name == "Join").First().MakeGenericMethod(typeof(TSource), typeof(TSource), typeof(int), joinLambdaType);
	var toListMethod = typeof(Enumerable).GetMethods().Where(m => m.Name == "ToList").First().MakeGenericMethod(typeof(TDest));

	var joinCall = Expression.Call(joinMethod,
							Expression.Constant(items[0]),
							Expression.Constant(items[1]),
							srcAccessor,
							srcAccessor,
							outerResultSelector);
	joinMethod = typeof(Enumerable).GetMethods().Where(m => m.Name == "Join").First().MakeGenericMethod(typeof(TDest), typeof(TSource), typeof(int), joinLambdaType); // from now on we'll be joining ExpandoObject with MyEntity
	for (int i = 2; i < items.Count; i++) // skip the first two
	{
		joinCall =
			Expression.Call(joinMethod,
							joinCall,
							Expression.Constant(items[i]),
							intermediaryAccessor,
							srcAccessor,
							innerResultSelector);
	}

	var lambda = Expression.Lambda<Func<List<ExpandoObject>>>(Expression.Call(toListMethod, joinCall));
	return lambda.Compile()();
}

The above block references two extension methods that make it easier to manipulate the ExpandoObjects:

public static class Extensions 
 {
     public static ExpandoObject AddValue(this ExpandoObject expando, object value)
     {
         var dict = (IDictionary<string, object>)expando; // ExpandoObject implements IDictionary<string, object>, not the non-generic IDictionary
         var key = $"i{dict.Count}"; // the easiest way to keep track of what's already in; there may well be nicer ways
         dict.Add(key, value);
         return expando;
     }
     public static ExpandoObject NewObject<T>(this ExpandoObject expando, T value1, T value2) 
     {
          var dict = (IDictionary<string, object>)expando;
          dict.Add("i0", value1);
          dict.Add("i1", value2);
          return expando; 
     }
 }

And with that, we should have no issue running a simple test like so:

class Program
{
    class MyEntity
    {
        public int Id { get; set; }
        public string Name { get; set; }

        public MyEntity(int id, string name)
        {
            Id = id; Name = name;
        }
    }

    static void Main()
    {
        List<List<MyEntity>> items = new List<List<MyEntity>> {
            new List<MyEntity> {new MyEntity(1,"test1_1"), new MyEntity(2,"test1_2")},
            new List<MyEntity> {new MyEntity(1,"test2_1"), new MyEntity(2,"test2_2")},
            new List<MyEntity> {new MyEntity(1,"test3_1"), new MyEntity(2,"test3_2")},
            new List<MyEntity> {new MyEntity(1,"test4_1"), new MyEntity(2,"test4_2")}
        };

        Expression<Func<MyEntity, MyEntity, ExpandoObject>> outerResultSelector = (i, j) => new ExpandoObject().NewObject(i, j); // we create a new ExpandoObject and populate it with first two items we join
        Expression<Func<ExpandoObject, int>> intermediaryAccessor = (expando) => ((MyEntity)((IDictionary<string, object>)expando)["i0"]).Id; // you could probably get rid of hardcoding this by, say, examining the first key in the dictionary
        
        dynamic cc = Join<MyEntity, ExpandoObject>(items, i => i.Id, intermediaryAccessor, outerResultSelector);

        var test1_1 = cc[0].i1;
        var test1_2 = cc[0].i2;

        var test2_1 = cc[1].i1;
        var test2_2 = cc[1].i2;
    }
}

ASP.Net Core – Resolving types from dynamic assemblies

It is no secret that ASP.NET Core comes with dependency injection support out of the box, and we can’t remember it ever feeling short on features. All we have to do is register a type in Startup.cs and it’s ready to be consumed in our controllers:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...
        services.AddScoped<IDBLogger, IdbLogger>();
    }
}
//
public class HomeController : Controller
{
    private readonly IDBLogger _idbLogger;
    public HomeController(IDBLogger idbLogger)     
    {
        _idbLogger = idbLogger; // all good here!
    }
...
}

What if it’s a plugin?

Now imagine we’ve got an Order type that, for whatever strange reason, we load dynamically at runtime.

public class Order
{
     private readonly IDBLogger _logger; // suppose we've got the reference from common assembly that both our main application and this plugin are allowed to reference
     public Order(IDBLogger logger)
     {
         _logger = logger; // will it resolve?
     }
     public void GetOrderDetail()
     {
        _logger.Log("Inside GetOrderDetail"); // do we get a NRE here?
     }
}

Load it in the controller

An external assembly being external kind of implies that we want to load it at the very last moment – right in the controller where we presumably need it. If we try to explore this avenue, we immediately see the issue:

public HomeController(IDBLogger idbLogger)
 {
     _idbLogger = idbLogger;
     var assembly = Assembly.LoadFrom(Path.Combine(@"..\Controllers\bin\Debug\netcoreapp3.1", "Orders.dll"));
     var orderType = assembly.ExportedTypes.First(t => t.Name == "Order");
     var order = Activator.CreateInstance(orderType); //throws System.MissingMethodException: 'No parameterless constructor defined for type 'Orders.Order'.'
     orderType.InvokeMember("GetOrderDetail", BindingFlags.Public | BindingFlags.Instance|BindingFlags.InvokeMethod, null, order, new object[] { });
 }

The exception makes perfect sense – we need to inject dependencies! Making it so:

public HomeController(IDBLogger idbLogger)
 {
     _idbLogger = idbLogger;
     var assembly = Assembly.LoadFrom(Path.Combine(@"..\Controllers\bin\Debug\netcoreapp3.1", "Orders.dll"));
     var orderType = assembly.ExportedTypes.First(t => t.Name == "Order");
     var order = Activator.CreateInstance(orderType, new object[] { _idbLogger }); // we happen to know what the constructor is expecting
     orderType.InvokeMember("GetOrderDetail", BindingFlags.Public | BindingFlags.Instance|BindingFlags.InvokeMethod, null, order, new object[] { });
 }

Victory! Or is it?

The above exercise is nothing new or exceptional – the point we are making is that dependency injection frameworks were invented so we don’t have to do this manually. In this case it was pretty easy, but more complex constructors can have many dependencies. What’s worse, we may not be able to guarantee that we even know all the dependencies we need. If only there was a way to register dynamic types with the built-in DI container…

Yes we can

The most naive solution would be to load our assembly in Startup.cs and register the needed types along with our own:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
    services.AddScoped<IDBLogger, IdbLogger>(); // our own types get registered as usual
    // load assembly and register with DI
    var assembly = Assembly.LoadFrom(Path.Combine(@"..\Controllers\bin\Debug\netcoreapp3.1", "Orders.dll"));
    var orderType = assembly.ExportedTypes.First(t => t.Name == "Order");
    services.AddScoped(orderType); // this is where we make our type known to the DI container
    var loadedTypesCache = new LoadedTypesCache(); // this step is optional - we chose to leverage the same DI mechanism to avoid having to load the assembly again in the controller
    loadedTypesCache.LoadedTypes.Add("order", orderType);
    services.AddSingleton(loadedTypesCache); // singleton seems like a good fit here
}
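For completeness: LoadedTypesCache has no magic to it – the exact shape below is our assumption, and any dictionary-like wrapper would do:

public class LoadedTypesCache
{
    public Dictionary<string, Type> LoadedTypes { get; } = new Dictionary<string, Type>();
}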

And that’s it – it literally makes no difference where the type came from! In the controller, we’d inject IServiceProvider and ask it to hand us an instance of the type we cached earlier:

public HomeController(IServiceProvider serviceProvider, LoadedTypesCache cache)
{
    var order = serviceProvider.GetService(cache.LoadedTypes["order"]); // leveraging that same cache to avoid loading the assembly again
    // the following two lines are just to call the method
    var m = cache.LoadedTypes["order"].GetMethod("GetOrderDetail", BindingFlags.Public | BindingFlags.Instance);
    m.Invoke(order, new object[] { }); // Victory!
}

Attaching debugger to dynamically loaded assembly with Reflection.Emit

Imagine a situation where you’d like to attach a debugger to an assembly that you have loaded dynamically. To make it a bit more plausible, let us consider a scenario: our client has a solution where they maintain an extensive plugin ecosystem. Each plugin is a class library built with .NET 4.5 that implements a common interface the main application is aware of. At runtime the application scans a folder and loads all assemblies into separate AppDomains. Under certain circumstances users/developers would like to be able to debug plugins in Visual Studio.
Given how seldom we would opt for this technique, documenting our solution might be an exercise in vain. But being a huge fan of the weird and wonderful, I couldn’t resist going through with this case study.

Inventorying moving parts

First of all, we need a way to inject code into the assembly. Apparently we cannot directly replace methods loaded from disk – SwapMethodBody() needs a DynamicModule – so we opted to define a subclass wrapper instead. Next, we need to actually stop execution and offer developers a chance to start debugging; Debugger.Launch() is the easiest way to achieve that. Finally, we’d look at different ways to load assemblies into separate AppDomains to maintain the existing convention.

Injecting Debugger.Launch()

This is the main attraction, and Reflection.Emit is a perfect candidate for the job. The theory is fairly simple: we create a new dynamic assembly, module, type and method, generate code inside that method, and return a wrapper instance:

public static object CreateWrapper(Type ServiceType, MethodInfo baseMethod)
{
    var asmBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(new AssemblyName($"newAssembly_{Guid.NewGuid()}"), AssemblyBuilderAccess.Run);
    var module = asmBuilder.DefineDynamicModule($"DynamicAssembly_{Guid.NewGuid()}");
    var typeBuilder = module.DefineType($"DynamicType_{Guid.NewGuid()}", TypeAttributes.Public, ServiceType);
    var methodBuilder = typeBuilder.DefineMethod("Run", MethodAttributes.Public | MethodAttributes.NewSlot);

    var ilGenerator = methodBuilder.GetILGenerator();

    ilGenerator.EmitCall(OpCodes.Call, typeof(Debugger).GetMethod("Launch", BindingFlags.Static | BindingFlags.Public), null);
    ilGenerator.Emit(OpCodes.Pop);

    ilGenerator.Emit(OpCodes.Ldarg_0);
    ilGenerator.EmitCall(OpCodes.Call, baseMethod, null);
    ilGenerator.Emit(OpCodes.Ret);

    /*
     * the generated method would be roughly equivalent to:
     * new void Run()
     * {
     *   Debugger.Launch();
     *   base.Run();
     * }
     */

    var wrapperType = typeBuilder.CreateType();
    return Activator.CreateInstance(wrapperType);
}

Triggering the method

After we’ve generated the wrapper, we should be in a position to invoke the desired method. In this example I’m using an all-reflection approach:

public void Run()
{
    var wrappedInstance = DebuggerWrapperGenerator.CreateWrapper(ServiceType, ServiceType.GetMethod("Run"));
    wrappedInstance.GetType().GetMethod("Run")?.Invoke(wrappedInstance, null);
    // nothing special here
}

The task becomes even easier if we know the interface to cast to.
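For instance, assuming a shared contract like public interface IPlugin { void Run(); } is referenced by both sides (and the emitted method actually overrides a virtual Run – the MethodAttributes above may need adjusting for that), the ceremony collapses to a cast:

var wrapped = (IPlugin)DebuggerWrapperGenerator.CreateWrapper(ServiceType, ServiceType.GetMethod("Run"));
wrapped.Run(); // no reflection needed for the invocation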

Adding AppDomain into the mix

The above parts don’t depend much on where the code will run. However, trying to satisfy the layout requirement, we experimented with a few different configurations. In the end, it appears we can confidently place the code in the correct AppDomain by either leveraging .DoCallBack() or making sure the Launcher helper is created with .CreateInstanceAndUnwrap():

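// Option 1: run the payload inside the new AppDomain via DoCallBack()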
static void Main(string[] args)
{
    var appDomain = AppDomain.CreateDomain("AppDomainInMain", AppDomain.CurrentDomain.Evidence,
        new AppDomainSetup { ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase });

    appDomain.DoCallBack(() =>
    {
        var launcher = new Launcher(PathToDll);
        launcher.Run();
    });    
}
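// Option 2: create the Launcher helper in the new AppDomain via CreateInstanceAndUnwrap()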
static void Main(string[] args)
{
    Launcher.RunInNewAppDomain(PathToDll);
}
public class Launcher : MarshalByRefObject
{
    private Type ServiceType { get; }

    public Launcher(string pathToDll)
    {
        var assembly = Assembly.LoadFrom(pathToDll);
        ServiceType = assembly.GetTypes().SingleOrDefault(t => t.Name == "Class1");
    }

    public void Run()
    {
        var wrappedInstance = DebuggerWrapperGenerator.CreateWrapper(ServiceType, ServiceType.GetMethod("Run"));
        wrappedInstance.GetType().GetMethod("Run")?.Invoke(wrappedInstance, null);
    }

    public static void RunInNewAppDomain(string pathToDll)
    {
        var appDomain = AppDomain.CreateDomain("AppDomainInLauncher", AppDomain.CurrentDomain.Evidence, AppDomain.CurrentDomain.SetupInformation);

        var launcher = appDomain.CreateInstanceAndUnwrap(typeof(Launcher).Assembly.FullName, typeof(Launcher).FullName, false, BindingFlags.Public|BindingFlags.Instance,
            null, new object[] { pathToDll }, CultureInfo.CurrentCulture, null);
        (launcher as Launcher)?.Run();
    }
}

Testing it

In the end, running the host greets us with the Visual Studio just-in-time debugger prompt, and after letting it run through we end up stepping through the plugin code as intended.

As usual, full code for this example sits in my GitHub if you want to take it for a spin.

Parsing OData queries

OData (the Open Data Protocol) is an ISO-approved standard that defines a set of best practices for building and consuming RESTful APIs. It allows us to write business logic and not worry too much about request and response headers, status codes, HTTP methods, and other variables.

We won’t go into too much detail on how to write and use OData queries – there are plenty of resources out there. Instead, we’ll have a look at a somewhat esoteric scenario: defining our own parser and then walking the AST to extract the values we’re after.

Problem statement

Suppose we’ve got a filter string that we received from the client:

"?$filter =((Name eq 'John' or Name eq 'Peter') and (Department eq 'Professional Services'))"

And we’d like to apply custom validation to the filter. Ideally we’d like to get a structured list of properties and values so we can run our checks:

Filter 1:
    Key: Name
    Operator: eq
    Value: John
Operator: or

Filter 2:
    Key: Name
    Operator: eq
    Value: Peter

Operator: and

Filter 3:
    Key: Department
    Operator: eq
    Value: Professional Services

Some options are:

  • ODataUriParser – but it seems to have some issues with .NET Core support just yet
  • Regular expressions – not very flexible
  • ODataQueryOptions – produces raw text but cannot be broken down any further

What else?

One other way to approach this would be proper parsing, and there are plenty of tools for that (see flex or bison, for example). In the .NET world, however, Irony might be a viable option: it targets .NET Standard 2.0, which we had no issues plugging into a .NET Core 3.1 console test project.

Grammar

To start off, we need to define a grammar. Luckily, Microsoft have been kind enough to supply an EBNF reference, so all we have to do is adapt it to Irony. I ended up implementing a subset of that grammar which caters for the example statement (and goes a bit above and beyond – feel free to cut it down).

using Irony.Parsing;

namespace irony_playground
{
    [Language("OData", "1.0", "OData Filter")]
    public class OData: Grammar
    {
        public OData()
        {
            // first we define some terms
            var identifier = new RegexBasedTerminal("identifier", "[a-zA-Z_][a-zA-Z_0-9]*");
            var string_literal = new StringLiteral("string_literal", "'");
            var integer_literal = new NumberLiteral("integer_literal", NumberOptions.IntOnly);
            var float_literal = new NumberLiteral("float_literal", NumberOptions.AllowSign) 
                                        | new RegexBasedTerminal("float_literal", "(NaN)|-?(INF)");
            var boolean_literal = new RegexBasedTerminal("boolean_literal", "(true)|(false)");

            var filter_expression = new NonTerminal("filter_expression");
            var boolean_expression = new NonTerminal("boolean_expression");
            var collection_filter_expression = new NonTerminal("collection_filter_expression");
            var logical_expression = new NonTerminal("logical_expression");
            var comparison_expression = new NonTerminal("comparison_expression");
            var variable = new NonTerminal("variable");
            var field_path = new NonTerminal("field_path");
            var lambda_expression = new NonTerminal("lambda_expression");
            var comparison_operator = new NonTerminal("comparison_operator");
            var constant = new NonTerminal("constant");

            Root = filter_expression; // this is where our entry point will be. 

            // and from here on we expand on all terms and their relationships
            filter_expression.Rule = boolean_expression;

            boolean_expression.Rule = collection_filter_expression
                                      | logical_expression
                                      | comparison_expression
                                      | boolean_literal
                                      | "(" + boolean_expression + ")"
                                      | variable;
            variable.Rule = identifier | field_path;

            field_path.Rule = MakeStarRule(field_path, ToTerm("/"), identifier);

            collection_filter_expression.Rule =
                field_path + "/all(" + lambda_expression + ")"
                | field_path + "/any(" + lambda_expression + ")"
                | field_path + "/any()";

            lambda_expression.Rule = identifier + ":" + boolean_expression;

            logical_expression.Rule =
                boolean_expression + (ToTerm("and", "and") | ToTerm("or", "or")) + boolean_expression
                | ToTerm("not", "not") + boolean_expression;

            comparison_expression.Rule =
                variable + comparison_operator + constant |
                constant + comparison_operator + variable;

            constant.Rule =
                string_literal
                | integer_literal
                | float_literal
                | boolean_literal
                | ToTerm("null");

            comparison_operator.Rule = ToTerm("gt") | "lt" | "ge" | "le" | "eq" | "ne";

            RegisterBracePair("(", ")");
        }
    }
}

NB: Irony comes with a Grammar Explorer tool that allows us to load grammar DLLs and debug them against free-text input.


After we’re happy with the grammar, we reference it from our project and parse the input string:

class Program
{
    static void Main(string[] args)
    {
        var g = new OData();
        var l = new LanguageData(g);
        var r = new Parser(l);
        var p = r.Parse("((Name eq 'John' or Name eq 'Grace Paul') and (Department eq 'Finance and Accounting'))"); // here's your tree
        // this is where you walk it and extract whatever data you desire 
    }
}

Then all we’ve got to do is walk the resulting tree and apply custom logic based on node type. One example of how to do that can be found in this StackOverflow answer.
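As a starting point, a recursive walk that prints every comparison in the shape we described earlier could look like this (a sketch; the node names match the grammar above):

static void Walk(ParseTreeNode node)
{
    // comparison_expression nodes carry exactly three children: variable, operator, constant
    if (node.Term?.Name == "comparison_expression" && node.ChildNodes.Count == 3)
    {
        Console.WriteLine($"Key: {node.ChildNodes[0].FindTokenAndGetText()}, " +
                          $"Operator: {node.ChildNodes[1].FindTokenAndGetText()}, " +
                          $"Value: {node.ChildNodes[2].FindTokenAndGetText()}");
    }

    foreach (var child in node.ChildNodes)
        Walk(child); // logical expressions nest, so keep descending
}
// usage: Walk(p.Root);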

Entity Framework Core 3.1 – Peeking Into Generated SQL

Writing LINQ that produces optimal SQL can be hard, as developers often don’t have visibility into the process. It becomes even more confusing when the application is designed to run against different databases.

We often find ourselves questioning whether a particular query will fall in line with our expectations. Until not so long ago, our tool of choice was the SQL Profiler that ships with SQL Server. It’s plenty powerful, but has one flaw – it pretty much requires a SQL Server installation. That might be a deal breaker for clients using other databases, like Postgres or MySQL (which EF Core supports as well, by the way).

EF to the rescue

Instead of firing off the profiler and fishing out the batches, we can have Entity Framework itself hand us the result. After all, it needs to build the SQL before sending it off to the database, so all we have to do is ask nicely. Stack Overflow is quite helpful here:

public static class IQueryableExtensions // this is the EF Core 3.1 version.
    {
        public static string ToSql<TEntity>(this IQueryable<TEntity> query) where TEntity : class
        {
            var enumerator = query.Provider.Execute<IEnumerable<TEntity>>(query.Expression).GetEnumerator();
            var relationalCommandCache = enumerator.Private("_relationalCommandCache");
            var selectExpression = relationalCommandCache.Private<SelectExpression>("_selectExpression");
            var factory = relationalCommandCache.Private<IQuerySqlGeneratorFactory>("_querySqlGeneratorFactory");

            var sqlGenerator = factory.Create();
            var command = sqlGenerator.GetCommand(selectExpression);

            string sql = command.CommandText;
            return sql;
        }

        private static object Private(this object obj, string privateField) => obj?.GetType().GetField(privateField, BindingFlags.Instance | BindingFlags.NonPublic)?.GetValue(obj);
        private static T Private<T>(this object obj, string privateField) => (T)obj?.GetType().GetField(privateField, BindingFlags.Instance | BindingFlags.NonPublic)?.GetValue(obj);
    }

The usage is simple

Suppose we’ve got the following inputs: one simple table that we’d like to group by one field and total by another. The database context is also pretty much boilerplate; the one thing to note is the couple of database providers we are going to try the query against.

public class SomeTable
{
    public int Id { get; set; }
    public int Foobar { get; set; }
    public int Quantity { get; set; }
}

class MyDbContext : DbContext
{
    public DbSet<SomeTable> SomeTables { get; set; }
    public static readonly LoggerFactory DbCommandConsoleLoggerFactory
        = new LoggerFactory(new[] {
        new ConsoleLoggerProvider ((category, level) =>
            category == DbLoggerCategory.Database.Command.Name &&
            level == LogLevel.Trace, true)
        });
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        //run with the Npgsql provider to get Postgres SQL
        optionsBuilder.UseNpgsql("Server=localhost;Port=5432;Database=test;User Id=;Password=;")
        //alternatively switch to another supported provider - e.g. SQL Server to get T-SQL
        //optionsBuilder.UseSqlServer("Server=.\\SQLEXPRESS;Database=test;Trusted_Connection=true")
        ;
        base.OnConfiguring(optionsBuilder);
    }
}

The test bench would look something like so:

class Program
{
    static void Main(string[] args)
    {

        var context = new MyDbContext();
        var someTableData = context.SomeTables
                .GroupBy(x => x.Foobar)
                .Select(x => new { Foobar = x.Key, Quantity = x.Sum(y => y.Quantity) })
                .OrderByDescending(x => x.Quantity)
                .Take(10) // we've built our query as per normal
                .ToSql(); // this is the magic
        Console.Write(someTableData);
        Console.ReadKey();
    }
}

And depending on our choice of provider, the output shows the EF Core-generated SQL for SQL Server or Postgres:

-- MSSQL
SELECT TOP(@__p_0) [s].[Foobar], SUM([s].[Quantity]) AS [Quantity]
FROM [SomeTables] AS [s]
GROUP BY [s].[Foobar]
ORDER BY SUM([s].[Quantity]) DESC

-- PG SQL
SELECT s."Foobar", SUM(s."Quantity")::INT AS "Quantity"
FROM "SomeTables" AS s
GROUP BY s."Foobar"
ORDER BY SUM(s."Quantity")::INT DESC
LIMIT @__p_0

Getting started with Roslyn code analysis

It was going to happen eventually – our research on C# dynamic features ended up with an attempt to parse bits of source code. There are quite a few solutions on the market, with NRefactory having been our preferred tool over the years. It has a few limitations, however: it supports neither .NET Core nor C# 6.

It is a big deal

It might seem that support for a newer language spec is not critical. In fact, though, it gets problematic very quickly, even in more established projects. Luckily for us, Microsoft has chosen to open source Roslyn – the very engine that powers their compiler services. The official documentation covers the platform pretty well and goes into great detail on writing Visual Studio code analysers. We, however, often have to write MSBuild tasks that load a whole solution and run analysis on class hierarchies (for example, to detect whether a single SQL SELECT statement is being called inside a foreach loop – we would fail the build and suggest replacing it with a bulk select).

Installing

Roslyn is available via NuGet as a number of Microsoft.CodeAnalysis.* packages. We normally include these:

Install-Package Microsoft.CodeAnalysis.Workspaces.MSBuild
Install-Package Microsoft.CodeAnalysis
Install-Package Microsoft.CodeAnalysis.CSharp
Install-Package Microsoft.Build # these classes are needed to support MSBuild workspace when it starts to load solution
Install-Package Microsoft.Build.Utilities.Core # these classes are needed to support MSBuild workspace when it starts to load solution
Install-Package Microsoft.Build.Locator # this is a helper to locate correct MSBuild toolchain (in case the machine has more than one installed)

Sometimes the environment gets confused as to which version of MSBuild to use, which is why starting a project with something like this has been pretty much a must since VS2015:

// put this somewhere early in the program
// MSBuildLocator.RegisterDefaults(); // the simple option - lets the locator pick a default instance
if (!MSBuildLocator.IsRegistered)
{
    var vs2022 = MSBuildLocator.QueryVisualStudioInstances().Where(x => x.Name == "Visual Studio Community 2022").First(); // find the correct VS setup. There are many ways to organise the logic here; we'll just assume we want VS2022
    MSBuildLocator.RegisterInstance(vs2022); // register the selected instance
    var _ = typeof(Microsoft.CodeAnalysis.CSharp.Formatting.CSharpFormattingOptions); // this ensures the library is referenced so the compiler would not optimise it away (if dynamically loading assemblies or doing other voodoo that can throw the compiler off) - probably less important than the above, but we prefer to follow the cargo cult here and leave it be
}

With the initial steps out of the way, a simplistic solution traversal would look something along these lines:

async Task AnalyseSolution()
{
	using (var w = MSBuildWorkspace.Create())
	{
		var solution = await w.OpenSolutionAsync(@"MySolution.sln");		
		foreach (var project in solution.Projects)
		{			
			var docs = project.Documents; // allows for file-level document filtering
			var compilation = await project.GetCompilationAsync(); // allows for assembly-level analysis as well as SemanticModel 
			foreach (var doc in docs)
			{
				var walker = new CSharpSyntaxWalker(); // CSharpSyntaxWalker is an abstract class - we will need to define our own implementation for this to actually work
				walker.Visit(await doc.GetSyntaxRootAsync()); // traverse the syntax tree
			}
		}
	}
}

Syntax Tree Visitor

As with pretty much every mainstream syntax analyser, the easiest way to traverse syntax trees is the Visitor pattern. It decouples tree nodes from processing logic, leaving room for expansion on either side (easy to add new logic, easy to add new tree node types). Roslyn ships the stub CSharpSyntaxWalker, which allows us to override only the nodes we need processed and takes care of everything else.

With the basics out of the way, let’s look at the classes that make up the platform. At the top of the hierarchy sits the MSBuildWorkspace, followed by Solution, Project and Document. Roslyn makes a distinction between parsing code and compiling it, meaning some analysis is only available through the Compilation class, which is obtainable for a project as well as for individual documents down the track.

Traversing the tree

Just loading the solution is kind of pointless, though. We need to come up with processing logic, and the best place for it is a CSharpSyntaxWalker subclass. Suppose we’d like to determine whether a class constructor contains if statements that are driven by constructor parameters. This might mean we’ve got overly complex classes that could benefit from refactoring:

public class ConstructorSyntaxWalker : CSharpSyntaxWalker
{
    public List<IParameterSymbol> Parameters { get; set; }
    public int IfConditions { get; set; }
    
    bool processingConstructor = false;

    SemanticModel sm;

    public ConstructorSyntaxWalker(SemanticModel sm)
    {
        this.sm = sm;
        Parameters = new List<IParameterSymbol>();
    }

    public override void VisitConstructorDeclaration(ConstructorDeclarationSyntax node)
    {
        processingConstructor = true;
        base.VisitConstructorDeclaration(node);
        processingConstructor = false;
    }

    public override void VisitIfStatement(IfStatementSyntax node)
    {
        if (!processingConstructor) return; // we only want to keep traversing if we know we're inside constructor body
        Parameters.AddRange(sm.AnalyzeDataFlow(node).DataFlowsIn.Cast<IParameterSymbol>()); // .AnalyzeDataFlow() is one of the most commonly used parts of the platform: it requires a compilation to work off and allows tracking dependencies. We could then check if these parameters are supplied to constructor and make a call whether this is allowed 
        IfConditions++; // just count for now, nothing fancy
        base.VisitIfStatement(node);
    }
}

Then, somewhere in our solution (or any other solution, really!) we have a class definition like so:

public class TestClass
{
    public TestClass(int a, string o) 
    {
        if (a == 1) DoThis(); else DoSomethingElse();
        if (o == "a") Foo(); else Bar();
    }
}

If we wanted to throw an exception and halt the build, we could invoke our SyntaxWalker:

public static async Task Main()
{
    await AnalyseSolution();
}
...
async static Task AnalyseSolution()
{

    using (var w = MSBuildWorkspace.Create())
    {
        var solution = await w.OpenSolutionAsync(@"..\..\..\TestRoslyn.sln"); // let's analyse our own solution. But can be any file on disk
        foreach (var project in solution.Projects)
        {
            var docs = project.Documents; // allows for file-level document filtering
            var compilation = await project.GetCompilationAsync(); // allows for assembly-level analysis as well as SemanticModel 
            foreach (var doc in docs)
            {
                var walker = new ConstructorSyntaxWalker(await doc.GetSemanticModelAsync());
                walker.Visit(await doc.GetSyntaxRootAsync()); // traverse the syntax tree
                if (walker.IfConditions > 0 && walker.Parameters.Any()) throw new Exception("We do not allow branching in constructors.");
            }
        }
    }
}

And there we have it. This is a very simplistic example, but the possibilities are endless!