EF Core 3.1: dynamic GroupBy clause

Expanding on the many ways we can build dynamic clauses for EF to execute, let us look at another example. Suppose we’d like to give our users the ability to group items based on a property of their choice.

As usual, our first choice would be to build a LINQ expression, as we did in the WHERE case. There is, however, a problem: GroupBy needs an object to use as a grouping key. Usually we’d just make an anonymous type, but that is a compile-time luxury we don’t get when the key columns are only known at runtime:

dbSet.GroupBy(s => new {s.Col1, s.Col2}); // not going to fly :/

IL Emit it is then

So, it seems we are left with no other choice but to go through the TypeBuilder ordeal (which isn’t too bad, really). One thing to point out here: we want to create properties that EF will later use for the grouping key. This is where the ability to interrogate EF’s as-built model comes in very handy. Another important point: creating a property in fact means creating a private backing field and two special methods for each one. We certainly started to appreciate how much the language and runtime do for us:

private void CreateProperty(TypeBuilder typeBuilder, string propertyName, Type propertyType)
{
 // really, just generating "public PropertyType propertyName {get;set;}"
	FieldBuilder fieldBuilder = typeBuilder.DefineField("_" + propertyName, propertyType, FieldAttributes.Private);

	PropertyBuilder propertyBuilder = typeBuilder.DefineProperty(propertyName, PropertyAttributes.HasDefault, propertyType, null);
	MethodBuilder getPropMthdBldr = typeBuilder.DefineMethod("get_" + propertyName, MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.HideBySig, propertyType, Type.EmptyTypes);
	ILGenerator getIl = getPropMthdBldr.GetILGenerator();

	getIl.Emit(OpCodes.Ldarg_0); // load 'this'
	getIl.Emit(OpCodes.Ldfld, fieldBuilder); // read the backing field
	getIl.Emit(OpCodes.Ret);

	MethodBuilder setPropMthdBldr = typeBuilder.DefineMethod("set_" + propertyName,
		  MethodAttributes.Public |
		  MethodAttributes.SpecialName |
		  MethodAttributes.HideBySig,
		  null, new[] { propertyType });

	ILGenerator setIl = setPropMthdBldr.GetILGenerator();

	setIl.Emit(OpCodes.Ldarg_0); // load 'this'
	setIl.Emit(OpCodes.Ldarg_1); // load the incoming value
	setIl.Emit(OpCodes.Stfld, fieldBuilder); // store it in the backing field
	setIl.Emit(OpCodes.Ret);

	propertyBuilder.SetGetMethod(getPropMthdBldr);
	propertyBuilder.SetSetMethod(setPropMthdBldr);
}
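
BuildGroupingType, used further down, assembles these properties into an actual type. It isn’t shown in the original listing; a minimal sketch (assuming the List<Tuple<string, Type>> of names and types we collect below) could look like this:

private static Type BuildGroupingType(List<Tuple<string, Type>> props)
{
	// a minimal sketch - a throwaway in-memory assembly hosting our grouping key type
	var asmBuilder = AssemblyBuilder.DefineDynamicAssembly(new AssemblyName($"DynamicGrouping_{Guid.NewGuid()}"), AssemblyBuilderAccess.Run);
	var moduleBuilder = asmBuilder.DefineDynamicModule("MainModule");
	var typeBuilder = moduleBuilder.DefineType($"GroupingType_{Guid.NewGuid()}", TypeAttributes.Public | TypeAttributes.Class);

	foreach (var prop in props)
	{
		CreateProperty(typeBuilder, prop.Item1, prop.Item2); // emit backing field + getter/setter as above
	}

	return typeBuilder.CreateType();
}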

After we’ve sorted this out, it’s pretty much the same approach as with any other dynamic LINQ expression: we need to build something like DbSet.GroupBy(s => new dynamicType { col1 = s.q1, col2 = s.q2 }). There’s a slight issue with this lambda, however: it returns IGrouping<dynamicType, TElement>, and since outside code has no idea of the dynamic type, there’s no easy way to work with it (unless we want to keep reflecting). It seemed easier to build a Select as well and return a Count against each instance of the dynamic type. Luckily we only needed a count, but other aggregations work in a similar fashion.

Finally

I arrived at the following code to generate the required expressions:

public static IQueryable<Tuple<object, int>> BuildExpression<TElement>(this IQueryable<TElement> source, DbContext context, List<string> columnNames)
{
	var entityParameter = Expression.Parameter(typeof(TElement));
	var sourceParameter = Expression.Parameter(typeof(IQueryable<TElement>));

	var model = context.Model.FindEntityType(typeof(TElement)); // start with our own entity
	var props = model.GetPropertyAccessors(entityParameter); // get all available field names including navigations			

	var objectProps = new List<Tuple<string, Type>>();
	var accessorProps = new List<Tuple<string, Expression>>();
	var groupKeyDictionary = new Dictionary<object, string>();
	foreach (var prop in props.Where(p => columnNames.Contains(p.Item3)))
	{
		var propName = prop.Item3.Replace(".", "_"); // we need some form of cross-reference, this seems to be good enough
		objectProps.Add(new Tuple<string, Type>(propName, (prop.Item2 as MemberExpression).Type));
		accessorProps.Add(new Tuple<string, Expression>(propName, prop.Item2));
	}

	var groupingType = BuildGroupingType(objectProps); // build new type we'll use for grouping. think `new Test() { A=, B=, C= }`

	// finally, we're ready to build our expressions
	var groupbyCall = BuildGroupBy<TElement>(sourceParameter, entityParameter, accessorProps, groupingType); // source.GroupBy(s => new Test(A = s.Field1, B = s.Field2 ... ))
	var selectCall = groupbyCall.BuildSelect<TElement>(groupingType); // .Select(g => new Tuple<object, int> (g.Key, g.Count()))
	
	var lambda = Expression.Lambda<Func<IQueryable<TElement>, IQueryable<Tuple<object, int>>>>(selectCall, sourceParameter);
	return lambda.Compile()(source);
}

private static MethodCallExpression BuildSelect<TElement>(this MethodCallExpression groupbyCall, Type groupingAnonType) 
{	
	var groupingType = typeof(IGrouping<,>).MakeGenericType(groupingAnonType, typeof(TElement));
	var selectMethod = QueryableMethods.Select.MakeGenericMethod(groupingType, typeof(Tuple<object, int>));
	var resultParameter = Expression.Parameter(groupingType);

	var countCall = BuildCount<TElement>(resultParameter);
	var resultSelector = Expression.New(typeof(Tuple<object, int>).GetConstructors().First(), Expression.PropertyOrField(resultParameter, "Key"), countCall);

	return Expression.Call(selectMethod, groupbyCall, Expression.Lambda(resultSelector, resultParameter));
}

private static MethodCallExpression BuildGroupBy<TElement>(ParameterExpression sourceParameter, ParameterExpression entityParameter, List<Tuple<string, Expression>> accessorProps, Type groupingAnonType) 
{
	var groupByMethod = QueryableMethods.GroupByWithKeySelector.MakeGenericMethod(typeof(TElement), groupingAnonType);
	var groupBySelector = Expression.Lambda(Expression.MemberInit(Expression.New(groupingAnonType.GetConstructors().First()),
			accessorProps.Select(op => Expression.Bind(groupingAnonType.GetMember(op.Item1)[0], op.Item2))
		), entityParameter);

	return Expression.Call(groupByMethod, sourceParameter, groupBySelector);
}

private static MethodCallExpression BuildCount<TElement>(ParameterExpression resultParameter)
{
	var asQueryableMethod = QueryableMethods.AsQueryable.MakeGenericMethod(typeof(TElement));
	var countMethod = QueryableMethods.CountWithoutPredicate.MakeGenericMethod(typeof(TElement));

	return Expression.Call(countMethod, Expression.Call(asQueryableMethod, resultParameter));
}
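
Putting it together, a hypothetical call might look like this (the entity and column names are purely illustrative; the names must match what GetPropertyAccessors reports for your model):

// hypothetical usage - grouping Employees by two columns and printing counts
var grouped = context.Employees.BuildExpression(context, new List<string> { "CompanyName", "TeamName" });
foreach (var row in grouped)
{
	Console.WriteLine($"{row.Item1}: {row.Item2}"); // grouping key instance and its Count
}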

And the full working version is on my GitHub.

Calling WCF services from .NET Core clients

Imagine a situation: a company runs a business-critical application that was built when WCF was a hot topic. Over the years the code base has grown and become a hot mess. But now, finally, the development team has the go-ahead to break it down into microservices. Yay? Calling WCF services from .NET Core clients can be a challenge.

Not so fast

We have already discussed some high-level architectural approaches to integrating systems. But we didn’t touch upon the data exchange between the monolith and microservice consumers: we could post a complete object feed onto a message queue, but that’s not always fit for purpose, as messages should be lightweight. Alternatively (keeping in mind our initial WCF premise), we could call the services as needed and make alterations inside the microservices. And Core WCF is a fantastic way to do that – if only we used all stock-standard service code.

What if custom is the way?

But sometimes our WCF implementation has evolved so much that it’s impossible to retrofit off-the-shelf tools. For example, one client we worked with was stuck with binary formatting for performance reasons. And that meant we needed to use the same legacy .NET 4.x assemblies to ensure full compatibility. The issue was that not all of the references were supported by .NET Core anyway. So we had to get creative.

What if there was an API?

Surely, we could write an API that adapts REST requests to WCF calls. We could probably just use Azure API Management and call it a day, but our assumption here was that not all customers are going to do that. The question is how to minimize the amount of effort developers need to expose the endpoints.

A perfect case for C# Source Generators

C# Source Generators are a new C# compiler feature that lets developers inspect user code and generate new source files that get added to the compilation. This is our chance to write code that writes more code when a project is built (think C# Code Inception).

The setup is going to be very simple: we’ll add a generator to our WCF project and get it to write our Web API controllers for us. The official blog post describes all the steps necessary to enable this feature, so we’ll skip that trivial bit.

We’ll look for WCF endpoints that developers have decorated with a custom attribute (we’ve made this opt-in) and do the following:

  • Find all operations marked with the GenerateApiEndpoint attribute
  • Generate a proxy class for each ServiceContract we discover
  • Generate an API controller for each ServiceContract that exposes at least one operation
  • Generate Data Transfer Objects for all exposed methods
  • Use the generated DTOs to create a WCF client, call the required method, and return the data back

Proxy classes

For .NET Core to call legacy WCF, we have to either use svcutil to scaffold everything for us, or have a proxy class that inherits from ClientBase:

namespace WcfService.BridgeControllers {
    public class {proxyToGenerate.Name}Proxy: ClientBase<{proxyToGenerate}>, {proxyToGenerate} {
        // foreach (var method in proxyToGenerate.GetMembers()) we emit a pass-through method;
        // the {placeholders} are filled in from the contract metadata
        public {method.ReturnType} {method.Name}({parameters}) { // need to make sure we build the correct parameter list here
            return Channel.{method.Name}({parameters}); // calling the respective WCF method
        }
    }
}
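
So for a hypothetical IOrderService contract with a single GetOrder operation, the generated output would come out along these lines (names are made up for illustration):

// hypothetical generated output for an IOrderService contract
namespace WcfService.BridgeControllers {
    public class OrderServiceProxy: ClientBase<IOrderService>, IOrderService {
        public OrderDetail GetOrder(int id) {
            return Channel.GetOrder(id); // ClientBase<T> opens the channel and performs the WCF call
        }
    }
}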

DTO classes

We thought it easier to standardize the calling convention, so all methods in our API are always POST and accept exactly one DTO on input (which in turn very much depends on the callee signature):

public static string GenerateDtoCode(this MethodDeclarationSyntax method) 
{
    var methodName = method.Identifier.ValueText;
    var methodDtoCode = new StringBuilder($"public class {methodName}Dto {{").AppendLine(""); 
    foreach (var parameter in method.ParameterList.Parameters)
    {
        var isOut = parameter.IsOut(); // IsOut() - presumably a small helper checking the parameter's modifiers for 'out'
        if (!isOut)
        {
            methodDtoCode.AppendLine($"public {parameter.Type} {parameter.Identifier} {{ get; set; }}");
        }
    }
    methodDtoCode.AppendLine("}");
    return methodDtoCode.ToString();
}
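
For a hypothetical GetOrder(int id, out string error) operation, this would emit something like:

// hypothetical generated DTO for GetOrder(int id, out string error)
public class GetOrderDto {
    public int id { get; set; } // 'out' parameters are skipped - they travel back in the response dictionary instead
}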

Controllers

And finally, controllers follow simple conventions to ensure we always know how to call them:

sourceCode.AppendLine()
   .AppendLine("namespace WcfService.BridgeControllers {").AppendLine()
   .AppendLine($"[RoutePrefix(\"api/{className}\")]public class {className}Controller: ApiController {{");
...
 var methodCode = new StringBuilder($"[HttpPost][Route(\"{methodName}\")]")
                  .AppendLine($"public Dictionary<string, object> {methodName}([FromBody] {methodName}Dto request) {{")
                  .AppendLine($"var proxy = new {clientProxy.Name}Proxy();")
                  .AppendLine($"var response = proxy.{methodName}({wcfCallParameterList});")
                  .AppendLine("return new Dictionary<string, object> {")
                  .AppendLine(" {\"response\", response },")
                  .AppendLine(outParameterResultList);

As a result

We should be able to wrap the required calls into a REST API and fully decouple our legacy data contracts from the new data models. A working sample project is on GitHub.

Breaking down the Monolith: data flows

One common pattern we see repeatedly is clients transitioning their monolithic applications to distributed architectures. The challenge is doing that while still retaining the data in the main database for consistency and for other coupled systems. That makes implementing microservices a bit tricky: we suddenly need to have a copy of the data and keep it consistent.

Initial snapshot

Often teams take this exercise as a chance to rethink the way they handle schema, so cloning tables to the new database and calling it a day does not cut it. We’d like to be able to use the new microservice not only as a fancy DB proxy, but also as a model for the future state. Since we don’t always know what the future state will look like, ingesting all the data in one go might be too much of a commitment. This is where incrementally building a microservice-specific data store comes in handy: as requests flow through our system, we fulfill them from the Monolith but keep a copy and massage it for efficiency.

[Flow chart: caching data with the microservice]

Updates go here

There’s no question we need some way to let our microservices know that something has been updated. A message queue of sorts will likely do, so the next time the Monolith updates an entity we’re interested in, we’d get a message:

[Flow chart: the Monolith leads update feeds that microservices use to build up their own data snapshots]
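
As a rough sketch (assuming the RabbitMQ.Client and System.Text.Json packages – see the bus discussion below – with a made-up exchange name and payload shape), the publishing side could be as simple as:

// a rough sketch - exchange name and payload shape are made up for illustration
var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();
channel.ExchangeDeclare("entity-updates", ExchangeType.Fanout, durable: true);

var payload = JsonSerializer.SerializeToUtf8Bytes(new { Entity = "Customer", Id = 42 });
channel.BasicPublish(exchange: "entity-updates", routingKey: "", basicProperties: null, body: payload);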

As we progress

The schematic above can be extended to allow the monolith to be part of receiving the update feeds too. When we are ready to commit to moving the System of Record to a microservice, we reverse the flow and have the Monolith listen for changes and update the “master” record accordingly. Except by that time it won’t be the “master” anymore.

[Flow chart: the Monolith becomes a subscriber to the update feeds for consistency and backward compatibility]

Choice of Message Bus

We’d need to employ a proper message bus for this flow to work. There are quite a few options out there, and picking a particular one without considering trade-offs is meaningless. We prefer to keep our options limited to RabbitMQ and Kafka. A few reasons to pick one or the other: community size, delivery guarantees and scalability constraints. Stay tuned for an overview of those!

Approaches to handling simple expressions in C#

Every now and then we get asked if there’s an easy way to parse user input into filter conditions. Say, for example, we have a viewmodel of type DataThing:

public class DataThing
 {
     public string Name;
     public float Value;
     public int Count;
 }

From here we’d like to check if a given property of this class satisfies a certain condition, for example “Value is greater than 15”. But of course we’d like to be flexible.

The issue

The main issue here is that we don’t know the type of the property beforehand, so we can’t use generics, even if we try to be smart:

public class DataThing
 {
     public string Name;
     public float Value;
     public int Count;
 }
 public static void Main()
 {
     var data = new DataThing() {Value=10, Name="test", Count = 1};
     var values = new List<ValueGetter> {
         new ValueGetter<float>(x => x.Value),
         new ValueGetter<string>(x => x.Name)
     };
     (values[0].Run(data) > 15).Dump(); // CS0411: the type arguments cannot be inferred from the usage
 }
 public abstract class ValueGetter
 {
     public abstract T Run<T>(DataThing d);
 }
 public class ValueGetter<T> : ValueGetter
 {
     public Func<DataThing, T> TestFunc;
     public ValueGetter(Func<DataThing, T> blah)
     {
         TestFunc = blah;
     }
     public override T Run(DataThing d) => TestFunc.Invoke(d); // CS0029 Cannot implicitly convert type…
 }

Even if we figured it out, it’s obviously way too dependent on the DataThing layout to be used everywhere.

LINQ Expression trees

One way to solve this issue is with the help of LINQ expression trees. This way we wrap everything into one delegate with a predictable signature and figure out the types at runtime:

 bool BuildComparer<T>(DataThing data, string field, string op, T value) {
     var p1 = Expression.Parameter(typeof(DataThing));
     var p2 = Expression.Parameter(typeof(T));
     if (op == ">")
     {
         var expr = Expression.Lambda<Func<DataThing, T, bool>>(
             Expression.MakeBinary(ExpressionType.GreaterThan
                                 , Expression.PropertyOrField(p1, field)
                                 , Expression.Convert(p2, typeof(T))), p1, p2);
         var f = expr.Compile();
         return f(data, value);
      }
      return false;
 }
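
A quick smoke test (values are illustrative):

 // illustrative usage of BuildComparer
 var data = new DataThing { Value = 20, Name = "test", Count = 1 };
 Console.WriteLine(BuildComparer(data, "Value", ">", 15f)); // True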

CSharpScript

Another way to approach the same problem is to generate C# code that we can compile and run. We’d need the Microsoft.CodeAnalysis.CSharp.Scripting package for this to work:

bool BuildScript<T>(DataThing data, string field, string op, T value)
{
    var code = $"return {field} {op} {value};";
    var script = CSharpScript.Create<bool>(code, globalsType: typeof(DataThing), options: ScriptOptions.Default);
    var scriptRunner = script.CreateDelegate();
    return scriptRunner(data).Result;
}

.NET 5 Source Generator

This is a new .NET 5 feature that allows us to plug into the compilation process and generate classes as we see fit. For example, we’d generate extension methods that return the correct values from DataThing:

[Generator] // see https://github.com/dotnet/roslyn/blob/main/docs/features/source-generators.cookbook.md for even more cool stuff
class AccessorGenerator : ISourceGenerator {
    public void Execute(GeneratorExecutionContext context) {
        var syntaxReceiver = (CustomSyntaxReceiver)context.SyntaxReceiver;
        ClassDeclarationSyntax userClass = syntaxReceiver.ClassToAugment;
        SourceText sourceText = SourceText.From(@"
            public static class DataThingExtensions {
                // This is where we'd reflect over type members and generate code dynamically.
                // The following is an oversimplification:
                public static string GetValue<string>(this DataThing d) => d.Name;
                public static string GetValue<float>(this DataThing d) => d.Value;
                public static string GetValue<int>(this DataThing d) => d.Count;
            }
            ", Encoding.UTF8);
        context.AddSource("DataThingExtensions.cs", sourceText);
    }

    public void Initialize(GeneratorInitializationContext context) {
        context.RegisterForSyntaxNotifications(() => new CustomSyntaxReceiver());
    }

    class CustomSyntaxReceiver : ISyntaxReceiver {
        public ClassDeclarationSyntax ClassToAugment { get; private set; }

        public void OnVisitSyntaxNode(SyntaxNode syntaxNode) {
            // Business logic to decide what we're interested in goes here
            if (syntaxNode is ClassDeclarationSyntax cds && cds.Identifier.ValueText == "DataThing") {
                ClassToAugment = cds;
            }
        }
    }
}

Running this should be as easy as calling the generated extension methods on a class instance: data.GreaterThan(15f).Dump();

LINQ: Dynamic Join

Suppose we’ve got two lists that we would like to join based on, say, a common Id property. With LINQ the code would look something along these lines:

var list1 = new List<MyItem> {};
var list2 = new List<MyItem> {};
var joined = list1.Join(list2, i => i.Id, j => j.Id, (k,l) => new {List1Item=k, List2Item=l});

The resulting list of anonymous objects will have a property for each source object in the join. This is nothing new and has been documented pretty well.

But what if

We don’t know how many lists we’d have to join? Now we’ve got a list of lists of our entities (List Inception?!): List<List<MyItem>>. It becomes pretty obvious that we’d need to generate this code dynamically. We’ll run with LINQ expression trees – surely there’s a way. Generally speaking, we’ll have to build an object (an anonymous type would be ideal) with fields like so:

{
  i0: items[0] // added on the first pass - we need at least two lists for a join to make sense
  i1: items[1] // added on the first pass as well
  ...
  iN: items[N] // added on each subsequent pass, joined with the accumulated result
}

It is safe to assume that we need at least two lists for a join to make sense, so we’d build the object above in two stages: first join two MyItem instances to get the structure going, then have each subsequent join append more MyItem instances to the resulting object until we get our result.

Picking types for the result

Now the problem is how best to define this object. The way anonymous types are declared requires a type initialiser and the new keyword, and since we don’t know the shape of our object at design time, this method unfortunately will not work for us.

ExpandoObject

Another way to achieve a decent developer experience with named object properties would be to use the dynamic keyword. This is less than ideal, as it effectively disables the compiler’s static type checks, but it lets us keep going – so it’s an option here. To allow us to add properties at run time, we will use ExpandoObject.
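
As a quick illustration (using the MyEntity type from the test further down), ExpandoObject lets us attach properties on the fly, either through the dynamic view or through its dictionary interface:

// ExpandoObject basics - properties can be added at run time
dynamic expando = new ExpandoObject();
expando.i0 = new MyEntity(1, "first");                                    // via the dynamic view
((IDictionary<string, object>)expando)["i1"] = new MyEntity(1, "second"); // via the dictionary view

With that, here’s the dynamic Join itself: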

static List<ExpandoObject> Join<TSource, TDest>(List<List<TSource>> items, Expression<Func<TSource, int>> srcAccessor, Expression<Func<ExpandoObject, int>> intermediaryAccessor, Expression<Func<TSource, TSource, ExpandoObject>> outerResultSelector)
{
	var joinLambdaType = typeof(ExpandoObject);            
	Expression<Func<ExpandoObject, TSource, ExpandoObject>> innerResultSelector = (expando, item) => expando.AddValue(item);
	
	var joinMethod = typeof(Enumerable).GetMethods().Where(m => m.Name == "Join").First().MakeGenericMethod(typeof(TSource), typeof(TSource), typeof(int), joinLambdaType);
	var toListMethod = typeof(Enumerable).GetMethods().Where(m => m.Name == "ToList").First().MakeGenericMethod(typeof(TDest));

	var joinCall = Expression.Call(joinMethod,
							Expression.Constant(items[0]),
							Expression.Constant(items[1]),
							srcAccessor,
							srcAccessor,
							outerResultSelector);
	joinMethod = typeof(Enumerable).GetMethods().Where(m => m.Name == "Join").First().MakeGenericMethod(typeof(TDest), typeof(TSource), typeof(int), joinLambdaType); // from now on we'll be joining ExpandoObject with MyEntity
	for (int i = 2; i < items.Count; i++) // skip the first two
	{
		joinCall =
			Expression.Call(joinMethod,
							joinCall,
							Expression.Constant(items[i]),
							intermediaryAccessor,
							srcAccessor,
							innerResultSelector);
	}

	var lambda = Expression.Lambda<Func<List<ExpandoObject>>>(Expression.Call(toListMethod, joinCall));
	return lambda.Compile()();
}

The above block references two extension methods that make it easier to manipulate the ExpandoObjects:

public static class Extensions 
 {
     public static ExpandoObject AddValue(this ExpandoObject expando, object value)
     {
         var dict = (IDictionary<string, object>)expando;
         var key = $"i{dict.Count}"; // that was the easiest way to keep track of what's already in. You would probably find a way to do it better
         dict.Add(key, value);
         return expando;
     }
     public static ExpandoObject NewObject<T>(this ExpandoObject expando, T value1, T value2) 
     {
          var dict = (IDictionary<string, object>)expando;
          dict.Add("i0", value1);
          dict.Add("i1", value2);
          return expando; 
     }
 }

And with that, we should have no issue running a simple test like so:

class Program
{
    class MyEntity
    {
        public int Id { get; set; }
        public string Name { get; set; }

        public MyEntity(int id, string name)
        {
            Id = id; Name = name;
        }
    }

    static void Main()
    {
        List<List<MyEntity>> items = new List<List<MyEntity>> {
            new List<MyEntity> {new MyEntity(1,"test1_1"), new MyEntity(2,"test1_2")},
            new List<MyEntity> {new MyEntity(1,"test2_1"), new MyEntity(2,"test2_2")},
            new List<MyEntity> {new MyEntity(1,"test3_1"), new MyEntity(2,"test3_2")},
            new List<MyEntity> {new MyEntity(1,"test4_1"), new MyEntity(2,"test4_2")}
        };

        Expression<Func<MyEntity, MyEntity, ExpandoObject>> outerResultSelector = (i, j) => new ExpandoObject().NewObject(i, j); // we create a new ExpandoObject and populate it with first two items we join
        Expression<Func<ExpandoObject, int>> intermediaryAccessor = (expando) => ((MyEntity)((IDictionary<string, object>)expando)["i0"]).Id; // you could probably get rid of hardcoding this by, say, examining the first key in the dictionary
        
        dynamic cc = Join<MyEntity, ExpandoObject>(items, i => i.Id, intermediaryAccessor, outerResultSelector);

        var test1_1 = cc[0].i1;
        var test1_2 = cc[0].i2;

        var test2_1 = cc[1].i1;
        var test2_2 = cc[1].i2;
    }
}

ASP.NET Core – Resolving types from dynamic assemblies

It is not a secret that ASP.NET Core comes with dependency injection support out of the box. And we don’t remember ever feeling it lacked features. All we have to do is register a type in Startup.cs and it’s ready to be consumed in our controllers:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...
        services.AddScoped<IDBLogger, IdbLogger>();
    }
}
//
public class HomeController : Controller
{
    private readonly IDBLogger _idbLogger;
    public HomeController(IDBLogger idbLogger)     
    {
        _idbLogger = idbLogger; // all good here!
    }
...
}

What if it’s a plugin?

Now imagine we’ve got an Order type that, for whatever strange reason, we load dynamically at runtime.

public class Order
{
     private readonly IDBLogger _logger; // suppose we've got the reference from common assembly that both our main application and this plugin are allowed to reference
     public Order(IDBLogger logger)
     {
         _logger = logger; // will it resolve?
     }
     public void GetOrderDetail()
     {
        _logger.Log("Inside GetOrderDetail"); // do we get a NRE here?
     }
}

Load it in the controller

An external assembly being external kind of implies that we want to load it at the very last moment – right in the controller where we presumably need it. If we try to explore this avenue, we immediately see the issue:

public HomeController(IDBLogger idbLogger)
 {
     _idbLogger = idbLogger;
     var assembly = Assembly.LoadFrom(Path.Combine("..\\Controllers\\bin\\Debug\\netcoreapp3.1", "Orders.dll"));
     var orderType = assembly.ExportedTypes.First(t => t.Name == "Order");
     var order = Activator.CreateInstance(orderType); //throws System.MissingMethodException: 'No parameterless constructor defined for type 'Orders.Order'.'
     orderType.InvokeMember("GetOrderDetail", BindingFlags.Public | BindingFlags.Instance|BindingFlags.InvokeMethod, null, order, new object[] { });
 }

The exception makes perfect sense – we need to inject dependencies! Making it so:

public HomeController(IDBLogger idbLogger)
 {
     _idbLogger = idbLogger;
     var assembly = Assembly.LoadFrom(Path.Combine("..\\Controllers\\bin\\Debug\\netcoreapp3.1", "Orders.dll"));
     var orderType = assembly.ExportedTypes.First(t => t.Name == "Order");
     var order = Activator.CreateInstance(orderType, new object[] { _idbLogger }); // we happen to know what the constructor is expecting
     orderType.InvokeMember("GetOrderDetail", BindingFlags.Public | BindingFlags.Instance|BindingFlags.InvokeMethod, null, order, new object[] { });
 }

Victory! Or is it?

The above exercise is nothing new or exceptional – the point we are making here is that dependency injection frameworks were invented so we don’t have to do this manually. In this case it was pretty easy, but more complex constructors can have many dependencies. What’s worse, we may not be able to guarantee we even know all the dependencies we need. If only there was a way to register dynamic types with the system DI container…

Yes we can

The most naive solution would be to load our assembly in Startup.cs and register the needed types along with our own:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();
    services.AddScoped<IDBLogger, IdbLogger>();

    // load assembly and register with DI
    var assembly = Assembly.LoadFrom(Path.Combine("..\\Controllers\\bin\\Debug\\netcoreapp3.1", "Orders.dll"));
    var orderType = assembly.ExportedTypes.First(t => t.Name == "Order");
    services.AddScoped(orderType); // this is where we make our type known to the DI container

    var loadedTypesCache = new LoadedTypesCache(); // this step is optional - we chose to leverage the same DI mechanism to avoid having to load the assembly again in the controller
    loadedTypesCache.LoadedTypes.Add("order", orderType);
    services.AddSingleton(loadedTypesCache); // singleton seems like a good fit here
}
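
LoadedTypesCache isn’t shown above; a minimal sketch is just a named dictionary wrapper:

// a minimal sketch - a named dictionary so controllers can look loaded types up
public class LoadedTypesCache
{
    public Dictionary<string, Type> LoadedTypes { get; } = new Dictionary<string, Type>();
}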

And that’s it – literally no difference where the type is coming from! In the controller, we inject IServiceProvider and ask it to hand us an instance of the type we cached earlier:

public HomeController(IServiceProvider serviceProvider, LoadedTypesCache cache)
{
    var order = serviceProvider.GetService(cache.LoadedTypes["order"]); // leveraging that same loaded-types cache to avoid having to load the assembly again

    // following two lines are just to call the method
    var m = cache.LoadedTypes["order"].GetMethod("GetOrderDetail", BindingFlags.Public | BindingFlags.Instance);
    m.Invoke(order, new object[] { }); // victory!
}

Attaching debugger to dynamically loaded assembly with Reflection.Emit

Imagine a situation where you’d like to attach a debugger to an assembly that you have loaded dynamically. To make it a bit more plausible, let us consider a scenario: our client has a solution where they maintain an extensive plugin ecosystem. Each plugin is a class library built with .NET 4.5. Each plugin implements a common interface that the main application is aware of. At runtime, the application scans a folder and loads all assemblies into separate AppDomains. Under certain circumstances, users/developers would like to be able to debug plugins in Visual Studio.
Given how seldom we would opt for this technique, documenting our solution might be an exercise in vain. But, being a huge fan of the weird and wonderful, I couldn’t resist going through with this case study.

Inventorying moving parts

First of all, we need a way to inject code into the assembly. Apparently we cannot directly replace methods we loaded from disk – SwapMethodBody() needs a DynamicModule – so we opted to define a subclass wrapper instead. Next, we need to actually stop execution and offer developers a chance to start debugging; Debugger.Launch() is the easiest way to achieve that. Finally, we’ll look at different ways to load assemblies into separate AppDomains to maintain the existing convention.

Injecting Debugger.Launch()

This is the main attraction, and Reflection.Emit is a perfect candidate for the job. The theory is fairly simple: we create a new dynamic assembly, module, type and method, generate code inside that method, and return a wrapper instance:

public static object CreateWrapper(Type ServiceType, MethodInfo baseMethod)
{
    var asmBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(new AssemblyName($"newAssembly_{Guid.NewGuid()}"), AssemblyBuilderAccess.Run);
    var module = asmBuilder.DefineDynamicModule($"DynamicAssembly_{Guid.NewGuid()}");
    var typeBuilder = module.DefineType($"DynamicType_{Guid.NewGuid()}", TypeAttributes.Public, ServiceType);
    var methodBuilder = typeBuilder.DefineMethod("Run", MethodAttributes.Public | MethodAttributes.NewSlot);

    var ilGenerator = methodBuilder.GetILGenerator();

    ilGenerator.EmitCall(OpCodes.Call, typeof(Debugger).GetMethod("Launch", BindingFlags.Static | BindingFlags.Public), null);
    ilGenerator.Emit(OpCodes.Pop);

    ilGenerator.Emit(OpCodes.Ldarg_0);
    ilGenerator.EmitCall(OpCodes.Call, baseMethod, null);
    ilGenerator.Emit(OpCodes.Ret);

    /*
     * the generated method would be roughly equivalent to:
     * new void Run()
     * {
     *   Debugger.Launch();
     *   base.Run();
     * }
     */

    var wrapperType = typeBuilder.CreateType();
    return Activator.CreateInstance(wrapperType);
}

Triggering the method

After we’ve generated a wrapper, we should be in a position to invoke the desired method. In this example I’m using an all-reflection approach:

public void Run()
{
    var wrappedInstance = DebuggerWrapperGenerator.CreateWrapper(ServiceType, ServiceType.GetMethod("Run"));
    wrappedInstance.GetType().GetMethod("Run")?.Invoke(wrappedInstance, null);
 // nothing special here
}

The task becomes even easier if we know the interface to cast to.
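
For example, assuming plugins implement a shared (hypothetical) IPlugin interface with a Run() method, the reflection dance goes away:

// a sketch, assuming a shared IPlugin contract lives in an assembly both sides reference
var wrappedInstance = DebuggerWrapperGenerator.CreateWrapper(ServiceType, ServiceType.GetMethod("Run"));
if (wrappedInstance is IPlugin plugin)
{
    plugin.Run(); // no reflection needed for the call itself
}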

Adding AppDomain into the mix

The above parts don’t depend much on where the code runs. However, trying to satisfy the layout requirement, we experimented with a few different configurations. In the end, it appears we can confidently place the code in the correct AppDomain by either leveraging .DoCallBack() or making sure the Launcher helper is created with .CreateInstanceAndUnwrap():

static void Main(string[] args) // option 1: run the launcher via AppDomain.DoCallBack
{
    var appDomain = AppDomain.CreateDomain("AppDomainInMain", AppDomain.CurrentDomain.Evidence,
        new AppDomainSetup { ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase });

    appDomain.DoCallBack(() =>
    {
        var launcher = new Launcher(PathToDll);
        launcher.Run();
    });    
}
static void Main(string[] args) // option 2: let the helper create itself in the new AppDomain
{
    Launcher.RunInNewAppDomain(PathToDll);
}
public class Launcher : MarshalByRefObject
{
    private Type ServiceType { get; }

    public Launcher(string pathToDll)
    {
        var assembly = Assembly.LoadFrom(pathToDll);
        ServiceType = assembly.GetTypes().SingleOrDefault(t => t.Name == "Class1");
    }

    public void Run()
    {
        var wrappedInstance = DebuggerWrapperGenerator.CreateWrapper(ServiceType, ServiceType.GetMethod("Run"));
        wrappedInstance.GetType().GetMethod("Run")?.Invoke(wrappedInstance, null);
    }

    public static void RunInNewAppDomain(string pathToDll)
    {
        var appDomain = AppDomain.CreateDomain("AppDomainInLauncher", AppDomain.CurrentDomain.Evidence, AppDomain.CurrentDomain.SetupInformation);

        var launcher = appDomain.CreateInstanceAndUnwrap(typeof(Launcher).Assembly.FullName, typeof(Launcher).FullName, false, BindingFlags.Public|BindingFlags.Instance,
            null, new object[] { pathToDll }, CultureInfo.CurrentCulture, null);
        (launcher as Launcher)?.Run();
    }
}

Testing it

In the end, we’ve got the standard Visual Studio debugger selection prompt courtesy of Debugger.Launch(), and after letting it run through, we land in a full debugging session right inside the plugin code.

As usual, the full code for this example sits in my GitHub if you want to take it for a spin.

Parsing OData queries

OData (Open Data Protocol) is an ISO-approved standard that defines a set of best practices for building and consuming RESTful APIs. It allows us to write business logic and not worry too much about request and response headers, status codes, HTTP methods, and other variables.

We won’t go into too much detail on how to write OData queries and how to use them – there are plenty of resources out there. Instead, we’ll have a look at a somewhat esoteric scenario where we define our own parser and then walk the AST to get the desired values.

Problem statement

Suppose we’ve got a filter string that we received from the client:

"?$filter =((Name eq 'John' or Name eq 'Peter') and (Department eq 'Professional Services'))"

And we’d like to apply custom validation to the filter. Ideally we’d like to get a structured list of properties and values so we can run our checks:

Filter 1:
    Key: Name
    Operator: eq
    Value: John
Operator: or

Filter 2:
    Key: Name
    Operator: eq
    Value: Peter

Operator: and

Filter 3:
    Key: Department
    Operator: eq
    Value: Professional Services

Some options are:

  • ODataUriParser – but it seems to have some issues with .NET Core support just yet
  • Regular expressions – not very flexible
  • ODataQueryOptions – produces raw text that cannot be broken down any further

What else?

One other way to approach this would be parsing, and there are plenty of tools to do that (see flex or bison, for example). In the .NET world, however, Irony might be a viable option: it targets .NET Standard 2.0, and we had no issues plugging it into a .NET Core 3.1 console test project.

Grammar

To start off, we normally need to define a grammar. But luckily, Microsoft has been kind enough to supply us with an EBNF reference, so all we have to do is adapt it to Irony. I ended up implementing a subset of the grammar that caters for the example statement (and a bit above and beyond – feel free to cut it down).

using Irony.Parsing;

namespace irony_playground
{
    [Language("OData", "1.0", "OData Filter")]
    public class OData: Grammar
    {
        public OData()
        {
            // first we define some terms
            var identifier = new RegexBasedTerminal("identifier", "[a-zA-Z_][a-zA-Z_0-9]*");
            var string_literal = new StringLiteral("string_literal", "'");
            var integer_literal = new NumberLiteral("integer_literal", NumberOptions.IntOnly);
            var float_literal = new NumberLiteral("float_literal", NumberOptions.AllowSign)
                                        | new RegexBasedTerminal("float_literal", "(NaN)|-?(INF)");
            var boolean_literal = new RegexBasedTerminal("boolean_literal", "(true)|(false)");

            var filter_expression = new NonTerminal("filter_expression");
            var boolean_expression = new NonTerminal("boolean_expression");
            var collection_filter_expression = new NonTerminal("collection_filter_expression");
            var logical_expression = new NonTerminal("logical_expression");
            var comparison_expression = new NonTerminal("comparison_expression");
            var variable = new NonTerminal("variable");
            var field_path = new NonTerminal("field_path");
            var lambda_expression = new NonTerminal("lambda_expression");
            var comparison_operator = new NonTerminal("comparison_operator");
            var constant = new NonTerminal("constant");

            Root = filter_expression; // this is where our entry point will be. 

            // and from here on we expand on all terms and their relationships
            filter_expression.Rule = boolean_expression;

            boolean_expression.Rule = collection_filter_expression
                                      | logical_expression
                                      | comparison_expression
                                      | boolean_literal
                                      | "(" + boolean_expression + ")"
                                      | variable;
            variable.Rule = identifier | field_path;

            field_path.Rule = MakeStarRule(field_path, ToTerm("/"), identifier);

            collection_filter_expression.Rule =
                field_path + "/all(" + lambda_expression + ")"
                | field_path + "/any(" + lambda_expression + ")"
                | field_path + "/any()";

            lambda_expression.Rule = identifier + ":" + boolean_expression;

            logical_expression.Rule =
                boolean_expression + (ToTerm("and", "and") | ToTerm("or", "or")) + boolean_expression
                | ToTerm("not", "not") + boolean_expression;

            comparison_expression.Rule =
                variable + comparison_operator + constant |
                constant + comparison_operator + variable;

            constant.Rule =
                string_literal
                | integer_literal
                | float_literal
                | boolean_literal
                | ToTerm("null");

            comparison_operator.Rule = ToTerm("gt") | "lt" | "ge" | "le" | "eq" | "ne";

            RegisterBracePair("(", ")");
        }
    }
}

NB: Irony comes with a Grammar Explorer tool that allows us to load grammar DLLs and debug them with free-text input.


After we’re happy with the grammar, we need to reference it from our project and parse the input string:

class Program
{
    static void Main(string[] args)
    {
        var g = new OData();
        var l = new LanguageData(g);
        var r = new Parser(l);
        var p = r.Parse("((Name eq 'John' or Name eq 'Grace Paul') and (Department eq 'Finance and Accounting'))"); // here's your tree
        // this is where you walk it and extract whatever data you desire 
    }
}

Then all we’ve got to do is walk the resulting tree and apply any custom logic based on the syntax node type. One example of how to do that can be found in this StackOverflow answer.
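
As a starting point, a minimal recursive walk over Irony’s ParseTreeNode could look like this (printing stands in for whatever validation logic you need):

// a minimal sketch of walking the Irony parse tree
static void Walk(ParseTreeNode node, int depth = 0)
{
    Console.WriteLine($"{new string(' ', depth * 2)}{node.Term.Name}"); // e.g. comparison_expression, identifier, ...
    foreach (var child in node.ChildNodes)
    {
        Walk(child, depth + 1);
    }
}
// usage: Walk(p.Root);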

Entity Framework Core 3.1 – dynamic WHERE clause

Every now and then we get tasked with building a backend for filtering arbitrary queries. Usually clients would like a method of sending over an array of fields, values, and comparison operations in order to retrieve their data. For simplicity, we’ll assume that all conditions are joined with an AND operator.

public class QueryableFilter {
    public string Name { get; set; }
    public string Value { get; set; }
    public QueryableFilterCompareEnum? Compare { get; set; }
}

With a twist

There’s however one slight complication to this problem: filters must apply to fields on dependent entities (possibly multiple levels of nesting as well). This can become a problem not only because we have to traverse the model hierarchy (we’ll touch on that later), but also because of the ambiguity this requirement introduces. Sometimes we’re lucky and only have unique column names across the hierarchy; more often than not, though, this needs to be resolved one way or another. We can, for example, require filter fields to use dot notation so we know which entity each field relates to. For example, Name -eq "ACME Ltd" AND Name -eq "Cloud Solutions" becomes company.Name -eq "ACME Ltd" AND team.Name -eq "Cloud Solutions".

Building an expression

It is pretty common for clients to already have some sort of data-querying service with EF Core doing the actual database comms. And since EF relies on LINQ Expressions a lot, we can build the required filters dynamically:

public static IQueryable<T> BuildExpression<T>(this IQueryable<T> source, DbContext context, string columnName, string value, QueryableFilterCompareEnum? compare = QueryableFilterCompareEnum.Equal)
{
	var param = Expression.Parameter(typeof(T));

	// Get the field/column from the Entity that matches the supplied columnName value
	// If the field/column does not exist on the Entity, throw an exception; there is nothing more that can be done
	MemberExpression dataField;
	
	var model = context.Model.FindEntityType(typeof(T)); // start with our own entity
	var props = model.GetPropertyAccessors(param); // get all available field names including navigations
	var reference = props.First(p => RelationalPropertyExtensions.GetColumnName(p.Item1) == columnName); // find the filtered column - you might need to handle cases where column does not exist

	dataField = reference.Item2 as MemberExpression; // we happen to already have correct property accessors in our Tuples	

	ConstantExpression constant = !string.IsNullOrWhiteSpace(value)
		? Expression.Constant(value.Trim(), typeof(string))
		: Expression.Constant(value, typeof(string));

	BinaryExpression binary = GetBinaryExpression(dataField, constant, compare);
	Expression<Func<T, bool>> lambda = (Expression<Func<T, bool>>)Expression.Lambda(binary, param);
	return source.Where(lambda);
}
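
Two pieces referenced above aren’t shown: QueryableFilterCompareEnum’s members and GetBinaryExpression. A minimal sketch could look like this (only Equal is confirmed by the code above; the rest is illustrative):

// a minimal sketch - only Equal appears in the original code, NotEqual is illustrative
public enum QueryableFilterCompareEnum
{
	Equal,
	NotEqual // hypothetical - the real enum likely has more members
}

private static BinaryExpression GetBinaryExpression(MemberExpression dataField, ConstantExpression constant, QueryableFilterCompareEnum? compare)
{
	switch (compare)
	{
		case QueryableFilterCompareEnum.NotEqual:
			return Expression.NotEqual(dataField, constant);
		case QueryableFilterCompareEnum.Equal:
		default:
			return Expression.Equal(dataField, constant); // assumes string columns, as in the sample filters
	}
}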

Most of the code above is pretty standard for building property-accessor lambdas, but GetPropertyAccessors is the key:

private static IEnumerable<Tuple<IProperty, Expression>> GetPropertyAccessors(this IEntityType model, Expression param)
{
	var result = new List<Tuple<IProperty, Expression>>();

	result.AddRange(model.GetProperties()
								.Where(p => !p.IsShadowProperty()) // this is your chance to ensure property is actually declared on the type before you attempt building Expression
								.Select(p => new Tuple<IProperty, Expression>(p, Expression.Property(param, p.Name)))); // Tuple is a bit clunky but hopefully conveys the idea

	foreach (var nav in model.GetNavigations().Where(p => p is Navigation))
	{
		var parentAccessor = Expression.Property(param, nav.Name); // define a starting point so following properties would hang off there
		result.AddRange(GetPropertyAccessors(nav.ForeignKey.PrincipalEntityType, parentAccessor)); //recursively call ourselves to travel up the navigation hierarchy
	}

	return result;
}

This is where we interrogate EF’s as-built data model, traverse the navigation properties, and recursively build a list of all the properties we can ever filter on!

Testing it out

Talk is cheap, let’s run a complete example here:

public class Entity
{
	public int Id { get; set; }
}
class Company : Entity
{
	public string CompanyName { get; set; }
}

class Team : Entity
{
	public string TeamName { get; set; }
	public Company Company { get; set; }
}

class Employee : Entity
{
	public string EmployeeName { get; set; }
	public Team Team { get; set; }
}

class DynamicFilters<T> where T : Entity
{
	private readonly DbContext _context;

	public DynamicFilters(DbContext context)
	{
		_context = context;
	}

	public IEnumerable<T> Filter(IEnumerable<QueryableFilter> queryableFilters = null)
	{
		IQueryable<T> mainQuery = _context.Set<T>().AsQueryable().AsNoTracking();
		// Loop through the supplied queryable filters (if any) to construct a dynamic LINQ-to-SQL queryable
		foreach (var filter in queryableFilters ?? new List<QueryableFilter>())
		{
			mainQuery = mainQuery.BuildExpression(_context, filter.Name, filter.Value, filter.Compare);
		}

		mainQuery = mainQuery.OrderBy(x => x.Id);

		return mainQuery.ToList();
	}
}
// --- DbContext
class MyDbContext : DbContext
{
	public DbSet<Company> Companies { get; set; }
	public DbSet<Team> Teams { get; set; }
	public DbSet<Employee> Employees { get; set; }

	protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
	{
		optionsBuilder.UseSqlServer("Server=.\\SQLEXPRESS;Database=test;Trusted_Connection=true");
		base.OnConfiguring(optionsBuilder);
	}
}
// ---
static void Main(string[] args)
{
	var context = new MyDbContext();
	var someTableData = new DynamicFilters<Employee>(context).Filter(new List<QueryableFilter> {
		new QueryableFilter { Name = "CompanyName", Value = "ACME Ltd" },
		new QueryableFilter { Name = "TeamName", Value = "Cloud Solutions" } });
}

The above block should produce the following SQL:

SELECT [e].[Id], [e].[EmployeeName], [e].[TeamId]
FROM [Employees] AS [e]
LEFT JOIN [Teams] AS [t] ON [e].[TeamId] = [t].[Id]
LEFT JOIN [Companies] AS [c] ON [t].[CompanyId] = [c].[Id]
WHERE [c].[CompanyName] = N'ACME Ltd'
 AND [t].[TeamName] = N'Cloud Solutions'
ORDER BY [e].[Id]

Entity Framework Core 3.1 – Peeking Into Generated SQL

Writing LINQ that produces optimal SQL can be hard, as developers often don’t have visibility into the process. It becomes even more confusing when the application is designed to run against different databases.

We often find ourselves questioning whether a particular query will fall in line with our expectations. And until not so long ago, our tool of choice was the SQL Profiler that ships with SQL Server. It’s plenty powerful, but has one flaw: it pretty much requires a SQL Server installation. This might be a deal breaker for some clients using other DBs, like Postgres or MySQL (which are all supported, by the way).

EF to the rescue

Instead of firing off the profiler and fishing out the batches, we could have Entity Framework itself hand us the result. After all, it needs to build the SQL before sending it off to the database, so all we have to do is ask nicely. Stack Overflow is quite helpful here:

public static class IQueryableExtensions // this is the EF Core 3.1 version
{
    public static string ToSql<TEntity>(this IQueryable<TEntity> query) where TEntity : class
    {
        var enumerator = query.Provider.Execute<IEnumerable<TEntity>>(query.Expression).GetEnumerator();
        var relationalCommandCache = enumerator.Private("_relationalCommandCache");
        var selectExpression = relationalCommandCache.Private<SelectExpression>("_selectExpression");
        var factory = relationalCommandCache.Private<IQuerySqlGeneratorFactory>("_querySqlGeneratorFactory");

        var sqlGenerator = factory.Create();
        var command = sqlGenerator.GetCommand(selectExpression);

        return command.CommandText;
    }

    private static object Private(this object obj, string privateField) => obj?.GetType().GetField(privateField, BindingFlags.Instance | BindingFlags.NonPublic)?.GetValue(obj);
    private static T Private<T>(this object obj, string privateField) => (T)obj?.GetType().GetField(privateField, BindingFlags.Instance | BindingFlags.NonPublic)?.GetValue(obj);
}

The usage is simple

Suppose we’ve got the following inputs: one simple table that we’d like to group by one field and total by another. The database context is also pretty much boilerplate. One thing to note here is the couple of database providers we are going to try the query against.

public class SomeTable
{
    public int Id { get; set; }
    public int Foobar { get; set; }
    public int Quantity { get; set; }
}

class MyDbContext : DbContext
{
    public DbSet<SomeTable> SomeTables { get; set; }
    public static readonly LoggerFactory DbCommandConsoleLoggerFactory
        = new LoggerFactory(new[] {
        new ConsoleLoggerProvider ((category, level) =>
            category == DbLoggerCategory.Database.Command.Name &&
            level == LogLevel.Trace, true)
        });
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        //run with the Npgsql provider to get Postgres SQL
        optionsBuilder.UseNpgsql("Server=localhost;Port=5432;Database=test;User Id=;Password=;")
        //alternatively, use the SQL Server provider to get T-SQL
        //optionsBuilder.UseSqlServer("Server=.\\SQLEXPRESS;Database=test;Trusted_Connection=true")
        ;
        base.OnConfiguring(optionsBuilder);
    }
}

The test bench would look something like so:

class Program
{
    static void Main(string[] args)
    {

        var context = new MyDbContext();
        var someTableData = context.SomeTables
                .GroupBy(x => x.Foobar)
                .Select(x => new { Foobar = x.Key, Quantity = x.Sum(y => y.Quantity) })
                .OrderByDescending(x => x.Quantity)
                .Take(10) // we've built our query as per normal
                .ToSql(); // this is the magic
        Console.Write(someTableData);
        Console.ReadKey();
    }
}

And depending on our choice of provider, the output would show the EF Core generated SQL for SQL Server or Postgres:

        -- MSSQL
        SELECT TOP(@__p_0) [s].[Foobar], SUM([s].[Quantity]) AS [Quantity]
        FROM [SomeTables] AS [s]
        GROUP BY [s].[Foobar]
        ORDER BY SUM([s].[Quantity]) DESC

        -- PG SQL
        SELECT s."Foobar", SUM(s."Quantity")::INT AS "Quantity"
        FROM "SomeTables" AS s
        GROUP BY s."Foobar"
        ORDER BY SUM(s."Quantity")::INT DESC
        LIMIT @__p_0