Entity Framework Core 3 – Custom Functions (Using IMethodCallTranslator)


Every now and then Stack Overflow provides fantastic opportunities to learn something new. One user asked whether SQL Server’s DECRYPTBYPASSPHRASE can be implemented with Entity Framework Core 2.2 so they can fetch encrypted strings in SQL.

We like the challenge

That question led us to quickly implement a PoC for EF Core 2.2. We've got a Model defined like so:

public partial class Model
{
    public int Id { get; set; }
    public byte[] Encrypted { get; set; } // apparently encrypted data is stored in `VARBINARY`, which translates to `byte[]`, so I had to tweak it here
    [NotMapped] // this is still required as EF will not know where to get the data unless we tell it (see below)
    public string Decrypted { get; set; } // the whole goal of this exercise
    public Table2 Table2 { get; set; }
}

And a concrete repository to access the DB:

public IEnumerable<Model> GetAllById(int id)
{
    // you will need to uncomment the following line to work with your key
    //_dbContext.Database.ExecuteSqlCommand("OPEN SYMMETRIC KEY {0} DECRYPTION BY PASSWORD = '{1}';", SymmetricKeyName, SymmetricKeyPassword);
    var filteredSet = Set.Include(x => x.Table2)
        .Where(x => x.Id == id)
        .Where(x => x.Table2.IsSomething)
        .Select(m => new Model
        {
            Id = m.Id,
            //Decrypted = EF.Functions.DecryptByKey(m.Encrypted), // since the key is opened for session scope, just relying on it should do the trick
            Decrypted = EF.Functions.Decrypt("test", m.Encrypted),
            Table2 = m.Table2,
            Encrypted = m.Encrypted
        });
    // you will need to uncomment the following line to work with your key
    //_dbContext.Database.ExecuteSqlCommand("CLOSE SYMMETRIC KEY {0};", SymmetricKeyName);
    return filteredSet;
}

Now, defining EF.Functions.Decrypt is the key here. We basically have to define it twice:

  1. as extension methods so we can use them in LINQ, and
  2. as EF expression tree nodes.
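The extension-method half can be sketched roughly like this (the names and signatures here are our assumptions, chosen to line up with the translator below; the bodies never execute when the call is successfully translated to SQL, so throwing is a common convention):

```csharp
public static class DbFunctionsExtensions
{
    // stubs only: when translation succeeds these bodies never run,
    // the call is replaced by the corresponding SQL function
    public static byte[] Encrypt(this DbFunctions _, string passphrase, string value)
        => throw new InvalidOperationException("This method is meant to be translated to SQL");

    public static string Decrypt(this DbFunctions _, string passphrase, byte[] value)
        => throw new InvalidOperationException("This method is meant to be translated to SQL");

    public static string DecryptByKey(this DbFunctions _, byte[] value)
        => throw new InvalidOperationException("This method is meant to be translated to SQL");
}
```

The `DbFunctions` first parameter is what makes these callable as `EF.Functions.Decrypt(...)` in a LINQ query.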

What EF then does: for each method call it discovers, it checks its internal list of IMethodCallTranslator implementations, and if it finds a match it defers the function to SQL. Otherwise the call has to run in C#. So all the plumbing we're going to do next is basically needed to inject TranslateImpl into that list.

The IMethodCallTranslator itself

public class TranslateImpl : IMethodCallTranslator
{
    private static readonly MethodInfo _encryptMethod
        = typeof(DbFunctionsExtensions).GetMethod(
            nameof(DbFunctionsExtensions.Encrypt),
            new[] { typeof(DbFunctions), typeof(string), typeof(string) });
    private static readonly MethodInfo _decryptMethod
        = typeof(DbFunctionsExtensions).GetMethod(
            nameof(DbFunctionsExtensions.Decrypt),
            new[] { typeof(DbFunctions), typeof(string), typeof(byte[]) });

    private static readonly MethodInfo _decryptByKeyMethod
        = typeof(DbFunctionsExtensions).GetMethod(
            nameof(DbFunctionsExtensions.DecryptByKey),
            new[] { typeof(DbFunctions), typeof(byte[]) });

    public Expression Translate(MethodCallExpression methodCallExpression)
    {
        if (methodCallExpression.Method == _encryptMethod)
        {
            var password = methodCallExpression.Arguments[1];
            var value = methodCallExpression.Arguments[2];
            return new EncryptExpression(password, value);
        }
        if (methodCallExpression.Method == _decryptMethod)
        {
            var password = methodCallExpression.Arguments[1];
            var value = methodCallExpression.Arguments[2];
            return new DecryptExpression(password, value);
        }
        if (methodCallExpression.Method == _decryptByKeyMethod)
        {
            var value = methodCallExpression.Arguments[1];
            return new DecryptByKeyExpression(value);
        }

        return null;
    }
}

I ended up implementing three expression stubs: DecryptByKey, DecryptByPassphrase and EncryptByPassphrase. For example:

public class DecryptByKeyExpression : Expression
{
    private readonly Expression _value;

    public override ExpressionType NodeType => ExpressionType.Extension;
    public override Type Type => typeof(string);
    public override bool CanReduce => false;

    public DecryptByKeyExpression(Expression value)
    {
        _value = value;
    }

    protected override Expression VisitChildren(ExpressionVisitor visitor)
    {
        var visitedValue = visitor.Visit(_value);

        if (ReferenceEquals(_value, visitedValue))
            return this;

        return new DecryptByKeyExpression(visitedValue);
    }

    protected override Expression Accept(ExpressionVisitor visitor)
    {
        if (!(visitor is IQuerySqlGenerator))
            return base.Accept(visitor);

        visitor.Visit(new SqlFragmentExpression("CONVERT(VARCHAR(MAX), DECRYPTBYKEY("));
        visitor.Visit(_value); // the wrapped expression goes between the two fragments
        visitor.Visit(new SqlFragmentExpression("))"));
        return this;
    }
}

All in all, it ends up being a pretty trivial string-building exercise.

But the question remained

Can we port it forward? When we tried applying the same code base to version 3 of the framework, we found the code had changed quite a bit. For one, custom Expressions were no longer an option.

Good news

The good news was that the approach was still valid: custom method call translation is still a thing in EF Core 3. Even better, the functionality has gotten a bit more polished in the new version, so it is now a bit easier to do.


In EF Core 3, we’ve got to deal with ISqlExpressionFactory to get most of the job done. It gets passed around to plugins and has convenient methods to generate required expressions:

public class TranslateImpl : IMethodCallTranslator
{
    private readonly ISqlExpressionFactory _expressionFactory;

    private static readonly MethodInfo _encryptMethod
        = typeof(DbFunctionsExtensions).GetMethod(
            nameof(DbFunctionsExtensions.Encrypt),
            new[] { typeof(DbFunctions), typeof(string), typeof(string) });
    private static readonly MethodInfo _decryptMethod
        = typeof(DbFunctionsExtensions).GetMethod(
            nameof(DbFunctionsExtensions.Decrypt),
            new[] { typeof(DbFunctions), typeof(string), typeof(byte[]) });

    private static readonly MethodInfo _decryptByKeyMethod
        = typeof(DbFunctionsExtensions).GetMethod(
            nameof(DbFunctionsExtensions.DecryptByKey),
            new[] { typeof(DbFunctions), typeof(byte[]) });

    public TranslateImpl(ISqlExpressionFactory expressionFactory)
    {
        _expressionFactory = expressionFactory;
    }

    public SqlExpression Translate(SqlExpression instance, MethodInfo method, IReadOnlyList<SqlExpression> arguments)
    {
        // cut the first parameter from the extension method signature: it's the DbFunctions instance
        var args = arguments.Skip(1).ToList();
        if (method == _encryptMethod)
            return _expressionFactory.Function(instance, "EncryptByPassPhrase", args, typeof(byte[]));
        if (method == _decryptMethod)
            return _expressionFactory.Function(instance, "DecryptByPassPhrase", args, typeof(byte[]));
        if (method == _decryptByKeyMethod)
            return _expressionFactory.Function(instance, "DecryptByKey", args, typeof(byte[]));

        return null;
    }
}
This is way easier than the previous version!
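To get EF Core 3 to actually consult TranslateImpl, one way is to expose it through an IMethodCallTranslatorPlugin. A minimal sketch (how the plugin gets registered with the context's internal service provider is provider-specific and omitted here):

```csharp
// a minimal plugin exposing our translator to the query pipeline;
// assumes the TranslateImpl class shown above
public class TranslatePlugin : IMethodCallTranslatorPlugin
{
    public TranslatePlugin(ISqlExpressionFactory expressionFactory)
        => Translators = new[] { new TranslateImpl(expressionFactory) };

    public IEnumerable<IMethodCallTranslator> Translators { get; }
}
```

The plugin then needs to be added to EF's internal DI container, typically via a custom IDbContextOptionsExtension; the exact wiring varies by provider.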

Talk is cheap. Show me the code

Working code can be found on my GitHub, check out the difference between implementations for both frameworks.

EF Core 3: Getting model metadata from dynamically loaded assembly with IL Emit

Yet another Stack Overflow question has sparked a heated discussion and got us thinking whether we can do better.

In a nutshell, the question was about finding a way to query EF Core model metadata without directly referencing the assembly that defines it. Think MsBuild Task that needs to check if your model is following your company standards. Or a test of some sort.

First stab at it

We were able to help the OP by quickly whipping up the following loader code:

var assembly = Assembly.LoadFrom(@"C:\OnlineShoppingStore\bin\Debug\netcoreapp2.2\OnlineShoppingStore.dll");
var contextType = assembly.GetTypes().First(d => d.Name == "OnlineStoreDbContext");
var ctx = Activator.CreateInstance(contextType) as DbContext; // instantiate your context. this will effectively build your model, so you must have all required EF references in your project
var p = ctx.Model.FindEntityType(assembly.GetTypes().First(d => d.Name == "Product")); // get the type from loaded assembly
//var p = ctx.Model.FindEntityType("OnlineStoreDbContext.Product"); // querying model by type name also works, but you'd need to correctly qualify your type names
var pk = p.FindPrimaryKey().Properties.First().Name; // your PK property name as built by EF model

The answer ended up being accepted, but the OP had a bit of an issue with instantiating the Context:

System.InvalidOperationException: 'No database provider has been configured for this DbContext. 
A provider can be configured by overriding the DbContext.OnConfiguring method or by using AddDbContext on the application service provider. 
If AddDbContext is used, then also ensure that your DbContext type accepts a DbContextOptions object in its constructor and passes it to the base constructor for DbContext.

This is kind of expected: when EF creates the context it invokes the OnConfiguring override and sets up the DB provider, connection strings and so on. All of that is necessary for the real application to run, but for the OP it meant having to drag all the DB providers into the test harness. Not ideal.

The idea

After a bit of back and forth we've got an idea: what if we subclass the context yet again and override OnConfiguring with a predefined provider (say, InMemory)?
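In plain C#, the type we want to emit is roughly equivalent to this (the context name matches the example below; the in-memory database name is arbitrary):

```csharp
// hand-written equivalent of the type we're about to IL-emit;
// OnlineStoreDbContext is the context under test from the loaded assembly
public class InheritedDbContext : OnlineStoreDbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        // pin a dummy provider so model building can run
        // without dragging the real DB providers along
        optionsBuilder.UseInMemoryDatabase(Guid.NewGuid().ToString());
    }
}
```

The catch, of course, is that we can't write this at compile time because we don't reference the assembly that defines OnlineStoreDbContext, hence the IL Emit below.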

IL Emit all things

We don't get to use IL Emit often. It's meant for pretty specific use cases, and we think this is one of them. The key to getting it right in our case was finding the correct overload of UseInMemoryDatabase. There's a chance, however, that you might need to tweak this to suit your needs. It is pretty trivial once you know what you're looking for.

public static MethodBuilder OverrideOnConfiguring(this TypeBuilder tb)
{
    MethodBuilder onConfiguringMethod = tb.DefineMethod("OnConfiguring",
        MethodAttributes.Family
        | MethodAttributes.HideBySig
        | MethodAttributes.NewSlot
        | MethodAttributes.Virtual,
        typeof(void),
        new[] { typeof(DbContextOptionsBuilder) });

    // the easiest method to pick will be .UseInMemoryDatabase(this DbContextOptionsBuilder optionsBuilder, string databaseName, Action<InMemoryDbContextOptionsBuilder> inMemoryOptionsAction = null)
    // but since constructing a generic delegate seems a bit too much effort we'd rather filter everything else out
    var useInMemoryDatabaseMethodSignature = typeof(InMemoryDbContextOptionsExtensions)
        .GetMethods()
        .Where(m => m.Name == "UseInMemoryDatabase")
        .Where(m => m.GetParameters().Length == 3)
        .Where(m => m.GetParameters().Select(p => p.ParameterType).Contains(typeof(DbContextOptionsBuilder)))
        .Where(m => m.GetParameters().Select(p => p.ParameterType).Contains(typeof(string)))
        .Single();

    // emits the equivalent of optionsBuilder.UseInMemoryDatabase("<random name>");
    var gen = onConfiguringMethod.GetILGenerator();
    gen.Emit(OpCodes.Ldarg_1);                          // load the optionsBuilder argument
    gen.Emit(OpCodes.Ldstr, Guid.NewGuid().ToString()); // database name
    gen.Emit(OpCodes.Ldnull);                           // inMemoryOptionsAction = null
    gen.Emit(OpCodes.Call, useInMemoryDatabaseMethodSignature);
    gen.Emit(OpCodes.Pop);                              // discard the returned builder
    gen.Emit(OpCodes.Ret);

    return onConfiguringMethod;
}

With the above out of the way, we can now build our dynamic type and plug it into our test harness!

class Program
{
    static void Main(string[] args)
    {
        // load assembly under test
        var assembly = Assembly.LoadFrom(@"..\ef-metadata-query\OnlineShoppingStore\bin\Debug\netcoreapp3.1\OnlineShoppingStore.dll");
        var contextType = assembly.GetTypes().First(d => d.Name == "OnlineStoreDbContext");

        // create yet another assembly that will hold our dynamically generated type
        var typeBuilder = AssemblyBuilder
                            .DefineDynamicAssembly(new AssemblyName(Guid.NewGuid().ToString()), AssemblyBuilderAccess.RunAndCollect)
                            .DefineDynamicModule(Guid.NewGuid() + ".dll")
                            .DefineType("InheritedDbContext", TypeAttributes.Public, contextType); // make the new type inherit from the DbContext under test!

        // this is the key here! now our dummy implementation will kick in!
        var onConfiguringMethod = typeBuilder.OverrideOnConfiguring();
        typeBuilder.DefineMethodOverride(onConfiguringMethod, typeof(DbContext).GetMethod("OnConfiguring", BindingFlags.Instance | BindingFlags.NonPublic));
        var inheritedDbContext = typeBuilder.CreateType(); // enough config, let's get the type and roll with it

        // instantiate inheritedDbContext with the default OnConfiguring implementation
        var context = Activator.CreateInstance(inheritedDbContext) as DbContext; // this will effectively build the model, so you must have all required EF references in your project

        // query the as-built model
        var p = context?.Model.FindEntityType(assembly.GetTypes().First(d => d.Name == "Product")); // get the type from the loaded assembly
        //var p = context?.Model.FindEntityType("OnlineStoreDbContext.Product"); // querying the model by type name also works, but you'd need to correctly qualify your type names
        var pk = p.FindPrimaryKey().Properties.First().Name; // the PK property name as built by the EF model
    }
}

This is runnable

Source code is available on GitHub in case you want to check it out and play with it a bit.

Making Swagger get the authorization token from the URL query string

Swagger is extremely useful when developing and debugging Web APIs. Some dev environments, however, have a bit of security added on top, which can become a bit too painful to work around.

Enter API key

It doesn't need to be tedious! We'll be looking at overriding Swagger UI's index page so we can plug a custom handler into the onComplete callback. The solution is extremely simple:

  1. Grab latest index.html from Swashbuckle’s source repo (ideally, get the matching version)
  2. Tweak configObject to add an OnComplete callback handler so it will call preauthorizeApiKey when the UI is ready
  3. Override IndexStream in the UseSwaggerUI extension method to serve the custom HTML

I ended up having the following setup (some bits are omitted for brevity):


<!-- your standard HTML here, nothing special -->
<script>
    // some boilerplate initialisation
    // Begin Swagger UI call region
    configObject.onComplete = () => {

        // get the authorization portion of the query string
        var urlParams = new URLSearchParams(window.location.search);
        if (urlParams.has('authorization')) {
            var apikey = urlParams.get('authorization');

            // this is the important bit, see documentation
            ui.preauthorizeApiKey('api key', apikey); // key name must match the one you defined in the AddSecurityDefinition method in Startup.cs
        }
    };
    const ui = SwaggerUIBundle(configObject);
    window.ui = ui;
</script>


    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSwaggerGen(c => {
            c.SwaggerDoc("v1", new Info { Title = "Your API title", Version = "v1" });
            c.AddSecurityDefinition("api key", new ApiKeyScheme() // key name must match the one you supply to the preauthorizeApiKey call in JS
            {
                Description = "Authorization query string expects API key",
                In = "query",
                Name = "authorization",
                Type = "apiKey"
            });

            var requirements = new Dictionary<string, IEnumerable<string>> {
                { "api key", new List<string>().AsEnumerable() }
            };
            c.AddSecurityRequirement(requirements);
        });
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseSwaggerUI(c =>
        {
            c.IndexStream = () => File.OpenRead("wwwroot/swashbuckle.html"); // this is the important bit. see documentation https://github.com/domaindrivendev/Swashbuckle.AspNetCore/blob/master/README.md
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1"); // very standard Swashbuckle init
        });
    }

After having finished all that, calling the standard Swagger URL with ?authorization=1234567890 appended should automatically authorize the page.

Integration testing aide for MVC core routing


Sometimes unit tests just don't cut it. This is where integration tests come in. This, however, brings a whole new set of issues around finding the best way to isolate the aspects under test and mock everything else away.

Problem statement

Suppose we've got an API and a test that needs to make an HTTP call to our API endpoint, like so:

  public class TestController : ControllerBase {

    public IActionResult OkTest() {
      return Ok(true);
    }
  }

  public class TestControllerTests {

    private readonly HttpClient _client;

    public TestControllerTests() {
      _client = TestSetup.GetTestClient();
    }

    public async Task OkTest() {
      var path = GetPathHere(nameof(OkTest)); // should return "/api/test/oktest"
      var response = await _client.GetAsync(path);
      // ... assert on the response here
    }
  }

Knowing that ASP.NET Core now ships as such a lightweight package and exposes so many extensibility points, one approach we found effective was to build up the whole Host and query its properties:

private string GetPathHere(string actionName)
{
    var host = Program.CreateWebHostBuilder(new string[] { }).Build();
    IActionDescriptorCollectionProvider provider = (host.Services as ServiceProvider).GetService<IActionDescriptorCollectionProvider>();
    return provider.ActionDescriptors.Items.First(i => (i as ControllerActionDescriptor)?.ActionName == actionName).AttributeRouteInfo.Template;
}

public void OkTestShouldBeFine()
{
    var path = GetPathHere(nameof(ValuesController.OkTest)); // "api/test/oktest"
}


This is a pretty basic case we've been dealing with, and the code makes quite a few assumptions. The approach, however, seems to hold up pretty well and will surely be our starting point next time we test MVC actions!

Moq-ing around existing instance

We love unit testing! Seriously, it makes sense if you consider how many times a simple test has saved us from having to revisit that long-forgotten project we've already moved on from. Not fun and not good for business.

Moq: our tool of choice

To be able to only test the code we want, we need to isolate it. Of course, there are heaps of libraries for that already. Moq is just one of them. It allows us to create objects based on given interfaces and set up the expected behaviour in a way that abstracts away all the code we aren't currently testing. An extremely powerful tool.

Sometimes you just need a bit more of that

Suppose we're testing an object that depends on internal state that's tricky to abstract away. We'd however like to use Moq to replace one operation without changing the others:

public class A
{
    public string Test { get; set; }
    public virtual string ReturnTest() => Test;
}

// and some code below:
void Main()
{
    var config = new A() {
        Test = "TEST"
    };

    var mockedConfig = new Mock<A>(); // first we run a stock standard mock
    mockedConfig.CallBase = true; // we enable CallBase just to point out that it makes no difference
    var o = mockedConfig.Object;
    Console.WriteLine(o.ReturnTest()); // this will be null because Test has not been initialised by the constructor
    mockedConfig.Setup(c => c.ReturnTest()).Returns("mocked"); // of course if you set up your mocks you will get the value
    Console.WriteLine(o.ReturnTest()); // this will be "mocked" now, no surprises
}

The code above illustrates the problem quite nicely. You’ll know if this is your case when you see it.

General sentiment towards these problems

"It can't be done, use something else", they say. Some people on Stack Overflow suggest ditching Moq completely and going for its underlying technology, Castle DynamicProxy. And it is a valid idea: create a proxy class around yours and intercept calls to the method under test. Easy!

Kinda easy

One advantage of Moq (which is, by the way, built on top of Castle DynamicProxy) is that it not only creates mock objects but also tracks invocations and allows us to verify those later. Of course, we could opt to write the required bits ourselves, but why reinvent the wheel and introduce a heap of code that no one will maintain?

How about we mix and match?

We know that Moq internally leverages Castle DynamicProxy, which actually allows us to generate proxies for instances (they call it "class proxy with target"). Therefore the question is: how do we get Moq to make one for us? There seems to be no such option out of the box, and simply injecting an override didn't go well, as there's not much inversion of control inside the library and most types and properties are marked internal, making inheritance virtually impossible.

Castle DynamicProxy is, however, much more user-friendly and has quite a few methods exposed and available for overriding. So let us define a ProxyGenerator subclass that takes the method Moq calls and adds the required functionality to it (just compare the CreateClassProxyWithTarget and CreateClassProxy implementations: they are almost identical!)


class MyProxyGenerator : ProxyGenerator
{
    object _target;

    public MyProxyGenerator(object target) {
        _target = target; // this is the missing piece, we'll have to pass it on to the Castle proxy
    }

    // this method is 90% taken from the library source. We only had to tweak two lines (see below)
    public override object CreateClassProxy(Type classToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options, object[] constructorArguments, params IInterceptor[] interceptors)
    {
        if (classToProxy == null)
            throw new ArgumentNullException("classToProxy");
        if (options == null)
            throw new ArgumentNullException("options");
        if (!classToProxy.GetTypeInfo().IsClass)
            throw new ArgumentException("'classToProxy' must be a class", "classToProxy");
        CheckNotGenericTypeDefinition(classToProxy, "classToProxy");
        CheckNotGenericTypeDefinitions(additionalInterfacesToProxy, "additionalInterfacesToProxy");
        Type proxyType = CreateClassProxyTypeWithTarget(classToProxy, additionalInterfacesToProxy, options); // these really are the two lines that matter
        List<object> list = BuildArgumentListForClassProxyWithTarget(_target, options, interceptors);        // these really are the two lines that matter
        if (constructorArguments != null && constructorArguments.Length != 0)
            list.AddRange(constructorArguments);
        return CreateClassProxyInstance(proxyType, list, classToProxy, constructorArguments);
    }
}

If all of the above was relatively straightforward, actually feeding it into Moq is going to be somewhat of a hack. As mentioned, most of the structures are marked internal, so we'll have to use reflection to get through:


public class MyMock<T> : Mock<T>, IDisposable where T : class
{
    private FieldInfo _generatorFieldInfo;
    private object _castleProxyFactoryInstance;
    private object _originalProxyFactory;

    public MyMock(T targetInstance) {
        PopulateFactoryReferences();
        // this is where we do the trick!
        _generatorFieldInfo.SetValue(_castleProxyFactoryInstance, new MyProxyGenerator(targetInstance));
    }

    void PopulateFactoryReferences()
    {
        // Moq tries ridiculously hard to protect its internal structures: pretty much every class that could be of interest to us is marked internal
        // All the code below serves one simple purpose: to swap the `ProxyGenerator` field on the `ProxyFactory.Instance` singleton
        // all types are internal, so reflection it is
        // We invite you to make this a bit cleaner by obtaining the `_generatorFieldInfo` value once and caching it for later
        var moqAssembly = Assembly.Load(nameof(Moq));
        var proxyFactoryType = moqAssembly.GetType("Moq.ProxyFactory");
        var castleProxyFactoryType = moqAssembly.GetType("Moq.CastleProxyFactory");
        var proxyFactoryInstanceProperty = proxyFactoryType.GetProperty("Instance");
        _generatorFieldInfo = castleProxyFactoryType.GetField("generator", BindingFlags.NonPublic | BindingFlags.Instance);
        _castleProxyFactoryInstance = proxyFactoryInstanceProperty.GetValue(null);
        _originalProxyFactory = _generatorFieldInfo.GetValue(_castleProxyFactoryInstance); // save the default value so we can restore it later
    }

    public void Dispose()
    {
        // you will notice we opted to implement IDisposable here.
        // The goal is to restore the original value on Moq's internal static property in case you want to mix this class with the stock standard implementation
        // there are probably other ways to ensure the reference is restored reliably, but we'll leave that as another challenge for you to tackle
        _generatorFieldInfo.SetValue(_castleProxyFactoryInstance, _originalProxyFactory);
    }
}

Then, given we’ve got the above working, the actual solution would look like so:

    var config = new A()
    {
        Test = "TEST"
    };
    using (var superMock = new MyMock<A>(config)) // now we can pass instances!
    {
        superMock.CallBase = true; // you still need this, because as far as Moq is concerned it passes control over to Castle DynamicProxy
        var o1 = superMock.Object;
        Console.WriteLine(o1.ReturnTest()); // but this should return TEST
    }

.NET Membership audit

In the previous article we went through the setup and prepared a hash list for us to work on. That's perfectly valid for running through a smaller list of words and hashes (anything smaller than the Rockyou dictionary is fine). However, for serious jobs…

Hashcat alone is not going to cut it

By serious jobs we mean anything that takes months of GPU time across at least a couple of nodes. Yes, you can run Hashcat on one node and save the session between node reboots. Just make sure to back the session up every now and then, as it's very easy to accidentally overwrite! Alternatively, you can run multiple nodes and split jobs by calculating the [cci]--skip[/cci] factor yourself. But soon you'd reach a point where managing it becomes too mundane. It doesn't have to be, however.
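Splitting a job by hand looks roughly like this (the hash mode, file names and keyspace figure are illustrative; use whatever `--keyspace` reports for your attack):

```shell
# ask hashcat for the total keyspace of the attack (no cracking happens here)
hashcat --keyspace -a 0 dictionary.txt
# suppose it reports 14344384; give each of two nodes half:

# node 1 works the first half of the keyspace
hashcat -m 140 -a 0 --skip 0 --limit 7172192 hashes.txt dictionary.txt

# node 2 starts where node 1's slice ends
hashcat -m 140 -a 0 --skip 7172192 hashes.txt dictionary.txt
```

Multiply that by a dozen nodes and a queue of jobs, and the appeal of a coordinator becomes obvious.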

Ladies and gentlemen, I give you Hashtopolis

Everything has been taken care of for us: managing lists, dictionaries, jobs and workers. All packaged up in a nice Bootstrap UI that even offers cracked password reports out of the box. And if you don’t want to bother setting up LAMP stack yourself…

There’s a Docker image for that

There are a few to choose from; we've been quite happy with this one:
[cc lang="yaml"]version: '3.6'
services:
  mysql:
    image: sizmek/mariadb:10.1.20
    volumes:
      - db_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: "whatever you've set up"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - 8080:80
    environment:
      PMA_HOST: "mysql"
  hashtopolis:
    image: kpeiruza/hashtopolis
    ports:
      - 80:80
    environment:
      MYSQL_HOST: "mysql"
      MYSQL_ROOT_PASSWORD: "whatever you've set up"
      H8_USER: "admin"
      H8_PASS: "your password of choice"
    volumes:
      - ./php.ini:/etc/php/7.0/php.ini:ro # a bit of a gotcha here: big dictionaries and hashlists require bigger PHP file upload limits. If you're fine with stock standard you won't need this
      - hashtopolis_import:/var/www/html/import
      - hashtopolis_upload:/var/www/html/files
volumes:
  db_data:
  hashtopolis_import:
  hashtopolis_upload:[/cc]

Setting up agents is a bit more pain though

With the current level of GPU support in Docker we can't really run these for serious work. Luckily for us, Hashtopolis comes with a choice of agents that should be suitable for most platforms. Given we reside in the Windows world, we went with the C# agent. Setup is pretty simple:

  1. Generate secret token on server
  2. Fire up agent, let it know server URL
  3. Feed the secret token to agent when requested and that’s it

.NET Membership audit preparation steps

Not so long ago one of our clients asked us to audit the password security of their custom .NET CMS users. It's no secret that in 99% of cases when we review .NET user accounts and security we see the default SqlMembershipProvider implementation. It has the potential to be extremely secure. The opposite, however, is also true: you can make it a disaster by allowing symmetric encryption, for example. Our case was pretty much default settings across the board, so our passwords were SHA1-hashed with random salts. Good enough, really.
Following up on Troy Hunt’s post we have got ourselves a fresh copy of Hashcat and a few dictionaries to play with.

This is where it all went wrong

To begin with, although EpiServer runs on the .NET platform, it seems to use a slightly relaxed version of the membership provider: their salts seem to be shorter than the default. Hashcat, however, makes a great fuss over this difference:
[cc lang="bash"]Z:\hashcat>hashcat64.exe -m 141 z:\hashes.txt dictionary.txt
hashcat (v4.2.1) starting...

Hashfile 'z:\hashes.txt' on line 1 ($epise.../JapfY=*j+gvb/c0mt28CHJssmwFvQ==): Token length exception
No hashes loaded.

Started: Thu Sep 01 07:48:39 2018
Stopped: Thu Sep 01 07:48:39 2018[/cc]
So apparently Troy's Episerver-specific routine does not work for other ASP.NET membership hashes. This seems to be a show stopper.

This is still .net membership though

We know exactly where SqlMembershipProvider lives, so we can decompile it and have a peek at how exactly it works. The relevant function is quoted below with some comments:
[cc lang="c#"]namespace System.Web.Security
{
    public class SqlMembershipProvider : MembershipProvider
    {
        private string EncodePassword(string pass, int passwordFormat, string salt)
        {
            if (passwordFormat == 0) return pass; // plain text passwords. keep away from this
            byte[] bytes = Encoding.Unicode.GetBytes(pass); // this is very important, we'll get back to it later
            byte[] numArray1 = Convert.FromBase64String(salt);
            byte[] inArray;
            if (passwordFormat == 1) // this is where the interesting stuff happens
            {
                HashAlgorithm hashAlgorithm = this.GetHashAlgorithm(); // by default we'd get SHA1 here
                if (hashAlgorithm is KeyedHashAlgorithm)
                {
                    // removed this block for brevity as it makes no sense in our case: SHA1 does not require a key
                }
                /* and so we end up with this little block of code that does all the magic.
                it's pretty easy to follow: concatenate salt+password (order matters!) and run it through SHA1 */
                byte[] buffer = new byte[numArray1.Length + bytes.Length];
                Buffer.BlockCopy((Array)numArray1, 0, (Array)buffer, 0, numArray1.Length);
                Buffer.BlockCopy((Array)bytes, 0, (Array)buffer, numArray1.Length, bytes.Length);
                inArray = hashAlgorithm.ComputeHash(buffer);
            }
            // else: symmetric encryption. again, keep away
            return Convert.ToBase64String(inArray);
        }
    }
}[/cc]
The hashing implementation is no secret either: we can look through Hashcat's extensive list of supported hashing modes and see if anything pops out.
We're looking for something that does SHA1 over concatenated salt and password. At first glance we've got mode 120 | sha1($salt.$pass), but there's a catch.
As noted in the code comment, Encoding.Unicode actually represents UTF-16, not the UTF-8 most of us are used to.
But that's fine, because Hashcat has just the mode for us: 140 | sha1($salt.utf16le($pass)). Fantastic!
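Before committing GPU time it's worth sanity-checking that conclusion in plain .NET. A quick sketch (the salt and password below are made up; compare the output against a row from your own Memberships table):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class HashCheck
{
    // replicates SqlMembershipProvider.EncodePassword for passwordFormat == 1 (SHA1)
    public static string EncodePassword(string pass, string salt)
    {
        byte[] passBytes = Encoding.Unicode.GetBytes(pass); // UTF-16LE, as per the decompiled source
        byte[] saltBytes = Convert.FromBase64String(salt);

        byte[] buffer = new byte[saltBytes.Length + passBytes.Length];
        Buffer.BlockCopy(saltBytes, 0, buffer, 0, saltBytes.Length);                  // salt first
        Buffer.BlockCopy(passBytes, 0, buffer, saltBytes.Length, passBytes.Length);   // then password

        using (var sha1 = SHA1.Create())
            return Convert.ToBase64String(sha1.ComputeHash(buffer));
    }

    static void Main()
    {
        // "c2FsdHNhbHQ=" is base64 for "saltsalt"; swap in real values from your DB
        Console.WriteLine(EncodePassword("P@ssw0rd", "c2FsdHNhbHQ="));
    }
}
```

If the printed value matches the stored Password column for a known account, mode 140 is the right pick.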

A bit of housekeeping

The easiest way to prepare a hashlist in our case was to run a simple LINQPad script against the membership database and save the results to a file. We just need to keep in mind that we're writing the salts out as hex-encoded strings, so we'll need to let Hashcat know (that's what its --hex-salt switch is for).
[cc lang="csharp"]static class StringExtensions
{
	public static string ToHex(this string input)
	{
		byte[] bytes = Convert.FromBase64String(input);
		var result = BitConverter.ToString(bytes).Replace("-", "").ToLower();
		return result;
	}
}

/// to make this work you'd need to set up a database connection in your LINQPad first
/// we assume Memberships and Users tables exist in the database we're connected to. Tweak to your environment
void Main()
{
	using (var f = File.CreateText(@"z:\hashes.txt"))
	{
		/// Memberships – main ASP.NET Membership table. can be called differently, so watch out for your use case
		/// Users – table mapping memberships to user names. Again, can be called differently
		foreach (var user in Memberships.Join(Users, m => m.UserId, u => u.UserId, (mm, uu) => new { uu.UserName, mm.Email, mm.Password, mm.PasswordSalt, uu.UserId }).OrderBy(x => x.UserName))
		{
			// choose either of these lines, to add users or not. We found *hash:salt* format to be less painful to work with in Hashcat
			f.WriteLine($"{user.Password.ToHex()}:{user.PasswordSalt.ToHex()}");
			//f.WriteLine($"{user.UserName}:{user.Password.ToHex()}:{user.PasswordSalt.ToHex()}");
		}
	}
}[/cc]
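The hex conversion itself is trivial to sanity-check outside LINQPad. Here's a minimal Python twin of that ToHex helper, building the hash:salt line the way mode 140 with --hex-salt expects it (sample values are made up):

```python
import base64

def to_hex(b64: str) -> str:
    """Python twin of the C# ToHex extension: base64 string -> lowercase hex."""
    return base64.b64decode(b64).hex()

# a hash:salt line for Hashcat mode 140 with --hex-salt (sample values only)
hash_b64 = base64.b64encode(bytes(20)).decode()    # stand-in for a stored base64 SHA1 hash
salt_b64 = base64.b64encode(b"\x01\x02").decode()  # stand-in for a stored base64 salt
print(f"{to_hex(hash_b64)}:{to_hex(salt_b64)}")    # 40 hex chars, colon, hex-encoded salt
```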

Hacking custom Start Action in Visual Studio 2017

As part of our technical debt collection commitment we deal with weird situations where simply running a solution in Visual Studio might not be enough. A recent example of that was a web application split into two projects:

  • ASP.NET MVC – provided a shell for SPA as well as all backend integration – the entry point for application
  • React with Webpack – SPA mostly responsible for the UI and user interactions. The project was declared as a Class Library with no code to run, while developers managed all the javascript and Webpack out of band

Both projects are reasonably complex and I can see why the developers tried to split them. The best of intentions, however, made running this tandem a bit of an awkward exercise. We run  npm run build  and expect it to drop files into the correct places for the MVC project, which just relies on the compiled javascript being there. Using both technology stacks in one solution is nothing new. We'd have opted for this template back at project inception; by now, however, it was too late.

Just make VS run both projects

Indeed, Microsoft has been generous enough to let us designate multiple projects as startup projects:

This however does not help our particular case, as the React project was created as a Class Library and there's no code to run. We need a way to kick off that npm run build command every time VS 'builds' the React project… How do we do it?

Okay, let’s use custom Start Action

Bingo! We can absolutely do this and bootstrap ourselves a shell which would then run our npm command. Technically we can run npm directly, but I could never quite remember where to look for the executable.

There’s a slight issue with this approach though: it is not portable between developers’ machines. There are at least two reasons for that:

  • Input boxes on this dialog form do not support environment variables and/or relative paths.
  • Changes made in this window go to the .csproj.user file, which by default is .gitignore'd (here's a good explanation why it should be)

So this does not work:

There might be a way however

  1. First and foremost, unload the solution (not just the project). Project .user settings are loaded on solution start, so we want it to be clean.
  2. Open up the .user file in your favourite text editor, mine looks like this:
    [cc lang="xml"]<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|AnyCPU'">
  <StartAction>Program</StartAction>
  <StartProgram>C:\Windows\System32\cmd.exe</StartProgram>
  <StartArguments>/c start /min npm run build</StartArguments>
</PropertyGroup>[/cc]
    And change the path to whatever your requirements are:
    [cc lang="xml"]<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|AnyCPU'">
  <StartAction>Program</StartAction>
  <StartProgram>$(WINDIR)\System32\cmd.exe</StartProgram>
  <StartArguments>/c start /min npm run build</StartArguments>
</PropertyGroup>[/cc]


We could potentially stop here, but the file is still user-specific and is not going into source control.

Merging all the way up to project file

As it turns out, we can just cut the elements we're after (StartAction, StartProgram and StartArguments) and paste them into the respective .csproj section (look out for the same Condition on the PropertyGroup, that should be it):
[cc lang="xml"]<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|AnyCPU'">
  <StartAction>Program</StartAction>
  <StartProgram>$(WINDIR)\System32\cmd.exe</StartProgram>
  <StartArguments>/c start /min npm run build</StartArguments>
</PropertyGroup>[/cc]

Open the solution again and check if everything works as intended.

Landing pages made easy

Do more with less

Every now and then customers would like us to build them a landing page with as little development and running overhead as possible.
First thing that pops to mind when presented with such requirements is to go for static sites. Indeed, there’s nothing easier than chucking a bunch of html into a directory and having it served.
Problems however start to arise when we consider maintainability and reusability of the solution as well as keeping things DRY. On top of that customers would almost certainly want a way to do periodic updates and changes without having to pay IT contractors’ fees (some might argue this is job security, but we believe it’s a sneaky attempt at customer lock-in that will probably backfire).

So how do we keep everyone happy?

Meet Jekyll, the static site generator. To be fair, Jekyll is not the only attempt at translating templates into web pages. We however prefer it over the competition because of its adoption by GitHub – this means Jekyll sites get free hosting on GitHub Pages. And that's really important when we're talking low overheads.

Up and running

If you don't normally develop in Ruby, you'll probably need to install a few things to start editing with Jekyll. Luckily, with Docker we don't need to do much but pull an official image and expose a few ports:
[cc lang="yaml"]# this is an example docker-compose.yml file, feel free to alter as needed
version: "2"
services:
  site:
    image: jekyll/jekyll:3.8
    command: jekyll serve --force_polling # this tells the container to spin up a web server and instructs Jekyll to constantly scan the working directory for changes
    volumes:
      - ${PWD}/data:/srv/jekyll # this example assumes all jekyll files would go into the /data subdirectory
      - ${PWD}/bundle:/usr/local/bundle # this mapping is optional, however it helps speed up builds by caching all ruby gems outside of the container
    ports:
      - "4000:4000" # this is where the webserver is exposed[/cc]
Save this docker-compose.yml file somewhere and open up a command prompt. First we need to init the site structure, so run [cc lang="bash"]docker-compose run site jekyll new .[/cc]
Finally, start the server with [cc lang="bash"]docker-compose up[/cc], open up http://localhost:4000 and start editing.

Jekyll basics

Repeating official documentation is probably not the best idea, so I’ll just mention a few points that would help to get started quickly:

  1. Jekyll loves Markdown. Either learn it (it's really simple and rewarding, I promise) or use an online editor
  2. Jekyll allows HTML, so don't feel restricted by the previous point
  3. You’ve got a choice of posts, pages and collections with Jekyll. Depending on your goals one might be better suited than the others.
  4. Utilise data files and collections as needed. They are really helpful when building rich content.
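To give a taste of that last point, a data file is nothing more than a YAML (or JSON/CSV) file dropped under _data; the file name and fields below are made up for illustration:

```yaml
# _data/team.yml - a hypothetical data file
- name: Jane
  role: developer
- name: John
  role: analyst
```

Any page or layout can then loop over it with Liquid via site.data.team:

```html
<ul>
{% for member in site.data.team %}
  <li>{{ member.name }}: {{ member.role }}</li>
{% endfor %}
</ul>
```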

How about a test drive?

Absolutely, check out a git demo we did for one of the training sessions: https://wiseowls.github.io/jekyll-git-demo/

Dev/BA love/hate relationship

Love/hate, really?

While love and hate might be overstated, some developers in the community keep wondering if they'd be better off without BAs. No matter how much I'd like to say "yes" outright, to make it somewhat fair I did some research on both the good and the bad in the Dev/BA relationship.
A quick google reveals some of the most obvious reasons why Developers need Business Analysts (and I guess should love them for?)

  1. BAs figure out requirements by talking to end users/business so devs don’t have to
  2. BAs are able to turn vague business rules and processes into code-able requirements
  3. BAs leverage their knowledge in business and SDLC to break the work into manageable chunks
  4. BAs have enough domain knowledge (and stakeholder trust) to be able to make some calls and unblock developers on the spot
  5. BAs are able to negotiate features/scope on behalf of developers
  6. BAs document scope of work and maintain it
  7. BAs document everything so that it can be used to validate project outcomes

To be fair, the only meaningful "hate" result I got was

  1. BAs are difficult

Let's address the elephant in the room first

BAs are as difficult as anyone else. Hopefully we’re all professionals here so that should just be a matter of working with them rather than against them. We all have the same goal in the end. Read on.

And now comes the big question

How many of the "love" points are actually valid?

  1. Sounds like lazy Dev reasoning

    If you’re passionate about what you do – working with users/business is an excellent opportunity to learn, grow and see how results of your work help real people solve real problems. If you’re not passionate about what you do – maybe you should do something else instead?

  2. Formalising requirements is a biggie indeed

    Not just because it takes time to sit down and document the process. Often the business has no idea themselves, so it takes a collective effort to get it across the line. And I doubt anyone can nail everything down perfectly right from the get-go. Also, truth is, everyone has some sort of domain knowledge. So given the Dev is up to scratch, it seems logical to shorten the feedback loop. Cutting out the middle man ensures everyone is on the same page and there's no unnecessary back and forth. This can be dangerous though, because it's very easy to get carried away and let this phase run forever. This is where an impartial 3rd party would be required. Another reason to potentially involve a BA here would be domain knowledge, should the Dev be lacking in that department.

  3. BAs presumably excel at knowing domain

    But Devs are likely far better aware of all the technical intricacies of the solution being built, so naturally Devs would have a much better idea of how to slice the work. On a few occasions I have personally found myself in these situations. You can't just build a rich user-facing UX feature without thinking about all the plumbing and backend work that needs to happen first. Some BAs can argue they know the guts of the system just as well. Perfect – with modern software development values of cross-functional teams there's no reason not to get coding then.

  4. This is a valid point…kind of

    That depends on (inter)personal relationships, however. Personally I find that having a stakeholder proxy in the team tends to speed things up a bit. Unless the stakeholder is readily available anyway.

  5. Being an impartial 3rd party BAs are indeed able to help

    From my experience Devs tend to overengineer, hiding behind things like “best practice” and “future-proofing”. Common sense seems to be a very elusive concept when it comes to crafting code. So having someone not personally interested in technology for the sake of technology is probably going to benefit the project in the long run.

  6. Developers do get carried away

    I’ve been there, as well as probably everyone else. It gets very tempting to slip another shiny trendy library into the mix when no one is watching. Generalising and abstracting everything seems to be one of the things that set good developers apart from outstanding ones. For the sake of project success, this obsession just needs to be kept in check with goals and timeframes as in the end clients pay for solutions not frameworks.

  7. Documenting software is key

    Unfortunately this is something developers are generally not very good at – most of us would jump straight into coding instead of writing a plan first. This leads to all sorts of cognitive bias issues, where features get documented after they are finished and every justification points to the actual implementation rather than the initial business requirement. Just having someone else with a different perspective take a look at it is most likely going to help produce better results. Even better if the documentation is made up front and independently: it then becomes a valuable artifact and validation criteria.

In the end

It seems the applicability of all the aforementioned heavily depends on how experienced the particular Dev or BA is. There's no point arguing whether the project would be better off with or without BAs (or Devs, for that matter). Everyone adds value in their own way. As long as everyone adds value.