Moq-ing around an existing instance

We love unit testing! Seriously, it makes sense if you consider how many times a simple test has saved us from having to revisit a long-forgotten project we’ve already moved on from. Not fun and not good for business.

Moq: our tool of choice

To test only the code we want, we need to isolate it. Of course, there are heaps of libraries for that already. Moq is just one of them. It allows us to create objects based on given interfaces and set up the expected behaviour in a way that lets us abstract away all the code we aren’t currently testing. An extremely powerful tool.
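
For context, here’s a minimal sketch of the everyday use case (IGreeter and its canned greeting are made up purely for illustration):

public interface IGreeter
{
    string Greet(string name);
}

void Main()
{
    var greeter = new Mock<IGreeter>();                           // mock built purely from the interface
    greeter.Setup(g => g.Greet("World")).Returns("Hello World");  // canned behaviour instead of a real implementation
    Console.WriteLine(greeter.Object.Greet("World"));             // prints "Hello World"
}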

Sometimes you just need a bit more of that

Suppose we’re testing an object that depends on internal state that’s tricky to abstract away. We’d however like to use Moq to replace one operation without changing the others:

public class A
{
    public string Test {get;set;}
    public virtual string ReturnTest() => Test;
}
//and some code below:
void Main()
{
    var config = new A() {
        Test = "TEST"
    }; // this is the instance we'd love the mock to wrap, but stock Moq won't take it

    var mockedConfig = new Mock<A>(); // first we run a stock standard mock
    mockedConfig.CallBase = true; // we will enable CallBase just to point out that it makes no difference  
    var o = mockedConfig.Object;
    Console.WriteLine(o.ReturnTest()); // this will be null because Test has not been initialised from constructor
    mockedConfig.Setup(c => c.ReturnTest()).Returns("mocked"); // of course if you set up your mocks - you will get the value
    Console.WriteLine(o.ReturnTest()); // this will be "mocked" now, no surprises
}

The code above illustrates the problem quite nicely: a stock Mock&lt;A&gt; builds a brand new object, so our carefully prepared instance never makes it in. You’ll know this is your case when you see it.

General sentiment towards these problems

“It can’t be done, use something else”, they say. Some people on StackOverflow suggest ditching Moq completely and going for its underlying technology, Castle DynamicProxy. And it is a valid idea – create a proxy class around yours and intercept calls to the method under test. Easy!
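
Something along these lines, roughly – a sketch rather than a drop-in solution (the interceptor name is made up, and only virtual members such as ReturnTest can be intercepted):

// using Castle.DynamicProxy;
class ReturnTestInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        if (invocation.Method.Name == nameof(A.ReturnTest))
        {
            invocation.ReturnValue = "intercepted"; // replace just this one call
            return;
        }
        invocation.Proceed(); // everything else goes to the real instance
    }
}

// and then, wherever the test lives:
var generator = new ProxyGenerator();
var proxy = generator.CreateClassProxyWithTarget(new A { Test = "TEST" }, new ReturnTestInterceptor());
Console.WriteLine(proxy.ReturnTest()); // "intercepted"; anything we Proceed() on goes to the wrapped instance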

Kinda easy

One advantage of Moq (which is, by the way, built on top of Castle DynamicProxy) is that it doesn’t just create mock objects – it also tracks invocations and allows us to verify those later. Of course, we could opt to write the required bits ourselves, but why reinvent the wheel and introduce so much code that no one will maintain?
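
To illustrate what we’d be giving up: with Moq, checking how the earlier mockedConfig was used is a one-liner.

// continuing the Main() snippet above - ReturnTest() was invoked twice on the mock
mockedConfig.Verify(c => c.ReturnTest(), Times.Exactly(2));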

How about we mix and match?

We know that Moq internally leverages Castle DynamicProxy, which actually allows us to generate proxies for instances (they call it a class proxy with target). The question therefore is – how do we get Moq to make one for us? It seems there’s no such option out of the box, and simply injecting an override didn’t quite go well: there’s not much inversion of control inside the library, and most of the types and properties are marked internal, making inheritance virtually impossible.

Castle’s proxy generator is however much more user-friendly and has quite a few methods exposed and available for overriding. So let us define our own ProxyGenerator class that takes the method Moq calls and adds the required functionality to it (just compare the CreateClassProxyWithTarget and CreateClassProxy implementations – they are almost identical!)

MyProxyGenerator.cs

class MyProxyGenerator : ProxyGenerator
{
    object _target;

    public MyProxyGenerator(object target) {
        _target = target; // this is the missing piece, we'll have to pass it on to Castle proxy
    }
    // this method is 90% taken from the library source. I only had to tweak two lines (see below)
    public override object CreateClassProxy(Type classToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options, object[] constructorArguments, params IInterceptor[] interceptors)
    {
        if (classToProxy == null)
        {
            throw new ArgumentNullException("classToProxy");
        }
        if (options == null)
        {
            throw new ArgumentNullException("options");
        }
        if (!classToProxy.GetTypeInfo().IsClass)
        {
            throw new ArgumentException("'classToProxy' must be a class", "classToProxy");
        }
        CheckNotGenericTypeDefinition(classToProxy, "classToProxy");
        CheckNotGenericTypeDefinitions(additionalInterfacesToProxy, "additionalInterfacesToProxy");
        Type proxyType = CreateClassProxyTypeWithTarget(classToProxy, additionalInterfacesToProxy, options); // these really are the two lines that matter
        List<object> list =  BuildArgumentListForClassProxyWithTarget(_target, options, interceptors);       // these really are the two lines that matter
        if (constructorArguments != null && constructorArguments.Length != 0)
        {
            list.AddRange(constructorArguments);
        }
        return CreateClassProxyInstance(proxyType, list, classToProxy, constructorArguments);
    }
}

If all of the above was relatively straightforward, actually feeding it into Moq is going to be somewhat of a hack. As I mentioned, most of the structures are marked internal, so we’ll have to use reflection to get through:

MyMock.cs

public class MyMock<T> : Mock<T>, IDisposable where T : class
{
    void PopulateFactoryReferences()
    {
        // Moq tries ridiculously hard to protect their internal structures - pretty much every class that could be of interest to us is marked internal
        // All below code is basically serving one simple purpose = to swap a `ProxyGenerator` field on the `ProxyFactory.Instance` singleton
        // all types are internal so reflection it is
        // I will invite you to make this a bit cleaner by obtaining the `_generatorFieldInfo` value once and caching it for later
        var moqAssembly = Assembly.Load(nameof(Moq));
        var proxyFactoryType = moqAssembly.GetType("Moq.ProxyFactory");
        var castleProxyFactoryType = moqAssembly.GetType("Moq.CastleProxyFactory");     
        var proxyFactoryInstanceProperty = proxyFactoryType.GetProperty("Instance");
        _generatorFieldInfo = castleProxyFactoryType.GetField("generator", BindingFlags.NonPublic | BindingFlags.Instance);     
        _castleProxyFactoryInstance = proxyFactoryInstanceProperty.GetValue(null);
        _originalProxyFactory = _generatorFieldInfo.GetValue(_castleProxyFactoryInstance);//save default value to restore it later
    }

    public MyMock(T targetInstance) {       
        PopulateFactoryReferences();
        // this is where we do the trick!
        _generatorFieldInfo.SetValue(_castleProxyFactoryInstance, new MyProxyGenerator(targetInstance));
    }

    private FieldInfo _generatorFieldInfo;
    private object _castleProxyFactoryInstance;
    private object _originalProxyFactory;

    public void Dispose()
    {
         // you will notice I opted to implement IDisposable here. 
         // My goal is to ensure I restore the original value on Moq's internal static class property in case you will want to mix up this class with stock standard implementation
         // there are probably other ways to ensure reference is restored reliably, but I'll leave that as another challenge for you to tackle
        _generatorFieldInfo.SetValue(_castleProxyFactoryInstance, _originalProxyFactory);
    }
}

Then, given we’ve got the above working, the actual solution would look like so:

    var config = new A()
    {
        Test = "TEST"
    };
    using (var superMock = new MyMock<A>(config)) // now we can pass instances!
    {
        superMock.CallBase = true; // you still need this, because as far as Moq is concerned it passes control over to Castle DynamicProxy
        var o1 = superMock.Object;
        Console.WriteLine(o1.ReturnTest()); // but this should return TEST
    }
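
In principle, since MyMock&lt;T&gt; still derives from Mock&lt;T&gt; and our generator merely swaps the proxy type while passing Moq’s interceptors through, invocation tracking should keep working too – inside the using block above something like this ought to pass (an untested assumption, so treat it as such):

superMock.Verify(c => c.ReturnTest(), Times.Once);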

.NET Membership audit preparation steps

Not so long ago one of our clients asked us to audit the password security of their custom .NET CMS users. It’s no secret that in 99% of cases when we review .NET user accounts and security we see the default SqlMembershipProvider implementation. It has the potential to be extremely secure. The opposite however is also true – you can make it a disaster by allowing symmetric encryption, for example. Our case was pretty much default settings across the board, so our passwords were SHA1-hashed with random salts. Good enough, really.
Following up on Troy Hunt’s post, we got ourselves a fresh copy of Hashcat and a few dictionaries to play with.

This is where it all went wrong

To begin with, although EpiServer runs on the .NET platform, it seems to use a slightly relaxed version of the membership provider: their salts seem to be shorter than the default. Hashcat, however, makes a great fuss out of this difference:
Z:\hashcat>hashcat64.exe -m 141 z:\hashes.txt dictionary.txt
hashcat (v4.2.1) starting…

Hashfile 'z:\hashes.txt' on line 1 ($epise…/JapfY=*j+gvb/c0mt28CHJssmwFvQ==): Token length exception
No hashes loaded.

Started: Thu Sep 01 07:48:39 2018
Stopped: Thu Sep 01 07:48:39 2018
So apparently Troy’s Episerver-specific mode does not work for other ASP.NET membership hashes. This seems to be a showstopper.

This is still .net membership though

We know exactly where SqlMembershipProvider is located, so we can decompile it and have a peek at how exactly it works. I quoted the relevant function with some comments below:
namespace System.Web.Security
{
    public class SqlMembershipProvider : MembershipProvider
    {
        private string EncodePassword(string pass, int passwordFormat, string salt)
        {
            if (passwordFormat == 0) return pass; // this is plain text passwords. keep away from it
            byte[] bytes = Encoding.Unicode.GetBytes(pass); // this is very important, we'll get back to it later
            byte[] numArray1 = Convert.FromBase64String(salt);
            byte[] inArray;
            if (passwordFormat == 1) // this is where the interesting stuff happens
            {
                HashAlgorithm hashAlgorithm = this.GetHashAlgorithm(); // by default we'd get SHA1 here
                if (hashAlgorithm is KeyedHashAlgorithm)
                {
                    // removed this block for brevity as it makes no sense in our case: SHA1 does not require a key
                }
                else
                {
                    /* and so we end up with this little block of code that does all the magic.
                       it's pretty easy to follow: concatenates salt+password (order matters!) and runs it through SHA1 */
                    byte[] buffer = new byte[numArray1.Length + bytes.Length];
                    Buffer.BlockCopy((Array)numArray1, 0, (Array)buffer, 0, numArray1.Length);
                    Buffer.BlockCopy((Array)bytes, 0, (Array)buffer, numArray1.Length, bytes.Length);
                    inArray = hashAlgorithm.ComputeHash(buffer);
                }
            }
            else
            {
                // symmetric encryption. again, keep away
            }
            return Convert.ToBase64String(inArray);
        }
    }
}
The hashing implementation is no secret either; we can look through Hashcat’s extensive list of supported hashing modes and see if anything pops out.
We’re looking for something that does SHA1 over concatenated salt and password. At first glance we’ve got mode 120 | sha1($salt.$pass), but there’s a catch.
As I noted in the code comment, Encoding.Unicode actually represents UTF-16, not the UTF-8 most of us are used to.
But that’s fine, because Hashcat has just the mode for us: 140 | sha1($salt.utf16le($pass)). Fantastic!
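
Before pointing Hashcat at real data it’s worth sanity-checking that understanding against an account whose password we already know. Here’s a rough sketch that re-implements the SHA1 branch of EncodePassword above (knownSalt and knownHash are placeholders, not real values):

// needs: using System; using System.Text; using System.Security.Cryptography;
static string HashMembershipPassword(string password, string base64Salt)
{
    byte[] passwordBytes = Encoding.Unicode.GetBytes(password); // UTF-16LE, exactly like the provider
    byte[] saltBytes = Convert.FromBase64String(base64Salt);

    var buffer = new byte[saltBytes.Length + passwordBytes.Length];
    Buffer.BlockCopy(saltBytes, 0, buffer, 0, saltBytes.Length);                        // salt first
    Buffer.BlockCopy(passwordBytes, 0, buffer, saltBytes.Length, passwordBytes.Length); // then password

    using (var sha1 = SHA1.Create())
    {
        return Convert.ToBase64String(sha1.ComputeHash(buffer));
    }
}

// usage: should print True for an account whose password we know
// Console.WriteLine(HashMembershipPassword("Pa$$w0rd", knownSalt) == knownHash);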

A bit of housekeeping

The easiest way to prepare a hashlist in our case was to run a simple LINQPad script against the membership database and save the results to a file. We just need to keep in mind that the salts in our output end up as hex-encoded strings, so we’ll need to let Hashcat know about that.
static class StringExtensions
{
    public static string ToHex(this string input)
    {
        byte[] bytes = Convert.FromBase64String(input);
        var result = BitConverter.ToString(bytes).Replace("-", "").ToLower();
        return result;
    }
}

/// to make this work you'd need to set up a database connection in your LINQPad first
/// we assume Memberships and Users tables exist in the database we're connected to. Tweak to your environment
void Main()
{
    var f = File.CreateText(@"z:\hashes.txt");
    // Memberships - main ASP.NET Membership table. Can be called differently, so watch out for your use case
    // Users - table mapping memberships to user names. Again, can be called differently
    foreach (var user in Memberships.Join(Users, m => m.UserId, u => u.UserId, (mm, uu) => new { uu.UserName, mm.Email, mm.Password, mm.PasswordSalt, uu.UserId }).OrderBy(x => x.UserName))
    {
        // choose either of these lines, depending on whether you want user names included. We found *hash:salt* format to be less painful to work with in Hashcat
        //f.WriteLine($"{user.UserName}:{user.Password.ToHex()}:{user.PasswordSalt.ToHex()}");
        f.WriteLine($"{user.Password.ToHex()}:{user.PasswordSalt.ToHex()}");
    }
    f.Close();
}

Hacking custom Start Action in Visual Studio 2017

As part of our technical debt collection commitment we deal with weird situations where simply running a solution in Visual Studio might not be enough. A recent example of that was a web application split into two projects:

  • ASP.NET MVC – provided a shell for SPA as well as all backend integration – the entry point for application
  • React with Webpack – SPA mostly responsible for the UI and user interactions. The project was declared as a Class Library with no code to run, while developers managed all the JavaScript and Webpack off-band

Both projects are reasonably complex and I can see why the developers tried to split them. The best of intentions, however, makes running this tandem a bit of an awkward exercise. We run npm run build and expect it to drop files into the correct places for the MVC project, which just relies on the compiled JavaScript being there. Using both technology stacks in one solution is nothing new – we’d have opted for this template back at project inception, but by now it was too late.

Just make VS run both projects

Indeed, Microsoft has been generous enough to let us designate multiple projects as startup projects.

This however does not help our particular case, as the React project was created as a Class Library and there’s no code to run. We need a way to kick off that npm run build command every time VS ‘builds’ the React project… How do we do it?

Okay, let’s use custom Start Action

Bingo! We can absolutely do this and bootstrap ourselves a shell which would then run our npm command. Technically we could run npm directly, but I could never quite remember where to look for the executable.

There’s a slight issue with this approach though: it is not portable between developers’ machines. There are at least two reasons for that:

  • Input boxes on this dialog form do not support environment variables and/or relative paths.
  • Changes made in this window go to the .csproj.user file, which by default is .gitignore’d (here’s a good explanation of why it should be)

So this, unfortunately, does not work.

There might be a way however

  1. First and foremost, unload the solution (not just project). Project .user settings are loaded on solution start so we want it to be clean.
  2. Open up the .user file in your favourite text editor; mine looks like this:
<StartAction>Program</StartAction>
<StartProgram>C:\Windows\System32\cmd.exe</StartProgram>
<StartArguments>/c start /min npm run build</StartArguments>

And change the path to whatever your requirements are:

<StartAction>Program</StartAction>
<StartProgram>$(WINDIR)\System32\cmd.exe</StartProgram>
<StartArguments>/c start /min npm run build</StartArguments>

We could potentially stop here, but the file is still user-specific and is not going into source control.

Merging all the way up to project file

As it turns out, we can just cut the elements we’re after (StartAction, StartProgram and StartArguments) and paste them into the respective .csproj section (look out for the PropertyGroup with the same Condition – that should be it):

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <DebugSymbols>true</DebugSymbols>
  <DebugType>full</DebugType>
  <Optimize>false</Optimize>
  <OutputPath>bin\Debug\</OutputPath>
  <DefineConstants>DEBUG;TRACE</DefineConstants>
  <ErrorReport>prompt</ErrorReport>
  <WarningLevel>4</WarningLevel>
  <StartAction>Program</StartAction>
  <StartProgram>$(WINDIR)\System32\cmd.exe</StartProgram>
  <StartArguments>/c start /min npm run build</StartArguments>
</PropertyGroup>
Open the solution again and check if everything works as intended.

Landing pages made easy

Do more with less

Every now and then customers would like us to build them a landing page with as little development and running overhead as possible.
The first thing that pops to mind when presented with such requirements is to go for a static site. Indeed, there’s nothing easier than chucking a bunch of HTML into a directory and having it served.
Problems, however, start to arise when we consider maintainability and reusability of the solution, as well as keeping things DRY. On top of that, customers would almost certainly want a way to do periodic updates and changes without having to pay IT contractors’ fees (some might argue this is job security, but we believe it’s a sneaky attempt at customer lock-in that will probably backfire).

So how do we keep everyone happy?

Meet Jekyll, the static site generator. To be fair, Jekyll is not the only attempt at translating templates into web pages. We however prefer it over the competition because of its adoption by GitHub – this means Jekyll sites get free hosting on GitHub Pages. And that is really important when talking low overheads.

Up and running

If you don’t normally develop on Ruby, you will probably need to install a few things to start editing with Jekyll. Luckily, with Docker we don’t need to do much more than pull the official image and expose a few ports:

# this is an example docker-compose.yml file, feel free to alter as needed
version: "2"
services:
  site:
    command: jekyll serve --force_polling # spin up a web server and have Jekyll constantly scan the working directory for changes
    image: jekyll/jekyll:3.8
    volumes:
      - ${PWD}/data:/srv/jekyll # this example assumes all jekyll files go into the /data subdirectory
      - ${PWD}/bundle:/usr/local/bundle # optional mapping; it speeds up builds by caching all ruby gems outside the container
    ports:
      - 4000:4000 # this is where the webserver is exposed

Save this docker-compose.yml file somewhere and open up a command prompt. First we need to init the site structure, so run docker-compose run site jekyll new .
Finally, start the server with docker-compose up, open up http://localhost:4000 and start editing.

Jekyll basics

Repeating the official documentation is probably not the best idea, so I’ll just mention a few points that will help you get started quickly:

  1. Jekyll loves Markdown. Either learn it (it’s really simple and rewarding, I promise) or use online editor
  2. Jekyll allows HTML, so don’t feel restricted by the previous bullet point
  3. You’ve got a choice of posts, pages and collections with Jekyll. Depending on your goals one might be better suited than the others.
  4. Utilise data files and collections as needed. They are really helpful when building rich content.

How about a test drive?

Absolutely, check out a git demo we did for one of the training sessions: https://wiseowls.github.io/jekyll-git-demo/

Dev/BA love/hate relationship

Love/hate, really?

While love and hate might be overstated, some developers in the community keep wondering if they’d be better off without BAs. No matter how much I’d like to say “yes” outright, to make it somewhat fair, I did some research on both the good and the bad in the Dev/BA relationship.
A quick google reveals some of the most obvious reasons why Developers need Business Analysts (and I guess should love them for?)

  1. BAs figure out requirements by talking to end users/business so devs don’t have to
  2. BAs are able to turn vague business rules and processes into code-able requirements
  3. BAs leverage their knowledge in business and SDLC to break the work into manageable chunks
  4. BAs have enough domain knowledge (and stakeholder trust) to be able to make some calls and unblock developers on the spot
  5. BAs are able to negotiate features/scope on behalf of developers
  6. BAs document scope of work and maintain it
  7. BAs document everything so that it can be used to validate project outcomes

To be fair, the only meaningful “hate” result I got was

  1. BAs are difficult

Let’s address the elephant in the room first

BAs are as difficult as anyone else. Hopefully we’re all professionals here so that should just be a matter of working with them rather than against them. We all have the same goal in the end. Read on.

And now comes the big question

How many of the “love” points are actually valid?

  1. Sounds like lazy Dev reasoning

    If you’re passionate about what you do – working with users/business is an excellent opportunity to learn, grow and see how results of your work help real people solve real problems. If you’re not passionate about what you do – maybe you should do something else instead?

  2. Formalising requirements is a biggie indeed

    Not just because it takes time to sit down and document the process. Often the business has no idea themselves, so it takes a collective effort to get it across the line. And I doubt anyone can nail everything down perfectly right from the get-go. Also, truth is, everyone has some sort of domain knowledge. So given the Dev is up to scratch, it seems logical to shorten the feedback loop. Cutting out the middle man ensures everyone is on the same page and there’s no unnecessary back and forth. This can be dangerous though, because it’s very easy to get carried away and let this phase run forever. This is where an impartial 3rd party would be required. Another reason to potentially involve a BA here would be domain knowledge, should the Dev be lacking in that department.

  3. BAs presumably excel at knowing domain

    But Devs are likely far better aware of all the technical intricacies of the solution being built, so naturally Devs would have a much better idea of how to slice the work. On a few occasions I personally found myself in these situations. You can’t just build a rich user-facing UX feature without thinking about all the plumbing and backend work that needs to happen first. Some BAs can argue they know the guts of the system just as well. Perfect – with the modern software development value of cross-functional teams, there’s no reason not to get coding then.

  4. This is a valid point…kind of

    This depends on (inter-)personal relationships, however. Personally I find that having a stakeholder proxy in the team tends to speed things up a bit – unless the stakeholder is readily available anyway.

  5. Being an impartial 3rd party BAs are indeed able to help

    From my experience Devs tend to overengineer, hiding behind things like “best practice” and “future-proofing”. Common sense seems to be a very elusive concept when it comes to crafting code. So having someone not personally interested in technology for the sake of technology is probably going to benefit the project in the long run.

  6. Developers do get carried away

    I’ve been there, as well as probably everyone else. It gets very tempting to slip another shiny trendy library into the mix when no one is watching. Generalising and abstracting everything seems to be one of the things that set good developers apart from outstanding ones. For the sake of project success, this obsession just needs to be kept in check with goals and timeframes as in the end clients pay for solutions not frameworks.

  7. Documenting software is key

    Unfortunately this is something developers are generally not very good at – most of us would jump at coding instead of writing a plan first. This leads to all sorts of cognitive-bias issues where features get documented after they are finished and every justification points to the actual implementation rather than the initial business requirement. Just having someone else with a different perspective take a look at it is most likely going to help produce better results. Even better if the documentation is made up front and independently – it becomes a valuable artifact and validation criteria.

In the end

It seems the applicability of all the aforementioned heavily depends on how experienced the particular Dev or BA is. There’s no point arguing whether the project would be better off with or without BAs (or Devs, for that matter). Everyone adds value in their own way. As long as everyone adds value.

Right?

Oops I did it again: git disaster recovery

Houston, we have a problem

So, you’ve accidentally pushed a commit before realising it breaks everything? Or maybe the merge you’ve been working on for the last 2 hours has turned out to have gone south in the most inexplicable and unimaginable way? Okay, no worries, we all use git nowadays, right?
When I end up in a situation like this (and I find myself there more often than I’d like to admit), this is the decision tree I follow to get back up and running:

I am assuming you’ve got your working copy checked out and you have backed it up prior to running any commands pasted from the internet.

To rewrite or not to rewrite

Generally speaking, you should avoid rewriting history if you can. Changing the commit chain would affect other developers who might have been relying on some of the commits in their feature branches. This would mean more clean-up for everyone in the team and more potential stuff-ups. Sometimes, however, there’s just no choice but to rewrite – for example when someone checks in their private keys or large binary files that just do not belong in the repo.

Revert last commit

This does not roll anything back – it in fact creates a new commit that negates the previous one.

git revert HEAD
git push

Reset master to previous commit

This is by far the most common scenario. Note the ^ symbol at the end: it tells git to take the parent of that commit instead. Very handy.

git reset HEAD^ --hard
git push origin -f

Rebase interactively

It is likely that vi is the default text editor shipped with your git installation, so before you go down this rabbit hole make sure you understand the basics of text editing with vi.

git log --pretty=oneline
git rebase -i <commit>^ # <commit> is the earliest commit you want to rewrite; ^ targets its parent

You will be presented with a file listing the commit chain one commit per line, along with action codes git should run over each commit. Don’t worry, a quick command reference is just down the bottom. Once you have saved and exited, git will run the commands sequentially, producing the desired state in the end.

Go cherry picking

Cherry picking is generally frowned upon in the community as it essentially creates a duplicate commit with another SHA. But our justification here would be that we’re ultimately going to drop the old branch, so there’d be no double-ups in the end. My strategy here is to branch off the last known good state and cherry-pick all good commits over to the new branch. Finally, swap the branches around and kill the old one. This is not the end of the story however, as all other developers would need to check out the new branch and potentially rebase their unmerged changes. Once again – use with care, and enjoy your cherry picking responsibly.

git log --pretty=oneline
git branch fixing_master <commit>^ # branch off the parent of the first bad commit, then cherry-pick the good ones onto it
git push

Restore deleted file

This is pretty straightforward. First we find the last commit to have touched the file in question (i.e. the one deleting it), then we check out a copy from its parent commit (which holds the last version available at the time):

$file='/path/to/file'
git checkout $(git rev-list -n 1 HEAD -- "$file")^ -- "$file"

Another take on Mocking HttpContext.Current away

Static classes and methods are a pain to unit test

Back in days of WebForms, HttpContext was the one object to rule them all. It’s got session state, request and response, cache, errors, server variables and so much more for developers to inspect and play with. HttpContext.Current was by far the easiest way to tap into all of this. But guess what? Static member invocation does not make mocking it out easy.

MVC controllers are much more unit-test friendly

Although technically HttpContext hasn’t gone anywhere with the coming of MVC, it’s been neatly wrapped into an HttpContextWrapper and exposed via the controller instance’s HttpContext property. Just mock it out and everything will be fine.
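
For completeness, this is roughly all it takes (HomeController and its Index action are made up for illustration):

var httpContext = new Mock<HttpContextBase>();
httpContext.Setup(c => c.Request.IsAuthenticated).Returns(true); // stub out whatever the action actually touches

var controller = new HomeController();
controller.ControllerContext = new ControllerContext(httpContext.Object, new RouteData(), controller);

var result = controller.Index(); // the action now sees our mock through this.HttpContext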

End of story? Well, maybe

If you want to completely abstract away from all HTTP specifics and happen not to need any of the utility methods that come with them – you’re sweet.
If, however, for some reason you feel like relying on some utility methods to reduce the amount of non-productive mocking, try this trick:

public class MockHttpContext : IDisposable
{
    public HttpContext Current { get; set; }
    private AppDomain CurrentDomain { get; }

    public MockHttpContext(string url, string query = null, IPrincipal principal = null)
    {
        CurrentDomain = Thread.GetDomain();
        var path = CurrentDomain.BaseDirectory;
        var virtualDir = "/";

        CurrentDomain.SetData(".appDomain", "*");
        CurrentDomain.SetData(".appPath", path);
        CurrentDomain.SetData(".appId", "testId");
        CurrentDomain.SetData(".appVPath", virtualDir);
        CurrentDomain.SetData(".hostingVirtualPath", virtualDir);
        CurrentDomain.SetData(".hostingInstallDir", HttpRuntime.AspInstallDirectory);
        CurrentDomain.SetData(".domainId", CurrentDomain.Id);

        // User is logged out by default
        principal = principal ?? new GenericPrincipal(
            new GenericIdentity(string.Empty),
            new string[0]
        );

        Current = new HttpContext(
            new HttpRequest("", url, query),
            new HttpResponse(new StringWriter())
        )
        {
            User = principal
        };
        HttpContext.Current = Current;
    }

    public void Dispose()
    {
        // clean up
        HttpContext.Current = null;
    }
}

At first it looks very similar to this implementation from SO (well, that is where it was taken from to begin with). But then what’s up with all these CurrentDomain.SetData calls? They allow us to mock paths and transition between relative and absolute URLs as if we were hosted somewhere.
Consider the code:

public static string ToAbsoluteUrl(this string relativeUrl)
{
    if (string.IsNullOrEmpty(relativeUrl)) return relativeUrl;
    if (HttpContext.Current == null) return relativeUrl;

    if (relativeUrl.StartsWith("/")) relativeUrl = relativeUrl.Insert(0, "~");
    if (!relativeUrl.StartsWith("~/")) relativeUrl = relativeUrl.Insert(0, "~/");

    var url = HttpContext.Current.Request.Url;
    var port = !url.IsDefaultPort ? ":" + url.Port : string.Empty;

    // and this is where the magic happens: static invocation of VirtualPathUtility does not fail with NullReferenceException anymore!
    return $"{url.Scheme}://{url.Host}{port}{VirtualPathUtility.ToAbsolute(relativeUrl)}";
}

The code above makes a few assumptions regarding the environment being mocked, but it should be a trivial task to introduce more parameters/settings and mock everything away.
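
To round it off, here’s a rough usage sketch, assuming an NUnit-style test project that references both classes above (the URL and the expected value are purely illustrative):

[Test]
public void ToAbsoluteUrl_resolves_relative_paths()
{
    using (new MockHttpContext("http://localhost/fake-page", "q=1"))
    {
        // VirtualPathUtility now resolves against the fake application root set up by MockHttpContext
        Assert.AreEqual("http://localhost/products/1", "/products/1".ToAbsoluteUrl());
    }
}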