Hacking custom Start Action in Visual Studio 2017

As part of our technical debt collection commitment we deal with weird situations where simply running a solution in Visual Studio might not be enough. A recent example of this was a web application split into two projects:

  • ASP.NET MVC – provided a shell for the SPA as well as all backend integration – the entry point for the application
  • React with Webpack – the SPA, mostly responsible for the UI and user interactions. The project was declared as a Class Library with no code to run, while developers managed all JavaScript and Webpack out of band

Both projects are reasonably complex and I can see why the developers tried to split them. The best of intentions, however, makes running this tandem a bit of an awkward exercise. We run npm run build and expect it to drop files into the correct places for the MVC project, which just relies on the compiled JavaScript being there. Using both technology stacks in one solution is nothing new, and we’d have opted for this template back at project inception. Now, however, it was too late.

Just make VS run both projects

Indeed, Microsoft has been generous enough to let us designate multiple projects as startup projects:

This, however, does not help our particular case: the React project was created as a Class Library and there’s no code to run. We need a way to kick off that npm run build command line every time VS ‘builds’ the React project… How do we do it?

Okay, let’s use custom Start Action

Bingo! We can absolutely do this and bootstrap ourselves a shell, which would then run our npm command. Technically we could run npm directly, but I could never quite remember where to look for the executable.

There’s a slight issue with this approach though: it is not portable between developers’ machines. There are at least two reasons for that:

  • Input boxes on this dialog do not support environment variables and/or relative paths.
  • Changes made in this window go to the .csproj.user file, which by default is .gitignore’d (here’s a good explanation why it should be)
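The second point is easy to verify in a throwaway repo (file names here are made up for illustration; `*.user` is the usual Visual Studio .gitignore rule):

```shell
repo="$(mktemp -d)"
cd "$repo"
git init -q
echo "*.user" > .gitignore         # the standard Visual Studio ignore rule
touch Web.csproj Web.csproj.user   # pretend these are our project files
git add -A
git status --short                 # stages .gitignore and Web.csproj, but not the .user file
```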

So this does not work:

There might be a way however

  1. First and foremost, unload the solution (not just the project). Project .user settings are loaded on solution start, so we want it to be clean.
  2. Open up the .user file in your favourite text editor; mine looks like this:
```xml
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|AnyCPU'">
  <StartAction>Program</StartAction>
  <StartProgram>C:\Windows\System32\cmd.exe</StartProgram>
  <StartArguments>/c start /min npm run build</StartArguments>
</PropertyGroup>
```

And change the path to whatever your requirements are:

```xml
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|AnyCPU'">
  <StartAction>Program</StartAction>
  <StartProgram>$(WINDIR)\System32\cmd.exe</StartProgram>
  <StartArguments>/c start /min npm run build</StartArguments>
</PropertyGroup>
```

We could potentially stop here, but the file is still user-specific and is not going into source control.

Merging all the way up to project file

As it turns out, we can just cut the elements we’re after (StartAction, StartProgram and StartArguments) and paste them into the respective .csproj section (look out for the same Condition on the PropertyGroup – that should be it):

```xml
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|AnyCPU'">
  <DebugSymbols>true</DebugSymbols>
  <DebugType>full</DebugType>
  <Optimize>false</Optimize>
  <OutputPath>bin\Debug\</OutputPath>
  <DefineConstants>DEBUG;TRACE</DefineConstants>
  <ErrorReport>prompt</ErrorReport>
  <WarningLevel>4</WarningLevel>
  <StartAction>Program</StartAction>
  <StartProgram>$(WINDIR)\System32\cmd.exe</StartProgram>
  <StartArguments>/c start /min npm run build</StartArguments>
</PropertyGroup>
```

Open the solution again and check if everything works as intended.

Landing pages made easy

Do more with less

Every now and then customers would like us to build them a landing page with as little development and running overhead as possible.
First thing that pops to mind when presented with such requirements is to go for static sites. Indeed, there’s nothing easier than chucking a bunch of html into a directory and having it served.
Problems, however, start to arise when we consider maintainability and reusability of the solution, as well as keeping things DRY. On top of that, customers would almost certainly want a way to make periodic updates and changes without having to pay IT contractors’ fees (some might argue this is job security, but we believe it’s a sneaky attempt at customer lock-in that will probably backfire).

So how do we keep everyone happy?

Meet Jekyll, the static site generator. To be fair, Jekyll is not the only attempt at translating templates into web pages. We however prefer it over the competition because of its adoption by GitHub – this means Jekyll sites get free hosting on GitHub Pages. And that is really important when talking low overheads.

Up and running

If you don’t normally develop on Ruby, you’ll probably need to install a few things before you can start editing with Jekyll. Luckily, with Docker we don’t need to do much more than pull an official image and expose a few ports:

```yaml
# this is an example docker-compose.yml file, feel free to alter as needed
version: "2"
services:
  site:
    # spin up a web server; --force_polling instructs Jekyll to constantly scan the working directory for changes
    command: jekyll serve --force_polling
    image: jekyll/jekyll:3.8
    volumes:
      - ${PWD}/data:/srv/jekyll         # this example assumes all jekyll files go into the /data subdirectory
      - ${PWD}/bundle:/usr/local/bundle # optional, but speeds up builds by caching all ruby gems outside the container
    ports:
      - "4000:4000"                     # this is where the webserver is exposed
```

Save this docker-compose.yml file somewhere and open up a command prompt. First we need to init the site structure, so run docker-compose run site jekyll new .
Finally, start the server with docker-compose up, open http://localhost:4000 and start editing.

Jekyll basics

Repeating official documentation is probably not the best idea, so I’ll just mention a few points that would help to get started quickly:

  1. Jekyll loves Markdown. Either learn it (it’s really simple and rewarding, I promise) or use an online editor
  2. Jekyll allows HTML, so don’t feel restricted by the previous bullet point
  3. You’ve got a choice of posts, pages and collections with Jekyll. Depending on your goals one might be better suited than the others.
  4. Utilise data files and collections as needed. They are really helpful when building rich content.

How about a test drive?

Absolutely, check out a git demo we did for one of the training sessions: https://wiseowls.github.io/jekyll-git-demo/

Dev/BA love/hate relationship

Love/hate, really?

While love and hate might be overstatements, some developers in the community keep wondering if they’d be better off without BAs. No matter how much I’d like to say “yes” outright, to make it somewhat fair, I did some research on both the good and the bad in the Dev/BA relationship.
A quick google reveals some of the most obvious reasons why Developers need Business Analysts (and, I guess, should love them for it?)

  1. BAs figure out requirements by talking to end users/business so devs don’t have to
  2. BAs are able to turn vague business rules and processes into code-able requirements
  3. BAs leverage their knowledge in business and SDLC to break the work into manageable chunks
  4. BAs have enough domain knowledge (and stakeholder trust) to be able to make some calls and unblock developers on the spot
  5. BAs are able to negotiate features/scope on behalf of developers
  6. BAs document scope of work and maintain it
  7. BAs document everything so that it can be used to validate project outcomes

To be fair, the only meaningful “hate” result I got was

  1. BAs are difficult

Let’s address the elephant in the room first

BAs are as difficult as anyone else. Hopefully we’re all professionals here so that should just be a matter of working with them rather than against them. We all have the same goal in the end. Read on.

And now comes the big question

How many of the “love” points are actually valid?

  1. Sounds like lazy Dev reasoning

    If you’re passionate about what you do – working with users/business is an excellent opportunity to learn, grow and see how results of your work help real people solve real problems. If you’re not passionate about what you do – maybe you should do something else instead?

  2. Formalising requirements is a biggie indeed

    Not just because it takes time to sit down and document the process. Often the business has no idea themselves, so it takes a collective effort to get things across the line. And I doubt anyone can nail everything down perfectly right from the get-go. The truth is, everyone has some sort of domain knowledge, so given the Dev is up to scratch, it seems logical to shorten the feedback loop. Cutting out the middle man ensures everyone is on the same page and there’s no unnecessary back and forth. This can be dangerous though, because it’s very easy to get carried away and let this phase run forever. This is where an impartial 3rd party would be required. Another reason to involve a BA here would be domain knowledge, should the Dev be lacking in that department.

  3. BAs presumably excel at knowing domain

    But Devs are likely far better aware of all the technical intricacies of the solution being built, so naturally Devs have a much better idea of how to slice the work. On a few occasions I have personally found myself in these situations. You can’t just build a rich user-facing UX feature without thinking about all the plumbing and backend work that needs to happen first. Some BAs can argue they know the guts of the system just as well. Perfect – with modern software development values of cross-functional teams there’s no reason not to get coding then.

  4. This is a valid point…kind of

    Dependent on (inter-)personal relationships, however. Personally, I find that having a stakeholder proxy in the team tends to speed things up a bit. Unless the stakeholder is readily available anyway.

  5. Being an impartial 3rd party BAs are indeed able to help

    From my experience Devs tend to overengineer, hiding behind things like “best practice” and “future-proofing”. Common sense seems to be a very elusive concept when it comes to crafting code. So having someone not personally interested in technology for the sake of technology is probably going to benefit the project in the long run.

  6. Developers do get carried away

    I’ve been there, as well as probably everyone else. It gets very tempting to slip another shiny trendy library into the mix when no one is watching. Generalising and abstracting everything seems to be one of the things that set good developers apart from outstanding ones. For the sake of project success, this obsession just needs to be kept in check with goals and timeframes as in the end clients pay for solutions not frameworks.

  7. Documenting software is key

    Unfortunately this is something developers are generally not very good at – most of us would jump at coding instead of writing a plan first. This leads to all sorts of cognitive bias issues, where features get documented after they are finished and every justification refers to the actual implementation rather than the initial business requirement. Just having someone else with a different perspective take a look at it is most likely going to help produce better results. Even better if the documentation is produced up front and independently: it becomes a valuable artifact and validation criterion.

In the end

It seems the applicability of all the aforementioned heavily depends on how experienced the particular Dev or BA is. There’s no point arguing whether the project would be better off with or without BAs (or Devs, for that matter). Everyone adds value in their own way. As long as everyone adds value.

Right?

Oops I did it again: git disaster recovery

Houston, we have a problem

So, you’ve accidentally pushed your commit before realising it breaks everything? Or maybe the merge you’ve been working on for the last 2 hours has turned out to have gone south in the most inexplicable and unimaginable way? Okay, no worries, we all use git nowadays, right?
When I end up in a situation like this (and I find myself there more often than I’d like to admit), this is the decision tree I follow to get back up and running:

I am assuming you’ve got your working copy checked out and you have backed it up prior to running any commands pasted from the internet.

To rewrite or not to rewrite

Generally speaking, you should avoid rewriting history if you can. Changing the commit chain affects other developers who might have been relying on some of those commits in their feature branches, which means more clean-up for everyone in the team and more potential stuff-ups. Sometimes, however, there’s just no choice but to rewrite. For example, when someone checks in their private keys or large binary files that just do not belong in the repo.

Revert last commit

This does not roll anything back. It in fact creates a new commit that negates the previous one.

```shell
git revert HEAD
git push
```
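To see what revert actually does, here’s a disposable sandbox (repo, file names and commit messages are made up; there’s no remote, so the push is left out):

```shell
repo="$(mktemp -d)"; cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo "stable" > app.txt
git add app.txt && git commit -qm "good commit"
echo "broken" > app.txt
git commit -qam "bad commit"
git revert --no-edit HEAD   # adds a *third* commit undoing the bad one
cat app.txt                 # prints "stable" again
```

Note the history now has three commits; nothing was removed, which is exactly why this option is safe for shared branches.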

Reset master to previous commit

This is by far the most common scenario. Note the ^ symbol at the end: it tells git to take the parent of that commit instead. Very handy.

```shell
git reset HEAD^ --hard
git push origin -f
```
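Here’s the same move in a throwaway repo (no remote, so the force push is omitted; commit messages are illustrative):

```shell
repo="$(mktemp -d)"; cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "good commit"
git commit -q --allow-empty -m "bad commit"
git reset HEAD^ --hard   # HEAD^ means "parent of HEAD", so this drops the last commit
git log --pretty=%s      # prints "good commit" only
```

Unlike revert, this rewrites history: the bad commit is simply gone, which is why the push back upstream needs -f.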

Rebase interactively

It is likely that vi is the default text editor that shipped with your git installation, so before you go down this rabbit hole, make sure you understand the basics of text editing with vi.

```shell
git log --pretty=oneline
git rebase -i <commit>^   # pick the offending commit's SHA from the log above
```

You will be presented with a file listing the commit chain, one commit per line, along with action codes git should run over each commit. Don’t worry, a quick command reference is just down the bottom. Once you save and exit, git will run the commands sequentially, producing the desired state in the end.
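Interactive rebase normally opens an editor, which is hard to show in a script, but git lets us swap the todo-list editor for a command via the GIT_SEQUENCE_EDITOR variable. A sketch of dropping a bad commit that way (repo and messages are made up; GNU sed assumed for -i):

```shell
repo="$(mktemp -d)"; cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
for name in one two three; do
  echo "$name" > "$name.txt"
  git add "$name.txt"
  git commit -qm "add $name"
done
# pretend "add two" is the bad commit: rewrite the todo list non-interactively
# by turning its "pick" action into "drop"
GIT_SEQUENCE_EDITOR="sed -i 's/^pick \(.*\) add two$/drop \1 add two/'" \
  git rebase -i HEAD~2
git log --pretty=%s   # "add two" is gone from history
```

In real life you would do the same edit by hand in the editor: change the action code on the line for the commit you want to drop, save, and exit.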

Go cherry picking

Enjoy your cherry picking responsibly. Cherry picking is generally frowned upon in the community as it essentially creates a duplicate commit with a different SHA. But our justification here is that we’re ultimately going to drop the old branch, so there’d be no double-ups in the end. My strategy here is to branch off the last known good state and cherry-pick all the good commits over to the new branch. Finally, swap the branches around and kill the old one. This is not the end of the story however, as all other developers will need to check out the new branch and potentially rebase their unmerged changes. Once again – use with care.

```shell
git log --pretty=oneline
git branch fixing_master <bad-commit>^   # parent of the first bad commit = last known good state
git checkout fixing_master
git cherry-pick <good-commit>            # repeat for every commit worth keeping
git push origin fixing_master
```
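The whole strategy can be sketched end-to-end in a throwaway repo (names and messages are illustrative; the push is omitted since there’s no remote):

```shell
repo="$(mktemp -d)"; cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo base > base.txt; git add base.txt; git commit -qm "known good state"
echo oops > oops.txt; git add oops.txt; git commit -qm "bad commit"
echo fix  > fix.txt;  git add fix.txt;  git commit -qm "good commit to keep"
good_sha=$(git rev-parse HEAD)          # remember the commit worth saving
git checkout -qb fixing_master HEAD~2   # branch off the last known good state
git cherry-pick "$good_sha"             # replay only the good work
ls                                      # base.txt and fix.txt, no oops.txt
```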

Restore deleted file

This one is pretty straightforward. First we find the last commit to have touched the file in question (i.e. the one deleting it), and then we check out a copy from its parent commit (which holds the last version available at the time):

```shell
file='/path/to/file'
git checkout "$(git rev-list -n 1 HEAD -- "$file")^" -- "$file"
```
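A quick sanity check of the recipe in a disposable repo (file name and content are made up):

```shell
repo="$(mktemp -d)"; cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
echo "important" > notes.txt
git add notes.txt
git commit -qm "add notes"
git rm -q notes.txt
git commit -qm "delete notes"
file='notes.txt'
# rev-list finds the commit that deleted the file; ^ steps back to its parent
git checkout "$(git rev-list -n 1 HEAD -- "$file")^" -- "$file"
cat notes.txt   # prints "important"
```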

Another take on mocking away HttpContext.Current

Static classes and methods are a pain to unit test

Back in the days of WebForms, HttpContext was the one object to rule them all. It’s got session state, request and response, cache, errors, server variables and so much more for developers to inspect and play with. HttpContext.Current was by far the easiest way to tap into all of this. But guess what? Static member invocation does not make mocking it out easy.

MVC controllers are much more unit-test friendly

Although technically HttpContext hasn’t gone anywhere with the coming of MVC, it’s been neatly wrapped into an HttpContextWrapper and exposed as the controller’s HttpContext property. Just mock it out and everything will be fine.

End of story? Well, maybe

If you want to completely abstract away from all HTTP specifics and happen to not need any of the utility methods that come with them – you’re sweet.
If, however, for some reason you feel like relying on some utility methods to reduce the amount of non-productive mocking, try this trick:

```csharp
public class MockHttpContext : IDisposable {
  public HttpContext Current { get; set; }
  private AppDomain CurrentDomain { get; }

  public MockHttpContext(string url, string query = null, IPrincipal principal = null) {
    CurrentDomain = Thread.GetDomain();
    var path = CurrentDomain.BaseDirectory;
    var virtualDir = "/";

    CurrentDomain.SetData(".appDomain", "*");
    CurrentDomain.SetData(".appPath", path);
    CurrentDomain.SetData(".appId", "testId");
    CurrentDomain.SetData(".appVPath", virtualDir);
    CurrentDomain.SetData(".hostingVirtualPath", virtualDir);
    CurrentDomain.SetData(".hostingInstallDir", HttpRuntime.AspInstallDirectory);
    CurrentDomain.SetData(".domainId", CurrentDomain.Id);

    // User is logged out by default
    principal = principal ?? new GenericPrincipal(
      new GenericIdentity(string.Empty),
      new string[0]
    );

    Current = new HttpContext(
      new HttpRequest("", url, query),
      new HttpResponse(new StringWriter())
    ) {
      User = principal
    };
    HttpContext.Current = Current;
  }

  public void Dispose() {
    // clean up
    HttpContext.Current = null;
  }
}
```
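For completeness, here’s a sketch of how the helper above might be consumed from a test (the URL, query string and assertion framework are made up for illustration):

```csharp
// hypothetical test-side usage of MockHttpContext
using (new MockHttpContext("http://localhost/products", "page=2")) {
  // anything that touches HttpContext.Current now sees the mocked request
  Assert.AreEqual("localhost", HttpContext.Current.Request.Url.Host);
}
// after Dispose() runs, HttpContext.Current is null again
```

Wrapping it in a using block keeps the static state from leaking between tests.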

At first it looks very similar to this implementation taken from SO (well, that’s where it was taken from to begin with). But then what’s up with all these CurrentDomain.SetData calls? They allow us to mock paths and transition between relative and absolute urls as if we were hosted somewhere.
Consider the code:

```csharp
public static string ToAbsoluteUrl(this string relativeUrl) {
  if (string.IsNullOrEmpty(relativeUrl)) return relativeUrl;
  if (HttpContext.Current == null) return relativeUrl;

  if (relativeUrl.StartsWith("/")) relativeUrl = relativeUrl.Insert(0, "~");
  if (!relativeUrl.StartsWith("~/")) relativeUrl = relativeUrl.Insert(0, "~/");

  var url = HttpContext.Current.Request.Url;
  var port = !url.IsDefaultPort ? ":" + url.Port : string.Empty;

  // and this is where the magic happens: the static invocation of VirtualPathUtility
  // does not fail with a NullReferenceException anymore!
  return $"{url.Scheme}://{url.Host}{port}{VirtualPathUtility.ToAbsolute(relativeUrl)}";
}
```

The code makes a few assumptions regarding the environment being mocked, but it should be a trivial task to introduce more parameters/settings and mock everything away.