Monitoring SQL Server: setting up ELK+G

In 9 cases out of 10 our clients have some sort of database that they want to interface with. And in 8 of those 9 cases the database is going to be SQL Server. Yes, this is us being biased, but you know what?

It does not matter

The important bit is – our clients like to know how the database is doing. Some are happy to pay for commercial APMs; others either have very specific needs or relish the challenge of doing it themselves.

We are here to help

One way to get a better picture of what’s happening with the DB is to keep grabbing vitals over time and plotting them on a graph of some sort. Grafana is a fantastic way to achieve that. It supports a whole bunch of backends (including SQL Server) and allows an insane amount of customisation.

Diversify

It is possible to store SQL telemetry in another SQL database on the same server (you could even set up SQL Agent jobs to do the polling – all nicely packaged). We thought, however, it might be a good idea not to store all the data on the same machine. We’d like to avoid straining the main database in a time of pinch and to completely decouple analytics from critical business processes.

ELK G stack

One of many ways to approach this is to introduce the (somewhat) free and open source Elasticsearch into the mix, with the mighty Logstash for data ingestion. This is where we’d normally go on to Kibana for dashboards and a nice UI (and we did end up running it), but the main focus of this exercise will still fall on Grafana.

Setting it up

There’s no point repeating the official documentation for the respective products; let’s instead write up a docker-compose file:

version: '3'
services:
    elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.6.1
        environment:
            - node.name=elastic01
            - discovery.type=single-node  
            - bootstrap.memory_lock=true
            - "ES_JAVA_OPTS=-Xms512m -Xmx512m"        
        volumes:
            - ./elastic:/usr/share/elasticsearch/data
    logstash:
        image: docker.elastic.co/logstash/logstash:7.6.1
        volumes: 
            - ./logstash-pipeline:/usr/share/logstash/pipeline/
            - ./logstash-config:/usr/share/logstash/config/
        depends_on:
          - elasticsearch
    kibana:
        image: docker.elastic.co/kibana/kibana:7.6.1
        environment:
          - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
        ports:
          - 5601:5601
        depends_on:
          - elasticsearch
    grafana:
        image: grafana/grafana
        ports:
          - 3000:3000
        depends_on:
          - elasticsearch

All that’s left to do is run docker-compose up -d.
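To give an idea of what will eventually live in the ./logstash-pipeline folder we mounted above, here is a minimal sketch of a pipeline that polls SQL Server through the JDBC input plugin and ships the results to Elasticsearch. The connection string, credentials and the query are placeholders, and it assumes the Microsoft JDBC driver jar has been mounted into the container:

# a hypothetical logstash-pipeline/sqlserver-vitals.conf
input {
    jdbc {
        jdbc_connection_string => "jdbc:sqlserver://your-sql-host:1433;databaseName=master"   # placeholder
        jdbc_user => "telemetry_reader"                                                       # placeholder
        jdbc_password => "${SQL_PASSWORD}"
        jdbc_driver_library => "/usr/share/logstash/drivers/mssql-jdbc.jar"                   # assumes the driver jar is mounted here
        jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        statement => "SELECT counter_name, cntr_value, GETUTCDATE() AS sampled_at FROM sys.dm_os_performance_counters"
        schedule => "* * * * *"   # poll every minute
    }
}
output {
    elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        index => "sqlserver-vitals-%{+YYYY.MM.dd}"
    }
}

Stay tuned for the next posts in the series.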

Custom Routing in .NET WebAPI

We all need to do weird things sometimes. One assignment we got was to implement an API that would totally obfuscate all parameters in a Base64 encoded string. This clearly goes against the stock-standard routing and action mapping that ASP.NET Web API comes with out of the box. But it got us thinking about ways we could achieve it nonetheless.

By default

Normally, the router will:

  1. get the request URI,
  2. match it against given templates (those "{controller}/{action}" things), and
  3. invoke an {action} on {controller} with whatever parameters happen to be passed along

Then we realise

We’re constrained to the full .NET Framework on this project, and fancy .NET Core middleware is not a thing yet. Luckily for us, a custom message handler is a thing, so theoretically we could bootstrap ourselves through that and override IHttpControllerSelector (and potentially IHttpActionSelector).

Setup

Writing code directly in global.asax is an option, but since it calls through to WebApiConfig.Register() by default:

 GlobalConfiguration.Configure(WebApiConfig.Register);

WebApiConfig is probably the better place for anything to do with Web API.

App_Start/WebApiConfig.cs

    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Web API configuration and services
            // Web API routes
            config.MessageHandlers.Add(new TestHandler()); // if you define a handler here it will kick in for ALL requests coming into your WebAPI (this does not affect MVC pages though)
            config.MapHttpAttributeRoutes();
            config.Services.Replace(typeof(IHttpControllerSelector), new MyControllerSelector(config)); // you likely will want to override some more services to ensure your logic is supported, this is one example

            // your default routes
            config.Routes.MapHttpRoute(name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new {id = RouteParameter.Optional});

            //a non-overlapping endpoint to distinguish between requests. you can limit your handler to only kick in to this pipeline
            config.Routes.MapHttpRoute(name: "Base64Api", routeTemplate: "apibase64/{query}", defaults: null, constraints: null
                //, handler: new TestHandler() { InnerHandler = new HttpControllerDispatcher(config) } // here's another option to define a handler
            );
        }
    }

and then define our handler:

TestHandler.cs

    public class TestHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
        {
            //suppose we've got a URL like so: http://localhost:60290/apibase64/VmFsdWVzCg==
            var b64Encoded = request.RequestUri.AbsolutePath.Remove(0, "/apibase64/".Length);
            byte[] data = Convert.FromBase64String(b64Encoded);
            string decodedString = Encoding.UTF8.GetString(data).Trim(); // this will decode to "Values" (Trim takes care of the trailing newline encoded in the sample string)
            request.Headers.Add("controllerToCall", decodedString); // let us say this is the controller we want to invoke
            HttpResponseMessage resp = await base.SendAsync(request, cancellationToken);
            return resp;
        }
    }

Depending on what exactly we want the handler to do, we might also have to supply a custom controller selector implementation:

WebApiConfig.cs

// add this line in your Register method
config.Services.Replace(typeof(IHttpControllerSelector), new MyControllerSelector(config));

MyControllerSelector.cs

    public class MyControllerSelector : DefaultHttpControllerSelector
    {
        public MyControllerSelector(HttpConfiguration configuration) : base(configuration)
        {
        }

        public override string GetControllerName(HttpRequestMessage request)
        {
            //this is pretty minimal implementation that examines a header set from TestHandler and returns correct value
            if (request.Headers.TryGetValues("controllerToCall", out var candidates))
                return candidates.First();
            else
            {
                return base.GetControllerName(request);
            }
        }
    }

Applying this in real life?

Pretty neat in theory. We, however, couldn’t quite figure out a way to pitch it to our customers that wouldn’t raise a few questions about whether we’re doing something shady there.

Programmatically submitting Google Forms with AngularJs

Google Forms is a viable way to do business. We’ve seen a few successful companies that rely on it for day-to-day operations. The flow would normally involve users entering data on the go and someone at the back office analysing the responses in a Google Spreadsheet.

Forms are flexible

One huge selling point is that we can design our own forms for all kinds of situations: racing bets, work time/attendance, baby feeding – we’ve seen a few exotic cases. And if static form data is not enough, we can opt for Google Apps Script.

One thing remains the same though

The look and feel of Google Forms and its default validations do leave much to be desired. What if there was a way to swap the form UI out for a custom-branded SPA with fancy lookaheads and whatnot?

There is a way

Surely, it all starts with making a form. We’ll go to Google Forms and design a new one. Expect to spend some time getting it right for your needs. For the purposes of this demo we’ll be submitting a table (we’ll cheat a bit and post JSON only, though).

We’ll also ensure that answers get submitted into a new spreadsheet.

Now we need to grab the field names Google generated for the form (it is a simple HTML form, after all!). Open up the form preview and go to the dev tools console in the new tab.

Run the following snippet in the console and note the outputs:

document.querySelectorAll('form').forEach((x) => {console.log(x.action)});
document.querySelectorAll('[name^="entry."]').forEach((x) => {console.log(x.name + '=' + x.closest('[role="listitem"]').querySelector('[role="heading"]').innerText)})

Oh, and one more thing…

Well, we’ve got the fields, but to successfully submit them we need to know where to post to. Apparently, it’s a simple matter of picking up the form Id and crafting a URL: https://docs.google.com/forms/d/<your id here>/formResponse

And we are done…almost

Dissecting Google Forms was fun. Now we need to somehow build our own frontend to the form. For our specific use case we wanted to show off how we would go about submitting a table dynamically populated with content. As I’ve got a soft spot for AngularJs, I figured I might as well go for it.

Building a custom form

There’s plenty of resources online on how to build SPAs, so I won’t elaborate much on that. There are, however, a couple of considerations that in my opinion make the form submission seamless in an SPA. First and foremost – we’d like to stay on the same page when the form gets sent away – and we’d also like to get notified when the form has been submitted so our SPA can take its own action. One way to do it is to submit the form into a hidden iframe and use its onload event to report back (that’s the method I ended up implementing in the example snippet, sketched below).
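Stripped of the AngularJS plumbing, the core of that trick is small enough to sketch here. The form id and the entry.XXXXXXX field names are placeholders – use the values you grabbed with the console snippet earlier:

// build a form that targets a hidden iframe, so the page never navigates away
function submitToGoogleForm(fields, onDone) {
    var iframe = document.createElement('iframe');
    iframe.name = 'hidden_iframe';
    iframe.style.display = 'none';
    iframe.onload = onDone; // fires once Google has processed the submission
    document.body.appendChild(iframe);

    var form = document.createElement('form');
    form.action = 'https://docs.google.com/forms/d/<your id here>/formResponse';
    form.method = 'POST';
    form.target = 'hidden_iframe';

    Object.keys(fields).forEach(function (name) { // e.g. { 'entry.123456789': JSON.stringify(tableData) }
        var input = document.createElement('input');
        input.type = 'hidden';
        input.name = name;
        input.value = fields[name];
        form.appendChild(input);
    });

    document.body.appendChild(form);
    form.submit();
}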

Talk is cheap, show me the code

Working example of this technique can be found here: https://codepen.io/timur_kh/pen/oNXYNdL

Making Swagger to get the authorization token from URL query string

Swagger is extremely useful when developing and debugging Web APIs. Some dev environments, however, have a bit of security added on top, which can get a bit too painful to work around.

Enter API key

It doesn’t need to be tedious! We’ll be looking at overriding Swagger-UI’s index page so we can plug a custom handler into the onComplete callback. The solution is extremely simple:

  1. Grab the latest index.html from Swashbuckle’s source repo (ideally, get the matching version)
  2. Tweak configObject to add an onComplete callback handler so it will call preauthorizeApiKey when the UI is ready
  3. Override IndexStream in the UseSwaggerUI extension method to serve the custom HTML

I ended up having the following setup (some bits are omitted for brevity):

wwwroot/swashbuckle.html

<!-- your standard HTML here, nothing special -->
<script>
    // some boilerplate initialisation, including the stock `window.onload = function () {` wrapper
    // Begin Swagger UI call region
    configObject.onComplete = () => {

        // get the authorization portion of the query string
        var urlParams = new URLSearchParams(window.location.search);
        if (urlParams.has('authorization')) {
            var apikey = urlParams.get('authorization');

            // this is the important bit, see documentation
            ui.preauthorizeApiKey('api key', apikey); // key name must match the one you defined in AddSecurityDefinition method in Startup.cs
        }
    }
    const ui = SwaggerUIBundle(configObject);
    window.ui = ui;
} // closes the window.onload wrapper from the boilerplate
</script>

Startup.cs

    public void ConfigureServices(IServiceCollection services)
    {
        .........
        services.AddSwaggerGen(c => {
            c.SwaggerDoc("v1", new Info { Title = "Your API title", Version = "v1" });
            c.AddSecurityDefinition("api key", new ApiKeyScheme() // key name must match the one you supply to preauthorizeApiKey call in JS
            {
                Description = "Authorization query string expects API key",
                In = "query",
                Name = "authorization",
                Type = "apiKey"
            });

            var requirements = new Dictionary<string, IEnumerable<string>> {
                { "api key", new List<string>().AsEnumerable() }
            };
            c.AddSecurityRequirement(requirements);
        });
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        app.UseSwagger();
        app.UseSwaggerUI(c =>
        {
            c.IndexStream = () => File.OpenRead("wwwroot/swashbuckle.html"); // this is the important bit. see documentation https://github.com/domaindrivendev/Swashbuckle.AspNetCore/blob/master/README.md
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "My API V1"); // very standard Swashbuckle init
        });
        app.UseMvc();
    }

After having finished all that, calling the standard Swagger URL with ?authorization=1234567890 appended should automatically authorize the page.

Integration testing aide for MVC core routing

Sometimes unit tests just don’t cut it. This is where integration tests come in. This, however, brings a whole new set of issues with finding the best way to isolate the aspects under test and mock everything else away.

Problem statement

Suppose we’ve got an API and a test that needs to make an HTTP call to our API endpoint, like so:

  [ApiController]
  [Route("api/[controller]/[action]")] // route attribute assumed here so that OkTest resolves to "/api/test/oktest" as per the test below
  public class TestController : ControllerBase {

    public IActionResult OkTest() {
      return Ok(true);
    }
  }
.....
public class TestControllerTests {

    private readonly HttpClient _client;

    public TestControllerTests() {
      _client = TestSetup.GetTestClient();
    }

    [Test]
    public async Task OkTest() {
      var path = GetPathHere(nameof(OkTest)); // should return "/api/test/oktest".
      var response = await _client.GetAsync(path);
      response.EnsureSuccessStatusCode();
    }
}
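
The snippet above references a TestSetup helper that isn’t shown; a minimal sketch using Microsoft.AspNetCore.TestHost (reusing the same Program.CreateWebHostBuilder that the Solution section below relies on) could look like this:

using System.Net.Http;
using Microsoft.AspNetCore.TestHost;

public static class TestSetup
{
    // spin up an in-memory server around the real application and hand out a client pointed at it
    public static HttpClient GetTestClient()
    {
        var server = new TestServer(Program.CreateWebHostBuilder(new string[] { }));
        return server.CreateClient();
    }
}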

Solution

Knowing that ASP.NET Core hosting comes in such a lightweight package now and exposes so many extensibility points, one approach we found effective was to build up the whole host and query its properties:

private string GetPathHere(string actionName)
    {
        var host = Program.CreateWebHostBuilder(new string[] { }).Build();
        host.Start();
        IActionDescriptorCollectionProvider provider = (host.Services as ServiceProvider).GetService<IActionDescriptorCollectionProvider>();
        return provider.ActionDescriptors.Items.First(i => (i as ControllerActionDescriptor)?.ActionName == actionName).AttributeRouteInfo.Template;
    }

    [TestMethod]
    public void OkTestShouldBeFine()
    {
        var path = GetPathHere(nameof(TestController.OkTest)); // "api/test/oktest"
    }

Applicability

This is a pretty basic case we’ve been dealing with, and the code makes quite a few assumptions. The approach, however, seems to hold up pretty well and will surely be our starting point next time we test MVC actions!

Moq-ing around existing instance

We love unit testing! Seriously, it makes sense if you consider how many times a simple test has saved us from having to revisit that long-forgotten project we’ve already moved on from. Not fun and not good for business.

Moq: our tool of choice

To be able to test only the code we want, we need to isolate it. Of course, there are heaps of libraries for that already; Moq is just one of them. It allows us to create objects based on given interfaces and set up the expected behaviour so that we can abstract away all the code we’re not currently testing. An extremely powerful tool.

Sometimes you just need a bit more of that

Suppose we’re testing an object that depends on internal state that’s tricky to abstract away. We’d however like to use Moq to replace one operation without changing the others:

public class A
{
    public string Test {get;set;}
    public virtual string ReturnTest() => Test;
}
//and some code below:
void Main()
{
    var config = new A() {
        Test = "TEST"
    };

    var mockedConfig = new Mock<A>(); // first we run a stock standard mock
    mockedConfig.CallBase = true; // we will enable CallBase just to point out that it makes no difference  
    var o = mockedConfig.Object;
    Console.WriteLine(o.ReturnTest()); // this will be null because Test has not been initialised from constructor
    mockedConfig.Setup(c => c.ReturnTest()).Returns("mocked"); // of course if you set up your mocks - you will get the value
    Console.WriteLine(o.ReturnTest()); // this will be "mocked" now, no surprises
}

The code above illustrates the problem quite nicely. You’ll know if this is your case when you see it.

General sentiment towards these problems

“It can’t be done, use something else”, they say. Some people on StackOverflow suggest ditching Moq completely and going for its underlying technology, Castle DynamicProxy. And it is a valid idea – create a proxy class around yours and intercept calls to the method under test. Easy!

Kinda easy

One advantage of Moq (which is, by the way, built on top of Castle DynamicProxy) is that it’s not just creating mock objects, but also tracks invocations and allows us to verify those later. Of course, we could opt to write the required bits ourselves, but why reinvent the wheel and introduce so much code that no one will maintain?

How about we mix and match?

We know that Moq internally leverages Castle DynamicProxy, which actually allows us to generate proxies for instances (they call it a class proxy with target). Therefore the question is – how do we get Moq to make one for us? It seems there’s no such option out of the box, and simply injecting an override didn’t quite go well, as there’s not much inversion of control inside the library and most of the types and properties are marked internal, making inheritance virtually impossible.

Castle’s ProxyGenerator is, however, much more user friendly and has quite a few methods exposed and available for overriding. So let us define a ProxyGenerator class that takes the method Moq calls and adds the required functionality to it (just compare the CreateClassProxyWithTarget and CreateClassProxy implementations – they are almost identical!)

MyProxyGenerator.cs

class MyProxyGenerator : ProxyGenerator
{
    object _target;

    public MyProxyGenerator(object target) {
        _target = target; // this is the missing piece, we'll have to pass it on to Castle proxy
    }
    // this method is 90% taken from the library source. I only had to tweak two lines (see below)
    public override object CreateClassProxy(Type classToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options, object[] constructorArguments, params IInterceptor[] interceptors)
    {
        if (classToProxy == null)
        {
            throw new ArgumentNullException("classToProxy");
        }
        if (options == null)
        {
            throw new ArgumentNullException("options");
        }
        if (!classToProxy.GetTypeInfo().IsClass)
        {
            throw new ArgumentException("'classToProxy' must be a class", "classToProxy");
        }
        CheckNotGenericTypeDefinition(classToProxy, "classToProxy");
        CheckNotGenericTypeDefinitions(additionalInterfacesToProxy, "additionalInterfacesToProxy");
        Type proxyType = CreateClassProxyTypeWithTarget(classToProxy, additionalInterfacesToProxy, options); // these really are the two lines that matter
        List<object> list =  BuildArgumentListForClassProxyWithTarget(_target, options, interceptors);       // these really are the two lines that matter
        if (constructorArguments != null && constructorArguments.Length != 0)
        {
            list.AddRange(constructorArguments);
        }
        return CreateClassProxyInstance(proxyType, list, classToProxy, constructorArguments);
    }
}

If all of the above was relatively straightforward, actually feeding it into Moq is going to be somewhat of a hack. As I mentioned, most of the structures are marked internal, so we’ll have to use reflection to get through:

MyMock.cs

public class MyMock<T> : Mock<T>, IDisposable where T : class
{
    void PopulateFactoryReferences()
    {
        // Moq tries ridiculously hard to protect their internal structures - pretty much every class that could be of interest to us is marked internal
        // All below code is basically serving one simple purpose = to swap a `ProxyGenerator` field on the `ProxyFactory.Instance` singleton
        // all types are internal so reflection it is
        // I will invite you to make this a bit cleaner by obtaining the `_generatorFieldInfo` value once and caching it for later
        var moqAssembly = Assembly.Load(nameof(Moq));
        var proxyFactoryType = moqAssembly.GetType("Moq.ProxyFactory");
        var castleProxyFactoryType = moqAssembly.GetType("Moq.CastleProxyFactory");     
        var proxyFactoryInstanceProperty = proxyFactoryType.GetProperty("Instance");
        _generatorFieldInfo = castleProxyFactoryType.GetField("generator", BindingFlags.NonPublic | BindingFlags.Instance);     
        _castleProxyFactoryInstance = proxyFactoryInstanceProperty.GetValue(null);
        _originalProxyFactory = _generatorFieldInfo.GetValue(_castleProxyFactoryInstance);//save default value to restore it later
    }

    public MyMock(T targetInstance) {       
        PopulateFactoryReferences();
        // this is where we do the trick!
        _generatorFieldInfo.SetValue(_castleProxyFactoryInstance, new MyProxyGenerator(targetInstance));
    }

    private FieldInfo _generatorFieldInfo;
    private object _castleProxyFactoryInstance;
    private object _originalProxyFactory;

    public void Dispose()
    {
         // you will notice I opted to implement IDisposable here. 
         // My goal is to ensure I restore the original value on Moq's internal static class property in case you will want to mix up this class with stock standard implementation
         // there are probably other ways to ensure reference is restored reliably, but I'll leave that as another challenge for you to tackle
        _generatorFieldInfo.SetValue(_castleProxyFactoryInstance, _originalProxyFactory);
    }
}

Then, given we’ve got the above working, the actual solution would look like so:

    var config = new A()
    {
        Test = "TEST"
    };
    using (var superMock = new MyMock<A>(config)) // now we can pass instances!
    {
        superMock.CallBase = true; // you still need this, because as far as Moq is concerned it passes control over to Castle DynamicProxy
        var o1 = superMock.Object;
        Console.WriteLine(o1.ReturnTest()); // but this should return TEST
    }
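
And since we stayed inside Moq rather than dropping down to raw DynamicProxy, the invocation tracking mentioned earlier should still be available – Moq’s interceptor remains in the proxy’s interceptor chain, so something like this ought to verify (a quick sketch, not battle-tested):

    using (var superMock = new MyMock<A>(config))
    {
        superMock.CallBase = true;
        var o1 = superMock.Object;
        o1.ReturnTest();
        superMock.Verify(c => c.ReturnTest(), Times.Once()); // the call is recorded by Moq before it reaches the target instance
    }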

Hacking custom Start Action in Visual Studio 2017

As part of our technical debt collection commitment we deal with weird situations where simply running a solution in Visual Studio might not be enough. A recent example of that was a web application split into two projects:

  • ASP.NET MVC – provided a shell for the SPA as well as all backend integration – the entry point for the application
  • React with Webpack – the SPA, mostly responsible for the UI and user interactions. The project was declared as a Class Library with no code to run, while developers managed all JavaScript and Webpack out of band

Both projects are reasonably complex and I can see why the developers tried to split them. The best intentions, however, make running this tandem a bit of an awkward exercise. We run npm run build and expect it to drop files into the correct places for the MVC project, which just relies on the compiled JavaScript being there. Using both technology stacks in one solution is nothing new; we’d have opted for this template back at project inception. Now, however, it was too late.

Just make VS run both projects

Indeed, Microsoft has been generous enough to let us designate multiple projects as startup projects:

This, however, does not help our particular case, as the React project was created as a Class Library and there’s no code to run. We need a way to kick off that npm run build command line every time VS ‘builds’ the React project… How do we do it?

Okay, let’s use custom Start Action

Bingo! We can absolutely do this and bootstrap a shell which would then run our npm command. Technically we could run npm directly, but I could never quite remember where to look for the executable.

There’s a slight issue with this approach though: it is not portable between developers’ machines. There are at least two reasons for that:

  • Input boxes on this dialog form do not support environment variables and/or relative paths.
  • Changes made in this window go to the .csproj.user file, which by default is .gitignore’d (here’s a good explanation why it should be)

So this does not work:

There might be a way however

  1. First and foremost, unload the solution (not just project). Project .user settings are loaded on solution start so we want it to be clean.
  2. Open up the .user file in your favourite text editor; mine looks like this:

<StartAction>Program</StartAction>
<StartProgram>C:\Windows\System32\cmd.exe</StartProgram>
<StartArguments>/c start /min npm run build</StartArguments>

And change the path to whatever your requirements are:

<StartAction>Program</StartAction>
<StartProgram>$(WINDIR)\System32\cmd.exe</StartProgram>
<StartArguments>/c start /min npm run build</StartArguments>

We could potentially stop here, but the file is still user-specific and is not going into source control.

Merging all the way up to project file

As it turns out, we can just cut the elements we’re after (StartAction, StartProgram and StartArguments) and paste them into the respective .csproj section (look out for the same Condition on the PropertyGroup; that should be it):

<DebugSymbols>true</DebugSymbols>
<DebugType>full</DebugType>
<Optimize>false</Optimize>
<OutputPath>bin\Debug\</OutputPath>
<DefineConstants>DEBUG;TRACE</DefineConstants>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
<StartAction>Program</StartAction>
<StartProgram>$(WINDIR)\System32\cmd.exe</StartProgram>
<StartArguments>/c start /min npm run build</StartArguments>

Open the solution again and check if everything works as intended.

Landing pages made easy

Do more with less

Every now and then customers would like us to build them a landing page with as little development and running overhead as possible.
The first thing that pops to mind when presented with such requirements is to go for a static site. Indeed, there’s nothing easier than chucking a bunch of HTML into a directory and having it served.
Problems, however, start to arise when we consider maintainability and reusability of the solution, as well as keeping things DRY. On top of that, customers would almost certainly want a way to do periodic updates and changes without having to pay IT contractors’ fees (some might argue this is job security, but we believe it’s a sneaky attempt at customer lock-in that will probably backfire).

So how do we keep everyone happy?

Meet Jekyll, the static site generator. To be fair, Jekyll is not the only attempt at translating templates into web pages. We, however, prefer it over the competition because of its adoption by GitHub – this means Jekyll sites get free hosting on GitHub Pages. And that is really important when we’re talking low overheads.

Up and running

If you don’t normally develop on Ruby, you’ll probably need to install a few things to start editing with Jekyll. Luckily, with Docker we don’t need to do much more than pull the official image and expose a few ports:

# this is an example docker-compose.yml file, feel free to alter as needed
version: "2"
services:
    site:
        command: jekyll serve --force_polling # this tells the container to spin up a web server and instructs Jekyll to constantly scan the working directory for changes
        image: jekyll/jekyll:3.8
        volumes:
            - ${PWD}/data:/srv/jekyll # this example assumes all jekyll files would go into the /data subdirectory
            - ${PWD}/bundle:/usr/local/bundle # this mapping is optional, however it helps speed up builds by caching all ruby gems outside of the container
        ports:
            - 4000:4000 # this is where the webserver is exposed

Save this docker-compose.yml file somewhere and open up a command prompt. First we need to init the site structure, so run docker-compose run site jekyll new .
Finally, start the server with docker-compose up, open http://localhost:4000 and start editing.

Jekyll basics

Repeating the official documentation is probably not the best idea, so I’ll just mention a few points that will help you get started quickly:

  1. Jekyll loves Markdown. Either learn it (it’s really simple and rewarding, I promise) or use an online editor
  2. Jekyll allows HTML, so don’t feel restricted by the previous bullet point
  3. You’ve got a choice of posts, pages and collections with Jekyll. Depending on your goals, one might be better suited than the others.
  4. Utilise data files and collections as needed. They are really helpful when building rich content (see the sketch below).
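
To illustrate that last point, a data file plus a Liquid loop over it could look something like this (file and field names are made up for the example):

# _data/team.yml
- name: Jane
  role: Developer
- name: John
  role: Business Analyst

<!-- anywhere in a page or layout -->
<ul>
{% for member in site.data.team %}
  <li>{{ member.name }} ({{ member.role }})</li>
{% endfor %}
</ul>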

How about a test drive?

Absolutely, check out a git demo we did for one of the training sessions: https://wiseowls.github.io/jekyll-git-demo/

Dev/BA love/hate relationship

Love/hate, really?

While love and hate might be overstating it, some developers in the community keep wondering if they’d be better off without BAs. No matter how much I’d like to say “yes” outright, to make it somewhat fair, I did some research on both the good and the bad in the Dev/BA relationship.
A quick Google search reveals some of the most obvious reasons why Developers need Business Analysts (and, I guess, should love them for?)

  1. BAs figure out requirements by talking to end users/business so devs don’t have to
  2. BAs are able to turn vague business rules and processes into code-able requirements
  3. BAs leverage their knowledge in business and SDLC to break the work into manageable chunks
  4. BAs have enough domain knowledge (and stakeholder trust) to be able to make some calls and unblock developers on the spot
  5. BAs are able to negotiate features/scope on behalf of developers
  6. BAs document scope of work and maintain it
  7. BAs document everything so that it can be used to validate project outcomes

To be fair, the only meaningful “hate” result I’ve got was

  1. BAs are difficult

Let’s address the elephant in the room first

BAs are as difficult as anyone else. Hopefully we’re all professionals here, so it should just be a matter of working with them rather than against them. We all have the same goal in the end. Read on.

And now comes the big question

How many of the “love” points are actually valid?

  1. Sounds like a lazy Dev reasoning

    If you’re passionate about what you do – working with users/business is an excellent opportunity to learn, grow and see how results of your work help real people solve real problems. If you’re not passionate about what you do – maybe you should do something else instead?

  2. Formalising requirements is a biggie indeed

    Not just because it takes time to sit down and document the process. Often the business has no idea themselves, so it takes a collective effort to get it across the line. And I doubt anyone can nail everything down perfectly right from the get-go. Also, truth is, everyone has some sort of domain knowledge. So given the Dev is up to scratch, it seems logical to shorten the feedback loop. Cutting out the middle man ensures everyone is on the same page and there’s no unnecessary back and forth. This can be dangerous though, because it’s very easy to get carried away and let this phase run forever. This is where an impartial 3rd party would be required. Another reason to potentially involve a BA here would be domain knowledge, should the Dev be lacking in that department.

  3. BAs presumably excel at knowing the domain

    But Devs are likely far, far better aware of all the technical intricacies of the solution being built, so naturally Devs would have a much better idea of how to slice the work. On a few occasions I personally found myself in these situations. You can’t just build a rich UX user-facing feature without thinking about all the plumbing and backend work that needs to happen first. Some BAs can argue they know the guts of the system just as well. Perfect – with modern software development values of cross-functional teams there’s no reason not to get coding then.

  4. This is a valid point…kind of

    Dependent on (inter-)personal relationships, however. Personally I find that having a stakeholder proxy in the team tends to speed things up a bit. Unless the stakeholder is readily available anyway.

  5. Being an impartial 3rd party BAs are indeed able to help

    From my experience Devs tend to overengineer, hiding behind things like “best practice” and “future-proofing”. Common sense seems to be a very elusive concept when it comes to crafting code. So having someone not personally interested in technology for the sake of technology is probably going to benefit the project in the long run.

  6. Developers do get carried away

    I’ve been there, as well as probably everyone else. It gets very tempting to slip another shiny trendy library into the mix when no one is watching. Generalising and abstracting everything seems to be one of the things that set good developers apart from outstanding ones. For the sake of project success, this obsession just needs to be kept in check with goals and timeframes as in the end clients pay for solutions not frameworks.

  7. Documenting software is key

    Unfortunately this is something developers are generally not very good at – most of us would jump at coding instead of writing a plan first. This leads to all sorts of cognitive bias issues where features get documented after they are finished and every justification refers to the actual implementation rather than the initial business requirement. Just having someone else with a different perspective take a look at it is most likely going to help produce better results. Even better if documentation is made up front and independently. It becomes a valuable artifact and validation criteria.

In the end

It seems the applicability of all the aforementioned heavily depends on how experienced the particular Dev or BA is. There’s no point arguing whether the project would be better off with or without BAs (or Devs, for that matter). Everyone adds value in their own way. As long as everyone adds value.

Right?

Oops I did it again: git disaster recovery

Houston, we have a problem

So, you’ve accidentally pushed your commit before realising it breaks everything? Or maybe the merge you’ve been working on for the last 2 hours has turned out to have gone south in the most inexplicable and unimaginable way? Okay, no worries, we all use git nowadays, right?
When I end up in a situation like this (and I find myself there more often than I’d like to admit), this is the decision tree I follow to get back up and running:

I am assuming you’ve got your working copy checked out and you have backed it up prior to running any commands pasted from the internet.

To rewrite or not to rewrite

Generally speaking, you should avoid rewriting history if you can. Changing the commit chain will affect other developers that might have been relying on some of the commits in their feature branches. This would mean more clean-up for everyone in the team and more potential stuff-ups. Sometimes, however, there’s just no choice but to rewrite. For example, when someone checks in their private keys or large binary files that just do not belong in the repo.
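
For illustration, scrubbing an accidentally committed key out of every branch could look something like the sketch below. The path is a placeholder, and newer git versions will nudge you towards git-filter-repo for this kind of surgery:

git filter-branch --index-filter "git rm --cached --ignore-unmatch path/to/private.key" --prune-empty -- --all
git push origin --force --all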

Revert last commit

This does not roll anything back. It in fact creates a new commit that ends up negating the previous one.

git revert HEAD
git push

Reset master to previous commit

This is by far the most common scenario. Note the ^ symbol at the end. It tells git to take the parent of that commit instead. Very handy.

git reset HEAD^ --hard
git push origin -f

Rebase interactively

It is likely that vi is the default text editor the shipped with your git installation. So before you go down this rabbit hole – make sure you understand the basics of text editing with vi.

git log --pretty=oneline
git rebase -i <commit>^   # pick the SHA of the earliest commit you want to rework from the log above

You will be presented with a file listing the commit chain one commit per line, along with the action codes git should run over each commit. Don’t worry, a quick command reference is just down the bottom. Once you have saved and exited, git will run the commands sequentially, producing the desired state in the end.

Go cherry picking

Enjoy your cherry picking responsibly. Cherry picking is generally frowned upon in the community as it essentially creates a duplicate commit with another SHA. But our justification here would be that we’re ultimately going to drop the old branch, so there’d be no double-ups in the end. My strategy here is to branch off the last known good state, cherry-pick all the good commits over to the new branch, and finally swap the branches around and kill the old one. This is not the end of the story, however, as all other developers will need to check out the new branch and potentially rebase their unmerged changes. Once again – use with care.

git log --pretty=oneline
git branch fixing_master <last-good-commit>
git checkout fixing_master
git cherry-pick <good-commit>   # repeat for every commit worth keeping
git push origin fixing_master

Restore deleted file

This is pretty straightforward. First we find the last commit to have touched the file in question (i.e. the one deleting it). Then we check out a copy from its parent commit (which would be the last version available at the time):

file='/path/to/file'
git checkout $(git rev-list -n 1 HEAD -- "$file")^ -- "$file"