Things they don’t tell you – Tagging containers in Azure DevOps

Here’s an interesting gotcha that has kept us occupied for a little while. Our client wanted us to build an Azure DevOps pipeline that would build a container, tag it, and launch the image to do more work. As the result was not really worth pushing up to image registries, we decided to go fully local.

Setting up agent pool

Our client had a further constraint that prevented them from using managed agents, so the first thing we had to do was define a local agent pool. The process was uneventful, so we thought we were off to a good start.

Creating Azure DevOps pipeline

Our first stab yielded a pipeline definition along the following lines:

trigger: none

jobs:
  - job: test
    pool:
      name: local-linux-pool
    displayName: Build Cool software
    steps:
      - task: Bash@3
        displayName: Prune leftover containers
        inputs:
          targetType: inline
          script: |
            docker system prune -f -a

      - task: Docker@2
        displayName: Build worker container
        inputs:          
          command: build
          Dockerfile: 'container/Dockerfile'
          # tagging the image would simplify our next step and avoid the headache of trying to figure out the correct image ID
          tags: |
            supercool/test-app

      - bash: |
          docker run --name builderContainer -it --rm supercool/test-app:latest # we assume latest is the correct tag here

Nothing fancy here. We clean the environment before each run (this is optional but helped with troubleshooting). Then we build an image from a Dockerfile found in source control. To make sure we run the right thing in the next step, we want to tag it.

But then it went sideways…

Unable to find image 'supercool/test-app:latest' locally
docker: Error response from daemon: pull access denied for test-app, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.

This of course means docker could not detect a local image and went off to pull it from the default registry. And we don’t want that!

Upon further inspection we found that the command line that builds the container DOES NOT tag it!

/usr/bin/docker build \
		-f /myagent/_work/1/s/container/Dockerfile \
		--label com.azure.dev.image.system.teamfoundationcollectionuri=https://dev.azure.com/thisorgdoesnotexist/ \
		--label com.azure.dev.image.system.teamproject=test \
		--label com.azure.dev.image.build.repository.name=test \
		--label com.azure.dev.image.build.sourceversion=3d7a6ed84b0e9538b1b29206fd1651b44c0f74d8 \
		--label com.azure.dev.image.build.repository.uri=https://thisorgdoesnotexist@dev.azure.com/thisorgdoesnotexist/test/_git/test \
		--label com.azure.dev.image.build.sourcebranchname=main \
		--label com.azure.dev.image.build.definitionname=test \
		--label com.azure.dev.image.build.buildnumber=20262388.5 \
		--label com.azure.dev.image.build.builduri=vstfs:///Build/Build/11 \
		--label image.base.ref.name=alpine:3.12 \
		--label image.base.digest=sha256:a296b4c6f6ee2b88f095b61e95c7dde451ba25598835b4978c9256d8c8ace48a \
		/myagent/_work/1/s/container

How does this happen?

Luckily, the DockerV2 task is open source and freely available on GitHub. Looking at the code, we notice an interesting log message: "NotAddingAnyTagsToBuild": "Not adding any tags to the built image as no repository is specified." Is that our clue? Seems like it may be. Let’s keep digging.

Further inspection reveals that for tags to get applied, the task must be able to infer the image name:

 if (imageNames && imageNames.length > 0) {
        ....
        tagArguments.push(imageName);
        ....
    }
    else {
        tl.debug(tl.loc('NotAddingAnyTagsToBuild'));
    }

And that information must come from a repository input parameter.

Repository – (Optional) Name of repository within the container registry corresponding to the Docker registry service connection specified as input for containerRegistry

Going by the documentation alone, this makes no sense for our registry-less scenario.

A further peek into the source code, however, reveals that developers have kindly thought about local tagging:

public getQualifiedImageNamesFromConfig(repository: string, enforceDockerNamingConvention?: boolean) {
        let imageNames: string[] = [];
        if (repository) {
            let regUrls = this.getRegistryUrlsFromDockerConfig();
            if (regUrls && regUrls.length > 0) {

                // not our case, skipping for brevity
            }
            else {
                // in case there is no login information found and a repository is specified, the intention
                // might be to tag the image to refer locally.
                let imageName = repository;
                if (enforceDockerNamingConvention) {
                    imageName = imageUtils.generateValidImageName(imageName);
                }
                
                imageNames.push(imageName);
            }
        }

        return imageNames;
    }

Now we can solve it

Adding repository input to our task without specifying containerRegistry should get us the desired result:

...
      - task: Docker@2
        displayName: Build worker container
        inputs:          
          command: build
          Dockerfile: 'container/Dockerfile'
          repository: supercool/test-app # moving this from tag input to repository does magic!
...

Looking at the logs we seem to have won:

/usr/bin/docker build \
		-f /myagent/_work/1/s/container/Dockerfile \
		#... ADO labels go here, irrelevant
		-t supercool/test-app \
		/myagent/_work/1/s/container

Conclusion

This scenario, however far-fetched it may appear, seems to be fully supported. At least on the code level. Documentation is lacking a little bit, but I understand how this nuance may be hard to convey in 1-2 paragraphs when there’s so much else to cover.

Azure Static Web Apps – Lazy Dev Environment

Playing with Static Web Apps is lots of fun. However, installing the required libraries and tools can get a little daunting. On top of that, removing them later will likely leave a messy residue.

Use VS Code Dev containers then

So, let us assume WSL and Docker are already installed (Microsoft should consider shipping these features pre-installed, really). Then we can quickly grab VS Code and spin up a development container.

Turns out, Microsoft have already provided a very good starting point. So, all we need to do is:

  1. start a blank workspace folder and hit F1
  2. type “Add Development Container” and select the menu item
  3. type something and click “Show All Definitions”
  4. select “Azure Static Web Apps”
  5. press F1 once more and run “Remote-Containers: Reopen Folder in Container”

At the very minimum

To be valid, a Static Web App requires an index.html file. Let’s assume we’ve got the static frontend sorted. Now we also want to add an API:

vscode ➜ /workspaces/vs-dev-containers-demo $ mkdir api && cd api
vscode ➜ /workspaces/vs-dev-containers-demo/api $ func init
vscode ➜ /workspaces/vs-dev-containers-demo/api $ func new -l C# -t HttpTrigger -n HelloWorld

Nothing fancy, but now we can start everything with swa start:

vscode ➜ /workspaces/vs-dev-containers-demo $ swa start --api api

VS Code would go ahead and download recommended extensions and language packs, so this should just work.

We want better dev experience

And this is where custom tasks and launch configurations come in handy. We want VS Code to run the SWA emulator for us and attach to the running instance of Functions:

{
  "version": "0.2.0",
  "compounds": [
    {
      "name": "Run Static Web App with API",
      "configurations": ["Attach to .NET Functions", "Run SWA emulator"],        
      "presentation": {
        "hidden": false,
        "group": "",
        "order": 1
      }
    }
  ],
  "configurations": [
    {
      "name": "Attach to .NET Functions",
      "type": "coreclr",
      "request": "attach",
      "processId": "${command:azureFunctions.pickProcess}",
      "presentation": {
        "hidden": true,
        "group": "",
        "order": 2
      }
    },
    {
      "name": "Run SWA emulator",
      "type": "node-terminal",
      "request": "launch",      
      "cwd": "${workspaceFolder}",
      "command": "swa start . --api http://localhost:7071",
      "serverReadyAction": {
        "pattern": "Azure Static Web Apps emulator started at http://localhost:([0-9]+)",
        "uriFormat": "http://localhost:%s",
        "action": "openExternally"
      },
      "presentation": {
        "hidden": true,
        "group": "",
        "order": 3
      }
    }
  ]  
}

Save this file under .vscode/launch.json, reopen the project folder and hit F5 to enjoy the effect!

Finally, all code should get committed to GitHub (refrain from using ADO if you can).

EF Core 6 – does our IMethodCallTranslator still work?

A year and a half ago we posted an article on how we were able to plug into the EF Core pipeline and inject our own IMethodCallTranslator. That let us leverage SQL-native encryption functionality with EF Core 3.1 and was ultimately a win. A lot has changed in the ecosystem since: we’ve got .NET 5, and .NET 6 is coming up soon. So, we could not help but wonder…

Will it work with EF Core 6?

Apparently, EF Core 6 is mostly an evolutionary step over EF Core 5. That said, we completely skipped the previous version, so it is unclear to what extent the EF team has reworked their internal APIs. Most of the extensibility points we used were internal and clearly marked as “not for public consumption”. With that in mind, our concerns seemed valid.

Turns out, the code needs changes…

The first issue we needed to rectify was implementing ShouldUseSameServiceProvider: from what I can tell, it’s needed to cache services more efficiently, but in our case setting it to a default value seemed to make sense.
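For context, this member lives on DbContextOptionsExtensionInfo, which every IDbContextOptionsExtension exposes via its Info property. A minimal sketch of what such an implementation can look like (the class name and values are illustrative, not the exact code from the repository):

using System.Collections.Generic;
using Microsoft.EntityFrameworkCore.Infrastructure;

internal sealed class TranslatorExtensionInfo : DbContextOptionsExtensionInfo
{
    public TranslatorExtensionInfo(IDbContextOptionsExtension extension) : base(extension) { }

    public override bool IsDatabaseProvider => false;
    public override string LogFragment => "using custom method call translator ";

    public override int GetServiceProviderHashCode() => 0; // EF Core 6 changed the return type to int

    // new in EF Core 6: any two instances of our extension are considered equivalent,
    // so EF can keep reusing the cached internal service provider
    public override bool ShouldUseSameServiceProvider(DbContextOptionsExtensionInfo other)
        => other is TranslatorExtensionInfo;

    public override void PopulateDebugInfo(IDictionary<string, string> debugInfo)
        => debugInfo["CustomTranslator:Enabled"] = "1";
}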

But that’s where things really went sideways:

Apparently, adding our custom IDbContextOptionsExtension resets the cache, and by the time EF arrives at model initialisation the instance of the DI container gets wiped, leaving us with a bunch of null references (including the one above).

One line fix

I am still unsure why EF gets so upset when we add a new extension. Stepping through the code would likely provide the answer, but I feel it’s not worth the effort. Playing around with service scopes, however, I noticed that many built-in services get registered using a different extension method with a Scoped lifecycle. This prompted me to try changing my registration method signature, and voila – it worked.

And as usual, fully functioning code sits on GitHub.

Azure Static Web Apps – custom build and deployments

Despite Microsoft claiming “first-class GitHub and Azure DevOps integration” with Static Web Apps, one is significantly easier to use than the other. Let’s take a quick look at how many features we’re giving up by sticking to Azure DevOps:

Feature                                    GitHub                                                ADO
Build/Deploy pipelines                     Automatically adds pipeline definition to the repo    Requires manual pipeline setup
Azure Portal support                       yes                                                   no
VS Code Extension                          yes                                                   no
Staging environments and Pull Requests     yes                                                   no

Looks like a lot of functionality is missing. This, however, begs the question: can we do something about it?

Turns out we can…sort of

Looking a bit further into the ADO build pipeline, we notice that Microsoft has published this task on GitHub. Bingo!

The process seems to run a single script that in turn runs a docker image, something like this:

...
docker run \
    -e INPUT_AZURE_STATIC_WEB_APPS_API_TOKEN="$SWA_API_TOKEN" \
    ...
    -v "$mount_dir:$workspace" \
    mcr.microsoft.com/appsvc/staticappsclient:stable \
    ./bin/staticsites/StaticSitesClient upload

What exactly StaticSitesClient does is shrouded in mystery, but upon a successful build (using Oryx) it creates two zip files: app.zip and api.zip. It then uploads both to Blob storage and asks the ContentDistribution endpoint to pick the assets up.

It’s Docker – it runs anywhere

This image does not have to run on ADO or GitHub! We can run the container locally and deploy without even committing the source code. All we need is a deployment token:

docker run -it --rm \
   -e INPUT_AZURE_STATIC_WEB_APPS_API_TOKEN=<your_deployment_token> \
   -e DEPLOYMENT_PROVIDER=DevOps \
   -e GITHUB_WORKSPACE="/working_dir" \
   -e IS_PULL_REQUEST=true \
   -e BRANCH="TEST_BRANCH" \
   -e ENVIRONMENT_NAME="TESTENV" \
   -e PULL_REQUEST_TITLE="PR-TITLE" \
   -e INPUT_APP_LOCATION="." \
   -e INPUT_API_LOCATION="./api" \
   -v ${pwd}:/working_dir \
   mcr.microsoft.com/appsvc/staticappsclient:stable \
   ./bin/staticsites/StaticSitesClient upload

Also notice how this deployment created a staging environment:

Word of caution

Even though it seems like a pretty good little hack – this is not supported. The Portal would also bug out and refuse to display Environments correctly if the resource was created with the “Other” workflow:

[screenshots: Portal and AZ CLI]

Conclusion

Diving deep into Static Web Apps deployment is lots of fun. It may also help in situations where external source control is not available. For real production workloads, however, we’d recommend sticking with GitHub flow.

Setting up Basic Auth for Swashbuckle

Let us put the keywords up front: this is a Basic authentication setup that works for Swashbuckle.WebApi installed on Web API projects (.NET 4.x). It’s also much less common to need Basic auth nowadays, when proper mechanisms are easily available in all shapes and sizes. However, it may still be useful for internal tools or existing integrations.

Why?

Most tutorials online cover the next-gen library aimed specifically at ASP.NET Core, which is great (we’re all for tech advancement), but sometimes legacy needs a prop-up. If you are looking to add Basic auth to an ASP.NET Core project – look up AddSecurityDefinition and go from there (a quick sketch follows below). If legacy is indeed the one in trouble – read on.
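For reference, a minimal sketch of that route on ASP.NET Core with Swashbuckle.AspNetCore might look like the snippet below (assuming the usual AddSwaggerGen setup; treat it as a starting point rather than the focus of this post):

using Microsoft.OpenApi.Models;

builder.Services.AddSwaggerGen(c =>
{
    c.AddSecurityDefinition("basic", new OpenApiSecurityScheme
    {
        Type = SecuritySchemeType.Http,
        Scheme = "basic",
        Description = "Basic HTTP Authentication"
    });

    // require the "basic" scheme globally; an operation filter would work here too
    c.AddSecurityRequirement(new OpenApiSecurityRequirement
    {
        {
            new OpenApiSecurityScheme
            {
                Reference = new OpenApiReference { Type = ReferenceType.SecurityScheme, Id = "basic" }
            },
            new string[] { }
        }
    });
});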

Setting it up

The prescribed solution takes two steps:
1. add an authentication scheme on the SwaggerDocsConfig class
2. attach authentication to endpoints as required with an IDocumentFilter or IOperationFilter

Assuming the installer left us with an App_Start/SwaggerConfig.cs file, we’d arrive at:

public class SwaggerConfig
{
	public static void Register()
	{
		var thisAssembly = typeof(SwaggerConfig).Assembly;

		GlobalConfiguration.Configuration
			.EnableSwagger(c =>
				{
					...
					c.BasicAuth("basic").Description("Basic HTTP Authentication");
					c.OperationFilter<BasicAuthOpFilter>();
					...
				});
	}
}

and the most basic IOperationFilter would look like so:

public class BasicAuthOpFilter : IOperationFilter
{
	public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
	{
		operation.security = operation.security ?? new List<IDictionary<string, IEnumerable<string>>>();
		operation.security.Add(new Dictionary<string, IEnumerable<string>>()
							   {
								   { "basic", new List<string>() }
							   });
	}
}

Azure Static Web Apps – when speed to market matters

The more we look at the new (GA as of May 2021) Azure Static Web Apps, the more we think it makes sense to recommend it as a first step for startups and organisations looking to quickly validate their ideas. Yes, there was already Blob Storage-based static website hosting (we looked at it earlier), but the newcomer is a much more compelling option.

Enforcing DevOps culture

It’s easy to “just get it done” when all you need is a quick landing page or generated website. We’ve all been there – it takes a couple of clicks on the Portal to spin up the required resources, then a drag-and-drop to upload content, and you’re done. Problems, however, strike later when the concept evolves past the MVP stage: the team realises no one kept track of change history, and deployments are a pain.

Static Web Apps complicate things a bit by requiring you to deploy from source control. In the long run, however, the benefits of version control and a deployment pipeline will outweigh the initial 5-minute hold-up. I would point out that the Portal makes it extremely easy to use GitHub, and all demos online seem to encourage it.

ADO support is a fair bit fiddlier: deployments will work just as well, but we won’t be getting automatic staging branches support any time soon.

Integrated APIs

Out of the box, Static Web Apps supports Azure Functions that effectively become an API for the hosted website. There are some conventions in place, but popping a Functions project under /api in the same repository bootstraps everything: deployments, CORS and authentication context. Very neat indeed. After deployment, the available functions show up on the Portal.

What would probably make the experience even better is a way to test the API straight away.

Global CDN

One small detail that is easy to overlook is the location of the newly created web app:

Upon further investigation we discover that the domain name indeed maps to azurestaticapps.trafficmanager.net, and resolving it yields geographically sensible results. In our case we got Hong Kong, which is close, but could probably be improved further with a rollout to Australia.

Application Insights support

Given that Azure Functions back the APIs here, it’s no surprise that Application Insights comes bundled. All we have to do is create an App Insights instance and select it. That, however, is also a limitation – only Functions are covered; static content itself is not.

Clear upgrade path

The Free plan is decent for the initial stages but comes with limitations, so after a while you may consider upgrading. Switching to the Standard tier enables extra features like BYO Functions, Managed Identity and custom auth providers. This covers heaps more use cases, so the application can keep evolving.

AWS X-Ray (but a bit more salesy)

Some time ago we had the pleasure of putting together a proposal for a client who we know loves a good story. The engagement ended up not going ahead, but it got us thinking: could we take a random piece of tech and write it up in a slightly more salesy way? Since we don’t usually work with AWS, we thought it might be a good time to try AWS X-Ray.

3-2-1 Action!

It is hard to know what is happening around you when it’s dark…

You have built and deployed an app. You are pretty confident that everything is up and running, and your team went through the core functionality a few times before the release. You pick up the phone, dial your head of marketing, and tell them to submit that promotional article to Hacker News. You know this is going to attract heaps of attention, and you are ready to be the next big thing…

But how are you going to know if your users are happy with the service? Will you spot a performance bottleneck when it happens? Or will you allow the site to crash and ruin the first impression for everyone?

Do you have the tools to track how your users interact with the app: what features they use and what you should build next? Are your developers able to pick up the logs and exception traces and investigate issues as they arise? It is really hard to answer any of these questions without visibility into your system. Application performance monitoring tools are expensive to build and hard to maintain.

Now to the technical part

On the management portal we’ve got a Lambda sitting behind an API Gateway so that we can call it via HTTP. X-Ray integrates with Lambda out of the box, so all we have to do is enable it.

adding X-Ray support to Lambda (step 1)
adding X-Ray support to Lambda (step 2)

The customer-facing application is a .NET MVC website hosted on Elastic Beanstalk. One advantage of doing it that way is that Beanstalk also takes care of installing the X-Ray agent – we would need to do that ourselves otherwise.

enabling X-Ray on Elastic Beanstalk

Adding X-Ray support to .NET projects is very straightforward; we just need to instrument our code a little. First, we install the official NuGet packages. Second, we add a couple of lines of code to the Startup class. If we also want to capture the SQL that Entity Framework generates, we must not forget to add an interceptor (a sketch follows the Startup class below). And… that’s it – after deploying everything we should be all set!

public class Startup
{
	public Startup(IConfiguration configuration)
	{
		Configuration = configuration;
		AWSXRayRecorder.InitializeInstance(Configuration);
		AWSSDKHandler.RegisterXRayForAllServices();
		AWSXRayRecorder.Instance.ContextMissingStrategy = ContextMissingStrategy.LOG_ERROR;
	}

	public IConfiguration Configuration { get; }

	public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
	{
		app.UseXRay("x-ray-test-app");
		if (env.IsDevelopment())
		{
			app.UseDeveloperExceptionPage();
		}
		..........
	}
}
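The Entity Framework interceptor mentioned above might look roughly like this, assuming EF Core and the AWSXRayRecorder.Handlers.EntityFramework NuGet package (the DbContext and connection string are made up for illustration):

using Amazon.XRay.Recorder.Handlers.EntityFramework; // provides AddXRayInterceptor()
using Microsoft.EntityFrameworkCore;

public class ShopContext : DbContext // hypothetical context used by the demo site
{
	protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
	{
		optionsBuilder
			.UseSqlServer("<connection string>")
			.AddXRayInterceptor(true); // true = also collect the generated SQL text in traces
	}
}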

Now we can go ahead, put on our customer hat and browse the website. We’ll land on the index page and check out the menu. We’ll also call a secret endpoint that generates an exception. Having generated enough activity, we should be able to see the application map!

aws x-ray sample application map

As you can see, we’ve got all our components laid out in front of us. We’ve also got the latency of each link and the number of successful/failed requests.

Drilling down to traces lets us see even more data on how users interact with our app. If there were a database involved, we’d get to see the SQL too. It doesn’t stop there – X-Ray exposes a set of APIs that allow developers to build even more analysis and visualisation apps.

To conclude

X-Ray is easy to use and gives you the insight to dive deep into how your system functions.

It is integrated with EC2, Container Service, Lambda and Beanstalk and lets the developers focus on what really matters – making your startup a success story.

EF core 3.1: dynamic GroupBy clause

Expanding on the many ways we can build dynamic clauses for EF to execute, let us look at another example. Suppose we’d like to give our users the ability to group items based on a property of their choice.

As usual, our first choice would be to build a LINQ expression, as we did in the WHERE case. There is, however, a problem: GroupBy needs an object to use as the grouping key. Usually we’d just create an anonymous type, but that is a compile-time luxury we don’t get with dynamically built LINQ:

dbSet.GroupBy(s => new {s.Col1, s.Col2}); // not going to fly :/

IL Emit it is then

So it seems we are left with no other choice but to go through the TypeBuilder ordeal (which isn’t too bad, really). One thing to point out here – we want to create properties that EF will later use for the grouping key. This is where the ability to interrogate EF’s as-built model comes in very handy. Another important point – creating a property in fact means creating a private backing field plus two special methods for each one. We certainly started to appreciate how much the language and runtime do for us:

private void CreateProperty(TypeBuilder typeBuilder, string propertyName, Type propertyType)
{
 // really, just generating "public PropertyType propertyName {get;set;}"
	FieldBuilder fieldBuilder = typeBuilder.DefineField("_" + propertyName, propertyType, FieldAttributes.Private);

	PropertyBuilder propertyBuilder = typeBuilder.DefineProperty(propertyName, PropertyAttributes.HasDefault, propertyType, null);
	MethodBuilder getPropMthdBldr = typeBuilder.DefineMethod("get_" + propertyName, MethodAttributes.Public | MethodAttributes.SpecialName | MethodAttributes.HideBySig, propertyType, Type.EmptyTypes);
	ILGenerator getIl = getPropMthdBldr.GetILGenerator();

	getIl.Emit(OpCodes.Ldarg_0);
	getIl.Emit(OpCodes.Ldfld, fieldBuilder);
	getIl.Emit(OpCodes.Ret);

	MethodBuilder setPropMthdBldr = typeBuilder.DefineMethod("set_" + propertyName,
		  MethodAttributes.Public |
		  MethodAttributes.SpecialName |
		  MethodAttributes.HideBySig,
		  null, new[] { propertyType });

	ILGenerator setIl = setPropMthdBldr.GetILGenerator();
	Label modifyProperty = setIl.DefineLabel();
	Label exitSet = setIl.DefineLabel();

	setIl.MarkLabel(modifyProperty);
	setIl.Emit(OpCodes.Ldarg_0);
	setIl.Emit(OpCodes.Ldarg_1);
	setIl.Emit(OpCodes.Stfld, fieldBuilder);

	setIl.Emit(OpCodes.Nop);
	setIl.MarkLabel(exitSet);
	setIl.Emit(OpCodes.Ret);

	propertyBuilder.SetGetMethod(getPropMthdBldr);
	propertyBuilder.SetSetMethod(setPropMthdBldr);
}
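The post relies on a BuildGroupingType helper further down to stitch these properties into an actual type; its implementation is not shown here, but a minimal sketch (using System.Reflection.Emit and the CreateProperty method above) could look roughly like this:

private static Type BuildGroupingType(List<Tuple<string, Type>> objectProps)
{
	// a throwaway in-memory assembly to host our dynamic grouping types
	// (requires System.Reflection and System.Reflection.Emit)
	var assemblyBuilder = AssemblyBuilder.DefineDynamicAssembly(
		new AssemblyName("DynamicGroupingTypes"), AssemblyBuilderAccess.Run);
	var moduleBuilder = assemblyBuilder.DefineDynamicModule("MainModule");

	var typeBuilder = moduleBuilder.DefineType(
		"GroupingType_" + Guid.NewGuid().ToString("N"),
		TypeAttributes.Public | TypeAttributes.Class | TypeAttributes.Sealed);

	// parameterless constructor so Expression.New(type.GetConstructors().First()) works later
	typeBuilder.DefineDefaultConstructor(MethodAttributes.Public);

	foreach (var prop in objectProps)
	{
		CreateProperty(typeBuilder, prop.Item1, prop.Item2); // helper from the listing above
	}

	return typeBuilder.CreateTypeInfo();
}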

After we’ve sorted this out, it’s pretty much the same approach as with any other dynamic LINQ expression: we need to build something like DbSet.GroupBy(s => new dynamicType {col1 = s.q1, col2 = s.q2}). There’s a slight issue with this lambda, however – it returns IGrouping<dynamicType, TElement>, and since outside code has no idea about the dynamic type, there’s no easy way to work with it (unless we want to keep reflecting). I thought it might be easier to build a Select as well and return a Count against each instance of the dynamic type. Luckily, we only needed a count, but other aggregations work in a similar fashion.

Finally

I arrived at the following code to generate required expressions:

public static IQueryable<Tuple<object, int>> BuildExpression<TElement>(this IQueryable<TElement> source, DbContext context, List<string> columnNames)
{
	var entityParameter = Expression.Parameter(typeof(TElement));
	var sourceParameter = Expression.Parameter(typeof(IQueryable<TElement>));

	var model = context.Model.FindEntityType(typeof(TElement)); // start with our own entity
	var props = model.GetPropertyAccessors(entityParameter); // get all available field names including navigations			

	var objectProps = new List<Tuple<string, Type>>();
	var accessorProps = new List<Tuple<string, Expression>>();
	var groupKeyDictionary = new Dictionary<object, string>();
	foreach (var prop in props.Where(p => columnNames.Contains(p.Item3)))
	{
		var propName = prop.Item3.Replace(".", "_"); // we need some form of cross-reference, this seems to be good enough
		objectProps.Add(new Tuple<string, Type>(propName, (prop.Item2 as MemberExpression).Type));
		accessorProps.Add(new Tuple<string, Expression>(propName, prop.Item2));
	}

	var groupingType = BuildGroupingType(objectProps); // build new type we'll use for grouping. think `new Test() { A=, B=, C= }`

	// finally, we're ready to build our expressions
	var groupbyCall = BuildGroupBy<TElement>(sourceParameter, entityParameter, accessorProps, groupingType); // source.GroupBy(s => new Test(A = s.Field1, B = s.Field2 ... ))
	var selectCall = groupbyCall.BuildSelect<TElement>(groupingType); // .Select(g => new Tuple<object, int> (g.Key, g.Count()))
	
	var lambda = Expression.Lambda<Func<IQueryable<TElement>, IQueryable<Tuple<object, int>>>>(selectCall, sourceParameter);
	return lambda.Compile()(source);
}

private static MethodCallExpression BuildSelect<TElement>(this MethodCallExpression groupbyCall, Type groupingAnonType) 
{	
	var groupingType = typeof(IGrouping<,>).MakeGenericType(groupingAnonType, typeof(TElement));
	var selectMethod = QueryableMethods.Select.MakeGenericMethod(groupingType, typeof(Tuple<object, int>));
	var resultParameter = Expression.Parameter(groupingType);

	var countCall = BuildCount<TElement>(resultParameter);
	var resultSelector = Expression.New(typeof(Tuple<object, int>).GetConstructors().First(), Expression.PropertyOrField(resultParameter, "Key"), countCall);

	return Expression.Call(selectMethod, groupbyCall, Expression.Lambda(resultSelector, resultParameter));
}

private static MethodCallExpression BuildGroupBy<TElement>(ParameterExpression sourceParameter, ParameterExpression entityParameter, List<Tuple<string, Expression>> accessorProps, Type groupingAnonType) 
{
	var groupByMethod = QueryableMethods.GroupByWithKeySelector.MakeGenericMethod(typeof(TElement), groupingAnonType);
	var groupBySelector = Expression.Lambda(Expression.MemberInit(Expression.New(groupingAnonType.GetConstructors().First()),
			accessorProps.Select(op => Expression.Bind(groupingAnonType.GetMember(op.Item1)[0], op.Item2))
		), entityParameter);

	return Expression.Call(groupByMethod, sourceParameter, groupBySelector);
}

private static MethodCallExpression BuildCount<TElement>(ParameterExpression resultParameter)
{
	var asQueryableMethod = QueryableMethods.AsQueryable.MakeGenericMethod(typeof(TElement));
	var countMethod = QueryableMethods.CountWithoutPredicate.MakeGenericMethod(typeof(TElement));

	return Expression.Call(countMethod, Expression.Call(asQueryableMethod, resultParameter));
}
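For illustration, invoking the extension method against a hypothetical Orders set could look like this (the entity, navigation and column names are made up):

// hypothetical usage: group Orders by customer name and status, then print the counts
var grouped = context.Orders.BuildExpression(context, new List<string> { "Customer.Name", "Status" });

foreach (var row in grouped.ToList())
{
	// row.Item1 holds an instance of the dynamically built grouping type, row.Item2 the count
	Console.WriteLine($"{row.Item1}: {row.Item2}");
}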

And the full working version is on my GitHub

Serverless face-off: AWS Lambda

We have already discussed some high-level differences between the Azure and AWS approaches to serverless functions.

This time round we will expand on our scenario a bit and attempt to get our static site to call a serverless endpoint.

This article aims to cover a brief step-by-step process of creating a simple backend for our static website to call.

There are a few things we’d have to keep in mind while going through this exercise:

  • no custom domains – setting HTTPS up with our own domain is a whole different topic – watch this space
  • even though we primarily develop on the .NET platform, we’ll resort to Node.js here. The main reason for this choice is inconsistency between platform features: for example, the cloud console’s built-in editor is only available for interpreted languages on AWS, while on Azure .NET is a first-class citizen and gets full editing support.
  • no CI/CD – we want something very simple to get the point across.
  • online portal only – again, it’s best practice to codify infrastructure and deployment, but we’re setting something up quick and dirty. Think Startup Weekend.

Hosting static website on AWS

Starting with the frontend, we’ll throw a small front page up into the cloud: upload the files to S3 and make sure to enable read-only public access to the files in the bucket, as we won’t go through more fine-grained ACLs here.

Creating a quick Lambda

Now onto the main attraction. We head over to Lambda and proceed to “Author from scratch”. As we’ve already established, the choice of runtime impacts our ability to move quickly and code on the portal, so we venture into totally unknown Node.js territory here.

To get a response we need to define a function named handler and export it. In fact, the name is configurable, but we’ll stick to the default.

Since we don’t need to change a lot, here’s our complete test code:

exports.handler = async (event, context) => {
     const response = {
         statusCode: 200,
         isBase64Encoded: false,
         body: JSON.stringify({event, context}),
         headers: {
             "Content-Type": "application/json",
             "Access-Control-Allow-Headers" : "",             "Access-Control-Allow-Origin": "",
             "Access-Control-Allow-Methods": "OPTIONS,POST,GET"
         }
     };
     return response;
 };

Notice how we add a bunch of headers for CORS? Well, that’s a requirement for ELB to work; we can skip them if going down the API Gateway path. Deploy it, and let’s move on to defining how our function gets called.

Defining a trigger

There are quite a few ways this function can get invoked, but we only care about HTTP. Two obvious options would be to either stand up an API Gateway or use an Elastic Load Balancer. Shall we try both?

API Gateway

API Gateway has been the preferred method of fronting lambdas since the beginning of time. It supports authentication and allows great control over the process. Since API Gateway is specifically designed to serve HTTP requests and forward them along, we only need to make a few choices like

  • whether we want full control over REST or simple HTTP will do and
  • what kind of authentication we’d like on the endpoints.

Elastic Load Balancer

As of late 2018, AWS also supports a more “lightweight” option of having an ELB target a Lambda when called on a predefined path. On the surface, setting up an ELB looks more involved, as we have to configure a fair bit of networking. Don’t forget to open inbound port 80 in your security group!

Conclusion

Creating a Lambda with AWS is extremely easy if you pick the right runtime. The choice of triggers makes it a bit harder to decide which one suits best. Here’s a list of points to consider:

              API Gateway                    ELB
CORS          yes, can edit on the portal    no, need to modify the lambda to return correct headers
SSL           HTTPS only                     HTTP/HTTPS or both
AuthN         yes (IAM)                      no
Throttling    yes                            no

and sample responses FYI:

[screenshots: API Gateway and ELB sample responses]

Calling WCF services from .NET Core clients

Imagine the situation: a company runs a business-critical application that was built when WCF was a hot topic. Over the years the code base has grown and become a hot mess. But now, finally, the development team got the go-ahead to break it down into microservices. Yay? Well, calling WCF services from .NET Core clients can be a challenge.

Not so fast

We have already discussed some high-level architectural approaches to integrating systems. But we didn’t touch upon the data exchange between the monolith and microservice consumers: we could post a complete object feed onto a message queue, but that’s not always fit for purpose, as messages should be lightweight. Alternatively (keeping in mind our initial WCF premise), we could call the services as needed and make alterations inside the microservices. And Core WCF is a fantastic way to do that – if only we used all stock-standard service code.

What if custom is the way?

But sometimes the WCF implementation has evolved so much that it’s impossible to retrofit off-the-shelf tools. For example, one client we worked with was stuck with binary formatting for performance reasons. That meant we needed to use the same legacy .NET 4.x assemblies to ensure full compatibility. The issue was that not all of those references were supported by .NET Core anyway, so we had to get creative.

What if there was an API?

Surely we could write an API that adapts REST requests to WCF calls. We could probably just use Azure API Management and call it a day, but our assumption here was that not all customers are going to do that. The question is how to minimize the effort developers need to put in to expose the endpoints.

A perfect case for C# Source Generators

C# Source Generators is a new C# compiler feature that lets C# developers inspect user code and generate new C# source files that can be added to a compilation. This is our chance to write code that will write more code when a project is built (think C# Code Inception).

The setup is going to be very simple: we’ll add a generator to our WCF project and get it to write our Web API controllers for us. The official blog post describes all the steps necessary to enable the feature, so we’ll skip that trivial bit.

We’ll look for WCF endpoints that developers have decorated with a custom attribute (we’re opt-in) and do the following (a sketch of such a decorated contract follows the list):

  • Find all Operations marked with GenerateApiEndpoint attribute
  • Generate Proxy class for each ServiceContract we discovered
  • Generate API Controller for each ServiceContract that exposes at least one operation
  • Generate Data Transfer Objects for all exposed methods
  • Use generated DTOs to create WCF client and call required method, return data back
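A sketch of what that opt-in might look like on the WCF side – the GenerateApiEndpoint attribute name comes from the list above, while the contract and operation are purely illustrative:

using System;
using System.ServiceModel;

[AttributeUsage(AttributeTargets.Method)]
public class GenerateApiEndpointAttribute : Attribute { }

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    [GenerateApiEndpoint] // picked up by the source generator
    string GetCustomer(int id, out string error);

    [OperationContract] // not decorated – no API endpoint gets generated for this one
    void Archive(int id);
}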

Proxy classes

For .NET Core to call legacy WCF, we either have to use svcutil to scaffold everything for us, or we need a proxy class that inherits from ClientBase. Our generator emits one per discovered contract, along the lines of this template (the {…} placeholders get substituted at build time):

namespace WcfService.BridgeControllers {
    public class {proxyToGenerate.Name}Proxy : ClientBase<{proxyToGenerate}>, {proxyToGenerate} {
        // for each method in proxyToGenerate.GetMembers() we emit a pass-through,
        // making sure we build the correct parameter list
        public {method.ReturnType} {method.Name}({parameters}) {
            return Channel.{method.Name}({parameters}); // calling the respective WCF method
        }
    }
}

DTO classes

We thought it’s easier to standardise the calling convention so that all methods in our API are always POST and accept only one DTO on input (whose shape in turn very much depends on the callee’s signature):

public static string GenerateDtoCode(this MethodDeclarationSyntax method) 
{
    var methodName = method.Identifier.ValueText;
    var methodDtoCode = new StringBuilder($"public class {methodName}Dto {{").AppendLine(""); 
    foreach (var parameter in method.ParameterList.Parameters)
    {
        var isOut = parameter.IsOut();
        if (!isOut)
        {
            methodDtoCode.AppendLine($"public {parameter.Type} {parameter.Identifier} {{ get; set; }}");
        }
    }
    methodDtoCode.AppendLine("}");
    return methodDtoCode.ToString();
}

Controllers

And finally, controllers follow simple conventions to ensure we always know how to call them:

sourceCode.AppendLine()
   .AppendLine("namespace WcfService.BridgeControllers {").AppendLine()
   .AppendLine($"[RoutePrefix(\"api/{className}\")]public class {className}Controller: ApiController {{");
...
 var methodCode = new StringBuilder($"[HttpPost][Route(\"{methodName}\")]")
                  .AppendLine($"public Dictionary<string, object> {methodName}([FromBody] {methodName}Dto request) {{")
                  .AppendLine($"var proxy = new {clientProxy.Name}Proxy();")
                  .AppendLine($"var response = proxy.{methodName}({wcfCallParameterList});")
                  .AppendLine("return new Dictionary<string, object> {")
                  .AppendLine(" {\"response\", response },")
                  .AppendLine(outParameterResultList);
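To make the output more tangible, here is roughly what the template above would produce for the hypothetical ICustomerService.GetCustomer(int id, out string error) operation (all names are illustrative):

using System.Collections.Generic;
using System.Web.Http;

namespace WcfService.BridgeControllers
{
    [RoutePrefix("api/CustomerService")]
    public class CustomerServiceController : ApiController
    {
        [HttpPost]
        [Route("GetCustomer")]
        public Dictionary<string, object> GetCustomer([FromBody] GetCustomerDto request)
        {
            var proxy = new CustomerServiceProxy();                   // generated proxy from earlier
            string error;
            var response = proxy.GetCustomer(request.id, out error); // DTO property mirrors the WCF parameter name
            return new Dictionary<string, object>
            {
                { "response", response },
                { "error", error } // out parameters come back alongside the response
            };
        }
    }
}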

As a result

We should be able to wrap the required calls into a REST API and fully decouple our legacy data contracts from the new data models. A working sample project is on GitHub.