Azure Static Web Apps – adding PR support to Azure DevOps pipeline

Last time we took a peek under the hood of Static Web Apps, we discovered a Docker container that allows us to do custom deployments. That, however, left us with an issue: we could create staging environments but could not quite call it a day, because we could not clean up after ourselves.

There is more to custom deployments

Further inspection of the GitHub Actions config revealed there’s one more action that we could potentially use to take full advantage of custom workflows. It is called “close”:

name: Azure Static Web Apps CI/CD
....
jobs:
  close_pull_request_job:
    ... bunch of conditions here
    action: "close" # that is our hint!

With the above in mind, we can make an educated guess on how to invoke it with docker:

docker run -it --rm \
   -e INPUT_AZURE_STATIC_WEB_APPS_API_TOKEN=<your deployment token> \
   -e DEPLOYMENT_PROVIDER=DevOps \
   -e GITHUB_WORKSPACE="/working_dir" \
   -e IS_PULL_REQUEST=true \
   -e BRANCH="TEST_BRANCH" \
   -e ENVIRONMENT_NAME="TESTENV" \
   -e PULL_REQUEST_TITLE="PR-TITLE" \
   mcr.microsoft.com/appsvc/staticappsclient:stable \
   ./bin/staticsites/StaticSitesClient close --verbose

Running this indeed closes off an environment. That’s it!

Can we build an ADO pipeline though?

Just running Docker containers is not all that useful, though – these actions are intended for CI/CD pipelines. Unfortunately, there’s no single config file we can edit to achieve this with Azure DevOps: we’ll have to take a more hands-on approach. Roughly, the solution looks like this:

First, we’ll create a branch policy to kick off deployment to a staging environment. Then we’ll use a Service Hook to trigger an Azure Function on successful PR merge. Finally, the stock-standard Static Web Apps task will run on the master branch when a new commit gets pushed.

Branch policy

Creating the branch policy itself is very straightforward: first, we’ll need a separate pipeline definition:

pr:
  - master

pool:
  vmImage: ubuntu-latest

steps:
  - checkout: self    
  - bash: |
      docker run \
      --rm \
      -e INPUT_AZURE_STATIC_WEB_APPS_API_TOKEN=$(deployment_token)  \
      -e DEPLOYMENT_PROVIDER=DevOps \
      -e GITHUB_WORKSPACE="/working_dir" \
      -e IS_PULL_REQUEST=true \
      -e BRANCH=$(System.PullRequest.SourceBranch) \
      -e ENVIRONMENT_NAME="TESTENV" \
      -e PULL_REQUEST_TITLE="PR # $(System.PullRequest.PullRequestId)" \
      -e INPUT_APP_LOCATION="." \
      -e INPUT_API_LOCATION="./api" \
      -v ${PWD}:/working_dir \
      mcr.microsoft.com/appsvc/staticappsclient:stable \
      ./bin/staticsites/StaticSitesClient upload

Here we use a PR trigger, along with some variables to push through to Azure Static Web Apps. Apart from that, it’s the same docker run we have already had success with. To hook it up, we need a Build Validation check that triggers this pipeline:
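
If clicking through Project Settings is not your thing, the same Build Validation policy can be scripted. A rough sketch using the azure-devops CLI extension (the IDs and names below are placeholders for your repository and the PR pipeline defined above):

az extension add --name azure-devops
az devops configure --defaults organization=https://dev.azure.com/<your org> project=<your project>

az repos policy build create \
   --repository-id <repository guid> \
   --branch master \
   --build-definition-id <id of the PR pipeline> \
   --display-name "Deploy SWA staging environment" \
   --blocking true \
   --enabled true \
   --manual-queue-only false \
   --queue-on-source-update-only false \
   --valid-duration 0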

Teardown pipeline definition

The second part is a bit more complicated and requires an Azure Function to pull off. Let’s start by defining the pipeline that our function will queue:

trigger: none

pool:
  vmImage: ubuntu-latest

steps:
  - script: |
      docker run --rm \
      -e INPUT_AZURE_STATIC_WEB_APPS_API_TOKEN=$(deployment_token) \
      -e DEPLOYMENT_PROVIDER=DevOps \
      -e GITHUB_WORKSPACE="/working_dir" \
      -e IS_PULL_REQUEST=true \
      -e BRANCH=$(PullRequest_SourceBranch) \
      -e ENVIRONMENT_NAME="TESTENV" \
      -e PULL_REQUEST_TITLE="PR # $(PullRequest_PullRequestId)" \
      mcr.microsoft.com/appsvc/staticappsclient:stable \
      ./bin/staticsites/StaticSitesClient close --verbose
    displayName: 'Cleanup staging environment'

One thing to note here is the manual trigger – we opt out of CI triggers entirely. Also, make note of the variables that our function will have to populate.
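
Before involving any Function code, it is worth checking that this pipeline can be queued with those variables through the ADO REST API – the very same call our function will make. A minimal sketch with curl (PAT, organisation, project, pipeline id and variable values are placeholders):

curl -u ":<your PAT>" \
   -H "Content-Type: application/json" \
   -d '{
         "variables": {
           "System.PullRequest.SourceBranch": { "isSecret": false, "value": "refs/heads/my-feature" },
           "System.PullRequest.PullRequestId": { "isSecret": false, "value": "42" }
         }
       }' \
   "https://dev.azure.com/<your org>/<your project>/_apis/pipelines/<pipeline id>/runs?api-version=6.0-preview.1"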

Azure Function

It really doesn’t matter what sort of function we create. In this case we opt for C# code that we can author straight from the Portal for simplicity. We also need to generate a PAT so our function can call ADO.

#r "Newtonsoft.Json"

using System.Net;
using System.Net.Http.Headers;
using System.Text;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;

private const string personalaccesstoken = "<your PAT>";
private const string organization = "<your org>";
private const string project = "<your project>";
private const int pipelineId = <your pipeline Id>; 

public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");
    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);	

    log.LogInformation($"eventType: {data?.eventType}");
    log.LogInformation($"message text: {data?.message?.text}");
    log.LogInformation($"pullRequestId: {data?.resource?.pullRequestId}");
    log.LogInformation($"sourceRefName: {data?.resource?.sourceRefName}");

    try
	{
		using (HttpClient client = new HttpClient())
		{
			client.DefaultRequestHeaders.Accept.Add(new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json"));
			client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", ToBase64(personalaccesstoken));

			string payload = @"{ 
		""variables"": {
			""System.PullRequest.SourceBranch"": {
				""isSecret"": false,
            	""value"": """ + data?.resource?.sourceRefName + @"""
			},
			""System.PullRequest.PullRequestId"": {
				""isSecret"": false,
            	""value"": "+ data?.resource?.pullRequestId + @"
			}
		}
	}";
            var url = $"https://dev.azure.com/{organization}/{project}/_apis/pipelines/{pipelineId}/runs?api-version=6.0-preview.1";
            log.LogInformation($"sending payload: {payload}");
            log.LogInformation($"api url: {url}");
			using (HttpResponseMessage response = await client.PostAsync(url, new StringContent(payload, Encoding.UTF8, "application/json")))
			{
				response.EnsureSuccessStatusCode();
				string responseBody = await response.Content.ReadAsStringAsync();
                return new OkObjectResult(responseBody);
			}
		}
	}
	catch (Exception ex)
	{
		log.LogError(ex, "Error running pipeline");
        return new JsonResult(ex) { StatusCode = 500 }; 
	}
}

private static string ToBase64(string input)
{
	return Convert.ToBase64String(System.Text.ASCIIEncoding.ASCII.GetBytes(string.Format("{0}:{1}", "", input)));
}

Service Hook

With all prep work done, all we have left to do is to connect PR merge event to Function call:

The function URL should contain the access key, if one was defined. The easiest option is probably to copy it straight from the Portal’s Code + Test blade:

It may also be a good idea to test the connection on the second form before finishing up.
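
If the built-in test is not enough, the Function can also be exercised with a hand-rolled payload that mimics the fields our code reads. A sketch (URL, key and values are placeholders; the real Service Hook sends a much larger body):

curl -X POST "https://<your function app>.azurewebsites.net/api/<function name>?code=<function key>" \
   -H "Content-Type: application/json" \
   -d '{
         "eventType": "git.pullrequest.merged",
         "message": { "text": "PR 42 merged" },
         "resource": {
           "pullRequestId": 42,
           "sourceRefName": "refs/heads/my-feature"
         }
       }'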

Conclusion

Once everything is connected, the pipelines should create and delete staging environments much like GitHub does. One possible improvement would be to replace the branch policy with yet another Service Hook and Function, so that the PR title gets correctly reflected on the Portal.

But I’ll leave it as a challenge for readers to complete.

Things they don’t tell you – Tagging containers in Azure DevOps

Here’s an interesting gotcha that kept us occupied for a little while. Our client wanted us to build an Azure DevOps pipeline that would build a container, tag it, and launch the image to do more work. As the resulting image was not really worth pushing to a registry, we decided to go fully local.

Setting up agent pool

Our client had a further constraint that prevented them from using managed agents, so the first thing we had to do was define a local agent pool. The process was uneventful, so we thought we were off to a good start.

Creating Azure DevOps pipeline

Our first stab yielded a pipeline definition along the following lines:

trigger:
  - none

jobs:
  - job: test
    pool:
      name: local-linux-pool
    displayName: Build Cool software
    steps:
      - task: Bash@3
        displayName: Prune leftover containers
        inputs:
          targetType: inline
          script: |
            docker system prune -f -a

      - task: Docker@2
        displayName: Build worker container
        inputs:          
          command: build
          Dockerfile: 'container/Dockerfile'
          tags: |
            supercool/test-app # tagging the image should simplify the next step and save us the headache of figuring out the correct image ID

      - bash: |
          docker run --name builderContainer -it --rm supercool/test-app:latest # we assume latest is the correct tag here

Nothing fancy here. We clean the environment before each run (this is optional but helped with troubleshooting). Then we build an image from a Dockerfile we found in source control. To make sure we run the right thing in the next step, we want to tag it.

But then it went sideways…

Unable to find image 'supercool/test-app:latest' locally
docker: Error response from daemon: pull access denied for test-app, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.

This, of course, means Docker could not find a local image and went off to pull it from the default registry. And we don’t want that!

Upon further inspection, we found that the command line that builds the container DOES NOT tag it!

/usr/bin/docker build \
		-f /myagent/_work/1/s/container/Dockerfile \
		--label com.azure.dev.image.system.teamfoundationcollectionuri=https://dev.azure.com/thisorgdoesnotexist/ \
		--label com.azure.dev.image.system.teamproject=test \
		--label com.azure.dev.image.build.repository.name=test \
		--label com.azure.dev.image.build.sourceversion=3d7a6ed84b0e9538b1b29206fd1651b44c0f74d8 \
		--label com.azure.dev.image.build.repository.uri=https://thisorgdoesnotexist@dev.azure.com/thisorgdoesnotexist/test/_git/test \
		--label com.azure.dev.image.build.sourcebranchname=main \
		--label com.azure.dev.image.build.definitionname=test \
		--label com.azure.dev.image.build.buildnumber=20262388.5 \
		--label com.azure.dev.image.build.builduri=vstfs:///Build/Build/11 \
		--label image.base.ref.name=alpine:3.12 \
		--label image.base.digest=sha256:a296b4c6f6ee2b88f095b61e95c7dde451ba25598835b4978c9256d8c8ace48a \
		/myagent/_work/1/s/container

How does this happen?

Luckily, the Docker@2 task is open source and freely available on GitHub. Looking at the code, we notice an interesting message: "NotAddingAnyTagsToBuild": "Not adding any tags to the built image as no repository is specified." Is that our clue? Seems like it may be. Let’s keep digging.

Further inspection reveals that, for tags to get applied, the task must be able to infer an image name:

if (imageNames && imageNames.length > 0) {
    ....
    tagArguments.push(imageName);
    ....
}
else {
    tl.debug(tl.loc('NotAddingAnyTagsToBuild'));
}

And that information must come from a repository input parameter.

Repository – (Optional) Name of repository within the container registry corresponding to the Docker registry service connection specified as input for containerRegistry

Going by the documentation alone, this makes no sense – we don’t have a container registry, let alone a Docker registry service connection.

A further peek into the source code, however, reveals that the developers have kindly thought about local tagging:

public getQualifiedImageNamesFromConfig(repository: string, enforceDockerNamingConvention?: boolean) {
        let imageNames: string[] = [];
        if (repository) {
            let regUrls = this.getRegistryUrlsFromDockerConfig();
            if (regUrls && regUrls.length > 0) {

                // not our case, skipping for brevity
            }
            else {
                // in case there is no login information found and a repository is specified, the intention
                // might be to tag the image to refer locally.
                let imageName = repository;
                if (enforceDockerNamingConvention) {
                    imageName = imageUtils.generateValidImageName(imageName);
                }
                
                imageNames.push(imageName);
            }
        }

        return imageNames;
    }

Now we can solve it

Adding repository input to our task without specifying containerRegistry should get us the desired result:

...
      - task: Docker@2
        displayName: Build worker container
        inputs:          
          command: build
          Dockerfile: 'container/Dockerfile'
          repository: supercool/test-app # moving this from tag input to repository does magic!
...

Looking at the logs we seem to have won:

/usr/bin/docker build \
		-f /myagent/_work/1/s/container/Dockerfile \
		#... ADO labels go here, irrelevant
		-t supercool/test-app \
		/myagent/_work/1/s/container
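
To be extra sure, a quick check on the agent right after the build step confirms the tag is now there for the next step to consume:

docker images supercool/test-app
# REPOSITORY           TAG       IMAGE ID       CREATED         SIZE
# supercool/test-app   latest    <image id>     5 seconds ago   ...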

Conclusion

This scenario, however far-fetched it may appear, seems to be fully supported – at least at the code level. The documentation is lacking a little, but I understand how this nuance may be hard to convey in a paragraph or two when there’s so much else to cover.

Azure Static Web Apps – Lazy Dev Environment

Playing with Static Web Apps is lots of fun. However, setting up the list of required libraries and tools can get a little daunting. On top of that, removing them later will likely leave a messy residue.

Use VS Code Dev containers then

So, let us assume WSL and Docker are already installed (Microsoft should consider shipping these features pre-installed, really). Then we can quickly grab VS Code and spin up a development container.

Turns out, Microsoft has already provided a very good starting point, so all we need to do is:

  1. Start a blank workspace folder and hit F1
  2. Type “Add Development Container” and select the menu item
  3. Type something and click “Show All Definitions”
  4. Select “Azure Static Web Apps”
  5. Press F1 once more and run “Remote-Containers: Reopen Folder in Container”

At the very minimum

To be valid, a Static Web App requires an index.html file. Let’s assume we’ve got the static frontend sorted. Now we also want to add an API:

vscode ➜ /workspaces/vs-dev-containers-demo $ mkdir api && cd api
vscode ➜ /workspaces/vs-dev-containers-demo/api $ func init
vscode ➜ /workspaces/vs-dev-containers-demo/api $ func new -l C# -t HttpTrigger -n HelloWorld
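
If the static frontend is not actually sorted yet, a throwaway placeholder is enough for local testing (purely illustrative content):

vscode ➜ /workspaces/vs-dev-containers-demo $ cat > index.html <<'EOF'
<html>
  <body><h1>Hello from Static Web Apps</h1></body>
</html>
EOF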

Nothing fancy, but now we can start everything with swa start:

vscode ➜ /workspaces/vs-dev-containers-demo $ swa start --api api

VS Code would go ahead and download recommended extensions and language packs, so this should just work.

We want better dev experience

And this is where custom tasks and launch configurations come in handy. We want VS Code to run the SWA emulator for us and attach to the running instance of Azure Functions:

{
  "version": "0.2.0",
  "compounds": [
    {
      "name": "Run Static Web App with API",
      "configurations": ["Attach to .NET Functions", "Run SWA emulator"],        
      "presentation": {
        "hidden": false,
        "group": "",
        "order": 1
      }
    }
  ],
  "configurations": [
    {
      "name": "Attach to .NET Functions",
      "type": "coreclr",
      "request": "attach",
      "processId": "${command:azureFunctions.pickProcess}",
      "presentation": {
        "hidden": true,
        "group": "",
        "order": 2
      }
    },
    {
      "name": "Run SWA emulator",
      "type": "node-terminal",
      "request": "launch",      
      "cwd": "${workspaceFolder}",
      "command": "swa start . --api http://localhost:7071",
      "serverReadyAction": {
        "pattern": "Azure Static Web Apps emulator started at http://localhost:([0-9]+)",
        "uriFormat": "http://localhost:%s",
        "action": "openExternally"
      },
      "presentation": {
        "hidden": true,
        "group": "",
        "order": 3
      }
    }
  ]  
}

Save this file under .vscode/launch.json, reopen the project folder, and hit F5 to enjoy the effect!

Finally, all code should get committed to GitHub (refrain from using ADO if you can).

Azure Static Web Apps – custom build and deployments

Despite Microsoft’s claim of “First-class GitHub and Azure DevOps integration” with Static Web Apps, one is significantly easier to use than the other. Let’s take a quick look at how many features we’re giving up by sticking to Azure DevOps:

|                                        | GitHub                                             | ADO                            |
| Build/Deploy pipelines                 | Automatically adds pipeline definition to the repo | Requires manual pipeline setup |
| Azure Portal support                   | Yes                                                | No                             |
| VS Code Extension                      | Yes                                                | No                             |
| Staging environments and Pull Requests | Yes                                                | No                             |

Looks like a lot of functionality is missing. This, however, begs the question: can we do something about it?

Turns out we can…sort of

Looking a bit further into the ADO build pipeline, we notice that Microsoft has published the underlying task on GitHub. Bingo!

The process seems to run a single script that in turn runs a docker image, something like this:

...
docker run \
    -e INPUT_AZURE_STATIC_WEB_APPS_API_TOKEN="$SWA_API_TOKEN" \
    ...
    -v "$mount_dir:$workspace" \
    mcr.microsoft.com/appsvc/staticappsclient:stable \
    ./bin/staticsites/StaticSitesClient upload

What exactly StaticSitesClient does is shrouded in mystery, but upon a successful build (using Oryx) it creates two zip files: app.zip and api.zip. It then uploads both to Blob storage and submits a request to the ContentDistribution endpoint to pick the assets up.

It’s Docker – it runs anywhere

This image does not have to run on ADO or GitHub! We can indeed run the container locally and deploy without even committing the source code. All we need is a deployment token:

docker run -it --rm \
   -e INPUT_AZURE_STATIC_WEB_APPS_API_TOKEN=<your_deployment_token> \
   -e DEPLOYMENT_PROVIDER=DevOps \
   -e GITHUB_WORKSPACE="/working_dir" \
   -e IS_PULL_REQUEST=true \
   -e BRANCH="TEST_BRANCH" \
   -e ENVIRONMENT_NAME="TESTENV" \
   -e PULL_REQUEST_TITLE="PR-TITLE" \
   -e INPUT_APP_LOCATION="." \
   -e INPUT_API_LOCATION="./api" \
   -v ${PWD}:/working_dir \
   mcr.microsoft.com/appsvc/staticappsclient:stable \
   ./bin/staticsites/StaticSitesClient upload

Also notice how this deployment created a staging environment:
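
One way to confirm this without the Portal is to list the environments with the AZ CLI (app name and resource group are placeholders):

az staticwebapp environment list \
   --name <your app name> \
   --resource-group <your resource group> \
   --output table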

Word of caution

Even though it seems like a pretty good little hack – this is not supported. The Portal would also bug out and refuse to display Environments correctly if the resource were created with “Other” workflow:

Conclusion

Diving deep into Static Web Apps deployment is lots of fun. It may also help in situations where external source control is not available. For real production workloads, however, we’d recommend sticking with GitHub flow.

Azure Static Web Apps – when speed to market matters

The more we look at the new (GA as of May 2021) Azure Static Web Apps, the more we think it makes sense to recommend it as a first step for startups and organisations looking to quickly validate their ideas. Yes, there was the Blob Storage-based static website hosting capability (we looked at it earlier), but the newcomer is a much more compelling option.

Enforcing DevOps culture

It’s easy to “just get it done” when all you need is a quick landing page or a generated website. We’ve all been there – it takes a couple of clicks on the Portal to spin up the required resources, then drag-and-drop files to upload content and you’re done. Problems, however, strike later when the concept evolves past the MVP stage: the team realises no one kept track of change history, and deployments are a pain.

Static Web Apps complicate things a bit by requiring you to deploy from source control. In the longer run, however, the benefits of version control and a deployment pipeline will outweigh the initial 5-minute hold-up. I would point out that the Portal makes it extremely easy to use GitHub, and all the demos online seem to encourage it.

ADO support is a fair bit fiddlier: deployments will work just as well, but we won’t be getting automatic staging environment support any time soon.

Integrated APIs

Out of the box, Static Web Apps support Azure Functions that effectively become an API for the hosted website. There are some conventions in place, but popping a Functions project under /api in the same repository bootstraps everything: deployments, CORS and authentication context. Very neat indeed. After deployment, the available functions show up on the Portal.

What would probably make the experience even better is a way to test the API straight away.
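
For now, the quickest smoke test is to hit the deployed endpoint directly – managed Functions are exposed under the site’s /api route. For example, assuming a hypothetical HelloWorld function and the generated host name:

curl https://<your-app>.azurestaticapps.net/api/HelloWorld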

Global CDN

One small detail that is easy to overlook is the location of the newly created web app:

Upon further investigation, we discover that the domain name indeed maps to azurestaticapps.trafficmanager.net, and resolving it yields geographically sensible results. In our case we got Hong Kong, which is close, but could probably be improved further with a rollout to Australia.
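
This is easy to verify from any terminal; the exact CNAME chain may differ, but it should go through the Traffic Manager profile mentioned above (host name is a placeholder):

nslookup <your-app>.azurestaticapps.net
# look for a CNAME pointing at azurestaticapps.trafficmanager.net,
# which then resolves to the nearest regional endpoint (Hong Kong, in our case)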

Application Insights support

Given that Azure Functions back the APIs here, it’s no surprise that Application Insights comes bundled. All we have to do is create an App Insights instance and select it. That, however, is also a limitation – only the Functions are covered; the static content itself is not.

Clear upgrade path

The Free plan is decent for the initial stages but comes with limitations, so after a while you may consider upgrading. Switching to the Standard tier enables extra features like BYO Functions, Managed Identity and custom auth providers. This covers heaps more use cases, so the application can keep evolving.

Serverless face-off: Azure vs AWS overview

With the explosive growth of online services we’ve seen over 2020, it’s clear the Public Cloud is going to pervade our lives more and more. The Internet is full of articles listing differences between platforms. But when we look closer, it all seems to fall into the same groups: compute, storage, and networking. Yes, the naming is different, but the fundamentals are pretty much identical between all major providers.

Last time

We explored a few differences between AWS S3 and Azure Storage. On paper, the Azure and AWS serverless offerings are also comparable: Azure has Functions and AWS calls theirs Lambda. But subtle differences begin to show up right from the start…

Creating resources Azure vs AWS

Without even getting into writing any code, we are greeted by the first difference: AWS allows us to either create standalone functions or to provision Lambda Applications, which are basically CloudFormation templates for a function and all related resources, such as a CodeCommit repo, an S3 bucket and a project pipeline for CI/CD. Azure, on the other hand, always prompts us to structure functions by sitting them inside a Function App. The reason for doing that is, however, slightly different: a Function App is a collection of functions that share the same App Service Plan.

Serverless Invocation

AWS does not assume any triggers, so we’d need to add one ourselves. Adding an API Gateway as a trigger is totally possible and allows for an HTTPS setup if need be. But because the trigger is external to the function, we need to pay closer attention to the data contract: the API reference is helpful, but the default API Gateway response of 500 makes it hard to troubleshoot.

Portal editor functionality

Another obvious difference between the platforms is the built-in code editor experience. In AWS it is only an option for interpreted language runtimes (such as Node.js, Python and Ruby):

finding code editor in AWS portal is very easy
if runtime is not supported, you would get a blue message

Azure has its own set of supported runtimes, and of course things like .NET and PowerShell get full support. There is, however, one gotcha to keep in mind: Linux hosting plans get a limited feature set:

rich experience editing code in Azure
even though .NET is a first-party runtime, using Linux to host it ruins the experience

.NET version support

AWS supports .NET Core 2.1 and 3.1 and conveniently provides selection controls, while Azure by default only allows for version 3.1 for newly created function apps:

AWS is an open book: picking runtime version is easy
Azure gives no choice, which might look very limiting, but read on...

At first look, such an omission is very surprising, as one would expect more support from Microsoft. This, however, is explained in the documentation: the .NET version is tied to the Functions runtime version, and there is a way to downgrade all the way down to v1.x (which runs on .NET 4.7!):

it is possible to downgrade the Functions runtime version, but there are limitations and gotchas

Overall

| Aspect | AWS | Azure |
| Language support | .NET Core 2.1, .NET Core 3.1, Go, Java, Node.js, Python, Ruby, PowerShell Core | .NET Core 3.1, .NET Core 2.2, .NET 4.7, Node.js, Python, Java, PowerShell Core |
| OS | Linux | Windows or Linux (depending on runtime and plan type) |
| Triggers | API Gateway, ELB, heaps more | Built-in HTTP/Timer, heaps more |
| Hierarchy | Function or Lambda App | Function App |
| Portal code editor support | Node.js, Ruby, Python | Node.js, .NET, PowerShell Core |

Cloud face-off: hosting static website

With the explosive growth of online services we’ve seen over 2020, it’s pretty clear the Public Cloud is going to pervade our lives more and more. The Internet is full of articles listing differences between platforms. But when we look closer, it all seems to fall into the same groups: compute, storage and networking. Yes, the naming is different, but the fundamentals are pretty much identical between all major providers.

Or are they?

Today we’re going to try and compare two providers by attempting to achieve the same basic goal – hosting a static web page.

The web page is going to be extremely simple:

<html>
    <body ng-app="myApp" ng-controller="myCtrl">
        <h2>Serverless API endpoint</h2>
        <input type="text" ng-model="apiUrl" /><button ng-click="fetch(apiUrl)">Fetch</button>
        <h2>Function output below</h2>
        <pre>
            {{ eventBody | json }}
        </pre>
        <script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.8.2/angular.min.js"></script>
        <script type="text/javascript">
            // minimal app definition so the bindings above actually work
            angular.module("myApp", [])
                .controller("myCtrl", function ($scope, $http) {
                    $scope.fetch = function (url) {
                        $http.get(url).then(function (response) { $scope.eventBody = response.data; });
                    };
                });
        </script>
    </body>
</html>

S3

With AWS, static website hosting is a feature of S3. All we need to do is create a bucket, upload our files and enable “Static website hosting” in Properties:

these warnings seem to hint at something…

One may think that this is all we have to do. But that huge information message on the page seems to hint that we might need to enable public access to the bucket. Let us test our theory:

indeed, there’d be no public access to our static assets by default

Okay, this is fair enough – making a bucket secure by default is a good idea. One point to note is that enabling public access on this one bucket alone will not be enough: we will also need to disable “Block Public Access” at the account level, which seems a bit extreme at first. But hey, you cannot be too secure, right? AWS goes to great lengths to make it very obvious when customers do something potentially dangerous. Anyway, we go and enable public access on the account and the bucket as instructed:
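
For reference, the same steps can be scripted with the AWS CLI. A rough sketch (the bucket name is a placeholder, and the account-level Block Public Access may still need lifting separately, as noted above):

aws s3 mb s3://my-static-site
aws s3 sync . s3://my-static-site

# enable the website endpoint
aws s3 website s3://my-static-site --index-document index.html

# relax the bucket-level public access block...
aws s3api put-public-access-block --bucket my-static-site \
    --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false

# ...and attach a policy that lets anyone read the objects
aws s3api put-bucket-policy --bucket my-static-site --policy '{
   "Version": "2012-10-17",
   "Statement": [{
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-static-site/*"
   }]
}'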

The first thing that jumps out here is a “Not Secure” warning. Indeed, S3 exposes the website via good old HTTP. And if we wanted secure transport – we’d have to opt for CloudFront.

Storage account

With Azure, the very first thing we get to deal with is the naming convention: Storage Account names can only contain lowercase letters and numbers. So right out of the gate, we have to strip all dashes from our name of choice. Going through the wizard, it’s relatively easy to miss the “Networking” section, but it’s exactly here that we get to choose whether our account will have public access or not. And by default it will! So if you don’t want that – you’ll have to tweak the security. Anyway, moving on to the “Advanced” tab, we’re presented with another key difference: Storage Account endpoints use HTTPS by default.

Having created the account we should first enable the “Static Website hosting” option.

The reason for this order is that Azure creates a special container called $web that we’ll then upload our files to. Once that’s done – we’re pretty much finished:
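
For completeness, the same flow can be driven from the AZ CLI as well. A sketch with placeholder names:

# turn on static website hosting (this is what creates the $web container)
az storage blob service-properties update \
   --account-name mystaticsite \
   --static-website \
   --index-document index.html

# upload the content into $web
az storage blob upload-batch \
   --account-name mystaticsite \
   --source . \
   --destination '$web'

# grab the public HTTPS endpoint
az storage account show \
   --name mystaticsite \
   --resource-group my-rg \
   --query "primaryEndpoints.web" --output tsv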

CORS

Both AWS and Azure allow configuring CORS for static websites. AWS is pretty upfront about it in their docs. Azure, however, makes a specific callout that CORS is not supported on static websites. My testing indicates that it nevertheless seems to work as intended. Consider the following scenario: we’ll use the same website but add a web font and load it from across the other cloud, so the Azure copy of the site will attempt to source the font from AWS and vice versa:

<link rel="stylesheet" href="css/stylesheet-aws.css" type="text/css" charset="utf-8" />
<style type="text/css">
    body {
        font-family: "potta_oneregular"
    }
</style>

and the stylesheet would look like so (note the URL pointing to Azure):

@font-face {
     font-family: 'potta_oneregular';
     src: url('https://cloudfaceoffworkload.z26.web.core.windows.net/css/pottaone-regular-webfont.woff2') format('woff2'),
          url('https://cloudfaceoffworkload.z26.web.core.windows.net/css/pottaone-regular-webfont.woff') format('woff');
     font-weight: normal;
     font-style: normal;
 }

Initially, this setup indeed results in a CORS error:

But the fix is very easy – just set up CORS in Azure:
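
The same rule can be added with the AZ CLI instead of the Portal. A sketch, with the AWS website endpoint as the allowed origin (account name and origin are placeholders):

az storage cors add \
   --account-name mystaticsite \
   --services b \
   --methods GET OPTIONS \
   --origins "http://my-static-site.s3-website-ap-southeast-2.amazonaws.com" \
   --allowed-headers "*" \
   --exposed-headers "*" \
   --max-age 3600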

And we get fully working cross-origin resource consumption:

Doing it the other way around is a bit more complicated. Even though AWS allows us to configure CORS policies, the lack of an HTTPS endpoint means browsers will likely refuse to load the fonts, and we’d be forced onto CloudFront (which has its own benefits, but that would be a completely different story).

Summary

| Aspect | AWS S3 | Azure Storage Account |
| Public by default | no | yes |
| HTTPS with own domain name | no | yes |
| Allows for human-readable names | yes | not really |
| CORS support | configurable | not supported by static websites (despite what the docs claim) |