So now that Azure AD authentication with Storage is in Public Preview, let's explore it a little! Note that it is limited to Blobs and Queues at the moment.

Do remember this is a preview, and heed the warning in the documentation:

This preview is intended for non-production use only. Production service-level agreements (SLAs) will not be available until Azure AD integration for Azure Storage is declared generally available.

I've built a small sample app that you can see on GitHub.

Why this is awesome

Because until now, the main authentication methods in Storage have been:

  1. Access keys
  2. SAS tokens

Access keys have one main problem: they grant effectively admin access to the entire Storage account. And you have basically no visibility into what is using the Storage account with those keys.

If you need to give someone constrained access, you need to use SAS tokens. The problems with SAS tokens:

  1. You need an access key to generate one
  2. Cannot be revoked without revoking the access key used to create it
  3. Okay, if you use Stored Access Policies to keep the access duration outside the token, you can revoke it quite easily
    • But you can only have 5 policies per blob container/file share/queue/table

So neither is a really good solution if you want to constrain access.

But now, we can use Azure AD access tokens to access Storage with full RBAC support. This means we can say that this Web App's service identity has the Storage Blob Data Reader role on the images blob container. It allows the app running in the Web App to read files from the blob container without any keys or other secrets stored in the app!

To revoke access, remove the role. This removes access almost immediately. You should probably now understand why I think this is awesome :)

Accessing Storage from local development environment

Three things that you need to do to access Storage from your local dev environment:

  1. Install the Microsoft.Azure.Services.AppAuthentication library in your app
  2. Add your user to the Data Reader / Data Contributor role on the appropriate resource (e.g. Storage Blob Data Contributor on the Storage account)
    • It is possible to assign the role at subscription, resource group, or resource level. The principle of least privilege is a good guideline here: grant access only to the data in the storage accounts you actually need.
  3. Login to the Azure CLI as the user, and make sure to select the right subscription
    • Visual Studio 2017 users can alternatively go to Tools -> Options -> Azure Service Authentication and authenticate there

Note that it is not enough that your user is an Owner/Contributor on the subscription/resource group/Storage account. The user must be assigned a Data Reader or Data Contributor role to get access to the data using Azure AD authentication.
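To make step 2 concrete, here is a sketch of assigning the role from the Azure CLI. The account name, resource group, subscription ID, and user here are all placeholders, substitute your own:

```shell
# Assign the Storage Blob Data Contributor role to a user,
# scoped to a single Storage account (all names are placeholders)
az role assignment create \
  --role "Storage Blob Data Contributor" \
  --assignee "user@example.com" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/my-group/providers/Microsoft.Storage/storageAccounts/myaccount"
```

Scoping the assignment to a single Storage account rather than the whole subscription follows the principle of least privilege mentioned above.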

Acquiring an access token is then quite easy:

private async Task&lt;string&gt; GetAccessTokenAsync()
{
    var tokenProvider = new AzureServiceTokenProvider();
    // "https://storage.azure.com/" is the Azure AD resource identifier for Azure Storage
    return await tokenProvider.GetAccessTokenAsync("https://storage.azure.com/");
}

Next let's see how we can get a blob from Storage by using the Storage SDK:

async Task&lt;Stream&gt; GetBlobWithSdk(string accessToken)
{
    var tokenCredential = new TokenCredential(accessToken);
    var storageCredentials = new StorageCredentials(tokenCredential);
    // Define the blob to read
    var blob = new CloudBlockBlob(
        new Uri($"https://{StorageAccountName}.blob.core.windows.net/{ContainerName}/{FileName}"),
        storageCredentials);
    // Open a data stream to the blob
    return await blob.OpenReadAsync();
}

You can then use the standard Stream APIs to read the file.
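For example, a minimal sketch that reads the whole blob as text, using the stream returned by the GetBlobWithSdk method above:

```
// Read the blob contents into a string via the stream from GetBlobWithSdk above
using (var stream = await GetBlobWithSdk(accessToken))
using (var reader = new StreamReader(stream))
{
    var contents = await reader.ReadToEndAsync();
    Console.WriteLine(contents);
}
```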

So as a summary:

  1. Acquire an access token for the resource with the AppAuthentication library
  2. Attach it to the Storage request
    • The SDK does use a few abstractions to accomplish this, but it really is just adding an Authorization: Bearer ... header to all the calls

Using Azure AD Managed Service Identity

To use Managed Service Identity in the app, the only things we need to do are:

  1. Enable MSI on the service (e.g. App Service)
  2. Assign the generated service principal to a Data Contributor / Data Reader role (e.g. Storage Blob Data Reader)

That's it! The same code works under MSI as well :)
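Assuming the Azure CLI again (app, resource group, account, and subscription names are placeholders), those two steps can be sketched as:

```shell
# 1. Enable the system-assigned identity on the Web App (names are placeholders)
az webapp identity assign --name my-app --resource-group my-group

# 2. Grant the generated service principal read access to blob data
az role assignment create \
  --role "Storage Blob Data Reader" \
  --assignee "<principal-id-printed-by-the-previous-command>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/my-group/providers/Microsoft.Storage/storageAccounts/myaccount"
```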

Inspecting the raw HTTP requests

So far we used the Storage SDK to make the call. Let's see how this works under the hood.

It's actually really simple. When reading a blob, the SDK made this HEAD request first:

User-Agent: Azure-Storage/9.2.0
x-ms-version: 2017-11-09
x-ms-client-request-id: 0156e792-2a6d-4096-84ad-852aaf04b038
x-ms-date: Thu, 24 May 2018 11:36:24 GMT
Authorization: Bearer eyJ0eXAi...

I'm assuming it does this so it knows how large the blob is, so it can make multiple requests for larger blobs. In this case my blob is very small (19 bytes), so it made this GET next:

User-Agent: Azure-Storage/9.2.0
x-ms-version: 2017-11-09
If-Match: "0x8D5C16965957B67"
x-ms-range: bytes=0-18
x-ms-client-request-id: 82a635d5-377b-40b2-87ad-fc3e46bb150d
x-ms-date: Thu, 24 May 2018 11:36:24 GMT
Authorization: Bearer eyJ0eXAi...

I whipped up a quick LinqPad script for testing a raw HTTP request using Azure AD authentication, and it's pretty easy:

using (var client = new HttpClient())
{
    var req = new HttpRequestMessage(
        HttpMethod.Get,
        $"https://{StorageAccountName}.blob.core.windows.net/{ContainerName}/{FileName}");
    req.Headers.Add("x-ms-version", "2017-11-09");
    req.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

    var res = await client.SendAsync(req);

    return await res.Content.ReadAsStreamAsync();
}

So from this we can see the minimum requirements for the request to succeed:

  1. Use x-ms-version to specify an API version of 2017-11-09 (I suppose a newer version will work as well)
  2. Add the access token as the Authorization header, same as any time you have used an Azure AD access token

While this is easy, it is a good idea to use the SDK as it offers various optimizations.


Summary

Needless to say, we will be implementing this in all of our apps as soon as this comes to GA. Azure AD authentication improves so many things:

  • Access can be constrained down to a blob container per app
  • You have full visibility into what has access to a given resource
  • Access can be revoked immediately per app (without affecting others like with access keys)
  • By using AAD MSI, your app stores zero secrets in its configuration

I can't wait for other services like Search and Cosmos DB to follow the example of Storage, Azure SQL, and Azure Key Vault. I dream of the day we can use Managed Service Identity as the central point of access control for all services that an app uses: no more fiddling with shared keys, and complete visibility into what an app can access.

Thank you so much for reading, and I hope to see you again soon! I have a larger article I've been writing for a while and hope to publish part 1 next week :)