A View Inside My Head

Jason's Random Thoughts of Interest


CodeRush: The Refactor Key

Do a search, and you will find that there are a few IDE Productivity Tools available for Visual Studio. As far as feature sets go, you'll also find that there is quite a bit of overlap in what these tools offer.  So, deciding on one is largely a matter of personal preference. 

However, by not using an IDE Productivity Tool at all, you are doing yourself a great disservice.  No, really!  You are wasting a lot of time by manually doing things instead of allowing software to do them for you.  Since time is money, and these tools are not very expensive, you will quickly make up for the cost of the program in time savings alone (I think within the first week, if not the first day).

The one tool that I stand behind is CodeRush by DevExpress. Some of the finest people in the industry work for DevExpress, and over the years, some of these folks have become quite good friends of mine.  The company has an outstanding commitment to supporting the developer community year after year, and you will surely find them sponsoring a developer event near you.  Aside from the goodwill, their product suite is outstanding, so I'm proud to continue to use and recommend their entire product line whenever I can.

Sidebar: Personally, I think that DX has a bit of a branding problem with their IDE Productivity Tools, because CodeRush includes another product called Refactor! Pro.  I would rather see them not offer Refactor! Pro by itself, and only sell CodeRush with Refactor! Pro, since in my mind they are together one product (and I would never recommend Refactor! to someone without CodeRush).  As such, I often refer to features from Refactor! as belonging to CodeRush.  But, I digress.

Learning any new tool can be a bit daunting at first, and CodeRush is certainly no exception.  DevExpress has produced a collection of short tutorial videos to help new users come up to speed.  It should be noted, though, that some of these videos are likely useful to experienced users as well: with so much functionality packed into the product, I find that it's common for someone to master a number of features, be familiar with just as many if not more features, and yet be totally unaware of half of the things that the tool has to offer.

Since you have to start somewhere, the first feature that I recommend that someone master is mapping and using their Refactor key.  This is a shortcut key that will open a CodeRush "context menu" to show available refactorings for the code where the cursor is currently located. 

Personally, this is the feature that I probably use the most, so finding an appropriate key binding that is easy to hit yet does not interfere with my ability to type code or use Visual Studio key bindings was important.  The default is Ctrl-backtick ( ` ), but I landed on just the backtick key to save me from needing to hold Ctrl.  I've heard of a few people who have picked F1 as a way of defeating the annoying help system in VS2008 and earlier.  It's all a matter of personal preference, so choose something that works for you.

To set the Refactor key shortcut, open the Options from the DevExpress menu in Visual Studio.  In the tree on the left, open IDE and click on Shortcuts.  After admiring the entire collection of shortcuts that are available, locate the "Refactor!" section in the list.  One of the defined shortcuts will be mapped to the "Refactor" command; click on it, and you can set your own key binding on the right.

Refactor key binding is set in the Visual Studio Devexpress Menu, Options, IDE - Shortcuts - Refactor! group

Trivia: Notice the Alternate Bindings that also appear in the same section. Number Pad 0 is the one preferred by Mark Miller.  Dustin Campbell has two bindings mapped: the default Ctrl-`, as well as Number Pad INS.

Now that you have a Refactor key, try moving your cursor to different places in your code and pressing it. 

Refactor key pressed on a variable

You'll still have to learn about different refactorings that are available in the tool, but hopefully the Refactor key will become your preferred portal for accessing those refactorings (instead of using smart tags or the mouse to right-click on code).

Happy Coding!

Including SQL Server Spatial Types in a Castle ActiveRecord Entity

This is just a quick post to capture the outcome of about an hour’s worth of trial and error for me.

Here’s a SQL Server table that contained data that my application needed:


Of particular interest is the [Geocode] field, which holds a Geography instance (POINT) representing the location of a Facility.  The [Geocode_Lat] and [Geocode_Lon] fields were from legacy requirements, and would be redundant when all is said and done.

Trying to include a Geography field in my ActiveRecord entity proved to be a challenge.  NHibernate has a Spatial extension, but setting it up and trying to use it proved to be somewhat of a nightmare (read: I couldn’t get it to work).  In reality, I wasn’t looking to use ActiveRecord to do spatial querying, but rather, was just looking to have the data passed between the database and my application.

SQL Server 2008 saves Geography and Geometry columns as binary data.  This isn’t WKB, but rather, an internal serialization of the data.


To have the data pass through ActiveRecord, we need to configure the entity class to treat this data as a byte[] array.  The application would then need to deserialize the bytes into a SqlGeography instance.  Likewise, when updating the entity, the application would need to serialize the SqlGeography instance to an array of bytes.

My Entity class defines the [Geocode] property as:

[Property(SqlType = "geography", ColumnType = "BinaryBlob")]
public byte[] Geocode { get; set; }

Then, I created a couple of extension methods to handle the serialization/deserialization:

public static class ExtensionMethods
{
    public static SqlGeography AsGeography(this byte[] bytes)
    {
        var geo = new SqlGeography();
        using (var stream = new System.IO.MemoryStream(bytes))
        using (var rdr = new System.IO.BinaryReader(stream))
        {
            geo.Read(rdr);
        }
        return geo;
    }

    public static byte[] AsByteArray(this SqlGeography geography)
    {
        if (geography == null)
            return null;

        using (var stream = new System.IO.MemoryStream())
        using (var writer = new System.IO.BinaryWriter(stream))
        {
            geography.Write(writer);
            return stream.ToArray();
        }
    }
}

And, finally, code that uses the property.  Here is a function that updates the geocode for a particular entity (notice the AsByteArray() extension method):

public static void SetGeocode(int FacilityID, double? Geocode_Lat, double? Geocode_Lon)
{
    Facility f = Facility.TryFind(FacilityID);

    if (f != null)
    {
        f.Geocode_Lat = Geocode_Lat;
        f.Geocode_Lon = Geocode_Lon;
        f.Geocode = SqlGeography.Point(Geocode_Lat.GetValueOrDefault(),
                                       Geocode_Lon.GetValueOrDefault(), 4326).AsByteArray();
    }
}


Disclaimer: The method described here satisfied my immediate needs, so I didn’t spend additional time exploring other options.  The code is provided AS IS, with no expressed warranty or guarantee.  There probably are far better ways to include spatial data in ActiveRecord entities, and as such, I’d be interested in hearing about your own experiences.

Windows Azure platform AppFabric Access Control: Using the Token

In previous articles (Part 1, Part 2, Part 3), ACS was configured to process token requests from different Issuers, and a Customer application obtained a SWT token from ACS using an intermediate service (that represented its particular Issuer). 

Now that the application has a token, ACS is not needed again until that token expires.  This article will show how to use that token in order for the Customer application to call the Bartender web service.

Note: OAuth WRAP defines terminology for parts of the system as follows:

  • The Customer application is known as the Client.
  • The Bouncer (ACS) is known as the Authorization Server.
  • The Bartender web service is known as the Protected Resource.

Passing a Token from Client to Protected Resource

We had to conform to OAuth WRAP specifications while acquiring a token from ACS, but there’s no actual requirement for how a Client must pass that token to the Protected Resource.  An Architect or Developer could come up with any contract that they would like to use.  However, a best practice would be to continue using the OAuth WRAP specification as guidance.

WRAP defines three different ways that a Protected Resource can accept a token:

  • In the HTTP header
  • In the Query String
  • In the Form contents

OAuth WRAP suggests that a Protected Resource at least support the HTTP header method, but makes no mandates.  If a Protected Resource is able to implement all three, then it would be in the position to support the widest variety of Clients.  However, if the Protected Resource is unable to use HTTP headers but could use a value passed in the Query String, then there would be nothing in the WRAP specification to prevent only that implementation.

Regardless of the method used, an unauthorized request (perhaps due to a bad or missing token) will result in an HTTP status of 401 Unauthorized, and a header in the response containing: WWW-Authenticate: WRAP

Tokens passed in the HTTP header are expected to use the “Authorization” header, with this header’s value containing the text: WRAP access_token="tokenstring".  Note that tokenstring is a placeholder for the actual SWT token obtained from ACS and does not need to be URL Encoded. 

Example (using a System.Net.WebClient class):

string headerValue = string.Format("WRAP access_token=\"{0}\"", HttpUtility.UrlDecode(token)); 
var client = new WebClient();            
client.Headers.Add("Authorization", headerValue);


Tokens passed in either the Query String or in Form Contents are expected to use the name/value pair of: wrap_access_token=tokenstring.  Again, tokenstring is a placeholder for the actual SWT token obtained from ACS, and should be URL Encoded as needed in order to properly construct the HTTP Request (this may be done automatically by the framework that is used). 

Example (using a System.Net.WebClient class):

var values = new NameValueCollection();
values.Add("wrap_access_token", token);
var client = new WebClient();
client.UploadValues("BartenderWebservice", "POST", values);

Accepting and Validating a Token

SWT tokens are opaque to the Client.  That is, since the Client cannot modify the token, it shouldn’t really care what’s in it or how it’s structured.  The Protected Resource, however, must deconstruct, validate, and then use the claims that are contained within the tokens.  If anything is invalid or malformed, then the Protected Resource should deny access by returning a 401 Unauthorized status in the response.

The Windows Azure platform AppFabric SDK V1.0 contains plenty of boilerplate code in the samples demonstrating how to parse and validate a SWT token (so I won’t be repeating that code here).  In essence, all of the following needs to pass validation before the Client is authorized to access the Protected Resource:

  1. HMACSHA256 signature is valid
  2. Token has not yet expired
  3. The Issuer is trusted (i.e., contains the URL of the ACS server)
  4. The Audience is trusted (i.e., contains the URL of the Protected Resource)
  5. Additional required claims are present

For the nightclub example, the Customer’s Birthdate will be presented as a claim (#5 in the list above).  So, in addition to checking the technical aspects of the token, a business rule must also ensure that the Customer is of legal age to order a drink from the Bartender (in the United States, this means at least 21 years old).
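To make the validation checklist above concrete, here is a minimal sketch of a SWT validator.  It assumes the token arrives as the raw URL-encoded string, that the signing key and the trusted Issuer/Audience values are configured out-of-band, and it skips check #5 (required business claims such as Birthdate), which you would layer on top.  The class and parameter names are my own, not from the SDK:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;
using System.Web;

public static class SwtValidator
{
    public static bool IsValid(string token, byte[] signingKey,
                               string trustedIssuer, string trustedAudience)
    {
        // 1. HMACSHA256 signature: everything before "&HMACSHA256=" is the
        //    signed content; recompute the signature and compare.
        const string sigMarker = "&HMACSHA256=";
        int sigIndex = token.LastIndexOf(sigMarker, StringComparison.Ordinal);
        if (sigIndex < 0) return false;

        string unsignedToken = token.Substring(0, sigIndex);
        string presentedSig = HttpUtility.UrlDecode(token.Substring(sigIndex + sigMarker.Length));

        using (var hmac = new HMACSHA256(signingKey))
        {
            string expectedSig = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.ASCII.GetBytes(unsignedToken)));
            if (expectedSig != presentedSig) return false;
        }

        // ParseQueryString splits the pairs and URL-decodes each value.
        var claims = HttpUtility.ParseQueryString(unsignedToken);

        // 2. Token has not yet expired (ExpiresOn is seconds since 1970-01-01 UTC).
        if (claims["ExpiresOn"] == null) return false;
        long expiresOn = long.Parse(claims["ExpiresOn"]);
        long now = (long)(DateTime.UtcNow - new DateTime(1970, 1, 1)).TotalSeconds;
        if (now >= expiresOn) return false;

        // 3 & 4. Issuer and Audience must match the trusted values.
        return claims["Issuer"] == trustedIssuer
            && claims["Audience"] == trustedAudience;
    }
}
```

Because the signature covers the entire claim set, changing any single character of the token (including the claims themselves) causes the recomputed signature to differ, which is why check #1 comes first.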


I wrote this series of blog posts because I myself was having a hard time grasping the Hows and Whys of using Windows Azure platform AppFabric Access Control to secure a Protected Resource.  What I discovered was that by using claims based identity in conjunction with a federated identity model, you can incorporate a very scalable authorization scheme into your system without requiring complex code and/or special prerequisite software.  Technically, an organization could build this same functionality into a system without using Azure AppFabric.  However, Windows AppFabric Access Control offers the benefit of already being internet-facing and hosted in a high availability environment, and already includes a rich implementation of OAuth WRAP that supports many different kinds of Issuers.

Further Reading:

Windows Azure platform AppFabric Access Control: Obtaining Tokens

In this, the third part of a series of articles examining Windows Azure platform AppFabric Access Control (Part 1, Part 2), I will continue developing the [somewhat contrived] nightclub web service scenario and demonstrate how to obtain a token from ACS that can be used to call into a Bartender web service.

A Quick Review

Our distributed nightclub system has four major components: A Customer application coupled with some external identity management system, a Bartender web service, and ACS, which acts as our nightclub’s Bouncer.

A Customer application ultimately represents a user that wishes to order a drink from the Bartender web service.  The Bartender web service must not serve drinks to underage patrons, so it needs to know the user’s age (this is known as a “claim”).  The nightclub has no interest in maintaining a comprehensive list of customers and their ages, so instead, a Bouncer ensures that the user’s claimed age comes from a trusted source (this is known as an “issuer”).  If the claim’s source can be verified, the Bouncer will give the customer a token containing its own set of claims that the Bartender web service will recognize.  In the end, the Bartender doesn’t have to concern itself with all of the possible issuers – it only needs to recognize a single token generated by the Bouncer (ACS).



Anatomy of a Token Request

ACS uses a subset of the OAuth Web Resource Authorization Protocol (WRAP) to request tokens.  The tokens themselves use the Simple Web Token (SWT) format.  The latest specification documents for both WRAP and SWT can be found in the Files section of the “OAuth WRAP WG” Google Group: http://groups.google.com/group/oauth-wrap-wg/files

The power of ACS derives from the simplicity of WRAP/SWT.  All of the token requests are performed as regular HTTP POSTs, and the response is a URL Encoded string of name/value pairs (one of those being the token itself).  Even if your Issuer (Identity Management System) does not natively know how to request a token from ACS (which is very likely), it is easy to either build this capability into the application itself or write a small service to perform this work. 

At the HTTP protocol level, a token request may resemble:


POST https://jasondemo.accesscontrol.windows.net/WRAPv0.9/ HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: jasondemo.accesscontrol.windows.net
Content-Length: 155
Expect: 100-continue




HTTP/1.1 200 OK
Content-Length: 315
Content-Type: application/x-www-form-urlencoded
Server: Microsoft-HTTPAPI/2.0
x-ms-request-id: ca165b72-5d9f-4196-9272-1b80c8c3e02a
Date: Mon, 22 Mar 2010 13:26:44 GMT



The response displayed here contains two name/value pairs: wrap_access_token and wrap_access_token_expires_in.  If the token request was invalid, then an HTTP 401 status would have been returned instead of a 200.

Notice that “wrap_access_token_expires_in” contains the value 43200, which is the number of seconds that we defined in the Token Lifetime field of the “BouncerPolicy” token policy (in Part 2).  This value is provided as a convenience to indicate how long the token will be valid.  Taking a cue from systems that use lease lifetimes, like DHCP, it might be a good practice to acquire another token at 50% of the Token Lifetime (21600 seconds in this case, or 6 hours). 
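Since the response body is just a form-encoded string, extracting the two values and computing a renewal point is a few lines of code.  This is a sketch of that idea; the class name is my own, and note that HttpUtility.ParseQueryString URL-decodes the values as it splits them:

```csharp
using System;
using System.Collections.Specialized;
using System.Web;

public static class AcsResponse
{
    // Extract the SWT from the ACS response body.
    public static string GetToken(string responseBody)
    {
        // ParseQueryString both splits the name/value pairs and URL-decodes the values.
        NameValueCollection pairs = HttpUtility.ParseQueryString(responseBody);
        return pairs["wrap_access_token"];
    }

    // Following the DHCP-style convention described above, schedule renewal
    // at 50% of the advertised token lifetime.
    public static DateTime GetRenewalTime(string responseBody, DateTime acquiredAtUtc)
    {
        NameValueCollection pairs = HttpUtility.ParseQueryString(responseBody);
        int lifetimeSeconds = int.Parse(pairs["wrap_access_token_expires_in"]);
        return acquiredAtUtc.AddSeconds(lifetimeSeconds / 2);
    }
}
```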

As the name suggests, “wrap_access_token” contains the SWT token itself.  This is a URL Encoded set of name/value pairs that also includes a HMACSHA256 signature to prevent the token from being tampered with.  This token does not need to be parsed by the client application; it can be treated as a magic string.  However, there is nothing special about its contents:

Birthdate=1979-05-25T00%3a00%3a00
&Issuer=https%3a%2f%2fjasondemo.accesscontrol.windows.net%2f
&Audience=http%3a%2f%2fmyserver.domain%2fBartender
&ExpiresOn=1269309326
&HMACSHA256=NLmcGYITc8aWvhNp6Ge54TGJxhJCVX4Q67DDLBKoIy4%3d


“Birthdate” is the result of the Passthrough Claim Mapping (Part 2) that took the inbound claim called “DOB” and mapped it to this outbound claim called “Birthdate”.  ACS has no idea that this is a date – it just knows that it’s a string of characters that will be understood by the destination.  The other claims that are present in this SWT token (Issuer, Audience, ExpiresOn, and HMACSHA256) are all system claim types that are reserved for use by SWT. 
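Since the token is nothing more than a URL-encoded query string, a Protected Resource can split it into its claims with a single framework call.  A minimal sketch (the class name is my own; the token value is the example shown above):

```csharp
using System.Collections.Specialized;
using System.Web;

public static class SwtParser
{
    // A SWT is a URL-encoded set of name/value pairs, so ParseQueryString can
    // split it into claims, URL-decoding each value along the way.
    public static NameValueCollection GetClaims(string swt)
    {
        return HttpUtility.ParseQueryString(swt);
    }
}
```

After parsing, claims["Birthdate"] yields the decoded value "1979-05-25T00:00:00", and the system claims (Issuer, Audience, ExpiresOn, HMACSHA256) are available under their own names.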

Note the value of “Issuer” in this token.  In our Request, the stated Issuer was “Ohio”, which was a trusted Issuer that we defined when configuring the Service Namespace using AcmBrowser.  The Issuer of this SWT token is now the Service Namespace.  Our Bartender web service has no direct trust with the Ohio Issuer, but it will trust tokens signed by the Service Namespace (which trusts the Ohio Issuer).  Just as ACS can trust multiple Issuers, it’s also possible that the Bartender web service could trust multiple ACS Issuers (maintaining a symmetric key for each one to verify the HMACSHA256 signature).  There is a lot of flexibility in how ACS can be incorporated into your own system’s architecture.


Because the “Ohio” issuer was created in ACS using a Symmetric 256-bit key, we have two choices for how to use this key in order to request a token from ACS.  We could send the key itself in plaintext, sort of like a password (in fact, this uses a WRAP profile that is intended for username/password authentication).  Alternatively, we could create a SWT of our own, using the key to sign it, and then send that token to ACS as part of the token request.

A simple WRAP request that provides the Issuer’s name and password (symmetric key) in plaintext would contain the following name/value pairs:

  • wrap_name (Issuer Name, as defined in the Issuer definition in ACS)
  • wrap_password (Current Key, as defined in the Issuer definition in ACS)
  • wrap_scope (Applies To, as defined in the Scope definition)
  • additional claims (will be processed by the defined Scope Rules for claim mapping)

private static string RequestTokenFromACS_usingPassword(DateTime dob)
{
    var values = new NameValueCollection();
    values.Add("wrap_name", "Ohio");
    values.Add("wrap_password", "LVMjImkJjIBDrJHbTzyrioeajIFpV27tW2uTuCCOYFY=");
    values.Add("wrap_scope", "http://myserver.domain/Bartender");
    values.Add("DOB", dob.ToString("s"));

    var client = new WebClient { BaseAddress = "https://jasondemo.accesscontrol.windows.net" };
    return Encoding.UTF8.GetString(client.UploadValues("WRAPv0.9", "POST", values));
}

Using a SWT within a WRAP request is just as simple, but the difficulty is in creating the SWT itself.  The WRAP request would contain the following name/value pairs:

  • wrap_assertion_format (SWT in this case)
  • wrap_assertion (the SWT token that we generate and sign using the Current Key)
  • wrap_scope (Applies To, as defined in the Scope definition)

The SWT token itself would contain the following name/value pairs:

  • Issuer (Issuer Name, as defined in the Issuer definition in ACS)
  • Audience (STS Endpoint, as listed on the Service Namespace overview page)
  • ExpiresOn (integer containing the number of seconds after the Epoch date of January 1, 1970 00:00 UTC that the token will expire)
  • additional claims (will be processed by the defined Scope Rules for claim mapping)
  • HMACSHA256 (URL Encoded Base64 signature of the unsigned token)

private static string RequestTokenFromACS_usingSWT(DateTime dob)
{
    string requestToken = GetRequestToken(dob);

    var values = new NameValueCollection();
    values.Add("wrap_assertion_format", "SWT");
    values.Add("wrap_assertion", requestToken);
    values.Add("wrap_scope", "http://myserver.domain/Bartender");

    var client = new WebClient { BaseAddress = "https://jasondemo.accesscontrol.windows.net" };
    return Encoding.UTF8.GetString(client.UploadValues("WRAPv0.9", "POST", values));
}

private static string GetRequestToken(DateTime dob)
{
    StringBuilder sb = new StringBuilder();
    sb.AppendFormat("DOB={0}&", dob.ToString("s"));
    sb.Append("Issuer=Ohio&");
    sb.Append("Audience=https://jasondemo.accesscontrol.windows.net/WRAPv0.9&");
    sb.AppendFormat("ExpiresOn={0:0}", (DateTime.UtcNow.AddMinutes(10) - new DateTime(1970, 1, 1)).TotalSeconds);

    return AddSignature(sb.ToString());
}

private static string AddSignature(string unsignedToken)
{
    HMACSHA256 hmac = new HMACSHA256(Convert.FromBase64String("LVMjImkJjIBDrJHbTzyrioeajIFpV27tW2uTuCCOYFY="));
    var sig = hmac.ComputeHash(Encoding.ASCII.GetBytes(unsignedToken));

    string signedToken = String.Format("{0}&HMACSHA256={1}",
        unsignedToken,
        HttpUtility.UrlEncode(Convert.ToBase64String(sig)));

    return signedToken;
}

Where to Execute this Code

The code to request a token can easily be implemented within an application that uses that token to call into a protected resource.  In the nightclub example, this code could be incorporated right into the Customer application.  However, this would mean that the claim (Birthdate) would originate from the application itself.  It also means that the application would need to have access to the Issuer key, making it possible for a nefarious user to obtain the key and bypass the application to directly access the protected resource.  These may not be concerns for your specific system, though.

A better solution for the nightclub example, which tries to mimic the real-life State ID system, would be to create a service for each Issuer that their Customer applications call into.  This service would then be able to obtain claim information from a source of authority (such as a LDAP directory), obtain a token from ACS, and then return that token to the Customer application.  The Customer application would never communicate with ACS directly, so there would be no reason for the application to have the Issuer key (making it harder for that nefarious user to discover the key).

From Here

This article demonstrated how to obtain a token from ACS.  In the next article, I will show how to use this token in order to access a protected resource (such as our Bartender web service) and how to validate a token when a protected resource receives one. 

(Continued in Part 4)

Further Reading:

Windows Azure platform AppFabric Access Control: Service Namespace

In the previous article, I provided an example scenario of how claims-based access control might map to the real-world event of a customer entering a bar.  This article will build upon that concept, eventually enabling a Customer application to obtain a token from our Bouncer (ACS), and then use that token to order a drink from the Bartender web service.

Now let’s walk through how to create a Service Namespace in Windows Azure, which is needed to permit your application and/or web service to utilize AppFabric Access Control.

Service Namespace

At this time, Windows Azure platform AppFabric provides two services for developers to use: Service Bus and Access Control.  Within a Project (i.e., the unit of billing for an Azure account), a “Service Namespace” is used to organize a set of these AppFabric service endpoints.  Services within a Service Namespace share the same DNS naming convention, geography (datacenter where the services are hosted) and owner key (security for managing the services). 

To create a new Service Namespace, start by logging into the Windows Azure portal (http://windows.azure.com).  Navigate to the AppFabric section, choose the Project in which the Service Namespace will be created, and then click the “Add Service Namespace” link.  The next screen will prompt for a Name, a Region, and the size of the ServiceBus Connection Packs for the new Service Namespace: 


  • The name must be unique among all Service Namespaces in existence because this name will be incorporated into DNS aliases.  The Windows Azure portal provides a “Validate Name” function to ensure that the name entered meets the required syntax criteria and that it has not already been used.
  • Region maps to the Microsoft data center where the services will run.  Ideally, this should be as close to the users as possible.
  • “ServiceBus Connection Packs” is to support the Windows Azure platform AppFabric Service Bus.  Since this article is only examining Access Control, the number of ServiceBus connections can be set to zero.

Once you click the “Create” button, Windows Azure will generate the services and DNS entries for your new Service Namespace.  However, looking at the information page on the portal for the Service Namespace reveals nothing that is noticeably useful for configuring these services:


Configuring Access Control

On the portal, the only actions that you can perform are “Generate a New [Management] Key” and “Delete the Service Namespace”.  How then do we configure Access Control?

Access Control, like most things in Windows Azure, is configured via a set of web services.  While robust tooling is something that is currently being developed (or something that you could develop on your own), there are a few demo tools included as source code in the Windows Azure platform AppFabric SDK V1.0, including a Windows (GUI) application called Access Control Service Management Browser (AcmBrowser).

By default, after installing the SDK, you can find the Visual Studio solution for the AcmBrowser application in:

C:\Program Files\Windows Azure platform AppFabric SDK\V1.0\Samples\AccessControl\ExploringFeatures\Management\AcmBrowser

To use this application, you will need the Service Namespace’s name (the one that you entered during the creation process) and the Management Key for that namespace (this is the “Current Management Key” string on the screenshot above).  After providing these values to AcmBrowser, you can then click the “Load from the Cloud” button to see what is already configured.  As you would expect, a new namespace has nothing pre-configured: the “Resources” tree on the left shows the Issuers, Scopes, and Token Policies nodes, but these nodes contain no children.



Token Policy

The first thing that we need to create is a Token Policy.  This item defines a length of time, in seconds, that a token will be valid until it expires, as well as a symmetric key that is used to sign tokens.  This symmetric key will be shared with our Bartender web service, but not the Customer application (else the customer would be able to generate tokens themselves, bypassing ACS and being capable of spoofing any claim).

To create a Token Policy using AcmBrowser, right click on the “Token Policy” node and choose “Create…”:


In this example, tokens created using the “BouncerPolicy” will expire 12 hours (60 * 60 * 12 = 43200 seconds) after they have been generated, and the Signing Key was created by clicking the “Generate” button.  Should this key ever be compromised, a new key can be generated in the same way.



ACS is not an identity provider.  You don’t maintain users in ACS, but rather, maintain a list of trusted identity providers (Issuers) that can supply claims on behalf of their users.  With this federated model, the claims presented may be named different things, but convey the same information.  For example, a claim containing a Birthdate may be named “DOB” by one Issuer and “Date of Birth” by another Issuer, but both sources supply the same type of information. 

The web service that will eventually consume these claims, however, shouldn’t be concerned with knowing all of the variations.  Instead, ACS provides Claim Mapping capabilities that can rename claims, or express a new claim based on the presence of a known claim in the token request.  Before we can configure the claim mappings, though, we need to define a Scope and Issuers.

Scopes

During the token request, one piece of information that is required is the URL where the token will eventually be used (known as the “Applies To”).  ACS uses this in order to know what set of rules to apply when generating that token. 

A Scope is the mechanism that ties a URL to a Token Policy, and is created within AcmBrowser by right clicking on the “Scopes” node and choosing “Create…”.  In this example, our Scope is defined with the Applies To URL of the Bartender web service and associated to the BouncerPolicy Token Policy.  Note that this “Applies To” URL is actually a substring of the real web service URL; any URL that is provided during the token request that starts with this same “Applies To” string will match to this scope, and it’s possible that an actual URL will match multiple Scopes (allowing for chaining of rules from the longest substring match to the shortest).
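The substring-matching behavior described above amounts to a longest-prefix match over the configured Scope URLs.  A sketch of that idea (the class name and scope URLs here are illustrative, not taken from a real namespace):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ScopeMatcher
{
    // Every configured scope whose "Applies To" URL is a prefix of the
    // requested URL matches; results are ordered longest-first so that rule
    // chaining can run from the most specific scope to the least specific.
    public static List<string> Match(string appliesTo, IEnumerable<string> scopeUrls)
    {
        return scopeUrls
            .Where(s => appliesTo.StartsWith(s, StringComparison.OrdinalIgnoreCase))
            .OrderByDescending(s => s.Length)
            .ToList();
    }
}
```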




Issuers

In a Claims-Based Identity and Access Control system, the destination application or web service does not concern itself with end-user authentication.  Instead, the user authenticates itself with its own identity management system (known as an Issuer), and this system then becomes a trusted source of claims about that user.  This Issuer might be a standalone application, a custom web service running on an intranet server, another AppFabric Service Namespace, or even an Active Directory Federation Services 2.0 system.  As a result, there are different ways to prove that information presented during a token request came from a trusted Issuer.

In the nightclub example, the customer presents a driver’s license that was issued by their state of residence.  Keeping with this concept, let’s define a few Issuers in ACS named after states (Ohio, Michigan, Indiana, etc) by first right clicking on the “Issuers” node in AcmBrowser and selecting “Create…”:


ACS is able to recognize and validate a token request originating from an Issuer in many different ways.  In this example, the “Ohio” Issuer is defined using a Symmetric 256-bit Key (other choices that could have been used include X509 and FedMetadata).  After sharing this secret with the Issuer, the “Issuer Name” and “Current Key” can be provided as claims during a token request, much like a username and password, or the key can be used by the Issuer to sign a token request.  But, more on that in another article.


[Scope] Rules

During the token request, an arbitrary number of claims will be presented to ACS.  The application that will finally consume the token generated by ACS may only require a subset of those claims, if any.  To decide what claims should be included in the output token, we need to create a set of Rules (one per output claim).

To create rules for a Scope, you first expand the Scope’s node in AcmBrowser, right-click on the “Rules” node, and then choose “Create…”:


A Rule can be defined as either PassThrough or Simple.  A PassThrough rule is used to rename a claim from whatever the Issuer called it to whatever the final application will expect it to be called (the value will be preserved).  A Simple rule, on the other hand, will generate a claim (name and value) in the output if a specific claim (name and value) was present in the input.  Note that the input and output claim values in a Simple rule must be explicitly defined – you cannot use pattern matching, etc.

For the example scenario, we’ll assume that the Ohio Issuer uses “DOB” as the name of the “Birthdate” claim that the Bartender web service will be expecting.  A PassThrough rule is used to rename that claim so that the resulting token will include a claim called “Birthdate” instead (the provided value will be preserved).



Wrapping Up

At this point, everything has been defined in memory on your machine, and nothing has actually been saved to the cloud.  To commit your changes, you will need to click on the “Save to Cloud” button in AcmBrowser.  Note that AcmBrowser is not sophisticated enough to merge with an existing set of configuration, so if your Service Namespace is not already empty, then you will need to click the “Clear Service Namespace in Cloud” button prior to saving to the cloud.  You can also load and save local copies of the configuration using the tool, which is useful for backup purposes as well as working on a particular configuration over the course of multiple days without disturbing what is already running in the cloud.

In the next part of this series, I’ll show code for how everything finally comes together for this contrived example.

(Continued in Part 3)

Further Reading: