A View Inside My Head

Jason's Random Thoughts of Interest


Windows Azure platform AppFabric Access Control: Introduction

The Windows Azure platform AppFabric Access Control service was one aspect of the Windows Azure platform that I found a bit challenging to understand, primarily because identity is not a domain that I regularly work with.  This post, the first in a planned series of articles, will explore what the Access Control service is and why it is useful.

 

Three Geeks Walk Into a Bar…

Let’s examine a common real-world scenario: Ordering a drink from a bartender at a nightclub.

Before the nightclub opens, the bouncers inform the bartenders about the token that customers will bear on that night to indicate that they are of legal age.  This could be a stamp on the back of the hand, a colored wristband, etc.

At the door, a customer will present their identification to the bouncer that shows, among other things, their date of birth.  This ID could be a state-issued driver's license, a passport, or even a school identification card.  The point is that the nightclub didn’t issue that ID, but they recognize the authority that did issue it, and will accept the claim (the birth date) that is displayed on that ID.

If the bouncer determines that the ID is authentic and hasn’t been tampered with, then he will give the customer the token of the night (stamped hand or colored wristband), and the customer is free to enter the bar.

Once inside, the customer only needs to show the token to the bartender in order to buy drinks.  They do not need to show their ID.

The next night, the token will change, so a customer cannot use a token obtained the night before.

 

Federated Access Control

Now let’s look at a similar scenario: Calling a web service from an application.  Only, in this case, the web service should not fulfill requests from unauthorized clients.  Furthermore, it’s not the web service’s responsibility to authenticate the client; it is simply expecting the client to bear some verifiable proof that it is already authorized to use the service.

Before calling the Web Service, the Service Consumer (application) must first obtain a token that is issued by an Access Control Service (ACS).  This is done by sending a number of claims to the ACS, with one of these claims being a secret belonging to an Issuer that the application is associated with.

If the Issuer is recognized and trusted by the ACS, then a token will be created.  This token (which is a collection of name/value pairs) will contain claims, an expiration date, and a signature that can be used to ensure that the token was not modified after it was created.
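
To make that request a little more concrete, here is a minimal sketch of obtaining a token, assuming the WRAP-style request profile that ACS supports.  The service namespace, Issuer name, Issuer secret, and scope URL are all placeholders, and the exact endpoint and parameter names may differ for your configuration.

// Minimal sketch (placeholder values throughout) of requesting a token from ACS
// using the WRAP-style "password" profile: post the claims, get back a signed token.
using System.Collections.Specialized;
using System.Net;
using System.Text;
using System.Web;

public static class AcsTokenRequest
{
    public static string GetToken()
    {
        using (var client = new WebClient())
        {
            // The claims sent to ACS: who we are (the Issuer's name and secret)
            // and which service the resulting token should apply to.
            var claims = new NameValueCollection
            {
                { "wrap_name", "myIssuer" },                      // placeholder Issuer name
                { "wrap_password", "issuerSecretKey" },           // placeholder Issuer secret
                { "wrap_scope", "http://example.com/myservice" }  // placeholder applies-to URL
            };

            byte[] responseBytes = client.UploadValues(
                "https://mynamespace.accesscontrol.windows.net/WRAPv0.9/", claims);
            string response = Encoding.UTF8.GetString(responseBytes);

            // The signed token comes back as one of several form-encoded values.
            return HttpUtility.ParseQueryString(response)["wrap_access_token"];
        }
    }
}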

Once the Service Consumer has a valid token, it can then call the Web Service and provide that token in a location where the service expects to find it, such as in a HTTP header.  The Web Service validates that the token is well-formed, has not expired, and has not been modified since it was created.  If anything fails validation, then the processing aborts.  Otherwise, the web service returns data to the Service Consumer, possibly using claims contained in the token as input in the process.
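
Here is a rough sketch of both halves of that exchange: the Service Consumer placing the token in an HTTP header, and the Web Service checking the signature and expiration.  It assumes an SWT-style token (form-encoded name/value pairs signed with a shared HMACSHA256 key); the header format, service URL, and claim names are placeholders rather than anything prescribed.

// Rough sketch of presenting and validating a token.  The header format, URLs,
// and the assumption of an SWT-style (HMACSHA256-signed) token are placeholders.
using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;
using System.Web;

public static class TokenExchange
{
    // Service Consumer: present the token where the service expects to find it.
    public static string CallService(string token)
    {
        using (var client = new WebClient())
        {
            client.Headers.Add("Authorization", "WRAP access_token=\"" + token + "\"");
            return client.DownloadString("http://example.com/myservice/data");
        }
    }

    // Web Service: is the token well-formed, unmodified, and unexpired?
    public static bool IsValidToken(string token, byte[] sharedSigningKey)
    {
        const string marker = "&HMACSHA256=";
        int index = token.LastIndexOf(marker, StringComparison.Ordinal);
        if (index < 0) return false;                                   // not well-formed

        // Recompute the signature over everything preceding the HMACSHA256 pair.
        string signedPortion = token.Substring(0, index);
        string signature = HttpUtility.UrlDecode(token.Substring(index + marker.Length));
        using (var hmac = new HMACSHA256(sharedSigningKey))
        {
            string expected = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(signedPortion)));
            if (expected != signature) return false;                   // modified after creation
        }

        // ExpiresOn is assumed to hold seconds since the Unix epoch.
        string expiresOn = HttpUtility.ParseQueryString(token)["ExpiresOn"];
        if (expiresOn == null) return false;
        DateTime expiry = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)
            .AddSeconds(long.Parse(expiresOn));
        return expiry > DateTime.UtcNow;                               // expired?
    }
}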

This scenario is considered to be Federated because the ACS doesn’t actually maintain a list of usernames and passwords.  Instead, it maintains a list of trusted Issuers, with the expectation that each Issuer is responsible for authenticating its own users.

 

Correlation

In these examples, the Web Service is analogous to the nightclub’s Bartender: it has something to provide to the Service Consumer (Customer), but the Service Consumer must present an appropriate token that is generated by the Access Control Service (Bouncer).

The web service example above is intentionally vague in the part where a token is obtained.  There are a few different ways that an Issuer can be identified in a token request, and while passing the Issuer’s secret in plain text is one of those ways, it certainly shouldn’t be taken lightly.  Whoever has the Issuer’s key can spoof any of the claims, and that might prove to be a challenge for the service.  In the example where we need a Date of Birth claim to be presented, it would probably be a bad idea to allow the customer themselves to say “I’m from Ohio, here’s a blank driver's license that meets all of the standards of a proper ID, and, oh yeah… I’m writing on here that I am 21 years old.” 

Instead, claims should originate from the Issuer in some way that cannot be tampered with by the application (using the assumption that the application itself should not be trusted).  With the plain text method, this might require having a separate service that runs within the Issuer’s domain and is aware of its users and also ACS.  This service would broker the ACS token request for the application, automatically providing any claim data that might be needed (like the Date of Birth) from the Issuer’s own user database.  The application would be provided with the same token, but would never have the Issuer’s secret that is required to obtain the token directly from ACS.
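
As a rough sketch of that broker idea: the service below runs in the Issuer’s domain, holds the Issuer’s secret, and supplies the Date of Birth claim from its own user store rather than trusting the application.  The LookupDateOfBirth helper is hypothetical, the endpoint and field names are placeholders, and whether extra claims flow through in this way depends entirely on how the ACS rules are configured.

// Hypothetical Issuer-side broker: the application calls this instead of ACS,
// so the Issuer's secret never leaves the Issuer's domain.
using System.Collections.Specialized;
using System.Net;
using System.Text;
using System.Web;

public class IssuerTokenBroker
{
    // Called by the application after the Issuer has authenticated the user.
    public string GetTokenFor(string userName)
    {
        using (var client = new WebClient())
        {
            var request = new NameValueCollection
            {
                { "wrap_name", "myIssuer" },                       // held by the broker
                { "wrap_password", "issuerSecretKey" },            // secret never leaves the broker
                { "wrap_scope", "http://example.com/myservice" },  // placeholder applies-to URL
                // Claim sourced from the Issuer's user database, not from the caller.
                { "DateOfBirth", LookupDateOfBirth(userName) }     // illustrative claim field
            };

            byte[] responseBytes = client.UploadValues(
                "https://mynamespace.accesscontrol.windows.net/WRAPv0.9/", request);
            return HttpUtility.ParseQueryString(
                Encoding.UTF8.GetString(responseBytes))["wrap_access_token"];
        }
    }

    private string LookupDateOfBirth(string userName)
    {
        // Hypothetical lookup against the Issuer's own user store.
        return "1980-01-01";
    }
}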

(Continued in Part 2)


Spatial Data and the Entity Framework

July 28, 2011 Note: This is an outdated post.  Recently, the ADO.NET team has released a CTP with Spatial support as a first class citizen of the Entity Framework!!!  See the following posts that I wrote as I explored the new API:

http://www.jasonfollas.com/blog/archive/2011/07/20/entity-framework-spatial-first-look.aspx

http://www.jasonfollas.com/blog/archive/2011/07/21/entity-framework-spatial-dbgeography-members.aspx

http://www.jasonfollas.com/blog/archive/2011/07/27/entity-framework-spatial-a-real-world-example.aspx


The Entity Framework does not support using User Defined Types (at least in the SQLCLR sense of the term) as properties of an entity. Yesterday, Julie Lerman contacted me to see if we could find a workaround to this current limitation, particularly for the SQL Server Spatial Types (Geometry and Geography).

Whenever I hear of someone wanting to use Spatial data in their application, my first thought is always “what do they want to do with the data once they have it?”  This is because most of the time (in my limited observation), an application does not need the spatial data itself, but rather, it just needs to use that data in the predicate of a query (i.e., the query results contain no spatial information).  For example, an application might want all zipcodes that are within 50 km of a point, but the application doesn’t need the actual shapes that define each zip code.
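
To illustrate, a query like the one below uses the spatial column only in the predicate and returns nothing but the zip codes themselves.  This is just a sketch: the table and column names are hypothetical, and it assumes a geography column so that STDistance works in meters.

// Sketch: spatial data used only in the WHERE clause; no shapes come back.
// Table/column names (ZipCodeAreas, ZipCode, Boundary) are hypothetical.
using System;
using System.Data.SqlClient;
using Microsoft.SqlServer.Types;

class NearbyZipCodes
{
    static void Main()
    {
        var center = SqlGeography.Point(41.65, -83.54, 4326); // made-up lat/long, SRID 4326

        using (var conn = new SqlConnection("Server=.;Integrated Security=true;Initial Catalog=scratch"))
        using (var cmd = new SqlCommand(
            "SELECT ZipCode FROM ZipCodeAreas WHERE Boundary.STDistance(@center) <= 50000", conn))
        {
            var p = cmd.Parameters.Add("@center", System.Data.SqlDbType.Udt);
            p.UdtTypeName = "geography";
            p.Value = center;

            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0)); // zip codes only, no shapes
            }
        }
    }
}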

But, assuming that the developer knows what they are doing and has a legitimate reason to include a spatial type in the results, then how can they use the Entity Framework to get the spatial data into their application?  That was our quest.

Entity Framework Primitive Types

Admittedly, I know very little about EF.  So, my approach to this problem spent a lot of time using .NET Reflector to try to understand what the EF designer was doing behind the scenes (this also proved to be a good way to understand EF better!).  The first thing that I wanted to figure out was how EF determines which primitive type to use for each SQL Server type.

I downloaded and imported the States data from the US Census Data for SQL Server 2008 project on Codeplex.  Then, I used the Entity Data Model Designer in VS2010 to generate a model based on my database which resulted in an entity without the geometry property.  Looking at the XML for the .edmx file, I saw the following:

<!-- Errors Found During Generation:
     warning 6005: The data type 'geometry' is not supported; the column 'geom' in table 'Spatial.dbo.State' was excluded. -->
<EntityType Name="State">
  <Key>
    <PropertyRef Name="StateID" />
  </Key>
  <Property Name="StateID" Type="int" Nullable="false" />
  <Property Name="StateName" Type="nvarchar" Nullable="false" MaxLength="50" />
</EntityType>

 

I don’t believe that EF is hating on “geometry” specifically (the 6005 warning).  Rather, I think that if the SQL Server type cannot be mapped to a .NET type from the BCL, then it simply does not know how to handle it.  Certainly, they don’t want to try to map to a type that is not included in the .NET Framework itself (as would be the case for the Spatial data types).

But, what is EF using to determine the mappings?

I looked long and hard, but couldn’t quite figure out the mechanism that gets invoked when the model is generated.  But, I think the key might lie in the Microsoft.VisualStudio.Data.Providers.SqlServer.SqlMappedObjectConverter.GetFrameworkTypeFromNativeType() method:

// Disassembly by Reflector
protected override Type GetFrameworkTypeFromNativeType(string nativeType)
{
    switch (this.GetProviderTypeFromNativeType(nativeType))
    {
        case 0:
            return typeof(long);

        case 1:
        case 7:
        case 0x13:
        case 0x15:
            return typeof(byte[]);

        case 2:
            return typeof(bool);

        case 3:
        case 10:
        case 11:
        case 12:
        case 0x12:
        case 0x16:
            return typeof(string);

        case 4:
        case 15:
        case 0x1f:
        case 0x21:
            return typeof(DateTime);

        case 5:
        case 9:
        case 0x11:
            return typeof(decimal);

        case 6:
            return typeof(double);

        case 8:
            return typeof(int);

        case 13:
            return typeof(float);

        case 14:
            return typeof(Guid);

        case 0x10:
            return typeof(short);

        case 20:
            return typeof(byte);

        case 0x20:
            return typeof(TimeSpan);

        case 0x22:
            return typeof(DateTimeOffset);
    }
    return typeof(object);
}

 

For SQL Server, the Native Types come from the System.Data.SqlDbType enumeration:

// Disassembly by Reflector
public enum SqlDbType
{
    BigInt = 0,
    Binary = 1,
    Bit = 2,
    Char = 3,
    Date = 0x1f,
    DateTime = 4,
    DateTime2 = 0x21,
    DateTimeOffset = 0x22,
    Decimal = 5,
    Float = 6,
    Image = 7,
    Int = 8,
    Money = 9,
    NChar = 10,
    NText = 11,
    NVarChar = 12,
    Real = 13,
    SmallDateTime = 15,
    SmallInt = 0x10,
    SmallMoney = 0x11,
    Structured = 30,
    Text = 0x12,
    Time = 0x20,
    Timestamp = 0x13,
    TinyInt = 20,
    Udt = 0x1d,
    UniqueIdentifier = 14,
    VarBinary = 0x15,
    VarChar = 0x16,
    Variant = 0x17,
    Xml = 0x19
}

 

My conclusion here was that if the SQL Server type could only be mapped to System.Object in the BCL (using the GetFrameworkTypeFromNativeType() method), then EF will not support using that field as a property of the entity.  This coincides with the fact that to ADO.NET, the Geometry (and Geography) type is a User Defined Type (0x1d).

UPDATE: After all of this, I discovered that in System.Data.Entity.dll, there is a method that is probably a better candidate for what is actually used: System.Data.SqlClient.SqlProviderManifest.GetEdmType().  This method contains a switch similar to the code listed above, only it returns EDM-specific types instead of BCL types.  Feel free to examine it using Reflector if you're curious about its contents.

The Workaround

Having figured out that piece of the puzzle, I was left with trying to figure out a workaround.  If ADO.NET was unable to map a Geometry to a type in the BCL, then could we cast the Geometry as something that would be mappable?

SQL Server serializes spatial objects to binary when it saves the data in a table (documented here: http://msdn.microsoft.com/en-us/library/ee320529.aspx).


This binary data can be used to deserialize (“rehydrate”) the object in .NET code, which is exactly what SQL Server does when it needs to use the spatial objects.  So, we just need to find a way for EF to pull these down as a byte array.

Looking back at the GetFrameworkTypeFromNativeType function from above, it appears that EF will likely recognize Binary, Image, Timestamp, and Varbinary all as SQL Server types that need to map to byte arrays.  Perfect!

So, by creating a view in SQL Server that casts the Geometry column as a Varbinary(MAX), EF would recognize it as a type that could be mapped as an entity’s property.

CREATE VIEW vStates
AS SELECT StateID
, StateName
, CAST(geom AS VARBINARY(MAX)) AS geom
FROM dbo.State


 

Note: Julie had come up with this same solution at the same time; our emails reporting it to one another crossed paths.

Regenerating the EF model (using this view instead of the table) proved my assumption: the “geom” column now appeared as a Binary property of the vStates entity.

However, we’re not quite done yet.  The point of this exercise was to get an instance of the spatial type to use in our .NET application.  To do that, the Read(BinaryReader) instance method on SqlGeometry (or SqlGeography) must be invoked (using a MemoryStream as the intermediate between the byte[] and the BinaryReader).

The entire logic to retrieve the contents of the table and instantiate one of the Spatial types is as follows:

var entities = new SpatialEntities();
var vStates = entities.vStates;

// Pull one of the entities from the collection
var geo2 = vStates.ToArray()[16];
var sqlGeom = new Microsoft.SqlServer.Types.SqlGeometry();

// Deserialize the bytes to rehydrate this Geometry instance
using (var stream = new System.IO.MemoryStream(geo2.geom))
{
    using (var rdr = new System.IO.BinaryReader(stream))
    {
        sqlGeom.Read(rdr);
    }
}

// Now let's prove that we have it. Dump WKT to Debug.
System.Diagnostics.Debug.Write(sqlGeom.ToString());

 

Output:

GEOMETRYCOLLECTION (LINESTRING (-99.530670166015625 39.132522583007812, -99.530670166015625 39.13250732421875), LINESTRING (-99.791290283203125 39.131988525390625, -99.791290283203125 39.131973266601562), …

So it worked!

Finally, an extension method would make this code a bit more general purpose:

public static class Extension
{
    public static Microsoft.SqlServer.Types.SqlGeometry AsSqlGeometry(this byte[] binary)
    {
        var ret = new Microsoft.SqlServer.Types.SqlGeometry();

        using (var stream = new System.IO.MemoryStream(binary))
        {
            using (var rdr = new System.IO.BinaryReader(stream))
            {
                ret.Read(rdr);
            }
        }

        return ret;
    }
}

 

The test code above then becomes a bit more readable after the refactoring:

var entities = new SpatialEntities();
var vStates = entities.vStates;

// pull one of the entities from the collection
var geo2 = vStates.ToArray()[16];
var sqlGeom = geo2.geom.AsSqlGeometry();

// Now let's prove that we have it. Dump WKT to Debug.
System.Diagnostics.Debug.Write(sqlGeom.ToString());

 


Knowledge++ [4]

I recently developed a spatially-aware .NET application that did not use SQL Server 2008 as the backend (this enterprise was still on SS2005, but we needed the spatial support in the application today).  While the application worked properly on my laptop, it was a huge failboat when deployed to the server environment.

I had previously posted that you can get the Microsoft.SqlServer.Types library from MS Downloads, but it turns out that this alone is not sufficient to allow your application to run.  You also need to ensure that the SQL Server 2008 Native Client is installed (regardless of whether you're accessing a SS2008 instance or not).  Update! You actually don't... read below.

Both the Types library and the Native Client can be downloaded from the following:

http://www.microsoft.com/downloads/details.aspx?FamilyID=228de03f-3b5a-428a-923f-58a033d316e1&DisplayLang=en

My discovery source: https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=355402&wa=wsignin1.0

UPDATE: Per Isaac Kunen (in this blog post's comments as well as offline discussion), the missing component from the Types library is simply an updated version of the C Runtime.  The fix of using the Native Client is a hack in this case because its MSI actually installs the updated CRT (which the MSI for the Types library should have done also).  It's a goof that MS Downloads hasn't been updated with a fixed version of the Types package since the above Connect feedback was answered.

The Microsoft Visual C++ 2008 redistributable by itself can be downloaded from:

http://www.microsoft.com/downloads/details.aspx?FamilyID=A5C84275-3B97-4AB7-A40D-3802B2AF5FC2&displaylang=en

 

Using SQL Server Spatial Objects as ADO.NET Parameter Values

I've previously mentioned that the SQL Server 2008 Spatial data types are freely available for use in your .NET applications, regardless of whether you have SQL Server 2008 or not.  This allows you to incorporate some powerful spatial capabilities right into your application. 

(Look for "Microsoft SQL Server System CLR Types" on this page: http://www.microsoft.com/downloads/details.aspx?FamilyID=228DE03F-3B5A-428A-923F-58A033D316E1&displaylang=en )

However, in most usage scenarios, there will come a time when you have an instance of a SQL Server spatial object in your .NET application, and need to commit it to your SQL Server 2008 database.  How would you do this, without losing fidelity or resorting to serialization of the object to WKT first?

The solution is to create a Parameter object of type System.Data.SqlDbType.Udt.  Then set the UdtTypeName property to the SQL Server-recognized type name (i.e., for SqlGeometry, you would simply use Geometry).

The following code demonstrates executing an UPDATE statement that sets the value of a Spatial field to a newly constructed object.

using (SqlConnection conn = new SqlConnection("Server=.;Integrated Security=true;Initial Catalog=scratch"))
{
    using (SqlCommand cmd = new SqlCommand("UPDATE fe_2007_us_zcta500 SET Boundary=@boundary WHERE id=@id", conn))
    {
        SqlParameter id = cmd.Parameters.Add("@id", System.Data.SqlDbType.Int);
        SqlParameter boundary = cmd.Parameters.Add("@boundary", System.Data.SqlDbType.Udt);
        boundary.UdtTypeName = "geometry";

        SqlGeometry geom = SqlGeometry.Parse("POLYGON((0 0, 0 1, 1 1, 1 0, 0 0))");
        boundary.Value = geom;
        id.Value = 123;

        conn.Open();
        cmd.ExecuteNonQuery();
        conn.Close();
    }
}

SqlGeography: Ring Orientation of Polygon Interior Rings (Holes)

I have mentioned before how the Ring Orientation for the exterior ring of a Polygon is significant when instantiating a SqlGeography object.  In this case, a Counter-Clockwise orientation is required so that as an observer walks along the path, the interior of the Polygon is always to their left.

But, what I have never really seen documented (or paid attention to, at least) is the fact that the interior rings, or holes, of a Polygon also have specific Ring Orientation requirements.

In keeping with the "Left-handed" rule, interior rings must be defined in a Clockwise manner - the opposite orientation of the shape's exterior ring.  This is because holes within a Polygon are considered to be part of the exterior of the shape, so the observer walking in a Clockwise direction is still keeping the Polygon's interior to their left.
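
As a minimal sketch with made-up coordinates, both orientations look like this in WKT (remembering that geography WKT lists longitude before latitude):

// Made-up coordinates: the exterior ring runs counter-clockwise, while the
// interior ring (the hole) runs clockwise, keeping the interior on the left.
var polygonWithHole = Microsoft.SqlServer.Types.SqlGeography.Parse(
    "POLYGON((-84 41, -83 41, -83 42, -84 42, -84 41), " +             // exterior, counter-clockwise
    "(-83.7 41.3, -83.7 41.7, -83.3 41.7, -83.3 41.3, -83.7 41.3))");  // interior hole, clockwise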

(I should note here that the Ring Orientation for SqlGeography is the exact opposite of ESRI's ShapeFile format, which is why Ring Orientation has been on my mind for the past few days).