# A View Inside My Head

## Spatial: Determining Ring Orientation

A Ring is a list of points such that the starting point and ending point are the same (forming a closed shape).  The order in which you define the points that make up a Ring - known as Ring Orientation - is significant, because various data formats (including SQL Server's Geography type) assign special meaning to rings that are defined in a clockwise manner as opposed to a counter-clockwise manner.

Given a list of points with no additional context, it can be difficult to determine the Ring Orientation being used.

For example, suppose that you have a generic list of points that represent the boundary of a postal code, and that you wish to use these points to construct a Polygon instance using the SqlGeography type.  SqlGeography happens to use "left-handed" ordering: as an observer walks along the set of points in the order defined, the "inside" of the polygon is always to their left.  This also implies that the exterior ring of a Polygon is defined in a counter-clockwise manner.

If you try to define a polygon with an area greater than a single hemisphere (this is a nice way to say "if you screw up and use the wrong orientation"), then the SqlGeography type will throw an exception.  So, aside from using Try-Catch, what can you do?

While researching solutions to this problem, I stumbled upon a paper entitled "A Winding Number and Point-in-Polygon Algorithm" from Colorado State University.  It turns out that a simple algorithm with O(n) complexity can be used to determine whether a point is within a Polygon, and as a side effect it also provides the Ring Orientation.  The key to this algorithm is determining the trend of the ring at each crossing of an axis.

Since I was only interested in Ring Orientation (and not point enclosure detection), I didn't need to use this particular algorithm.  Instead, I took inspiration from the winding concept, and created a simpler derivative algorithm:

1. Iterate the point collection and determine the extreme "left" and "right" points
2. Normalize the line segments connected to these points so that they each have the same "X" dimension length
3. Compare the "Y" values of the normalized segments to establish the trend through that extreme point (i.e., is the "previous" segment above or below the "next" segment)
4. In the spirit of the Winding algorithm, use opposite orientations for the left and right points so that the results coincide with one another
5. A negative result indicates Clockwise orientation, a positive result indicates Counter-Clockwise orientation, and a result of zero is undefined

I've actually written (and posted) several versions of this algorithm, each time discovering some edge-case exception that would cause me to take down the post and rewrite the algorithm.  I believe the code below works for all simple polygons on a Cartesian coordinate system (read: I have more testing to do to see if this will work with an ellipsoidal model, like SqlGeography).

Note: The following code is generic in nature, and as such, I've defined my own Point structure instead of using a SqlGeometry or SqlGeography, etc.

```csharp
struct Point
{
    public double X { get; set; }
    public double Y { get; set; }

    public Point(double x, double y) : this()
    {
        X = x;
        Y = y;
    }
}

enum RingOrientation : int
{
    Unknown = 0,
    Clockwise = -1,
    CounterClockwise = 1
};

RingOrientation Orientation(Point[] points)
{
    // Inspired by http://www.engr.colostate.edu/~dga/dga/papers/point_in_polygon.pdf

    // This algorithm only determines the Ring Orientation, so find the
    // extreme left and right points, and then check the trend through each

    if (points.Length < 4)
    {
        throw new ArgumentException("A polygon requires at least 4 points.");
    }

    if (points[0].X != points[points.Length - 1].X || points[0].Y != points[points.Length - 1].Y)
    {
        throw new ArgumentException("The array of points is not a polygon.  The first and last point must be identical.");
    }

    int rightmostIndex = 0;
    int leftmostIndex = 0;

    for (int i = 1; i < points.Length; i++)
    {
        if (points[i].X < points[leftmostIndex].X)
        {
            leftmostIndex = i;
        }
        if (points[i].X > points[rightmostIndex].X)
        {
            rightmostIndex = i;
        }
    }

    Point p0; // Point before the extreme
    Point p1; // The extreme point
    Point p2; // Point after the extreme

    double m; // Holds line slope

    double lenP2x;  // Length of the P1-P2 line segment's delta X
    double newP0y;  // The Y value of the P1-P0 line segment adjusted for X=lenP2x

    RingOrientation left_orientation;
    RingOrientation right_orientation;

    // Determine the orientation at the Left Point
    if (leftmostIndex == 0)
        p0 = points[points.Length - 2];
    else
        p0 = points[leftmostIndex - 1];

    p1 = points[leftmostIndex];

    if (leftmostIndex == points.Length - 1)
        p2 = points[1];
    else
        p2 = points[leftmostIndex + 1];

    m = (p1.Y - p0.Y) / (p1.X - p0.X);

    if (double.IsInfinity(m))
    {
        // This is a vertical line segment, so just calculate the dY to
        // determine orientation

        left_orientation = (RingOrientation)Math.Sign(p0.Y - p1.Y);
    }
    else
    {
        lenP2x = p2.X - p1.X;
        newP0y = p1.Y + (m * lenP2x);

        left_orientation = (RingOrientation)Math.Sign(newP0y - p2.Y);
    }

    // Determine the orientation at the Right Point
    if (rightmostIndex == 0)
        p0 = points[points.Length - 2];
    else
        p0 = points[rightmostIndex - 1];

    p1 = points[rightmostIndex];

    if (rightmostIndex == points.Length - 1)
        p2 = points[1];
    else
        p2 = points[rightmostIndex + 1];

    m = (p1.Y - p0.Y) / (p1.X - p0.X);

    if (double.IsInfinity(m))
    {
        // This is a vertical line segment, so just calculate the dY to
        // determine orientation

        right_orientation = (RingOrientation)Math.Sign(p1.Y - p0.Y);
    }
    else
    {
        lenP2x = p2.X - p1.X;
        newP0y = p1.Y + (m * lenP2x);

        right_orientation = (RingOrientation)Math.Sign(p2.Y - newP0y);
    }

    if (left_orientation == RingOrientation.Unknown)
    {
        return right_orientation;
    }
    else
    {
        return left_orientation;
    }
}

void Test()
{
    // Simple triangle - left extreme point is vertically "in between" line segments
    Point[] points = new Point[]
    {
        new Point(5, -1),
        new Point(0, 0),
        new Point(5, 1),
        new Point(5, -1)
    };

    System.Diagnostics.Debug.Assert(Orientation(points) == RingOrientation.Clockwise);
    Array.Reverse(points);
    System.Diagnostics.Debug.Assert(Orientation(points) == RingOrientation.CounterClockwise);

    // Case where both line segments are above the left extreme point
    points = new Point[]
    {
        new Point(2, 1),
        new Point(0, 0),
        new Point(1, 1),
        new Point(2, 1)
    };

    System.Diagnostics.Debug.Assert(Orientation(points) == RingOrientation.Clockwise);
    Array.Reverse(points);
    System.Diagnostics.Debug.Assert(Orientation(points) == RingOrientation.CounterClockwise);

    // Case where both line segments are below the left extreme point
    points = new Point[]
    {
        new Point(2, -1),
        new Point(0, 0),
        new Point(1, -1),
        new Point(2, -1)
    };

    System.Diagnostics.Debug.Assert(Orientation(points) == RingOrientation.CounterClockwise);
    Array.Reverse(points);
    System.Diagnostics.Debug.Assert(Orientation(points) == RingOrientation.Clockwise);

    // Case where line segment is vertical (slope cannot be determined)
    points = new Point[]
    {
        new Point(0, 0),
        new Point(0, 1),
        new Point(1, 1),
        new Point(1, 0),
        new Point(0, 0)
    };

    System.Diagnostics.Debug.Assert(Orientation(points) == RingOrientation.Clockwise);
    Array.Reverse(points);
    System.Diagnostics.Debug.Assert(Orientation(points) == RingOrientation.CounterClockwise);

    // Case where angle thru left extreme point is a right angle
    points = new Point[]
    {
        new Point(0, 0),
        new Point(1, 1),
        new Point(1, -1),
        new Point(0, 0)
    };

    System.Diagnostics.Debug.Assert(Orientation(points) == RingOrientation.Clockwise);
    Array.Reverse(points);
    System.Diagnostics.Debug.Assert(Orientation(points) == RingOrientation.CounterClockwise);

    // Real-world case from a SHP file
    points = new Point[]
    {
        new Point(-156.92467299999998,20.738695999999997),
        new Point(-156.924636,20.738822),
        new Point(-156.924608,20.73894),
        new Point(-156.92458,20.739082),
        new Point(-156.92460599999998,20.739234),
        new Point(-156.924551,20.739326),
        new Point(-156.924507,20.739241999999997),
        new Point(-156.924482,20.739082),
        new Point(-156.924466,20.738854999999997),
        new Point(-156.924387,20.738602999999998),
        new Point(-156.924308,20.738325),
        new Point(-156.924239,20.738063999999998),
        new Point(-156.92424,20.737887999999998),
        new Point(-156.924285,20.737811999999998),
        new Point(-156.924475,20.73762),
        new Point(-156.92458299999998,20.737603999999997),
        new Point(-156.924754,20.737579),
        new Point(-156.924851,20.737731),
        new Point(-156.924956,20.738101),
        new Point(-156.924909,20.738343999999998),
        new Point(-156.924818,20.738487),
        new Point(-156.92467299999998,20.738695999999997)
    };

    System.Diagnostics.Debug.Assert(Orientation(points) == RingOrientation.Clockwise);
    Array.Reverse(points);
    System.Diagnostics.Debug.Assert(Orientation(points) == RingOrientation.CounterClockwise);
}
```
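For Cartesian coordinates there is also a handy closed-form cross-check for testing: the signed "shoelace" area of a ring is negative for clockwise rings and positive for counter-clockwise ones.  Here's a language-neutral sketch (this is the standard shoelace formula, not the algorithm above), using the first test ring from the C# code:

```python
def signed_area(points):
    """Shoelace formula: the signed area of a closed ring of (x, y)
    tuples is positive for counter-clockwise rings and negative for
    clockwise rings on a Cartesian plane."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        total += x1 * y2 - x2 * y1
    return total / 2.0

# The simple triangle from the first test case above
triangle = [(5, -1), (0, 0), (5, 1), (5, -1)]
print(signed_area(triangle))        # -5.0 => clockwise
print(signed_area(triangle[::-1]))  # 5.0  => counter-clockwise
```

Note that on an ellipsoidal model (like SqlGeography) the shoelace result is only an approximation for small rings, which is the same caveat I gave for my own algorithm.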

## Truck

Last week, I travelled to Philadelphia to work out of my company's office in Exton for a week.  Before leaving, I took my youngest daughter for a hike in the park, which was a two-fold treat for her: she got to spend time with Daddy, and as an extra bonus, she got to ride in Daddy's truck.  Being a two-seater, it is not often used unless I'm spending some 1-on-1 time with one of the kids.

After returning from our hike, I parked it on the street in front of the house.  It looked something like this picture that I took in 2003 right before buying it:

[Sorry, photo no longer available... But it was pretty sweet looking!]

Well, fast forward to very early Friday morning.  I was sleeping in my hotel room when my cellphone began to ring.  I think the alarm clock took quite a few swipes of my fist before I realized that it was not the source of the loud noise that was bothering me.  I stumbled out of bed and picked up the phone, only to hear my wife tell a tale of firetrucks and flames and the entire neighborhood observing some bonfire that was taking place on the street in front of my house.

I had actually had a bad dream just a bit earlier, and was relieved to find out that it was only a dream.  I think part of me expected the same to happen in this case, but no such luck.  My truck - the one that I had just paid off a few months ago - was ablaze.

Now, the truck looks a little more like this:

Notice, if you will, that there is no hood on the truck.  It was a steel hood, and it is nowhere to be found.  So, either it was removed by the firefighters and they took it with them, or it simply melted away.

The worst part now is that we have to wait until Monday (2 more days as I write this) for the insurance company to tow the shell away to their evaluation center...  once there, the lady told me, they would then make the determination as to whether it could be repaired or not.  I just giggled to myself.

But, until then, there's a tarp-wrapped burned-out truck serving as a landmark for those trying to locate my house.  It's the one with the nose sitting on the asphalt.

UPDATE: The truck was hauled away a short while ago.  While speaking with the neighbors who came out to watch, I learned that another neighbor filmed it AND UPLOADED IT TO YOUTUBE!  Thanks, Andy!

## SQL Server 2008: Spatial Data, Part 8

In this, the eighth part in a series on the new Spatial Data types in SQL Server 2008, I'll step away from the database and do a little spatial coding using .NET.

## Redistributable .NET Library

Up to this point in the series, I have demonstrated a lot of interesting (?) things that you can do with the new Spatial data types (Geometry and Geography) in SQL Server 2008.  You might be thinking, "That's swell and all, but I wish I could do some of that stuff without needing to be tethered to a database."  Well, you know what?  You can!

I mentioned in a previous post that the Spatial data types were implemented as SQLCLR User-Defined Types.  I've since been corrected by Isaac Kunen, who stated that they are more accurately described as System-Defined Types, with the difference being that these are automatically installed and available for use as part of SQL Server 2008, regardless of whether the ENABLE CLR bit has been activated.  Semantics aside, these types are merely classes within a .NET assembly, and Microsoft is making this freely available as part of a Feature Pack for SQL Server (which will be redistributable as part of your stand-alone application, according to Isaac):

(Look for "Microsoft SQL Server System CLR Types," which includes the two Spatial types plus the HierarchyID type.  This link is for RC0, and may not be applicable to future versions as the product is finalized and released.)

## Builder API

A new feature that was included with the first Release Candidate (RC0) is the Builder API.  This is a collection of interfaces and classes that helps you to construct spatial types by specifying one point at a time until all points have been added.

The Builder API is not only useful for creating new instances of spatial data, but also for consuming existing instances one point at a time (maybe to convert an instance into another format).  Documentation is light at the moment, so I'm still trying to grok exactly how to best utilize it.

For my first experiment with the API, I obtained some Zip Code Boundary data in ASCII format from the U.S. Census Bureau:

My goal was to parse the data, and then create a new SqlGeography instance for each zip code.  (Note: SqlGeography is the .NET class name that T-SQL refers to simply as Geography).  The SqlGeographyBuilder class proved to be perfect for accomplishing this task.

At its core, the SqlGeographyBuilder implements the IGeographySink interface.  If you wanted to consume an existing SqlGeography instance, you could implement IGeographySink in your own class, and then invoke the SqlGeography's Populate() instance method, passing in your object as the parameter.  The Populate() method takes care of calling the appropriate IGeographySink methods within your class.
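To make that flow concrete, here's a language-neutral sketch of the sink pattern in Python.  The names (`ListSink`, `populate`, `begin_geography`, and so on) are hypothetical stand-ins that only mirror the shape of the .NET interface; they are not the actual IGeographySink members:

```python
class ListSink:
    """A minimal sink in the spirit of IGeographySink: the producer
    "plays back" its points into whatever sink you hand it, and this
    sink just collects them into a list."""

    def __init__(self):
        self.kind = None
        self.points = []

    def begin_geography(self, kind):
        self.kind = kind

    def begin_figure(self, lat, lng):
        self.points.append((lat, lng))

    def add_line(self, lat, lng):
        self.points.append((lat, lng))

    def end_figure(self):
        pass

    def end_geography(self):
        pass


def populate(ring, sink):
    """Stand-in for a Populate()-style method: drives the sink through
    the begin/add/end call sequence for a single polygon ring."""
    sink.begin_geography("Polygon")
    first, *rest = ring
    sink.begin_figure(*first)
    for pt in rest:
        sink.add_line(*pt)
    sink.end_figure()
    sink.end_geography()


sink = ListSink()
populate([(41.38, -82.46), (41.39, -82.45), (41.40, -82.46), (41.38, -82.46)], sink)
print(len(sink.points))  # 4
```

The point of the pattern is that the producer owns the traversal order while the sink owns what happens to each point, so the same sink can consume any geography instance.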

In this case, I'm not starting with an existing SqlGeography instance, so my code will need to call the methods of the SqlGeographyBuilder itself, in the correct order: SetSrid(), then BeginGeography(), then BeginFigure() for the first point, AddLine() for each subsequent point, and finally EndFigure() and EndGeography().

After EndGeography() has been invoked, the new instance is available via the ConstructedGeography property of the SqlGeographyBuilder class.

Simple enough, right?  Yeah, I'm still a little lost myself...  But, here's some code to help demonstrate what's going on!

First, let's look at the ASCII data.  A single zip code's boundary might be defined as:

```
      1469      -0.824662148292608E+02       0.413848583827499E+02
-0.824602851767940E+02       0.413864290595145E+02
-0.824610630000000E+02       0.413860590000000E+02
-0.824685900000000E+02       0.413841470000000E+02
-0.824686034536111E+02       0.413843846804627E+02
-0.824605990000000E+02       0.413863160000000E+02
-0.824602851767940E+02       0.413864290595145E+02
END
```

The very first line happens to contain an identifier (maps to a second file that lists the actual USPS zip code).  The coordinate listed in the first line is not actually part of the boundary, but rather appears to be the population center of that area.  The actual boundary begins with the second line, and continues until you encounter the "END".  Also, in case you couldn't tell, coordinates in this data are in Longitude-Latitude order.
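The format rules just described can be sketched in a few lines of language-neutral code.  This is only an illustration of the parsing rules (the function name and return shape are my own, not part of the C# code below), using lines taken from the sample record above:

```python
def parse_boundary(block):
    """Parse one record of the Census ASCII format described above:
    an identifier line (which also carries the area's center point),
    then boundary points one per line in longitude-latitude order,
    terminated by an "END" line."""
    lines = [ln.strip() for ln in block.strip().splitlines()]
    record_id = int(lines[0].split()[0])  # maps to the USPS zip code file
    boundary = []
    for ln in lines[1:]:
        if ln == "END":
            break
        lng, lat = (float(v) for v in ln.split())
        boundary.append((lng, lat))
    return record_id, boundary


block = """      1469      -0.824662148292608E+02       0.413848583827499E+02
-0.824602851767940E+02       0.413864290595145E+02
-0.824602851767940E+02       0.413864290595145E+02
END"""

rid, pts = parse_boundary(block)
print(rid, len(pts))  # 1469 2
```

Note that the first and last boundary points are identical, so each record is already a closed ring.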

Since a Zip Code is a polygon, and since we are working with SqlGeography, we must be aware of ring ordering.  That is, the exterior ring of a polygon must be defined in a counter-clockwise order so that as you "walk the ring", the interior is always to your left.  If you reverse the order, then SqlGeography assumes that you're trying to define a polygon containing the entire world except for the small area inside of the polygon.

Well, in this case, the points of the Zip Code boundary are defined in clockwise order... so, we must be aware of this and call into the SqlGeographyBuilder in the opposite order (so the last point defined in the ASCII data is the first point used while building our new instance).

To accomplish this, I simply parse the Lat/Long coordinates as "double" types, and then push them onto a stack.  Then, I pop the stack and call into the Builder API with each point.  At the end, I obtain the new SqlGeography instance from the ConstructedGeography property.

(Note: This is demonstrative code - some things should probably be cleaned up/refactored/error handled... You have been warned)

```csharp
public SqlGeography ParseAsGeography(string zipcode_points)
{
    Stack<double[]> Points = new Stack<double[]>();

    using (System.IO.StringReader reader = new System.IO.StringReader(zipcode_points))
    {
        // Skip the first line: it holds the identifier and the area's
        // center point, not a boundary point
        string line = reader.ReadLine();
        line = reader.ReadLine();

        while (line != null && line.Trim() != "END")
        {
            if (line.Trim() != String.Empty)
            {
                Points.Push(ParseLatLngValues(line));
            }

            line = reader.ReadLine();
        }
    }

    return CreateGeography(Points);
}
```
```csharp
private double[] ParseLatLngValues(string line)
{
    // Example line: "      -0.838170700000000E+02       0.409367390000000E+02"
    double[] ret = new double[2];

    string lng = System.Text.RegularExpressions.Regex.Matches(line, "\\S+")[0].Value;
    string lat = System.Text.RegularExpressions.Regex.Matches(line, "\\S+")[1].Value;

    double.TryParse(lat, out ret[0]);
    double.TryParse(lng, out ret[1]);

    return ret;
}
```

```csharp
private SqlGeography CreateGeography(Stack<double[]> points)
{
    SqlGeographyBuilder builder = new SqlGeographyBuilder();
    builder.SetSrid(4326);
    builder.BeginGeography(OpenGisGeographyType.Polygon);

    double[] point = points.Pop();

    builder.BeginFigure(point[0], point[1]);

    while (points.Count > 0)
    {
        point = points.Pop();
        builder.AddLine(point[0], point[1]);
    }

    builder.EndFigure();
    builder.EndGeography();

    return builder.ConstructedGeography;
}
```

## Coding in SQL Server: An Evolution

Tuesday at the NWNUG meeting, Steven Smith spoke on various ways to squeeze performance out of your ASP.NET applications.  This was a fantastic talk, and gave me plenty to think about (since ASP.NET is not my forte, I only consider myself to have an intermediate skillset on this topic).

One suggestion that he made involved caching database writes.  That is, instead of immediately writing logging-type information to the database for every request - a relatively expensive operation considering the small payload size - you could accumulate the records in a short-term cache, and then perform the write operation periodically.  Fewer database calls = faster performance.

In his example, he spoke of his advertisement server that might serve many impressions per second, but he doesn't want each impression to incur an expensive database write.  So, he keeps track of the activity locally, and then persists to the database every 5 seconds using a single database call containing multiple data points.
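The accumulate-then-flush pattern is easy to sketch outside of ASP.NET.  In this minimal Python version, `BufferedViewLogger` and `write_batch` are hypothetical stand-ins for the local cache and the single batched database call (not Steve's actual code):

```python
import time
from collections import Counter


class BufferedViewLogger:
    """Accumulates view counts in memory and flushes them periodically
    as one batched write, instead of one database call per impression."""

    def __init__(self, write_batch, interval_seconds=5.0):
        self._write_batch = write_batch      # stand-in for the real DB call
        self._interval = interval_seconds
        self._counts = Counter()
        self._last_flush = time.monotonic()

    def record_view(self, customer_id):
        self._counts[customer_id] += 1
        if time.monotonic() - self._last_flush >= self._interval:
            self.flush()

    def flush(self):
        if self._counts:
            # One call carries many data points, e.g. {"ALFKI": 5, "ANATR": 7}
            self._write_batch(dict(self._counts))
            self._counts.clear()
        self._last_flush = time.monotonic()


batches = []
logger = BufferedViewLogger(batches.append, interval_seconds=5.0)
for _ in range(5):
    logger.record_view("ALFKI")
for _ in range(7):
    logger.record_view("ANATR")
logger.flush()
print(batches)  # [{'ALFKI': 5, 'ANATR': 7}]
```

The trade-off, of course, is that a crash loses whatever is sitting in the cache, which is usually acceptable for logging-type data.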

The code that Steve demonstrated utilized XML to contain the data within a single block of text (read: can be passed in as a single parameter to a stored procedure):

```xml
<ROOT>
  <Activity customerId="ALFKI" viewCount="5" />
  <Activity customerId="ANATR" viewCount="7" />
</ROOT>
```

Now, consuming XML from T-SQL is an area that I know very well, so I cringed a little bit when Steve showed the actual stored procedure code itself:

```sql
CREATE PROCEDURE dbo.BulkLogCustomerViews
    @doc text -- XML Doc...
AS

DECLARE @idoc int

-- Create an internal representation (virtual table) of the XML document...
EXEC sp_xml_preparedocument @idoc OUTPUT, @doc

UPDATE TopCustomerLog
SET    TopCustomerLog.ViewCount = TopCustomerLog.ViewCount + ox2.viewCount
FROM   OPENXML (@idoc, '/ROOT/Activity', 1)
       WITH ( customerId NCHAR(5)
            , viewCount  int
            ) ox2
WHERE  TopCustomerLog.customerId = ox2.customerId

-- Perform INSERTs
INSERT INTO TopCustomerLog
       ( CustomerID
       , ViewCount
       )
SELECT customerId
     , viewCount
FROM   OPENXML (@idoc, '/ROOT/Activity', 1)
       WITH ( customerId NCHAR(5)
            , viewCount  int
            ) ox
WHERE  NOT EXISTS ( SELECT customerId FROM TopCustomerLog
                    WHERE TopCustomerLog.customerId = ox.customerId )

-- Remove the 'virtual table' now...
EXEC sp_xml_removedocument @idoc
```

Now, to Steve's credit, this code works just fine, and can probably be used as-is on all versions of SQL Server from 7.0 through 2008.  But, since we really don't write ASP applications consisting entirely of Response.Write any longer, I'd like to see Steve update his demo to use more modern techniques on the database as well.  ;-)

The first thing that he could do is update the procedure to utilize the XML data type that was first introduced in SQL Server 2005.  This would simplify the code a little bit, and would get rid of the dependency on the COM-based MSXML.dll, which sp_xml_preparedocument and OPENXML() use.

```sql
CREATE PROCEDURE dbo.BulkLogCustomerViews
    @doc xml
AS

UPDATE TopCustomerLog
SET    TopCustomerLog.ViewCount = TopCustomerLog.ViewCount + ox2.viewCount
FROM   ( SELECT T.activity.value('@customerId', 'nchar(5)') as CustomerID
              , T.activity.value('@viewCount', 'int') as viewCount
         FROM   @doc.nodes('/ROOT/Activity') as T(activity)
       ) ox2
WHERE  TopCustomerLog.CustomerID = ox2.CustomerID

-- Perform INSERTs
INSERT INTO TopCustomerLog
       ( CustomerID
       , ViewCount
       )
SELECT CustomerID
     , viewCount
FROM   ( SELECT T.activity.value('@customerId', 'nchar(5)') as CustomerID
              , T.activity.value('@viewCount', 'int') as viewCount
         FROM   @doc.nodes('/ROOT/Activity') as T(activity)
       ) ox
WHERE  NOT EXISTS ( SELECT CustomerID FROM TopCustomerLog
                    WHERE TopCustomerLog.CustomerID = ox.CustomerID )
```

Note that the XML data type in SQL Server doesn't need to contain a well-formed document.  In this case, Steve could just pass in a series of "Activity" elements (no "ROOT" element would be required by SQL Server, so he would also be able to simplify the .NET code that actually creates the XML string):

`<Activity customerId="ALFKI" viewCount="5" /><Activity customerId="ANATR" viewCount="7" />`

Consequently, the XPath (XQuery, actually) within the nodes() method of the stored procedure code would need to change as well:

`@doc.nodes('Activity') as T(activity)`

But, we can kick this up a notch and use some SQL Server 2008 features as well.  First, there are new "Upsert" capabilities (the MERGE statement) that simplify what Steve does with the UPDATE followed by INSERT:

```sql
CREATE PROCEDURE dbo.BulkLogCustomerViews
    @doc xml
AS

MERGE TopCustomerLog AS target
USING ( SELECT T.activity.value('@customerId', 'nchar(5)') as CustomerID
             , T.activity.value('@viewCount', 'int') as viewCount
        FROM   @doc.nodes('Activity') as T(activity) ) AS source
ON    (target.CustomerID = source.CustomerID)
WHEN  MATCHED
      THEN UPDATE SET target.ViewCount = target.ViewCount + source.viewCount
WHEN  NOT MATCHED
      THEN INSERT (CustomerID, ViewCount)
           VALUES (source.CustomerID, source.viewCount);
```

One more thing that could be done to further simplify this T-SQL is to use a Table-valued Parameter instead of the XML.  This would allow Steve to pass a fully populated table of data into the stored procedure and consume it directly by the MERGE statement.

The first step is to create a T-SQL type that defines the table structure of the parameter (this is a one-time operation, unless the table structure changes):

```sql
CREATE TYPE CustomerViewType AS TABLE
(
    CustomerID nchar(5) NOT NULL,
    ViewCount  int      NOT NULL
);
```

Now, a parameter can be defined of this type, and used just like any other table-valued variable (note that table-valued parameters must be declared READONLY):

```sql
ALTER PROCEDURE dbo.BulkLogCustomerViews
    @views CustomerViewType READONLY
AS

MERGE TopCustomerLog AS target
USING @views AS source
ON    (target.CustomerID = source.CustomerID)
WHEN  MATCHED
      THEN UPDATE SET target.ViewCount = target.ViewCount + source.ViewCount
WHEN  NOT MATCHED
      THEN INSERT (CustomerID, ViewCount)
           VALUES (source.CustomerID, source.ViewCount);
```

On the ADO.NET side, the table-valued parameter could be represented as a DataTable object (other options also exist), and can be assigned directly as the value of the stored procedure's parameter object:

```csharp
// Create a data table, and provide its structure
DataTable customerViews = new DataTable();
customerViews.Columns.Add("CustomerID", typeof(string));
customerViews.Columns.Add("ViewCount", typeof(int));

// Fill with rows
customerViews.Rows.Add("ALFKI", 5);
customerViews.Rows.Add("ANATR", 7);

using (SqlConnection conn = new SqlConnection("..."))
{
    SqlCommand cmd = conn.CreateCommand();
    cmd.CommandType = System.Data.CommandType.StoredProcedure;
    cmd.CommandText = "dbo.BulkLogCustomerViews";

    // Table-valued parameters are passed as SqlDbType.Structured
    SqlParameter param = cmd.Parameters.AddWithValue("@views", customerViews);
    param.SqlDbType = System.Data.SqlDbType.Structured;
    param.TypeName = "dbo.CustomerViewType";

    conn.Open();
    cmd.ExecuteNonQuery();
}
```

## SQL Server 2008 RC0 Install: Sql2005SsmsExpressFacet

This morning's goal was to quickly install SQL Server 2008 RC0, and then move on with some project work.  Let's just say that my project work should resume by this afternoon...

In the interest of disk space, I removed an existing installation of SQL Server 2005 Developer Edition.  And then the installation of 2008 RC0 began by installing the "Microsoft.NET Framework 3.5 SP1 (Beta)"...  which is probably "install-smell" for me needing to pave my machine when the product finally RTM's.  But, I digress...

The installation went pretty smoothly until it came time for the "System Configuration Check" that takes place after you select everything that you would like to install, but before the files actually get installed.  In my case, this check failed because "The SQL Server 2005 Express Tools are installed.  To continue, remove the SQL Server 2005 Express Tools."  (This is the "Sql2005SsmsExpressFacet" rule of the installation)

Thank you, Microsoft, for that succinct failure message that includes instructions for resolution...  Except, I didn't have the SQL Server 2005 Express Tools installed.  They didn't show up in my Programs list, in the Start menu, or on my C: drive at all.  How am I to uninstall something that isn't installed?  Hrmmm....

After about an hour's search around my hard drive, I finally went into the registry, and discovered the following key:

HKLM\Software\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM

Note: Jan Sotola reports that the affected 64-bit version key is:

HKLM\Software\Wow6432Node\Microsoft\...
...\Microsoft SQL Server\90\Tools\ShellSEM

Contained within was some registry information belonging to Red Gate SQL Prompt.  Apparently, despite my removing the SQL Server 2005 Express Tools some time ago, this registry key was not deleted because the Red Gate information was still there.

On a hunch, I renamed the key to "ShellSEM.old", and the SQL Server 2008 installation carried on.

UPDATE: Shortly after posting this, Theo Spears from Red Gate sent the following email:

"I apologise for this issue; the SQL Prompt team here has been working to address it. You and your readers may be interested to hear that we now have a version which works with SQL Server 2008 RC0, and no longer blocks the installation. To get a copy send us an email at support@red-gate.com"

I should clarify that my little rant above was not targeted at Red Gate, but I'm so happy to hear that they are proactively working to resolve this little issue.  I would have just liked for Microsoft to use more than a single registry key as evidence of a conflicting product installation, that's all.