Mad King DataGrid

This post is about scrolling in WPF and the egocentric DataGrid control. I’ll give the project background, but if all you’re interested in is the final solution, scroll on down a bit. I’ll leave a trail of headers so you should be able to find your way easily enough.

The Project

I’ve been working on a little application to help track spells for my wife’s character in our weekly Pathfinder games (my character might find a use for it as well but that’s just gravy). Since I had already spent a couple months dinking around with acquiring (and normalizing into XML) the reference version of the spells, I figured all I really needed was a UI to present and track them. I mean, I have a strong object graph and a library of spells, UI is really all that’s left, right?

For the UI, I decided to lean on WPF—mostly so I could come to understand the many data binding techniques I’d learned at TechEd this year. Plus, I had some ideas of how I wanted to leverage a composite UI for the presentation. This turns out to have been an awesome idea and allowed me to do exactly what I wanted, though it sometimes took a while to figure out how to do the cool stuff.

As part of the project, I also wanted to get some experience with actually using MVVM in a live project and set myself the task of using as little code-behind as possible.

The Last Fly in the Ointment

As I said above, things went well. I had to learn my way around a few problems, but things mostly worked the way I thought they should and while the end result would be cleaner if I’d started out knowing what I know now, I’m pretty proud of the results. I got the UI to alter dynamically depending on whether the character class had Domains. Or bloodlines. Or neither. I had some fun building in filtering capabilities. And you can even choose the sources you accept into your spell list.

The one thing that had me tearing my hair out, in the end, was that the list of spells didn’t scroll right. Since the spells need to be divided by level, I have a ListView with a custom data template. Each item in the ListView is a whole level’s worth of spells. The custom data template has a WrapPanel for the level header information (spells per day, spell DC—things that vary by level) and then a DataGrid for the spell data itself. Here’s a screenshot if you’re curious:

Spell Manager Labeled Screenshot

For mouse wheel scrolling, if the mouse was on the level header (the yellow bit), or if you scrolled all the way to the right where the DataGrid ceases to rule, it’d scroll just fine. Actually, that’s a lie. It’d scroll just fine once you turned off ScrollViewer.CanContentScroll on the ListView (if you don’t, the ListView scrolls in chunks, a level/item at a time).

This got very annoying because as far as I’m concerned, if there’s a scroll bar, the mouse wheel should move it. And since the list tends to be long, I was forever trying to scroll with my wheel.

Why This is Happening—Mad King DataGrid

By poking at it repeatedly with a debugger and wiring up random events, I came to understand that the DataGrid was eating my MouseWheel event. And since that’s the bottom control on the event route (WPF starts the event routing at the bottom of the tree), the MouseWheel wasn’t being seen by itself (the DataGrid), its parent (the StackPanel), or its parent’s parent (the ListView). My guess is that DataGrids expect to be the king of scrolling and figure they own the MouseWheel events because you, the developer, will only screw it up if they let you.

Where to Even Start—ScrollViewer

So I went digging to see if there wasn’t something I could do to wrest MouseWheel events from the DataGrid. It was clear that I’d need to do something to manually handle mouse events and, also manually, dictate scrolling behavior during same. The only control I could find that exposed the right methods is the ScrollViewer control as explained in this Stack Overflow answer. That’s exactly what I need, but to use it, I’d need to insert a ScrollViewer into the control hierarchy. And the only place to do that, such that it’d actually work the way you’d expect, is to insert it into the ListView somehow. This turned out to be trickier than it looks.

Where to Finish

My initial impulse was to replace the ItemsPanel on the ListView. Unfortunately, the ItemsPanel has to be just that—a panel. ScrollViewer isn’t a panel and any panel you insert there is going to have the same problem as the ListView itself. It took me a while to work out how to do it (because the documentation for template-type properties is long on detail and short on information), but I finally figured out that overriding the ControlTemplate was the way to go. A little back and forth and I came out with this simple-after-the-fact solution.

	<ListView Margin="8,155,8,8"
	          ItemsSource="{Binding Source={StaticResource characterViewModelClassesViewSource}, Path=SpellLevels, UpdateSourceTrigger=PropertyChanged}"
	          ItemTemplate="{DynamicResource spellLevelDataTemplate}"
	          ScrollViewer.CanContentScroll="False"
	          IsTextSearchEnabled="False"
	          Name="spellsListView">
	    <ListView.Template>
	        <ControlTemplate TargetType="{x:Type ListView}">
	            <ScrollViewer PreviewMouseWheel="ScrollViewer_PreviewMouseWheel">
	                <StackPanel IsItemsHost="True"/>
	            </ScrollViewer>
	        </ControlTemplate>
	    </ListView.Template>
	</ListView>

With the ScrollViewer in place, the event itself was simple, too.

private void ScrollViewer_PreviewMouseWheel(object sender, MouseWheelEventArgs e)
{
    ScrollViewer scv = (ScrollViewer)sender;
    scv.ScrollToVerticalOffset(scv.VerticalOffset - e.Delta);
    e.Handled = true;
}

If you actually followed the Stack Overflow link, you’ll recognize that I lifted the code pretty much wholesale.

So now, my ListView eats the MouseWheel event (via PreviewMouseWheel), but only after it actually moves the scroll bar correctly.

author: Jacob | posted @ Monday, July 18, 2011 6:07 PM | Feedback (0)

Banishing Shadows in eConnect

While I prefer working through GP Web Services, sometimes the functionality you want/need simply isn’t there. Many people drop from Web Services directly to table access, but I prefer seeing if I can’t get what I need through eConnect instead. Generally speaking, I can.

Unfortunately, the eConnect SDK documentation is sparse and only really useful for simple cases. Raw schema files and cryptic explanations are par for the course and this can be frustrating.

Pulling a Customer Card

So, I’m working on an application that requires that I take a Universal Locator Id and walk that back to the customer I need to ship something to. For reasons buried in the depths of time, the UL is stored on the COMMENT2 field of the Customer Card. Since GP Web Services doesn’t expose COMMENT2 in its search criteria, I dropped back to eConnect.

Simple, right? I mean, the request document has a “WhereClause” element for just that purpose, so how hard can it be? Here’s the documentation:

Allows users to pass in a custom "where clause" built from columns of the parent table of the requested document type.

So I cranked up my eConnect library project and came up with the following code:

private RQeConnectOutType getRequest(string CustomerUL)
{
    eConnectOut outDoc = new eConnectOut()
    {
        DOCTYPE = "Customer",
        INDEX1FROM = "A001",
        INDEX1TO = "Z001",
        WhereClause = string.Format("COMMENT2 = '{0}'", CustomerUL)
    };
    RQeConnectOutType outType = new RQeConnectOutType()
    {
        eConnectOut = outDoc
    };
    return outType;
}

The only problem is, this returns every customer record. All the time. And it does so for every alternative expression I could think of. Questioning my assumptions on this to get it to work was an exercise in futility.

And no amount of search-fu helped me find a satisfactory answer to getting this to work. As far as the internet is concerned, everybody who tries this either gives up or doesn’t have to be told how to get it working.

Getting it to Work

Well, I eventually got it to work, and I thought I’d share so the next schmuck who goes through this doesn’t have to suffer through the despair I did to get there (at least, not if their search engine of choice can point them here).

The key to this working is to tell eConnect that you don’t want to work from the “shadow” tables. What’s a shadow table, you ask? Well, I can’t be certain, but I think that it refers to the e_Connect_Out table stuck in your database by eConnect. This table has summary records that you might want to work with in eConnect, with details on where to get the full record. I can’t tell you for certain because the SDK documentation doesn’t actually have a section explaining shadow tables.

The problem is that when working with the shadow tables, things that aren’t in an index field aren’t really available for filtering purposes. Thus, my WhereClause referring to COMMENT2 doesn’t do a thing because the shadow table doesn’t know from COMMENT2. The fix for this is to use the woefully misnamed element “FORLIST”. Here’s what the documentation says:

0=Return items from the shadow table. Use ACTION to specify the type of returned data.

1=Returns a list of items directly from the actual tables; does not use shadow tables

Well, hoodyhoo! That’s exactly what I needed (though I didn’t know it until I tried it). So here’s the code that actually works:

private RQeConnectOutType getRequest(string CustomerUL)
{
    eConnectOut outDoc = new eConnectOut()
    {
        DOCTYPE = "Customer",
        INDEX1FROM = "A001",
        INDEX1TO = "Z001",
        FORLIST = 1,
        WhereClause = string.Format("COMMENT2 = '{0}'", CustomerUL)
    };
    RQeConnectOutType outType = new RQeConnectOutType()
    {
        eConnectOut = outDoc
    };
    return outType;
}

The only change is the FORLIST line. Not only does this work, but it works fast. So now you know. If you want your filter to work in the WhereClause and it just won’t take, try pointing it at the real tables. I know that “FORLIST” is the obvious place to do this, but I thought I’d point it out, anyway (in case I forget).
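One wrinkle worth guarding against: the WhereClause is raw SQL text, so a value containing a single quote would break the filter. Here’s a minimal sketch of the kind of helper I mean—the method and class names are mine, not part of the eConnect API:

```csharp
using System;

static class WhereClauseHelper
{
    // Hypothetical helper (not part of eConnect): doubles single quotes
    // before embedding a value, since WhereClause is passed through as SQL.
    public static string BuildEquals(string column, string value)
    {
        return string.Format("{0} = '{1}'", column, value.Replace("'", "''"));
    }
}
```

With this, `WhereClauseHelper.BuildEquals("COMMENT2", CustomerUL)` survives values like "O'Brien" instead of producing a malformed clause.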

author: Jacob | posted @ Thursday, August 19, 2010 6:05 PM | Feedback (3)

Updating Sales Invoices With Dynamics GP Web Services

As you may have figured out from past posts, I like working with Dynamics GP Web Services when building integrations that involve our business systems. That isn’t to say that there aren’t rough spots occasionally. My latest wrestling match involved updating Sales Invoices. Since I couldn’t find information on this issue at all, I thought I’d post my struggle and solution for others who land in the same situation.

The Setup

We have multiple bins configured on our sales invoices so that we can coordinate our warehouse folks and get orders shipped quickly. That means that for each line on the invoice, inventory can come from one or more bins (if needed) to fulfill the order. Since GP isn’t very flexible in its automatic bin assignment, we have an addin that we run to allocate bins. I programmed the addin.

Sending a sales document through GP Web Services with multiple bins attached to each line item was impossible to do on Sales Orders because of a bug in the update procs that I found a couple months ago (where the bins simply aren’t updated). As a result, we have to wait until an invoice is created for the document to be able to assign bins through the web services. A pain, but not a huge deal, really.

The Specs

Dynamics version: 10.00.1368
SQL Server version: 2005
Visual Studio: 2010 (10.0.30319.1)
Target Framework: .Net Framework 3.5
Transport: WCF using basicHttpBinding

The Problem

The thing is, I was having trouble making my bin allocation re-entrant. You might want to run bin allocation twice on a batch of invoices if, say, you had shortages in the available bins on one or two orders but were able to move things around a bit to make it work. It seems reasonable. We also found that another process, one that updates shipping dates on invoice batches, was causing problems.

A little experimentation showed that updating an invoice through the web services that already had multiple bins allocated caused Dynamics GP to lose track of the allocations entirely. Fortunately, it did so in such a way that the inventory wasn’t messed up—i.e. it reverted back to the bins they had come from. Unfortunately, I could find no way to allow my bin allocations to be re-entrant and/or for my batch date change application to touch my Sales Invoices without bad things happening.

I found that even if you sent an absolutely unaltered invoice back to GP in a web services update operation, it’d lose bin allocations if they existed—whether your update included those bin allocations or not.

The Solution

I finally got a wacky idea that turns out to work a dream. It’s less than ideal, but hey, it works without having to drop down to eConnect or anything even less friendly. The trick is to fake Dynamics out with a superfluous update if you have an order with bins already allocated. Here’s the code:

public List<CustValidationItem> CommitSalesInvoice()
{
    GPService.SalesInvoice si = originalSalesDocument as GPService.SalesInvoice;
    if (si == null)
        si = new GPService.SalesInvoice();
    int allocatedCount = si.Lines.Count(l => l.QuantityFulfilled.Value > 0M);
    if (allocatedCount == si.Lines.Count())
        fakeoutUpdate(si);
    // ... remainder of the commit logic ...
}

In this code, originalSalesDocument is completely untouched and exactly the invoice as read from the web service earlier. If you were to break after fakeoutUpdate() is called, the invoice would have no bins allocated for any of the line items.

Note that I have a potential bug in there if an invoice is ever partially allocated (because then allocatedCount would be less than the total number of lines), but I’m considering that a feature for now (because it will de-allocate all bin allocations on that invoice and send up red flags throughout our system).
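For what it’s worth, distinguishing the partial case is cheap once you’re counting anyway. A sketch under assumed types—`Line` here is a stand-in for the web service’s line class, not the real one:

```csharp
using System.Collections.Generic;
using System.Linq;

// Stand-in for the GP web service line type (illustrative only).
class Line { public decimal QuantityFulfilled; }

static class AllocationCheck
{
    // Fully allocated: every line has something fulfilled.
    public static bool IsFullyAllocated(IList<Line> lines)
    {
        return lines.Count > 0 && lines.All(l => l.QuantityFulfilled > 0M);
    }

    // Partially allocated: some, but not all, lines are fulfilled --
    // the case flagged above as a potential bug.
    public static bool IsPartiallyAllocated(IList<Line> lines)
    {
        int allocated = lines.Count(l => l.QuantityFulfilled > 0M);
        return allocated > 0 && allocated < lines.Count;
    }
}
```

Branching on `IsPartiallyAllocated` would let you log or halt instead of silently de-allocating everything.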

And just to put all the cards on the table, here’s fakeoutUpdate in its entirety:

private void fakeoutUpdate(GPService.SalesInvoice invoice)
{
    // If bins are already allocated, the next update deletes the allocation regardless of what is sent.
    // This update is to de-allocate the order while preserving the bins we know about already so they'll
    // save properly later in the process.
    GPManager.Service.UpdateSalesInvoice(invoice, GPManager.GetContext(), UpdateSalesPolicy);
}

There’s another potential bug if another user updates the same sales invoice from a different machine between the time the fake is run and the actual update. Fortunately, the window between the two updates is tiny. Indeed, our order processor says that there is no noticeable difference in the time it takes to commit updates between the new process and the old one. Subjective, I know, but good enough.

The Takeaway

I find it odd that such simple and universal bugs have made it into the release of the product. It speaks poorly of the testing done at Dynamics headquarters. These aren’t bugs that should have been able to slip through a minimally competent QA process. It’s clear that nobody tested updating invoices in a multiple bin scenario. At least, not in a way that checked that the bin allocations survive the update.

I also find the lack of information out there disturbing. It makes me feel like I’m the only developer using Dynamics GP Web Services for anything beyond the most basic functions. I know that can’t be true, but you couldn’t prove it by the amount of chatter on the interwebs. I’ve noticed before that business line developers are under-represented out here and this reinforces that feeling. The only Dynamics blogs I’ve stumbled across seem to be vendors and/or supply channel/party line outlets. These blogs talk plenty about the abomination that is Dexterity, but almost not at all about alternatives like eConnect or GP Web Services.

author: Jacob | posted @ Tuesday, August 10, 2010 4:05 PM | Feedback (0)

Spammers Are Vermin

My apologies if you’ve tried to access my personal blogs recently. I’ve been inundated by comment spammers and it has been a tremendous pain in the buttocks getting them straightened out. For a while, I was getting only a half dozen or so a day. Short comments about what an amazing blog/post it was and that they’d definitely be back and/or bookmark/subscribe.

I could manually delete them without too much inconvenience for a while. Lately, though, there’s been a staggering increase in these weasels so I’ve adopted measures a little more… drastic.

A Comment Filter BlogEngine.Net Extension

I noticed that most of these spammers shared some distinctive characteristics. Many of them put down the same email address, for example. I also noticed that there were only three or four websites generally involved. Since the spam exists for the purpose of Google pagerank manipulation, the website is probably the important thing to note.

Now, I looked for a BE.Net extension that’d do this already. Unfortunately, most of the comment filters I found were tied into Akismet or some other blog filter service. That’s more overhead than I really want (configuration, registration, complexity, etc.). All I really need is something to check the email address, website, and maybe IP address against a known blacklist I can maintain myself. That shouldn’t be difficult, right?

Adventures in Comment Filtering

On the surface, these things weren’t that hard to accomplish. BlogEngine.Net has some quirks, though, that got in my way until I figured them out. For those interested, I’m going to explain them here. If you want to skip the gory details, head down to the next section. Or if you just want the extension, download it, pop it into the App_Data/Extensions folder and season to taste.

Finding the Right Event

My first impulse was to look at the Comment object for useful events to extend. Comment.Validating looked like a good candidate so I tried that one out. Unfortunately, that event never got hit on my blog. It took me a bit to realize that this is because I don’t actually validate comments. Validating comments is a setting where a comment doesn’t show up until it is approved. Since I only do blog maintenance once a day or so, I don’t want to prevent comments from showing up for that long. Validating comments would pretty much stop discussions in their tracks and I don’t want that.

Once I remembered that comments are managed on the Page object, things went much better. The Page.AddingComment event turned out to be the one I wanted.

ExtensionParameter Fun

This is the one that held me up the longest. ExtensionParameters can be assigned types that include things like “DropDown” and “ListBox”. That seemed like exactly the kind of thing I could use for my filters. You see, each filter will be of a limited number of valid types: “Website”, “Email”, “IP Address”, or “Length” (I added Length when I noticed that all these messages are really short and I might want to account for that in my filter).

Unfortunately, these ParamType values are a complete red herring for tabular data storage. I noticed that BE.Net wasn’t actually storing my selection when I tried to add filter entries. The thing is that BE.Net stores tabular values on each parameter in the DataStore and only maintains a link to them by the order in which they appear. So my parameters in the DataStore look like this once saved:

  <Label>Filter Type</Label>

It looks to me like list types (DropDown, ListBox, etc.) were mainly implemented with scalar settings in mind rather than the tabular settings this extension needs. This is unfortunate, but I can’t see an easy way to alter the architecture to enable list types. I could create my own custom admin page for the extension (and I still may) but that’s more work than I wanted to do to get this running.

The Extension

So my comment extension has been up and working for a day or two now and things have calmed down a lot. This is a good thing. I can’t say that it is extensively tested for the simple reason that I don’t get many legitimate comments on a regular basis.

Configuration is pretty simple as long as you don’t typo the Filter Type value. Each filter is its own entry in the tabular list on top.

CommentFilterConfiguration (Click image to enlarge)

Talking Back to Spammers

When I noticed that it still looks to the user like their comment is saved (because the comment is still part of the page object, it just isn’t saved to the DataStore), I had an inspiration. Since the comment is still displayed to the person who posted it (though not to anyone else), that’s an opportunity to make sure that someone running afoul of my length requirement doesn’t end up wondering what happened. Plus, it gives me a chance to tell spammers that they’ve been noticed (yeah, that’s of dubious value and I may rethink this, but for now, it just makes me feel better). If you enlarged the image above, you’ll see that there are templated values that will be used to replace the comment content. I can be as nasty as I want and the only ones who see it will be the spammers—though you’ll probably want to take it easy on those who stumble on your length filter (if any).

Spammers Should Die

A day or so after this filter went into effect I started to get new messages. These are clever little plays for sympathy saying things like “my comment got eaten but anyway… <regular spiel here>”. Or another: “my blog is getting lots of comment spam, do you know any way to help?” The website links were still classic spam sites so these weren’t real users looking for help. Cheeky little locusts, aren’t they? Seriously, someone with the right skills needs to hunt these bastards down and rearrange key organs into innovative new patterns.

author: Jacob | posted @ Tuesday, July 14, 2009 1:33 AM | Feedback (4)

Multi-blog Obsession

The multi-blog data provider for BlogEngine.Net has been taking up a lot of my brain space lately—to the point that I’m able to announce that it is installed and working “in the wild” on a hosted site (though not in anything like a heavy-load situation). I now have a copy of both my dev site and my personal site up and running from the same directory (and the same database). Frankly, I didn’t think it’d be as easy as it was. This success prompted me to create a 2.0 release (that is now up on the CodePlex site).

Getting Static

My main fear was with the heavy use of static variables in BlogEngine.Net. You see, BE.Net loads all the data into memory using static List variables. I found this out when I went looking for the best way to store a BlogId (so that it didn’t have to be parsed from the Url every time a request came through).

While there are pros and cons to keeping your entire blog in memory (pro: speed and ease, con: memory bloat and a large delay on any request that triggers a data load), my concern was how an application would react when it had to serve two sets of data. Fortunately, it seems that even when two sites share an application pool on IIS, they still keep their static spaces separate. I’m not sure what I was going to do if it didn’t but I was spared the tragedy.


Installation

Installing the blog provider mainly involves copying the binary into the /bin directory and then updating the web.config to point to the right driver. Three providers in your web.config are affected.

Blog Provider

The blog provider handles the blog data. Settings, posts, categories and suchlike. Add the provider and update the “defaultProvider” tag and you’re ready to go.


  <blogProvider defaultProvider="SQLBlogProvider">
      <providers>
          <add name="SQLBlogProvider" type="BlogEngine.SQLServer.SqlBlogProvider, BlogEngine.SQLServer" connectionStringName="BE"/>
      </providers>
  </blogProvider>


Membership Provider

The membership provider handles user authentication and management (stuff like changing passwords and such). Technically, you don’t need to change this, but if you don’t, the users will be the same across blogs (not a problem if you aren’t multi-blogging). I frankly haven’t tested whether a mixed configuration actually works, but it should. Again, add the provider and update the “defaultProvider” tag and you’re ready to go.


<membership defaultProvider="LinqMembershipProvider">
    <providers>
        <add name="LinqMembershipProvider" type="BlogEngine.SQLServer.LinqMembershipProvider, BlogEngine.SQLServer" passwordFormat="Hashed" connectionStringName="BE"/>
    </providers>
</membership>


Role Provider

The role provider handles authorization and what users are assigned to which roles. Again, you don’t technically have to change this if you don’t need it. Also again, it’s simply a matter of adding the provider and changing the “defaultProvider” tag.

<roleManager defaultProvider="LinqRoleProvider" enabled="true" cacheRolesInCookie="true" cookieName=".BLOGENGINEROLES">
    <providers>
        <add name="LinqRoleProvider" type="BlogEngine.SQLServer.LinqRoleProvider, BlogEngine.SQLServer" connectionStringName="BE"/>
    </providers>
</roleManager>


Multiple-blog Configuration

To set stuff up for multiple blogs, you’ll need to run a script or two in your database and add a tag to all the providers. There are two script files (included in both the binary and source files), one for setting up the initial database changes (DatabaseSchemaChanges.sql—mostly adds tables) and another for adding the base values for a new blog (AddNewBlog.sql).

I wanted to make this easier by having the driver do the updates for you. That may still happen in the future, but since BlogEngine.Net itself requires manually running a script if you want to use the database provider I decided not to sweat it too hard. Presumably, anyone running in a database has to be running scripts manually anyway so this isn’t going to be a show stopper.

The provider will run just fine after running either script, even if you aren’t using multiple blogs. In other words, just because the database changed doesn’t mean that the single-blog installation is hosed. The exception to this is the “be_Settings” table. If you’re going to run for a while with a single blog after running the first script, you’ll want to add a default to the BlogId column so it doesn’t choke when you insert and update settings.

Both scripts are “templated” so you can change key factors (a table prefix on the first and a couple of blog values in the second). Filling in the template is a matter of hitting ctrl-shift-M in Query Analyzer or SQL Server Management Studio. That’ll bring up a prompt for what values you want those template variables to have.

The final thing to setup is to add a multiblog attribute on the providers. That’ll make your providers look something like this.


<add name="SQLBlogProvider" type="BlogEngine.SQLServer.SqlBlogProvider, BlogEngine.SQLServer" connectionStringName="BE" multiblog="true"/>



The provider selects the blog it wants to deliver based on three configured values.

  • Host is the base address. The provider matches the Host value against the end of the requested host name, so subdomains of the configured host match as well.
  • Path is the rest of the Url. The provider matches the Path value against the start of the requested path.
  • Port is the port (if any) in the Url. Honestly, I threw this one in there as much for my testing as for any real-world use I expect it to see.
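Put together, the matching rule reads something like this sketch (BlogConfig and Matches are illustrative names, not the provider’s actual API):

```csharp
using System;

// Illustrative stand-in for a configured blog entry.
class BlogConfig
{
    public string Host;   // matched against the end of the request host
    public string Path;   // matched against the start of the request path
    public int? Port;     // optional; matched exactly when present
}

static class BlogMatcher
{
    // Applies the three rules described above to an incoming request Uri.
    public static bool Matches(BlogConfig cfg, Uri request)
    {
        return request.Host.EndsWith(cfg.Host, StringComparison.OrdinalIgnoreCase)
            && request.AbsolutePath.StartsWith(cfg.Path, StringComparison.OrdinalIgnoreCase)
            && (!cfg.Port.HasValue || request.Port == cfg.Port.Value);
    }
}
```

So a config with Host "example.com" and Path "/blog" would match a request for http://www.example.com/blog/some-post.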


One thing I added (at the provider level) is that when a post comes in without any tags, the provider takes a moment to scan for tags in the post body. This is a feature I did the initial work for in Subtext so porting it over was a matter of a couple minutes. Any time a post is inserted into the database, the provider checks if it has tags yet. If no tags are present, it will scan the content for appropriate anchor markup (like those produced for Technorati tags). That means that on import, my posts all had their tags correctly populated—saving me a lot of extra work (I’d otherwise have lost tags on imported posts). That I was able to avoid the brain-damaged tag handling of BlogEngine.Net is just a bonus (they lower-case tags on creation and then re-capitalize them when serving them up).
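The scan itself amounts to pulling rel="tag" anchors out of the post body. A rough sketch of that idea—the regex and method name are mine; the provider’s actual implementation may differ:

```csharp
using System.Collections.Generic;
using System.Text.RegularExpressions;

static class TagScanner
{
    // Finds Technorati-style tag anchors: <a href="..." rel="tag">Tag</a>.
    public static List<string> ScanForTags(string html)
    {
        var tags = new List<string>();
        foreach (Match m in Regex.Matches(html,
            @"<a\b[^>]*rel=""tag""[^>]*>(?<tag>[^<]+)</a>",
            RegexOptions.IgnoreCase))
        {
            tags.Add(m.Groups["tag"].Value.Trim());
        }
        return tags;
    }
}
```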

Other Stuff

As I said, this should get you set up. Since I used this blog provider from the start on both my blogs, I can verify that the import tool works just fine in a multi-blog configuration. As far as BlogEngine.Net is aware, it’s doing the same stuff it always has. Indeed, the only change I made from BlogEngine.Net’s standard v1.4.5 release was in UrlRewrite.cs to allow links produced by Subtext to still work (so I don’t throw errors on old links).


else if (url.Contains("/POST/") || url.Contains("/ARCHIVE/"))


I submitted a patch at one time to have this hit the base source code but apparently it wasn’t deemed worthy.

Also, I found that running the provider in IIS7 is a bit tricky. Since BlogEngine.Net loads extensions from the database on application start you’ll get errors if you are configured for “Integrated” mode. That’s because “Integrated” mode (quite properly) fires the application start event before the HttpContext.Request is populated (which is what I’m using to determine what blog is being requested). Setting the application pool to “Classic” mode will solve this “problem”.

Looking Forward

My blogs are still running Subtext at their base addresses. I’m still not quite ready to take the plunge on BlogEngine.Net.  I am, however, undoubtedly one step closer.

author: Jacob | posted @ Thursday, April 02, 2009 5:26 PM | Feedback (17)

Multiple Blog Data

So I have a working LINQ to SQL provider for BlogEngine.Net. Now what? Given a little spare time, how about I see if I can’t use it to support running multiple blogs from the same installation? More importantly, see if I can use it to support running multiple blogs from the same database?

Doing just that turns out not to be all that difficult.


The current architecture for BlogEngine.Net’s data already has a bit more cohesion than it technically needs. All the objects have their own individual Ids and those Ids are used to relate objects to each other (though there is one exception). Since every object already has its own Id (usually a Guid), splitting objects into separate blogs isn’t the chore it might otherwise have been.

There are two options when it comes to dividing items up into multiple blogs. First, each object can have a column added to its table to indicate which blog it is associated with. Second, you can create a cross-reference table that associates a blog Id with the object Id for the blog.

My initial impulse in most cases would be to add a BlogId column to the tables where it is needed. The reason is simple: objects belonging to the blog are in a true parent-child relationship and that relationship is generally best expressed as a field on the child indicating its parent. The relationship can (and really should) be enforced with a foreign key constraint on the column to ensure that the relationship is intact.

Having cross-reference tables is a bit more problematic and carries with it some maintenance and performance concerns. Not only does it force a join when you want to read the objects for a specific blog, but it means that insert, update, and delete commands now have to involve two tables instead of just one. One advantage of cross-reference tables is that they’re easier to extract back out if you need to devolve your data. Additionally, foreign key constraint integrity is triggered when the cross-reference entry is created instead of on your blog objects themselves—making your touch a bit lighter if you have other actors in the system.

Complicating Things

No decision is best for every occasion, and when it came time to design how I wanted multiple blogs to work, I was really reluctant to mess with the native tables of BlogEngine.Net. I’m not sure if my hesitation is a matter of respect for a project I’m not involved in or if I’m just being unreasonably squeamish, but I eventually chose to go the cross-reference route. My main reasoning is that I wanted my intrusions to remain light and easily devolved.

I ♥ Linq

Now, normally, adding a super-structure on top of an existing infrastructure is a real pain. Editing all your SQL statements manually becomes an exercise in precision string manipulation and if you’re working through stored procedures… ugh. Linq made this really easy.

Here’s an example from the FillProfiles method of the blog provider.

var profileData = from p in context.Profiles
                  select p;
if (isMultiBlog)
    profileData = from p in profileData
                  join bp in context.BlogProfiles on p.ProfileID equals bp.ProfileId
                  where bp.BlogId == Utils.GetBlogId()
                  select p;

The initial select is good for the general case: it pulls all the objects from the Profiles table. The filter for multiple blogs is then added in the if clause. Note that the second select references the first (“from p in profileData”). Linq knows that the second “from” is a refinement of the first and composes the two logically. Since Linq defers execution of the query until it’s actually used, the query sent to the server includes the full constraint (i.e. filtering happens on the database). Here’s the statement that’s actually sent.

SELECT [t0].[ProfileID], [t0].[UserName], [t0].[SettingName], [t0].[SettingValue]
FROM [dbo].[be_Profiles] AS [t0]
INNER JOIN [dbo].[be_BlogProfiles] AS [t1] ON [t0].[ProfileID] = [t1].[ProfileId]
WHERE [t1].[BlogId] = @p0

This method ensures that you only take the hit of the join if you are in a multi-blog setup. And without pulling everything to the client.
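The deferred execution bit is worth underlining, because it’s easy to break by accident. Here’s a sketch of the anti-pattern (hypothetical, not from the actual provider code):

```csharp
// Calling ToList() executes the query immediately, so every profile
// row crosses the wire. The refinement below then runs as Linq to
// Objects on the client, enumerating BlogProfiles separately as well.
var profileData = (from p in context.Profiles
                   select p).ToList();
if (isMultiBlog)
    profileData = (from p in profileData
                   join bp in context.BlogProfiles on p.ProfileID equals bp.ProfileId
                   where bp.BlogId == Utils.GetBlogId()
                   select p).ToList();
```

Leaving both queries unmaterialized until they’re actually enumerated is what lets Linq to SQL compose them into a single joined SELECT on the server.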


I had some fun with the Settings table because it is an exception to BlogEngine.Net’s Id rigor. It has an interesting impact on the Linq situation, but I think I’ll give it its own (short) post later.

Beta Available

So I tested this in my own home-grown environment and it seems to work as expected. In consequence, I’ve created a new release at the project homepage. I’m calling it a beta, though it barely warrants the label. I worry that it has only been tested in a single environment. If you’re a hearty soul and a BE.Net user, please give it a go. I’ll be spending some time getting it set up and tested in an actual public setting with my personal blogs here shortly. As always, I welcome feedback either at codeplex or comments or via email.

author: Jacob | posted @ Friday, March 27, 2009 5:00 PM | Feedback (0)

WCF With GP Web Services

I’m at Convergence this week in New Orleans. If you’re unfamiliar with the conference (and don’t want to follow the nifty link), all you really need to know is that it’s Microsoft’s convention for their business solutions products. For me, that means Dynamics Great Plains.

I bring this up because in the last session I attended yesterday, Louis Maresca mentioned a problem I remembered having with GP Web Services. GP WS has a serious problem the first time you instantiate the proxy object: it can take seconds (over 30 on our older systems—I put a timer in just to verify). The reason is that .Net queries the service to pull down the available methods and objects on instantiation. Since there are a great many of them in GP Web Services, the query and XML serialization add up to quite a lot.

Now, his solution was very clever, but involved creating a new web service to slim down the contract retrieval. My solution was to saddle up and use WCF. You see, WCF doesn’t do silly things like query for contracts it already knows full well about. I cracked how to use WCF with GP Web Services about a year ago and I haven’t looked back since.

In that session last night I realized that others might want to know what it took to get it working (and thus a blog post was born…) I’m not going to go through creating the WCF bit. It’s pretty straightforward and explained all over.

Crap, I find I can’t actually proceed without at least giving an overview.

  1. Right-click your project.
  2. Select “Add Service Reference”.
  3. Fill out the dialog:


Okay, now that I got that out of my system, there are two things that prevent WCF and GP Web Services from playing nicely together.


Credentials

Since GP WS uses your windows identity to validate things like roles and permissions, your client needs to send the correct identity or “bad things can happen”™. In VS 2005 web services, this was a simple matter of setting .UseDefaultCredentials to true. In WCF it’s a good bit more complicated. It mirrors the setup for printing Reporting Services remotely using WCF, though, so the techniques used there are applicable (if slightly different).

First, you have to let the binding know the correct security mode and transport. I did this in a basicHttpBinding in the <security> section:

<security mode="TransportCredentialOnly">
  <transport clientCredentialType="Ntlm" />
</security>

I came at this setting obliquely and after much trial and error. I’m not sure why clientCredentialType=“Windows” didn’t work against GP WS when it worked with Reporting Services. Probably something quirky in our environment.
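For context, that section sits inside the binding configuration in the client’s config file, something like the following (the binding name here is hypothetical; match it to whatever “Add Service Reference” generated for you):

```xml
<bindings>
  <basicHttpBinding>
    <binding name="GPWebServiceBinding">
      <security mode="TransportCredentialOnly">
        <transport clientCredentialType="Ntlm" />
      </security>
    </binding>
  </basicHttpBinding>
</bindings>
```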

This alone is not enough, however. The binding setting is just the contract. To actually use the correct credentials, your proxy has to be told what to do. Not difficult, but easy to overlook when you’re coming from a 2005 web services background. Here’s all it takes:

DynamicsGPSoapClient service = new DynamicsGPSoapClient();
service.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;

Once that’s all taken care of, you’re set to go. Those two lines of code are processed pretty much instantaneously on even our slowest clients, so problem solved. Almost.


Error Handling

Error handling hung me up for a while and was the final hurdle to truly implementing WCF with GP WS. I was so excited when I finally figured it out that I blogged it at the time. The key point is that GP WS wants to check a user’s authorization to view errors before giving up the details of what happened, so you have to hit the web service again for details. Thus, while the status message is informative, you only get a GUID for detail in the initial error. This is not a bad thing, but it leads to difficulties when putting together your excuses to the user—particularly since WCF doesn’t make it easy to get at the details of an untyped FaultException.
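I’ll save the full write-up for that other post, but the shape of the problem looks something like this. A hedged sketch: the service call shown is hypothetical, and exactly how the GUID is buried in the fault detail may vary by GP version.

```csharp
try
{
    service.GetCompanyList(criteria, context);  // hypothetical GP WS call
}
catch (FaultException ex)  // untyped: no fault contract to deserialize into
{
    // The initial fault detail carries only a GUID identifying the logged error.
    MessageFault fault = ex.CreateMessageFault();
    if (fault.HasDetail)
    {
        // Read the raw detail XML and fish the GUID out of it.
        string detailXml;
        using (XmlDictionaryReader reader = fault.GetReaderAtDetailContents())
            detailXml = reader.ReadOuterXml();
        // A second, authorization-checked call to GP WS with that GUID
        // is what finally yields the human-readable error details.
    }
}
```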

Simple as That

From here, everything is pretty much the same. You have your objects in the domain you specified in the “Add Service Reference” dialog given above (GPService in my screenshot). Your proxy object has the methods you can use.

author: Jacob | posted @ Wednesday, March 11, 2009 9:20 PM | Feedback (0)

Gratuitous Use of Linq

Every now and then I get to doing something just because... well, because I can. These projects usually atrophy before becoming anything usable and serve more as a way to explore and practice than anything else. Usually. My latest tangent actually got to a state where I can let it loose in the wild and it’ll probably actually do what it’s supposed to do.


Let me be perfectly clear up front: I don’t actually use BlogEngine.Net at all. Anywhere. I’m still a Subtext guy when it comes to blogging software. BlogEngine.Net still lacks critical features and that prevents me from using it as yet (primarily running multiple blogs from a single installation/database).

That said, BlogEngine.Net is a lovely little product with a lot to like about it. The extensibility model is extremely easy to use and flexible. The theming doesn’t suck. And its architecture is easy to navigate/understand even while it makes investments in areas I consider likely to pay off.

Data Access

While I like the product, the data access bugs me more than a little. It uses a provider model and includes built-in XML and database providers. These are good things. For flexibility, the database provider uses System.Data.Common and the DbProviderFactory with string-built commands. This structure allows BlogEngine.Net to use any database that has a .Net data provider (including things like MySQL, SQLite, or VistaDB). Incidentally, they downplay (unintentionally?) this feature on their website, saying in their FAQ that they support XML and “the SQL Server provider”.

At any rate, here’s their SelectPage implementation as an example:


public override Page SelectPage(Guid id)
{
    Page page = new Page();
    string connString = ConfigurationManager.ConnectionStrings[connStringName].ConnectionString;
    string providerName = ConfigurationManager.ConnectionStrings[connStringName].ProviderName;
    DbProviderFactory provider = DbProviderFactories.GetFactory(providerName);
    using (DbConnection conn = provider.CreateConnection())
    {
        conn.ConnectionString = connString;
        conn.Open();
        using (DbCommand cmd = conn.CreateCommand())
        {
            string sqlQuery = "SELECT PageID, Title, Description, PageContent, DateCreated, " +
                              "   DateModified, Keywords, IsPublished, IsFrontPage, Parent, ShowInList " +
                              "FROM " + tablePrefix + "Pages " +
                              "WHERE PageID = " + parmPrefix + "id";
            cmd.CommandText = sqlQuery;
            cmd.CommandType = CommandType.Text;
            DbParameter dpID = provider.CreateParameter();
            dpID.ParameterName = parmPrefix + "id";
            dpID.Value = id.ToString();
            cmd.Parameters.Add(dpID);
            using (DbDataReader rdr = cmd.ExecuteReader())
            {
                if (rdr.HasRows)
                {
                    rdr.Read();
                    page.Id = rdr.GetGuid(0);
                    page.Title = rdr.GetString(1);
                    page.Content = rdr.GetString(3);
                    if (!rdr.IsDBNull(2))
                        page.Description = rdr.GetString(2);
                    if (!rdr.IsDBNull(4))
                        page.DateCreated = rdr.GetDateTime(4);
                    if (!rdr.IsDBNull(5))
                        page.DateModified = rdr.GetDateTime(5);
                    if (!rdr.IsDBNull(6))
                        page.Keywords = rdr.GetString(6);
                    if (!rdr.IsDBNull(7))
                        page.IsPublished = rdr.GetBoolean(7);
                    if (!rdr.IsDBNull(8))
                        page.IsFrontPage = rdr.GetBoolean(8);
                    if (!rdr.IsDBNull(9))
                        page.Parent = rdr.GetGuid(9);
                    if (!rdr.IsDBNull(10))
                        page.ShowInList = rdr.GetBoolean(10);
                }
            }
        }
    }
    return page;
}


I want to reiterate that I’m not ripping on their choice to use this data access methodology. If you want the flexibility to use any .Net supported data provider without a third-party dependency, this is how you get it. You could optimize some of the stringiness, but with trade-offs.

Linq, Linq, Baby

That said, I don’t mind being tied to SQL Server and if I’m going to muck with the data layer (like, say, if I’m going to attempt multiple blogs from a single application instance) I want something simple that I can use without all that extra goo. I looked for any hint that this might have been done already, but I couldn’t find anything. It looks like I’m the only one with this particular manifestation of brain damage.

Since configurable table prefixes are a desirable feature and since that’s easier to do in Linq to SQL I figured it’d be best to go that route. It was good to get the table prefix stuff into a working project and have it work about as I expected it to.

Implementing the BlogEngine.Net blog provider turns out to be pretty easy in Linq. Ditto the Role and Membership providers. I tried to stay as close as possible to the DbBlogProvider. Even so, I found that some of the admin components are picky enough that even little things could bite me (the category editing page blows up if you leave category descriptions null, for example).

For compare and contrast, here’s my SelectPage method using Linq to SQL:


public override Page SelectPage(Guid id)
{
    Page page = null;
    using (Data.BlogDataContext context = getNewContext())
    {
        Data.Page pageData = (from p in context.Pages
                              where p.PageID == id
                              select p).FirstOrDefault();
        if (pageData != null)
        {
            page = new Page()
            {
                Id = pageData.PageID,
                Title = pageData.Title,
                Description = pageData.Description,
                Content = pageData.PageContent,
                Keywords = pageData.Keywords,
                DateCreated = pageData.DateCreated.HasValue ? pageData.DateCreated.Value : DateTime.MinValue,
                DateModified = pageData.DateModified.HasValue ? pageData.DateModified.Value : DateTime.MinValue,
                IsPublished = pageData.IsPublished.HasValue ? pageData.IsPublished.Value : false,
                IsFrontPage = pageData.IsFrontPage.HasValue ? pageData.IsFrontPage.Value : false,
                Parent = pageData.Parent.HasValue ? pageData.Parent.Value : Guid.Empty,
                ShowInList = pageData.ShowInList.HasValue ? pageData.ShowInList.Value : false
            };
        }
    }
    return page;
}

BlogEngine.Net for SQL Server

So I got the thing working and thought I’d open it for “the community”. Since BlogEngine.Net is on CodePlex, that was a natural choice. Anyone sharing my peculiar proclivity is invited to head on over, take a poke and let me know what can change for the better. Or better yet, submit a patch. Or better, better yet, join the project. (also: CodePlex’s recent transparent svn compatibility is awesome! When did that happen?)

If you do poke at the project, some forewarning. First, I don’t have unit tests for this and don’t plan on any (I’m not against using them, I just don’t want the headache of creating them myself). Data access is more in the realm of “integration testing”, so I’m not sure there’s much you can really do that’s actually useful. It might be different if BlogEngine.Net had a suite of tests I could use to validate my providers against, but...

Second, I haven’t gone out of my way to do a lot of commenting. This is deliberate. These methods are short, should be self-explanatory, and since they should be hidden behind a provider I didn’t even bother with the typical XDoc stuff. Anything directly accessing them such that intellisense comes to bear is doing something wrong...

Room to Improve

In the end, this is the ground-work to remove an (admittedly trivial) barrier in working with BlogEngine.Net. I hope to help move as much as possible into the database in order to support things like multi-blog configurations. BlogEngine.Net hasn’t been very disciplined in its storage layer and XML really is the default medium. Things like referrer tracking don’t use a provider at all so you really can’t (yet) get away from XML files in your App_Data directory (and hence, read/write permission configuration). Since I want that to change, I’m going to see if I can’t get that going a bit.

Things to do
  • Decide on a database update methodology (so far, I’m using the already-existing tables and hence piggy-backing on the BlogEngine.Net scripts)
  • Replace the built-in ReferrerModule (use/create provider model access for referrers?)
  • Comb the project for any other errant XML dependencies
  • Work on multi-blogging configuration

author: Jacob | posted @ Wednesday, March 04, 2009 1:11 PM | Feedback (7)

So You Think You're An Admin?

I had an interesting problem crop up trying to run my own application this week. We have a routine that uses an excel spreadsheet to import orders into Dynamics GP that includes some twists that aren’t handled well by Integration Manager. Since the application runs from the network (using ClickOnce) and because these orders can be substantial and represent a commitment of corporate resources, we want some control over who can run them. Specifically, we use Active Directory group membership with hard-coded/defined groups.

One of the groups I want to allow is Domain Admins. And yes, this is a kludge. All three members of our small IT shop are Domain Admins—mainly so that we can act as backup when the others are unavailable. It’s a handy kludge, though, so lump it. Unfortunately, when running from my machine (running Vista), the user token being used to check Identity.IsInRole() wasn’t admitting that I am, in fact, in the Domain Admins domain group.

This is, by the way, the first I’ve run into an inversion of Works on My Machine™.

The Problem

It wasn’t terribly difficult to figure out what was wrong. The key to the problem is that Vista UAC (which I actually rather like because I want to know when programs undertake certain activities) creates a “split token” when you login using an account with admin privileges. The user actually runs using the filtered token that removes the dangerous things and only elevates (with user notification and approval) when those privileges are actually needed.

So when I asked WindowsIdentity.IsInRole("COMPANY\Domain Admins"), it told me that it had never heard of that role and that I certainly wasn’t a member of it. This was disconcerting.

Now, the problem goes away if you start Visual Studio with “Run as Administrator” or launch an application from a shortcut with that setting. That works fine (not great) while developing (if you remember to start VS as administrator), but eventually I got tired of it, and sometimes I want to run the deployed app from my box. There’s just one small hitch. Remember that I mentioned we deploy the app to the network using ClickOnce? It turns out there’s no good way to start a ClickOnce app with elevated privileges. Googling around (and even checking Stack Overflow), I found some people who wrote what were essentially batch files or ran services that could then be used to elevate processes, either to run ClickOnce apps or to allow ClickOnce apps to do stuff that requires elevation. But really, that’s a lot of hassle for something I just knew had to be simpler.

The Solution

After beating my head on the problem for a bit, I eventually took a step back and asked myself that crucial dev question: “What am I actually trying to accomplish here?” I need to remind myself to do that sooner when I find myself “brought to Point Non Plus” (as Georgette Heyer’s characters might say). It turns out to be a good question and one that led to the “Duh” moment I share with you now.

Since I’m not actually doing anything that requires admin privileges, going for process elevation is a complete waste of time. All I really want to know is if the current user is part of a specific Active Directory group. Didn’t I see something about .Net Framework 3.5 and managed domain objects? Why yes! Yes I did!

The nifty little buggers are in System.DirectoryServices.AccountManagement and if you do anything with Active Directory domains you owe it to yourself to give this namespace a once over. Here’s what I ended up with:

bool isAllowed = false;
WindowsIdentity wi = WindowsIdentity.GetCurrent();
using (PrincipalContext pc = new PrincipalContext(ContextType.Domain, "COMPANY"))
{
    UserPrincipal up = UserPrincipal.FindByIdentity(pc, wi.Name);
    GroupPrincipal gp = GroupPrincipal.FindByIdentity(pc, "Domain Admins");
    if (up.IsMemberOf(gp))
        isAllowed = true;
}
This worked right out of the chute. Well, getting the right AD group membership didn’t want to work when using the Principal.IsMemberOf(PrincipalContext, IdentityType, string) overload, but pulling down the actual GroupPrincipal looks cleaner anyway, I find.

author: Jacob | posted @ Friday, February 13, 2009 1:56 PM | Feedback (3)

Professional Values

One of the things I am seeing less of lately is the understanding that reasonable people can and will disagree with one another—without either of them being any the less reasonable or intelligent for doing so. It seems to me that people become so invested with the “rightness” of their ideas that they deny the possibility that those who disagree with them may be equally intelligent and well-informed. You see it a lot in politics, but I think that this attitude has crept into development discussions as well.

I saw a manifestation of this in action after a recent Stack Overflow podcast wherein Joel had the temerity to question Robert Martin’s SOLID principles (SOLID was the topic of a recent podcast with Scott Hanselman that Joel had apparently heard). I highly recommend both podcasts. It didn’t take long for some bright stars in the Alt.Net universe to talk about Joel jumping the shark or the state of his imperial wardrobe. I’m not sure why the impulse to denigrate those who disagree with you is so strong, but it appears to be nearly universal. We go from the belief that somebody is wrong to the conclusion that they are incompetent or ignorant without taking time to draw breath, really.

I think this impulse is not only wrong, but damaging. It represents a voluntary limitation of our ability to engage in important dialog and stretch our understanding.


One crutch of those who participate in this hidden hubris is the belief that people who disagree must be missing data. You can see this most commonly when someone tells you to “educate yourself” or its milder form “try it and you’ll see”. The underlying message in those statements is that if you knew what they know, you’d do what they do.

And that may be correct. It really could be the case that someone is missing data and if they had that data they might agree. Since software development is so change-driven, much of development blogging is motivated by the desire to teach others things they might not have heard about before. You have to honor the often unrewarded efforts of those who take the time to put information out there for the benefit of those seeking education.

The problem is that telling someone who questions your tools or methods to educate themselves is arrogant, even if you are completely sincere and honestly well-meaning. You see, there are really only two possibilities for someone questioning you: they lack data or they have arrived at a different conclusion after weighing the data themselves.

In the first case, telling them to educate themselves notifies them of your superiority (you are in possession of data they lack) at the same time you deny access to yourself as a resource (you are directing them elsewhere for acquiring that data). It’s a dismissive brush off. In the second case, telling them to educate themselves is a judgement of their experience and conclusion without the benefit of explanation or debate. In both cases you are saying that you are better than they are and that they shouldn’t be allowed to participate in the discussion as a result.

If you honestly believe that someone disagrees with you due to lack of information, a better response would be a series of questions or even challenges geared towards examining their point better. “Have you considered” questions or “I disagree because” statements are more helpful; they give others the chance to respond and have the courtesy of taking someone seriously enough to invest in the discussion.

Weighty Matters

What if I’m already educated and I still disagree? What if I’ve done the recon, seen the scene, danced the dance, bought the souvenir and I still beg to differ with your Great Truth? How can two intelligent people look at the same data, having access to the same information, and arrive at two legitimately different conclusions?

Allow me to educate you. (heh)

You see, most decisions are complex and involve competing principles. Do I take the time to add an abstraction layer that will be useful later, or do I YAGNI my way through with the simplest solution that works right now? Do I hassle with System.Data.Common to support multiple database providers, or allow a strong dependency on SQL Server? Do I use test-first to drive out my design, or am I confident that my planned design is decoupled enough and tests ex-post-hackto will be adequate? All of these decisions break down into factors that we weight according to our experience and expectations.

I’ll illustrate what I mean. Let’s take, for example, deciding whether Georgette Heyer is better than Meg Cabot. It can be argued (and I would, indeed, so argue) that they have comparable ability with characterization, prose, story, and plot. However, Meg Cabot sets her characters in modern settings while Georgette Heyer’s books are chiefly set in Regency England. Two people could easily disagree over which author is better if one puts a strong weight on “historical Regency setting”, without either being any the less intelligent or in need of education.

Back to software development. Consider some of the factors that might go into deciding to use TDD.

  • Force strongly decoupled design
  • Ensure a minimum level of test coverage
  • Early API exploration and (pseudo) documentation
  • Commit project to a test framework
  • Commit project to a mock framework
  • Commit project to automated testing infrastructure
  • Writing tests without the benefit of intellisense

How you weight each factor will determine your judgement of TDD. Someone who is not worried about forcing decoupled design (either because decoupling doesn’t pay off in their environment or because they feel they can achieve decoupling without force) will be less likely to choose TDD for their project.

And it gets even more complicated when you realize that each factor can, in turn, be composed of additional factors with their own weights and trade-offs (you could easily break “decoupled design” down further, for example).

Learning to look for underlying principles and the weights that others may apply can be interesting as well as useful. More importantly, it opens up the gray areas and opportunities for honest disagreement (without denigration) that are vital when working with IRealWorld.

Educating Yourself

As software professionals, we owe it to ourselves, our employers, and our clients, to be educated in our craft. That’s a not inconsiderable burden in a field that grows by leaps and bounds year after year with no sign of slowing any time soon. It is hard enough learning all the concepts, patterns, and practices (not to mention tools, environments, and platforms) that it is often tempting to find a core of experts that you rely on to make decisions for you (and there is some benefit in doing so initially as you come up to speed on the intricacies of our field).

But to be truly educated, you have to go beyond what your experts say and learn about the principles that exist underneath. That’s extra work but more importantly it is extra responsibility. It may be that when you break a practice down into its constituent principles that you will find yourself in a situation where “Best Practices” aren’t, in fact, best. To me, it is the ability to not only make that determination but to then act on it that truly makes a “professional”.

author: Jacob | posted @ Friday, February 06, 2009 7:17 PM | Feedback (5)