I spent some quality time googling this and even went and asked the nascent Stack Overflow community and didn’t come up with a satisfactory answer. Being the intrepid sort, I opened up a test project and started poking around, compiling information from a number of sources and playing until I got something that worked. For your amusement and/or edification, I’ll document what I found.

What I Want to Do

The basic scenario is that many typical “commodity” web applications use databases to store their information. Since most web hosting services come with a single database but charge extra for additional ones, it is common for web products to add identifying text to their table and stored procedure names*. The .Net blog software I sometimes contribute to, Subtext, is an example. Take the table I added a while back to hold tags associated with posts, “subtext_Tag”. Using the “subtext_” prefix means that we won’t run into naming collisions if someone runs a wiki or forum application that also contains a table for tag entities named “Tag”.

* And yes, as one user on Stack Overflow suggested, you could use a schema as the differentiator, so we could just as easily have used a “subtext” schema and our table would be “subtext.Tag” instead of “dbo.Tag”. That solution is much trickier to set up than a simple table naming convention, though, so I think I prefer the table prefix for now.

The .Net Way

While I was initially open to any .Net OR/M, the fact of the matter is that the only one I know anything about is SubSonic. Now I like SubSonic a lot, but its tools are geared toward code generation, and I couldn’t find a hook into runtime manipulation of table names (one might exist, but I was unable to find it).

Since I’m even less familiar with the other third-party .Net OR/M tools (like LLBLGen or NHibernate), and nobody on Stack Overflow (where people tend to be knowledgeable about these things) spoke up, I decided to check out what it would take to monkey with the table names at runtime using stuff I have actually used: namely, the Entity Framework and LINQ to SQL. It turns out to be possible in both, though I have to admit to being surprised that it is easier in LINQ to SQL than in EF.

My Test Database

To keep things simple, I created a test database. Since I am a highly creative professional, I named it “Test” and created a table there named “TestTable”.

(Test schema diagram)

LINQ to SQL

The DataContext class in LINQ to SQL includes a constructor overload that lets you feed in a MappingSource derivative. By default, L2S uses an attribute mapping source that pulls the metadata from attributes on your classes—in this case the “Name” property of a TableAttribute.

[Table(Name="dbo.TestTable")]
public partial class TestTable : INotifyPropertyChanging, INotifyPropertyChanged

Since attributes are immutable at runtime, using the default isn’t an option here. Fortunately there’s an XmlMappingSource object that can use an XML file (or fragment) to do what I need. Unfortunately, generating an initial mapping file is a touch cumbersome and requires use of the SQLMetal.exe tool provided with Visual Studio.

Here’s how (after adding a TestLINQ.dbml file and dragging the table onto it—pre-name-change of course):

  1. I opened the VS Command Prompt (it’s in a Visual Studio Tools folder in the start menu).
  2. Changed the directory to my project.
  3. Entered the command “sqlmetal /map:TestLINQ.map /code TestLINQ.dbml”. This generates a TestLINQ.map file.
  4. Right-clicked on the generated file in my project and selected “Include in Project”.
  5. Set the file’s “Copy to Output Directory” property to “Copy Always”.

The mapping file is pretty simple. The relevant bit is the Name attribute of the Table element:

<Table Name="dbo.test_TestTable" Member="TestTables">
  <Type Name="LINQ.TestTable">
    <Column Name="TestId" Member="TestId" Storage="_TestId" DbType="Int NOT NULL IDENTITY" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" />
    <Column Name="TestOne" Member="TestOne" Storage="_TestOne" DbType="VarChar(50)" />
    <Column Name="TestTwo" Member="TestTwo" Storage="_TestTwo" DbType="VarChar(50)" />
  </Type>
</Table>

Make sure the name is what you want it to be and you’re golden. Here’s the code I used to test it out after the name change.

XmlMappingSource source = XmlMappingSource.FromUrl("TestLINQ.map");
using (LINQ.TestLINQDataContext context = new LINQ.TestLINQDataContext(Properties.Settings.Default.TestConnectionString, source))
{
    LINQ.TestTable table = new LINQ.TestTable()
    {
        TestOne = "firstLINQ",
        TestTwo = "secondLINQ"
    };
    context.TestTables.InsertOnSubmit(table);
    context.SubmitChanges();
 
    table.TestOne = "thirdLINQ";
    context.SubmitChanges();
 
    context.TestTables.DeleteOnSubmit(table);
    context.SubmitChanges();
}
 

XmlMappingSource even has a .FromXml() method that will create your map from an XML fragment string.
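For instance, here’s a minimal sketch of the inline approach, reusing the same table mapping as the .map file above. (The Database wrapper element and mapping namespace are what SqlMetal generates, but treat the exact shape here as illustrative rather than gospel.)

```csharp
// Sketch: build the mapping from an inline XML string instead of a file
// on disk. The <Table> fragment is the same one shown earlier; the
// <Database> wrapper is what SqlMetal emits around it.
string mappingXml =
    @"<Database Name=""Test""
                xmlns=""http://schemas.microsoft.com/linqtosql/mapping/2007"">
        <Table Name=""dbo.test_TestTable"" Member=""TestTables"">
          <Type Name=""LINQ.TestTable"">
            <Column Name=""TestId"" Member=""TestId"" Storage=""_TestId""
                    DbType=""Int NOT NULL IDENTITY"" IsPrimaryKey=""true""
                    IsDbGenerated=""true"" AutoSync=""OnInsert"" />
            <Column Name=""TestOne"" Member=""TestOne"" Storage=""_TestOne"" DbType=""VarChar(50)"" />
            <Column Name=""TestTwo"" Member=""TestTwo"" Storage=""_TestTwo"" DbType=""VarChar(50)"" />
          </Type>
        </Table>
      </Database>";

XmlMappingSource source = XmlMappingSource.FromXml(mappingXml);
using (var context = new LINQ.TestLINQDataContext(
    Properties.Settings.Default.TestConnectionString, source))
{
    // Queries and changes now go against dbo.test_TestTable,
    // with no .map file deployed alongside the assembly.
}
```

The nice part is that the XML string could just as easily be built at runtime, so the table prefix becomes an ordinary configuration value.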

Entity Framework

As I mentioned before, this one is harder. That’s a surprise, because EF was ostensibly created to make it easier to keep your object definitions separate from your storage definitions. The reason it isn’t easier becomes understandable once you realize that EF is built to be highly configurable, and thus its definition files are much more complex than the L2S mapping.

The first problem with EF, though, is that the documentation is schizophrenic. Also confusing. That’s because MS rolled the original three configuration files into the .edmx definition file, so references on the web imply that those files are easily seen and edited. Even more confusing is that EF actually still uses the .csdl, .ssdl, and .msl files at runtime—it just generates them from the .edmx and either embeds them in the assembly as resources (the default) or writes them as files to your output directory.

Well, to monkey with the tables at runtime, you have to have access to the configuration files. To get them, change the “Metadata Artifact Processing” property of your conceptual entity model from its default (“Embed in Output Assembly”) to “Copy to Output Directory” and rebuild the project. That’ll put perfectly good .csdl, .ssdl, and .msl files in your output bin directory. (You don’t have to leave it set to “Copy to Output Directory” once you have saved these files off for your own use and abuse.)

EF did another funky thing by deviating from the norm in what it pulls from the connection string that you feed it. If you look at your EF connection string in app.config (or web.config) you’ll see something like this:

<add name="TestEntities" 
connectionString="metadata=res://*/TestEF.csdl|res://*/TestEF.ssdl|res://*/TestEF.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=localhost;Initial Catalog=Test;Integrated Security=True;MultipleActiveResultSets=True&quot;"
providerName="System.Data.EntityClient" />
 

Notice that there’s a “provider connection string” embedded in the connectionString attribute—that’s the “normal” connection string that tells EF what database to use for storage. Also notice that the metadata property tells EF where to go for those configuration files (in this case, it’s telling EF to look in the assembly for the resources TestEF.csdl, TestEF.ssdl, and TestEF.msl).

Armed with this information, I was able to add a set of config files to the project, stolen from those generated in the output directory. Once they’re there, you have to edit the storage file and the mapping file to use the altered names. The table name used by EF is taken from the Name attribute of the EntitySet element in the .ssdl file. This is unfortunate because the Name attribute is a reference used by things like associations, which means that you have to make sure you alter all the references to that EntitySet as well (fortunately, these are generally referenced using an EntitySet attribute on the relevant association and thus are relatively easy to find).
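To illustrate what one of those references looks like: my test table has no child tables, so the association and role names below are made up, but a hypothetical foreign key in the .ssdl would need its End elements updated to point at the renamed set.

```xml
<!-- Hypothetical example: FK_TestChild_TestTable and TestChild are invented
     for illustration. The EntitySet attribute on each End element must match
     the new Name of the EntitySet ("test_TestTable"), or the model won't load. -->
<AssociationSet Name="FK_TestChild_TestTable"
                Association="TestModel.Store.FK_TestChild_TestTable">
  <End Role="TestTable" EntitySet="test_TestTable" />
  <End Role="TestChild" EntitySet="TestChild" />
</AssociationSet>
```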

<EntitySet Name="test_TestTable" EntityType="TestModel.Store.TestTable" store:Type="Tables" Schema="dbo" />
 

It’s not necessary to alter the EntityType, though if you do, I found that it will still work as long as you alter it consistently across the file.

The mapping configuration in the .msl file just has to be updated so that the objects use the correct EntitySet for storage.

<EntitySetMapping Name="TestTable">
  <EntityTypeMapping TypeName="IsTypeOf(TestEF.TestTable)">
    <MappingFragment StoreEntitySet="test_TestTable">
      <ScalarProperty Name="TestId" ColumnName="TestId" />
      <ScalarProperty Name="TestOne" ColumnName="TestOne" />
      <ScalarProperty Name="TestTwo" ColumnName="TestTwo" />
    </MappingFragment>
  </EntityTypeMapping>
</EntitySetMapping>
 

Once those changes are made, your code will work with the altered table name, though you need to alter the connection string to look for the correct configuration files. Here’s the final code I used to test it out.

string connection = "metadata=TestEF.csdl|TestEF.ssdl|TestEF.msl;provider=System.Data.SqlClient;provider connection string=\"Data Source=localhost;Initial Catalog=Test;Integrated Security=True;MultipleActiveResultSets=True\"";
using (TestEntities entities = new TestEntities(connection))
{
    TestTable table = new TestTable()
    {
        TestOne = "firstEF",
        TestTwo = "secondEF"
    };
    entities.AddToTestTable(table);
    entities.SaveChanges();
 
    table.TestOne = "thirdEF";
    entities.SaveChanges();
 
    entities.DeleteObject(table);
    entities.SaveChanges();
}

So Which is Better?

Well, that depends on what you want to accomplish, but then, doesn’t it always? For me, the LINQ to SQL solution is much cleaner because I’m simply not going to use all the other goo that the Entity Framework includes. Plus, the LINQ to SQL solution can use an XML fragment, so I can bury that mapping piece wherever I want to, including in inline code. EF requires a file reference, so those files have to be either in the assembly resources or on the file system. EF also allows you to leverage ASP.NET Data Services, but that’s a topic for another post entirely...