Should I use SQL Server identity columns for static lookup tables?

Most systems have static lookup tables. These are tables that are not inserted into from the GUI or by any programmatic process and that usually contain a master list of ‘kinds’ or ‘types’.

These ‘types’ mean something to the business, but because they either never change, change infrequently or require logic to be written against them, there is no GUI to edit them. Quite often the records in these tables are displayed in the dropdown lists we see on screen.

Two simple examples…

Example lookup tables
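As a rough sketch (the table, column and value names below are purely illustrative), such tables might look something like this; whether the ID columns should be identity columns is the question addressed next…

```sql
-- Illustrative sketch only: two hypothetical static lookup tables.
-- Whether MembershipTypeId / OrderStatusId should be IDENTITY columns
-- is the question discussed below.
CREATE TABLE MembershipType
(
    MembershipTypeId int         NOT NULL PRIMARY KEY,
    Name             varchar(50) NOT NULL   -- e.g. 'Standard', 'Trial', 'Premium'
);

CREATE TABLE OrderStatus
(
    OrderStatusId int         NOT NULL PRIMARY KEY,
    Name          varchar(50) NOT NULL      -- e.g. 'Pending', 'Shipped', 'Delivered'
);
```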

… so should we use auto-increment identity IDs in these tables or normal non-identity IDs? Others may disagree, but I recommend NOT using identity columns in these tables.

Why?

Identity columns are not needed

Identity columns are auto-increment columns. They are most useful when we are inserting new records programmatically and need automatically generated IDs that haven’t been used previously.
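For example, a minimal sketch (table and column names assumed) of the kind of table that genuinely benefits from an identity column, because rows are added by the application at runtime:

```sql
-- Sketch: an identity column declared with a seed of 1 and an increment of 1
CREATE TABLE AuditLog
(
    AuditLogId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Message    varchar(500)      NOT NULL
);

-- No ID is supplied; SQL Server generates the next value automatically
INSERT INTO AuditLog (Message) VALUES ('Something happened');
```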

For static lookup tables we have all the items ahead of time, so we can assign arbitrary IDs to them. We don’t need auto-increment IDs, as the lookup records are inserted as part of the development and deployment process, not programmatically.

Identity columns can lead to bugs

The contents of static lookup tables are often programmed against in our code. To use the MembershipType table above as an example, we might have logic that shows extra functionality if a member’s membership type is premium…

Modelling enums
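A hedged sketch of that kind of check (premium is assumed to be ID 3, as in the story below; the other enum values, the Member class and the rule method are hypothetical):

```csharp
// Sketch: the MembershipType lookup modelled as a C# enum whose values
// are expected to match the IDs in the lookup table
public enum MembershipType
{
    Standard = 1,   // hypothetical
    Trial = 2,      // hypothetical
    Premium = 3     // the ID our logic expects for premium membership
}

public class Member
{
    public int MembershipTypeId { get; set; }
}

public static class MembershipRules
{
    // Extra functionality is only shown when the member's type is premium
    public static bool CanSeePremiumFeatures(Member member) =>
        member.MembershipTypeId == (int)MembershipType.Premium;
}
```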

We’ve used an identity column for the MembershipType ID column and have had no problems so far… on DEV and UAT everything works. We push to PROD, and since premium membership is a new feature for this release we need to insert an entry for it into the MembershipType table.

The developer, unaware of the potential for mishap and knowing the IDs are automatically generated by the identity column, scripts the insert without reference to an explicit ID, e.g. …

Insert into identity column
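Something along these lines (a sketch; the column names are assumed):

```sql
-- Sketch of the problematic deployment script: no explicit ID is supplied,
-- so SQL Server generates the next identity value for MembershipTypeId
INSERT INTO MembershipType (Name)
VALUES ('Premium');
```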

The insert works fine, with no complaints from SQL Server Management Studio…

…BUT…

as a couple of membership types were previously inserted and then deleted in the PROD table, the premium option now gets an ID of 10. Remember, identity IDs are not guaranteed to be contiguous, but a lot of developers won’t know this.

Our logic expects premium membership to be ID 3, so we now have a bug which means our premium members won’t get to see the extra stuff.

I see the above kind of thing A LOT, and although it’s not the fact that we’re using identity but rather how we scripted the insert that caused the issue, we could have avoided it in the first place by not using identity. For a static lookup table whose IDs we code logic against, we don’t gain anything from using identity IDs, and they actually require extra effort to use (as we have to include SET IDENTITY_INSERT).
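For reference, a sketch of the extra ceremony involved (table and column names assumed):

```sql
-- Inserting an explicit ID into an IDENTITY column requires toggling IDENTITY_INSERT
SET IDENTITY_INSERT MembershipType ON;

INSERT INTO MembershipType (MembershipTypeId, Name)
VALUES (3, 'Premium');

SET IDENTITY_INSERT MembershipType OFF;
```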

What about static lookup tables which we don’t code logic against?

If we have a table which is not coded against, the case against identity is less strong, as we don’t NEED to align IDs across environments. Even though we don’t need it, aligning IDs gives us a few advantages…

We can write logic against the lookup type much more easily in the future without having to rebuild our environments.

We can delete a lookup item by ID across environments.

We can select FK records by ID across environments.

If we decide that we’re going to align IDs, then using identity is just extra work, as we need to set IDENTITY_INSERT on/off each time we add records. If we really don’t care about ID alignment, the one advantage of using identity is easier inserts, as we don’t have to include the ID column.

Summary

In summary… if you don’t need identity IDs I wouldn’t use them, and in particular, if you need to attach any sort of meaning to IDs, it’s best IMHO to stick with explicitly chosen IDs.

Client IP access error when connecting to Azure SQL Database from an App Service

If you need to connect to an Azure SQL Database from an App Service and are getting a ‘Client with IP address ‘XXX’ is not allowed to access the server.’ error, you need to amend the server firewall settings in one of two ways…

Firewall settings Azure SQL DB

1 – Allow access for ALL Azure resources

Switch the ‘Allow Azure services and resources to access this server’ setting on in the server firewall settings page. This allows all resources within the Azure boundary to access the server (the rule is applied at the server level, not to an individual database).
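If you prefer T-SQL to the portal, the same setting can be applied by creating a server-level rule spanning 0.0.0.0 to 0.0.0.0 in the master database (a sketch; the rule name below is the one the portal conventionally uses, to my knowledge):

```sql
-- Run against the master database of the Azure SQL logical server.
-- A 0.0.0.0 - 0.0.0.0 rule corresponds to the
-- 'Allow Azure services and resources to access this server' setting.
EXECUTE sp_set_firewall_rule
    @name = N'AllowAllWindowsAzureIps',
    @start_ip_address = '0.0.0.0',
    @end_ip_address = '0.0.0.0';
```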

2 – Allow access for certain IPs

Add IP-based rules for the App Service’s outbound IP addresses in the DB server firewall settings page.

Each app has a set number of outbound IPs at any time, and the specific IP is selected randomly at runtime, so you need to grant access to all of the outbound IP addresses listed. To find the addresses, go to your App Service in the portal and navigate to Settings -> Properties.

Azure App Service Outbound IP address
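Once you have the list, the rules can be added on the firewall settings page or, as a sketch (with a placeholder IP), via T-SQL against the master database:

```sql
-- Add one server-level rule per App Service outbound IP address
-- (203.0.113.10 is a placeholder - repeat for each address in the list)
EXECUTE sp_set_firewall_rule
    @name = N'AppServiceOutboundIp01',
    @start_ip_address = '203.0.113.10',
    @end_ip_address = '203.0.113.10';
```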

According to Microsoft, the set of outbound IP addresses for your app changes when you perform one of the following actions:

Delete an app and recreate it in a different resource group (deployment unit may change).

Delete the last app in a resource group and region combination and recreate it (deployment unit may change).

Scale your app between the lower tiers (Basic, Standard, and Premium) and the Premium V2 tier (IP addresses may be added to or subtracted from the set).

The ‘Outbound IP addresses’ section above shows the current IP addresses based on your current tier, but ‘Additional Outbound IP Addresses’ shows all possible outbound IPs regardless of tier. Therefore, when you’re adding your IP whitelist, I recommend using the additional list just in case you change SKU in the future.

Don’t forget to save your changes…

Fix the ‘Your client IP address does not have access to the server.’ error when connecting to an Azure SQL Database from SQL Server Management Studio

If you’ve just created an Azure SQL Database and are trying to connect to it from SQL Server Management Studio, you may get an access error saying…

Your client IP address does not have access to the server.

To resolve this you need to amend the firewall settings on the server. To do this, navigate to your new database and click ‘Set server firewall’…

Go to the new Azure SQL Database

 

… and then on the Firewall settings page click ‘Add client IP‘ to have Azure populate a firewall rule for you as shown below. Don’t forget to click Save afterwards.

 

Add client IP in the firewall settings page

How to select more than 1000 rows by default in SQL Server Management Studio

In SQL Server Management Studio, when we right-click a table we get the options ‘Select Top 1000 Rows’ and ‘Edit Top 200 Rows’, as shown below…

Select Top 1000 Rows

SQL Server Management Studio chooses these relatively low defaults to prevent unresponsive GUIs; however, we can increase them if we wish through SSMS options (Tools -> Options -> SQL Server Object Explorer -> Commands)…

Configure how many records to view/edit

Customizing SQL Server column types when using Entity Framework Core code first

In Entity Framework, when we’re using the code-first approach, EF will examine our classes and, using a defined mapping, create a DB structure based on them. This can result in an inefficient DB design, as by default all C# properties of a given type will be mapped to the same DB column type.

For example in Entity Framework Core 5.0 some default mappings are…

C# to SQL Server Mappings
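As a rough illustration (the entity and property names are hypothetical; the column types in the comments are the SQL Server provider defaults as I understand them):

```csharp
using System;

// Hypothetical entity - the comments show the SQL Server column type
// EF Core 5.0 maps each property to by default (no configuration applied)
public class Order
{
    public int Id { get; set; }               // int
    public string CustomerName { get; set; }  // nvarchar(max)
    public decimal Total { get; set; }        // decimal(18,2)
    public bool IsPaid { get; set; }          // bit
    public DateTime PlacedOn { get; set; }    // datetime2
}
```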

Accepting the EF default mappings across a whole DB can result in a lot of wasted space and potentially poorer performance. Thankfully, there are a number of ways we can tell Entity Framework which column types we want our C# properties mapped to.

Define DB column types using data annotations

By using data annotations we can define the mappings directly in our classes, immediately above each property. This means it’s very easy to see which type a particular property is mapped to.

From a domain-driven design point of view, however, data annotations may not be ideal, as we’re essentially mixing implementation details from our DB layer (SQL Server column types) into our domain objects.

As can be seen below data annotations are easy to use.

EF mappings using data annotations
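A sketch of what that might look like (the Product class, its properties and the chosen column types are illustrative; Column comes from System.ComponentModel.DataAnnotations.Schema):

```csharp
using System;
using System.ComponentModel.DataAnnotations.Schema;

// Sketch: mapping properties to specific SQL Server column types with data annotations
public class Product
{
    public int Id { get; set; }

    [Column(TypeName = "varchar(100)")]        // instead of the default nvarchar(max)
    public string Name { get; set; }

    [Column(TypeName = "decimal(10,4)")]       // instead of the default decimal(18,2)
    public decimal Decimal_Fee { get; set; }

    [Column(TypeName = "date")]                // instead of the default datetime2
    public DateTime CreatedOn { get; set; }
}
```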

Another in-class approach is to use more specific .NET data types, so we’d use short where we want a smallint DB column, byte where we want a tinyint DB column, and so on. This level of granularity is not available for all column types though, so it will only get you so far.

Define DB column types using fluent API

In Entity Framework Core, the fluent API for defining property mappings is available by overriding the OnModelCreating method in our DbContext class. Below we can see the fluent API equivalent of the data annotations from above.

Note… I’ve included two mappings for the Decimal_Fee property just to show the new EF Core 5.0 syntax that’s now available for setting the DB precision on relevant properties.

EF to DB mappings using fluent API
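A sketch of the equivalent fluent API configuration (the context name and the hypothetical Product entity from the earlier sketch are illustrative, this time left free of annotations; the two Decimal_Fee mappings mirror the note above, with the second using the EF Core 5.0 HasPrecision method):

```csharp
using System;
using Microsoft.EntityFrameworkCore;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Decimal_Fee { get; set; }
    public DateTime CreatedOn { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Map the string property to varchar(100) instead of nvarchar(max)
        modelBuilder.Entity<Product>()
            .Property(p => p.Name)
            .HasColumnType("varchar(100)");

        // Pre-5.0 style: specify the full column type as a string
        modelBuilder.Entity<Product>()
            .Property(p => p.Decimal_Fee)
            .HasColumnType("decimal(10,4)");

        // EF Core 5.0 style: set precision and scale directly
        modelBuilder.Entity<Product>()
            .Property(p => p.Decimal_Fee)
            .HasPrecision(10, 4);

        // Map the DateTime property to date instead of datetime2
        modelBuilder.Entity<Product>()
            .Property(p => p.CreatedOn)
            .HasColumnType("date");
    }
}
```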

Between data annotations and the fluent API, I’d recommend the fluent API, as it allows us to keep all our mapping-related code together and, perhaps more importantly, keeps our domain models free of persistence-related clutter.