Config files specified in SQL Agent being overridden by design-time package configuration

At work, our DEV and UAT DB environments reside on the same box, but with their own SQL instances. We prefer to store SSIS packages on disk as they are a little (actually a lot) easier to manage that way. Since our DBs are on the same box we could, when configuring the SSIS packages, just copy the packages to two separate file locations for DEV and UAT and alter the design-time package configuration to point to DEV.dtsconfig for one and UAT.dtsconfig for the other.
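
For context, each .dtsConfig is just an XML file holding the environment-specific values. A minimal sketch, with a made-up connection name and connection string, looks roughly like this:

<?xml version="1.0"?>
<DTSConfiguration>
  <!-- Illustrative only: points the package's SalesDb connection at the UAT database -->
  <Configuration ConfiguredType="Property"
                 Path="\Package.Connections[SalesDb].Properties[ConnectionString]"
                 ValueType="String">
    <ConfiguredValue>Data Source=OURBOX\UAT;Initial Catalog=Sales;Integrated Security=SSPI;</ConfiguredValue>
  </Configuration>
</DTSConfiguration>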

Rather than this, however, we attempted to keep the SSIS package in one location and override the default .dtsconfig file defined at design time by passing in the environment-specific .dtsconfig as part of the two (DEV & UAT) SQL Agent jobs set up to execute the package on schedule. This can be done from the Configurations tab in the job step properties.

We expected the package to take its settings from UAT.dtsConfig, as that was what was defined in the job, but as we found this was not the case. This is because, as of SQL Server 2008, SSIS packages load configurations in the following order:

  1. The utility first applies the design-time configurations.
  2. The utility then applies the run-time options that you specified on the command line when you started the utility.
  3. Finally, the utility reloads and reapplies the design-time configurations.

which meant the .dtsConfig specified in the design-time configuration was used. According to Behaviour Changes to Integration Services Features in SQL Server 2008 R2 on the MSDN site, one can use the /Set option to change design-time settings, but not the location from which settings are loaded.

To get the .dtsConfig specified in the job to be the effective one, we needed to disable package configurations at design time. Not delete the existing one, just disable it. After that, the config file specified in the SQL Agent job was used.
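
Under the hood, the Configurations tab simply adds a /ConfigFile switch to the dtexec command line that the job step runs. A sketch of what that command line looks like for the UAT job, with made-up package and config paths:

REM What the UAT SQL Agent job step effectively runs (paths are illustrative)
dtexec /File "D:\SSIS\Packages\LoadSales.dtsx" /ConfigFile "D:\SSIS\Config\UAT.dtsConfig"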

Google reCAPTCHA firewall exception options

We’ve implemented Google reCAPTCHA in one of our web apps, which is nice and works fine on our local machines. When we deployed to our dev server, however, we were hit by outbound firewall rules and .NET threw an exception along the lines of ‘Unable to connect to the remote server’.

Upon investigating what firewall exceptions we would need for our various environments, we noted that Google recommends opening up the relevant port (80 if plain HTTP is fine, 443 if the web app will run with a secure certificate/HTTPS) to all outbound connections. This is highly insecure and most likely unacceptable to most network administrators, so that option was not a runner for us.

The best-practice and more secure approach would be to create rules that are as restrictive as possible, using only specific IP addresses. We looked into this; however, from researching online it seems that Google’s IP addresses change regularly, which obviously creates a problem for maintainability and for the reliability of the application. One comment from April 2015 on the ‘How to access reCAPTCHA servers through a firewall’ reCAPTCHA wiki page particularly worried us:

Hi guys, we are using the Recaptcha solution for almost half a year now, and in this time we had to change the firewalls four or five time already, just to find out today that they have changed again..

We also noted similar comments, such as the one below:

Yesterday it worked fine once we configured the firewall to allow requests for the IP address that was causing the problem.  But today it’s requesting using a DIFFERENT IP address, which isn’t configured.  Not only is this a problem for us right now, but it makes me uneasy.  What happens if it changes IP addresses again?  More broadly, what is the issue, here?  Why did it use one IP yesterday and another today?

Additionally, the ‘How to set up reCAPTCHA’ wiki page also mentions that IP addresses can change. One way to mitigate the risk of IP addresses changing might be to catch the exception that is thrown when the app attempts to communicate with http(s)://www.google.com/recaptcha/api while the firewall is blocking such communication. The catch block could then perhaps bypass reCAPTCHA, treating the request as coming from a human, and also email IT support, who can then verify whether the IP addresses have in fact changed. Of course this still isn’t a runner for most enterprise applications.
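
A minimal sketch of that fail-open idea in C#, assuming a hypothetical VerifyRecaptcha helper that posts the user's response to the reCAPTCHA API and a hypothetical NotifySupport helper that emails IT:

// Sketch only: VerifyRecaptcha and NotifySupport are made-up helpers.
bool isHuman;
try
{
    // Calls out to https://www.google.com/recaptcha/api/... to verify the user's response
    isHuman = VerifyRecaptcha(recaptchaResponse);
}
catch (System.Net.WebException ex)
{
    // The firewall blocked the outbound call: treat the request as human
    // and alert IT so they can check whether Google's IP addresses have changed again.
    isHuman = true;
    NotifySupport("reCAPTCHA servers unreachable - possible firewall/IP change", ex);
}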

In the end, it seems the best Google reCAPTCHA firewall approach for most will be to allow outbound requests on the relevant port (80 or 443) based on the hostname google.com. Using DNS lets us abstract away any changes to the underlying IP addresses. Unfortunately, however, I believe this approach may not work for everyone, as some firewalls will not support this configuration; furthermore, I believe configuring such rules is more complicated than the IP-address-based approach. That being said, it does provide reliability and is certainly more secure than just allowing all outbound connections on port 80 or 443, so this is the approach I would recommend.

What Google reCAPTCHA firewall rules are you guys using to enable use of this tool in your applications?

Relative path to config file in BIDS/SSIS 2008 package configurations

As we know, using absolute paths in our code can complicate things from a deployment point of view, so it’s best to use relative paths where possible. In Business Intelligence Development Studio 2008, however, the package configuration wizard doesn’t allow you to enter relative paths when pointing to configuration files. If you attempt to type a relative path in, clicking Next will replace it with the absolute path, so we appear to have a problem here.

There is an easy workaround, however: rather than using the GUI, just edit the .dtsx file directly to point to the relative path. For example, if in the BIDS package configuration window you have entered the path as ‘c:\SSIS\active.dtsconfig’, simply change that to ‘..\SSIS\active.dtsconfig’ using a text editor, and the next time you open the wizard the relative path will be used.
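
The value to edit lives in the package's configuration entry inside the .dtsx XML. In a 2008-format package the fragment looks roughly along the following lines (treat the surrounding element names as approximate):

<DTS:Configuration>
  <!-- ...other configuration properties... -->
  <DTS:Property DTS:Name="ConfigurationString">..\SSIS\active.dtsconfig</DTS:Property>
</DTS:Configuration>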

What do senior developers do differently to other software developers?

Matt Briggs has recently written one of the best overviews I have read of what it means to be a senior developer. In the post he contrasts the role of the senior developer with those of junior and intermediate developers, noting what all three usually focus on.

Matt’s post helped crystallise my views on this, but for me the difference between senior and other developers is like the difference between how and why. Decent developers at any career level are able to find out how, technically, to do things, whereas senior developers are more focused on why… or perhaps why not… to use a certain technology or methodology. Remember, just because something can be used or done doesn’t mean it should be; everything involves a trade-off, and there is no universal ‘right way’, just an estimated ‘most appropriate’ way given the particulars of the current project.

In agreement with Matt, I have found senior developers to be much more pragmatic than their less senior counterparts, who tend to want to produce purer and more ‘beautiful’ code. Senior developers are always looking for the simplest, not the most complex or most pure, way of implementing solutions. Senior developers know the concept of ‘good enough’. They are able to rein themselves in when others might be trying to make their code look like art. They know that even though the big books might recommend the repository pattern (for example) in case an ORM needs to be ‘switched out’ in the future, implementing such an abstraction could result in an over-engineered system. All other things being equal, the senior developer will choose the simplest solution that meets the requirements, regardless of dogma.

Always considering the why rather than just the how, and being pragmatic rather than purist, are the main differences for me between seniors and others, but check out Matt’s excellent post for more. There is a good discussion going on over there in the comments section, including some readers asking how they can become senior developers. Before I finish I’d like to offer my views on that.

Four or five years ago I was the intermediate developer Matt talks about in his post: I’d read a book about this or that which some Microsoft MVP had written and almost religiously think that was the way software should be written. The end result was that I needlessly made much of the software I wrote more complex than it needed to be. What has helped me progress is working for the last five years or so in consulting, where I might work on one project for 10 months, another for six, then yet another for nine, and so on. Given that all these projects are for different companies, the sheer range of experiences, problems and solutions, domains, developers, not to mention over-engineered complex software, that you come into contact with and learn from is staggering. I don’t think you can become a senior developer without getting lots of varied experience under your belt. In that regard I’d recommend consultancy work if you can get it.

Discouraging use of the var keyword and ternary if operator

I would always favour typing more code to make it more explicit, more readable and to ensure consistency of style throughout a software system. Minimising the bytes and lines needed to do something shouldn’t take precedence over readability. My two pet hates in this regard are the var keyword and the ternary (?:) operator.

I know var is just syntactic sugar and everything is still type-safe, but for me it moves C# in the direction of a non-type-safe language, at least in terms of syntax style, and personally I just don’t like using it. I spoke to another developer about it recently and he was very dogmatic that it is a good thing as it’s shorter and more concise. I agree that in some instances that can certainly be the case, but because it’s not appropriate for all declarations, such as:

var myVariable = System.IO.File.Open("test.txt", FileMode.Create);

or

var id = GetId();

it means a developer will either a) use var everywhere, including in statements like those above where the type is in fact not obvious, or b) use explicit declarations for statements like those above and var declarations for statements such as:

var names = new List<string>();

which means you end up with either many variable declarations that are hard to understand or an inconsistent coding style. If var is used at all, another developer will no doubt come along and use it inappropriately, so I prefer to discourage its use.

As far as ternary (?:) expressions are concerned, again I prefer not to use them. I’d rather just use a standard multi-line if throughout the whole system; this way everything is explicit, and the judgement call about whether a particular use of ?: actually makes an if statement easier to understand is eliminated. For simple expressions they can be neat, but the problem is that in a team environment the precedent set by using them at all results in their overuse by less skilled developers. For example, it definitely wouldn’t surprise me to see statements like the one below:

int a = b > 10 ? c < 20 ? 50 : 80 : e == 2 ? 4 : 8;

pop up in a code base which already has instances of ?: for simple expressions. Again, then, to remove any ambiguity about whether or not its use is appropriate, I discourage writing if statements with the ternary operator.
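
For comparison, here is one way to write the same assignment with standard multi-line if statements (assuming b, c and e are ints already in scope):

// The same logic as the nested ternary above, spelled out with if/else
int a;
if (b > 10)
{
    if (c < 20)
    {
        a = 50;
    }
    else
    {
        a = 80;
    }
}
else if (e == 2)
{
    a = 4;
}
else
{
    a = 8;
}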

Code is read much more than it’s written, so don’t save a couple of seconds by using C# shorthand when writing it if it’s possible this will slow down those maintaining it.