I was having problems allowing some of my co-workers to connect to the instance of SQL Server on my machine. Of course I was aware of the ‘Allow remote connections to this server’ option so I checked that first. It was already on, so I searched Google to see what I could find.
After fiddling about with firewalls and ports, it turned out that I didn’t have the TCP/IP protocol enabled within the SQL Server Configuration Manager tool. This meant network-based communication with any databases on my SQL Server instance was impossible.
As soon as I enabled this protocol and restarted the instance (via the SQL Server Services section of the same tool), incoming connections were possible. Of course your problem may be caused by something else, so check out Daniel Walzenbach’s article on MSDN: How to enable remote connections in SQL server 2008? It helped me a lot.
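If you want to confirm from T-SQL which protocol your current session is actually using, one quick check is the connections DMV (available in SQL Server 2005 and later):

```sql
-- Shows the transport of the current connection.
-- 'TCP' means the TCP/IP protocol is enabled and in use;
-- 'Shared memory' is typical for local-only connections.
SELECT net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```

Run this from a machine other than the server and you should see TCP once everything is working.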
I was getting page not found errors yesterday when trying to deploy an ASP.NET MVC website (which of course worked fine in Visual Studio) to IIS 7.5. I tried a couple of things to resolve it based on what I had read online, including:
ensuring the appPool was set to a pipeline mode of Integrated and targeted at v4.0
re-registering the .Net framework 4.0 with the aspnet_regiis tool to get all the latest HTTP handlers.
I believe the first two might be required, but it was running the aspnet_regiis tool which did the trick for me. As highlighted below, you need to run this as an administrator. Right-click on cmd.exe, select ‘Run as administrator’, and then navigate to the .NET Framework 4.0 install directory to run the tool.
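On a 64-bit machine the commands look something like this (the exact version folder, v4.0.30319 here, may differ on your machine; use the Framework directory rather than Framework64 for 32-bit):

```bat
REM Run from an elevated command prompt
cd %windir%\Microsoft.NET\Framework64\v4.0.30319
aspnet_regiis.exe -i
```

The -i switch installs ASP.NET 4.0 and registers its script maps and handlers with IIS.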
After you’ve registered 4.0 with IIS you should see many more HTTP handlers in your IIS configuration manager. The most important one in this instance, I believe (though I didn’t disable it to confirm), is ExtensionlessUrlHandler-Integrated-4.0
404s disappeared for me after that. Of course during my research there seemed to be a lot of things which could cause this problem, but running aspnet_regiis is what worked for me. I think it might have been because I installed IIS after installing .NET 4.0, but I’m not sure.
Perhaps you’re scripting rollout and rollback DDL SQL changes and you want to check, for instance, whether a default constraint exists before attempting to drop it and its parent column. Most schema checks can be done using the collection of information schema views which SQL Server has built in. To check for the existence of a column, for example, you can query the information schema view for columns with a query which looks something like:
IF EXISTS(SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'yourtablename' AND COLUMN_NAME = 'yourcolumnname')
However, none of the information schema views contain anything for checking for a named default constraint. For that you need to use something like:
IF EXISTS(SELECT * FROM SYS.OBJECTS WHERE TYPE_DESC = 'DEFAULT_CONSTRAINT' AND NAME = 'yourconstraintname')
or, equivalently, using the single-character type code:
IF EXISTS(SELECT * FROM SYS.OBJECTS WHERE TYPE = 'D' AND NAME = 'yourconstraintname')
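Putting it together for the rollback scenario mentioned above (the table, column and constraint names here are made up for illustration):

```sql
-- Drop the default constraint first, then its parent column.
IF EXISTS (SELECT * FROM sys.objects
           WHERE type = 'D' AND name = 'DF_Orders_Status')
BEGIN
    ALTER TABLE dbo.Orders DROP CONSTRAINT DF_Orders_Status;
END

IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
           WHERE TABLE_NAME = 'Orders' AND COLUMN_NAME = 'Status')
BEGIN
    ALTER TABLE dbo.Orders DROP COLUMN Status;
END
```

Both checks make the script safe to re-run, which is exactly what you want from rollout/rollback scripts.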
If your database queries are taking an age and a half to execute, you could do worse than investigate adding some indexes to the columns involved.
You’ll no doubt be familiar with indexes at the back of books which have the name of some concept with a list of page numbers beside it. Indexes for databases are used in much the same way as indexes in books are. They both essentially provide a quick way to look up the location of information you want to find. Each index is a form of categorisation of the overall data. A book usually has only one index as that’s all it needs and the categorisation of the index is usually concepts related to the book sorted alphabetically.
Advantages of indexes: increased search performance
The graphic above is a cropped screenshot of the index at the back of Succeeding with Agile: Software Development Using Scrum. Imagine, for example, you’re interested in finding mentions of Kent Beck (or concepts related to him) in that book. By looking in the index you can quickly see that he is mentioned on pages 58, 288 and 289. If there was no index, or the information you were looking for wasn’t in the index, you would have to go through every page to look for it. This book has over 450 pages so that’s not going to be a quick task. What’s worse is that the information may not even be in the book at all.
The same idea applies in databases. Imagine you’re Eason’s, Barnes and Noble or Amazon and have an authors database table with 100,000 records in it. This table has a load of columns, but the one we care about is called lastName. If you queried the authors table with something like select * from authors where lastName = ‘beck‘ and there was an index on the lastName column, the database engine would use a logical categorisation (each record categorised by lastName) of all the data in the authors table to quickly find where that record is physically located on the disk drive.
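Creating such an index in SQL Server is a one-liner (the index name here is just a naming convention I made up):

```sql
-- A non-clustered index: a separate sorted structure over lastName
-- that points back to the full rows in the authors table.
CREATE NONCLUSTERED INDEX IX_authors_lastName
    ON dbo.authors (lastName);
```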
The specifics are for another day, but it most likely does this by using special structures known as B-trees, where the possible area on the disk in which the record(s) in question can be physically located gets smaller and smaller as the engine goes further into the tree. By continuously narrowing in on the general direction of the physical records via a B-tree, the DB engine can find the records very quickly.
If you had the same search scenario without an index on lastName, the database engine would have to look at every record to see if it matched ..where lastName = ‘beck’. That’s 100,000 records to look at, and a lot of IO and CPU time.
In database terms, looking through the whole table when no relevant index is present is called a table scan, whereas using an index is referred to as an index seek. There is also such a thing as an index scan, which, generally speaking, is where an index does exist but the database engine deems it quicker to just look at the records one by one anyway. It might do this because the cost of traversing the index structure (e.g. the B-tree, which is data stored on disk and requires IO and CPU time just like table records) would be more expensive than simply going through the data pages (in which the records are stored) one by one. This might happen in cases where there is an index but the table is small and/or the requested records make up a large percentage of the total records in the table.
Generally speaking, seeking is good and scanning is bad. Adding indexes will cause an index seek to be used if that is less expensive than an index scan. In most instances it is, so indexes can really improve the performance of your select queries. The improvement can sometimes be dramatic, cutting perhaps 99% of the execution time off a particular query on a large table. Of course “there’s no such thing as a free lunch” and indexes do have their disadvantages.
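If you want to see which of these SQL Server actually chose, look at the execution plan, or compare the IO done with and without the index using something like:

```sql
SET STATISTICS IO ON;

SELECT * FROM dbo.authors WHERE lastName = 'beck';

-- The Messages tab reports the logical reads for the table;
-- a seek on a lastName index should show far fewer reads than
-- the table scan you get without the index.
SET STATISTICS IO OFF;
```

(The authors table and index are the hypothetical ones from the example above.)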
Disadvantages of indexes: slower updates and more disk space required
Yes, adding indexes to your database tables does come at a cost. The cost is very likely to be one you’re willing to pay, though. Let’s refer back to the image of the book index I used above to explain the cost of having indexes.
We see information about Kent Beck is located on pages 58, 288 and 289. Great.
What if the author of this book, as part of the update for the next edition, inserted new text (about a whole page) into page 28, causing almost all subsequent information to move to the next page? The author has done an ‘insert’ and thus the index is now out of date. The information about Kent Beck is no longer on pages 58, 288 and 289 but rather on pages 59, 289 and 290. Of course all references in the index to page 28 and above are out of date, not just the Kent Beck related ones. The index therefore needs to be updated to reflect the change. This may not be a problem for a book, as new editions don’t come out that often. For a database, however, which might have millions of daily inserts, deletes or updates directly affecting indexed columns, the IO and CPU cost of constantly reshuffling the indexes on disk so they stay correct can be enormous. Thankfully, in the vast majority of cases systems read from databases much more than they write to them.
Another disadvantage of adding indexes is the disk space requirement. The image above shows only a tiny excerpt of an index that is actually 11 pages long. All those extra characters and pages no doubt cost the publisher extra to print the book. Database indexes require extra space too. The specific amount will depend on the size of the table and the number of columns in the index.
Indexes are perhaps the single best way to increase the performance of your database queries. I used the analogy of an index in a book to help explain what they are and how, unfortunately, they don’t come without a price (a fair price I think), but if you’ve any questions let me know in the comments. I find the topic of indexes fascinating (so expect more posts about them) and a big part of me wishes I could transpose myself back into my college data algorithms class to actually understand what the lecturer was going on about when he was talking about things like indexing algorithms and B-trees. I really did think I would never use ‘that stuff’ again :-).
Just upgraded to version 1.2.11 of log4net, because one of the improvements in this version is support for using a connectionStringName which references a connection from the connectionStrings configuration section.
Previously this was not part of the core release, so you either had to take extra steps (it didn’t just work) or use an explicit connection string in your log4net settings. It’s nice for maintenance if your 3rd-party tools can just reference a connectionString rather than explicitly defining one themselves.
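In config terms, that means the AdoNetAppender can now look something like this (the connection string name ‘LoggingDb’ and the connection string itself are just examples):

```xml
<connectionStrings>
  <add name="LoggingDb"
       connectionString="Data Source=.;Initial Catalog=Logs;Integrated Security=True" />
</connectionStrings>

<log4net>
  <appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender">
    <!-- New in 1.2.11: reference a named connection string
         instead of embedding the full connection string here -->
    <connectionStringName value="LoggingDb" />
    <!-- commandText, parameters etc. configured as before -->
  </appender>
</log4net>
```

Now the connection string lives in one place and anything else in the application (your ORM, your own data access code) can share it.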