WordPress preview post links not working? Sick of unsuccessfully trying different things you’ve found on Google like I was? Pretty much every article I read offered a different solution. Some suggested it was related to a bad theme update, others that it was related to caching, and others still pointed the finger at mod_rewrite/ISAPI rewrite rules. Eventually I gave up and just went with a preview post workaround.
WordPress preview post not working workaround
Well, a quick workaround might be to just go ahead and publish the post but set its visibility to private. Private content is only visible when you are logged in, so it can be used to preview posts before publishing to everyone. Note, however, that other editors and administrators will also be able to see this content when they are logged in, but that may be fine for your case. You can set a post to private from the right-hand side of the add/edit post page as shown above.
FYI: this is what the post looked like on the site before I made it public
Recently, while working on an application based on a legacy database with a lot of char data types, I was using AutoMapper and needed to trim all non-null strings for every model → viewModel mapping. I used the code below, which trims ALL strings and also converts nulls to an empty string.
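The original snippet isn’t reproduced in this copy of the post; the following is a minimal sketch of the usual approach with the AutoMapper static API of that era — a global string-to-string map (the exact code the author used may have differed):

```csharp
// Global string -> string mapping: AutoMapper applies this to every
// string member it maps. Trims non-null values and converts nulls
// to string.Empty.
Mapper.CreateMap<string, string>()
      .ConvertUsing(s => s == null ? string.Empty : s.Trim());
```

With this map registered once at start-up, every model → viewModel mapping that copies a string member receives the trimmed, null-safe value automatically, with no per-property configuration.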
I’m really liking a lot of the ASP.NET 5 stuff which is coming down the line. Cross-platform capabilities are cool, but more relevant and ‘exciting’ to me is the out-of-the-box IoC container, which will hopefully mean one less dependency on Ninject, Castle Windsor etc., and dynamic development, which means I can change server-side code, save and just refresh the browser without restarting the debug session.
On the MVC side, version 6 introduces tag helpers, which I really like too. These allow us to get the same viewModel binding benefits as HTML helpers by applying special attributes to HTML tags rather than using the C# HTML helpers directly via @Html.TextBoxFor, @Html.LabelFor etc.
In my opinion the markup becomes less server-side, less ASP.NET MVC specific, and cleaner and more client-side like, meaning it would be easier for designers to work more fully on the views. The example below is from Dave Paquette’s introduction article on tag helpers, which I link to below.
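The example image from that article isn’t reproduced in this copy of the post; a representative comparison of the two styles, assuming a viewModel with an Email property, looks something like this:

```cshtml
@* HTML helpers: extra attributes passed via an anonymous type *@
@Html.LabelFor(m => m.Email, new { @class = "control-label" })
@Html.TextBoxFor(m => m.Email, new { @class = "form-control", placeholder = "you@example.com" })

@* Tag helpers: plain HTML5 attributes, plus asp-for for model binding *@
<label asp-for="Email" class="control-label"></label>
<input asp-for="Email" class="form-control" placeholder="you@example.com" />
```

Both versions render a label and input bound to the Email property, but the tag helper version reads as ordinary HTML.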
You can see that by using tag helpers we can specify things like classes, styles and other attributes (such as Angular and Knockout markup) in the normal HTML5 way rather than via an anonymous type. The only MVC-specific part is the addition of the asp-for attribute. Matt DeKrey’s Stack Overflow answer on the difference between HTML and tag helpers sums up the benefits well, I feel. I’m really looking forward to using Razor this way; I think it lowers another barrier to working with MVC. If your preference, however, is to continue with HTML helpers, they are of course still available.
By far the best coverage of MVC 6 tag helpers I’ve found is by Dave Paquette, who has a tonne of articles about all the different tag helpers and also how you can create your own custom ones. Start off with his Cleaner Forms using Tag Helpers in MVC6 article, which is really good. Mike Brind’s Introducing TagHelpers in ASP.NET MVC 6 is another good introduction to tag helpers that also explores how they actually work under the hood.
In work, our DEV and UAT DB environments reside on the same box, but with their own SQL instances. We prefer to store SSIS packages on disk as they are a little (actually a lot) easier to manage that way. Since our DBs are on the same box, we could, when configuring SSIS packages, just copy the packages to two separate file locations for DEV and UAT and alter the design-time package configuration to point to DEV.dtsConfig for one and UAT.dtsConfig for the other.
Rather than this, however, we attempted to keep the SSIS package in one location and override the default .dtsConfig file defined at design time by passing in the environment-specific .dtsConfig as part of the two (DEV & UAT) SQL Agent jobs set up to execute the package on schedule. This can be done from the Configurations tab in the job step properties.
To get the .dtsConfig specified in the job to take effect, we needed to disable package configurations at design time (not delete the existing configuration, just disable it). After that, the config file specified in the SQL Agent job was used.
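For reference, an environment-specific .dtsConfig is just an XML file; a minimal sketch looks like the below, where the connection name, server and database are hypothetical and the DEV and UAT files would differ only in the ConfiguredValue:

```xml
<?xml version="1.0"?>
<DTSConfiguration>
  <DTSConfigurationHeading>
    <DTSConfigurationFileInfo GeneratedFromPackageName="LoadPackage" />
  </DTSConfigurationHeading>
  <!-- Point the package's SourceDB connection at the UAT instance -->
  <Configuration ConfiguredType="Property"
                 Path="\Package.Connections[SourceDB].Properties[ConnectionString]"
                 ValueType="String">
    <ConfiguredValue>Data Source=DBBOX\UAT;Initial Catalog=SourceDB;Integrated Security=SSPI;</ConfiguredValue>
  </Configuration>
</DTSConfiguration>
```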
We’ve implemented Google reCAPTCHA in one of our web apps, which is nice and works fine on our local machines. When we deployed to our dev server, however, we were hit by outbound firewall rules, so .NET threw an exception along the lines of ‘Unable to connect to the remote server’.
Google reCAPTCHA firewall options
Upon investigating what firewall exceptions we would need for our various environments, we noted that Google recommends opening up the relevant port (80 if plain HTTP is fine, 443 if the web app will run over HTTPS) to all outbound connections. This is highly insecure and most likely unacceptable to most network administrators, so that option was not a runner for us.
The best-practice and more secure approach would be to create rules that are as restrictive as possible, using only specific IP addresses. We looked into this; however, from researching online it seems that Google’s IP addresses change regularly, which obviously creates a problem for maintainability and for the reliability of the application. One comment from April 2015 on the ‘How to access reCAPTCHA servers through a firewall’ reCAPTCHA wiki page in particular worried us:
Hi guys, we are using the Recaptcha solution for almost half a year now, and in this time we had to change the firewalls four or five time already, just to find out today that they have changed again..
We also noted similar comments such as this one:
Yesterday it worked fine once we configured the firewall to allow requests for the IP address that was causing the problem. But today it’s requesting using a DIFFERENT IP address, which isn’t configured. Not only is this a problem for us right now, but it makes me uneasy. What happens if it changes IP addresses again? More broadly, what is the issue, here? Why did it use one IP yesterday and another today?
Additionally, the ‘How to set up reCAPTCHA’ wiki page also mentions that IP addresses can change. One way to mitigate the risk of IP addresses changing might be to catch the exception that is thrown when the app attempts to communicate with http(s)://www.google.com/recaptcha/api while the firewall is blocking such communication. The catch block could then bypass reCAPTCHA, considering the request to come from a human, and also email IT support, who can then verify whether the IP addresses have in fact changed. Of course, this still isn’t a runner for most enterprise applications.
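A sketch of that fail-open idea in C# might look like the below; the validator and notification helper names are hypothetical, not part of any real reCAPTCHA library:

```csharp
// Fail-open reCAPTCHA check: if the verification call cannot reach
// Google (e.g. the firewall is blocking outbound traffic), treat the
// request as human and alert IT support instead of rejecting the user.
bool isHuman;
try
{
    isHuman = recaptchaValidator.Validate(captchaResponse); // hypothetical validator call
}
catch (System.Net.WebException ex)
{
    isHuman = true; // assume human when Google is unreachable
    NotifyItSupport("reCAPTCHA unreachable - firewall IPs may have changed", ex); // hypothetical helper
}
```

The obvious trade-off is that while the firewall rules are stale, bots pass unchallenged, which is why this isn’t acceptable for most enterprise applications.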
In the end, it seems the best Google reCAPTCHA firewall approach for most will be to allow outbound requests on port 80 or 443 based on the hostname google.com. Using DNS allows us to abstract away any changes to the underlying IP addresses. Unfortunately, I believe this approach may not work for everyone, as some firewalls will not support hostname-based rules; furthermore, I believe configuring such rules is more complicated than the IP-address-based approach. That being said, it does provide reliability and is certainly more secure than allowing all outbound connections on port 80 or 443, so this is the approach I would recommend.
What Google reCAPTCHA firewall rules are you guys using to enable use of this tool in your applications?