Setting a specific Brotli compression level when using response compression in ASP.NET Core

Brotli can compress HTML, CSS and JS files more than Gzip. If you can’t enable Brotli compression at the server level but want to use it in your ASP.NET Core app, you can use the response compression middleware to enable it from your Startup.cs file. Setup is easy and looks like the below…

Setting up MVC response compression
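For reference, a minimal version of that setup looks something like this (a sketch assuming a conventional Startup class — your namespaces and the rest of your pipeline will differ):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.ResponseCompression;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddResponseCompression(options =>
        {
            // Providers are tried in order: Brotli first, Gzip as the fallback.
            options.Providers.Add<BrotliCompressionProvider>();
            options.Providers.Add<GzipCompressionProvider>();
        });

        services.AddControllersWithViews();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Must come before UseStaticFiles(), otherwise headers have already been sent.
        app.UseResponseCompression();
        app.UseStaticFiles();
        // ... rest of the pipeline
    }
}
```

Note that compression is off for HTTPS requests unless you opt in via options.EnableForHttps = true, which carries CRIME/BREACH-style security considerations.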

Note the app.UseResponseCompression() call in the Configure method needs to come before app.UseStaticFiles(), as otherwise the headers will have already been sent. The default setup above will use Brotli if the client supports it, otherwise it will fall back to Gzip if that’s supported. By default the Brotli compression provider is set to a compression level of 1 (fastest).

The compression level controls how aggressively an algorithm compresses a document. Brotli has quality levels from 0 to 11, the lowest being the fastest but least compressed and 11 being the most compressed but also the slowest. Brotli at its highest level (11) is extremely slow compared to Gzip at its highest level (9). Slow compression is fine for static CSS/JS files which we can compress ahead of time, but for on-the-fly compression (e.g. dynamic web pages) slow speeds can negatively impact web app performance…

SO

…for Brotli compression to be advantageous over Gzip we need to find the compression level which gives us the best balance between compression amount and speed…

BUT

the problem with the Brotli compression provider in ASP.NET Core is that its level is set via the System.IO.Compression.CompressionLevel enum, which only has three options…

ResponseCompression Options
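For reference, the enum in question is System.IO.Compression.CompressionLevel, which in the .NET Core 3.x era had just these members (a fourth, SmallestSize, only arrived later in .NET 6):

```csharp
// The members of System.IO.Compression.CompressionLevel as they stood
// in .NET Core 3.x — nothing covering Brotli's mid-range quality levels.
public enum CompressionLevel
{
    Optimal = 0,       // for Brotli this maps to a high, slow quality level
    Fastest = 1,       // maps to Brotli quality 1
    NoCompression = 2
}
```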

There is no enum option for the best balance between compression and speed. In this case we can just set the level by casting a number to the enum directly…

Setting Brotli Level
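The trick is that BrotliCompressionProviderOptions.Level is typed as CompressionLevel, so you can cast an integer in Brotli’s quality range straight to it. A sketch (this goes inside ConfigureServices; the chosen value of 4 is the sweet spot discussed below):

```csharp
using System.IO.Compression;
using Microsoft.AspNetCore.ResponseCompression;

// Cast a raw Brotli quality level (0-11) to CompressionLevel.
// The provider hands the value through to the Brotli encoder, so values
// outside the three named enum members still work as quality levels.
services.Configure<BrotliCompressionProviderOptions>(options =>
{
    options.Level = (CompressionLevel)4; // balance between size and speed
});
```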

You’ll need to experiment with different levels, but a number of resources agree on level 4 or 5 as being the sweet spot…

https://blogs.akamai.com/2016/02/understanding-brotlis-potential.html

https://paulcalvano.com/2018-07-25-brotli-compression-how-much-will-it-reduce-your-content

https://quixdb.github.io/squash-benchmark/#results-table

Of course, for your static content such as CSS, JS, fonts etc. compress to the max with level 11. This will give you significant byte savings over Gzip.

 

Finding unused CSS and JavaScript with the Coverage tab in Google Developer Tools

Often over the course of time we add CSS and JavaScript to our pages without removing redundant styles and functions. This means…

Larger download sizes than required.

Slower first rendering of our pages if the unused code is referenced in external files rather than inlined, as these files will block the main thread from rendering.

Maintenance issues, as it’s not clear what code is used and what isn’t.

so

it’s a good idea to remove unused CSS/JS as you go and finding it is easy with the Google Developer Tools Coverage tab.

The screenshot below (click on the image for a larger view) is taken from www.rte.ie and shows a lot of unused CSS and JavaScript, as indicated by the red bars. The actual code that’s used in any particular CSS or JavaScript file is shown in the top half of the image.

RTE.ie unused CSS and JavaScript

Note this tool shows unused code only for the specific page it’s run on; it can’t tell if code is used on some other page of the same site. Often sites have a shared layout file which includes all the CSS/JS required for every page. From a performance point of view each page would ideally load only the exact CSS/JS it needs, however implementing this can be a challenge depending on what platform or CMS is being used.

Opening Google Developer Tools Coverage Tab

Open the command menu using Cmd+Shift+P on Mac or Ctrl+Shift+P on Windows, start typing Coverage and then select Show Coverage:

Opening the coverage tab

or access it through the menu on the right-hand side (More tools > Coverage)…

Opening the coverage tab

Note Microsoft Edge also has this functionality as it’s a Chromium-based browser.

Extracting Google PageSpeed performance score in .NET

Most webmasters are aware that Google uses performance as a ranking signal for both desktop and mobile searches, so it’s important your site is as fast as it can be. You can check your performance score according to Google in Chrome developer tools or online, but since they provide an HTTP API you can also get performance scores programmatically.

Calling the PageSpeed HTTP API and Serializing the JSON response into a DTO

Calling the API is easy as it’s a simple GET request, as can be seen below…

How to call the PageSpeed API
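A minimal sketch of that request using HttpClient — the endpoint is the v5 runPagespeed URL, and YOUR_API_KEY is a placeholder for your own key:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class PageSpeedClient
{
    // HttpClient is intended to be reused rather than created per request.
    private static readonly HttpClient Http = new HttpClient();

    public static async Task<string> GetReportAsync(string urlToTest)
    {
        var endpoint = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed" +
                       $"?url={Uri.EscapeDataString(urlToTest)}" +
                       "&strategy=MOBILE" +        // defaults to DESKTOP if omitted
                       "&key=YOUR_API_KEY";        // can be dropped while just testing

        // The check runs Lighthouse server-side, so expect a wait of several seconds.
        return await Http.GetStringAsync(endpoint);
    }
}
```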

You just need to make sure you set your key (you can get one from here, although a key is not needed if you’re just testing) and the URL which you want to run the performance check for as query params. You can see above I’ve also set strategy, which will default to ‘DESKTOP’ if not set.

After the response comes back (not very quickly, as the performance check takes a few seconds) I use the new System.Text.Json serializer released in .NET Core 3.0 to deserialize it into a simple DTO which matches the structure of the JSON. The performance score itself is located at lighthouseResult -> categories -> performance -> score.

As you can see, to adhere to .NET naming conventions I’ve made the properties start with uppercase and therefore needed to set the PropertyNameCaseInsensitive flag above, as otherwise the DTO would be empty after deserialization.

Simple DTO for storing PageSpeed results
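A sketch of what such a DTO can look like — only the path down to the score is modelled, and the class names here are my own:

```csharp
using System.Text.Json;

// Nested classes mirroring lighthouseResult.categories.performance.score.
public class PageSpeedResult
{
    public LighthouseResult LighthouseResult { get; set; }
}

public class LighthouseResult
{
    public Categories Categories { get; set; }
}

public class Categories
{
    public Category Performance { get; set; }
}

public class Category
{
    public double Score { get; set; }   // Lighthouse reports 0.0-1.0
}

// Usage: the JSON is camelCase, the DTO is PascalCase, so the
// case-insensitive flag is required or every property comes back null.
// var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
// var result = JsonSerializer.Deserialize<PageSpeedResult>(json, options);
// double score = result.LighthouseResult.Categories.Performance.Score;
```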

Calling the PageSpeed HTTP API and querying the JSON directly using JsonDocument

The above is nice and neat as we now have a strongly typed object which we could pass to our views etc. In this case however we only care about one property, the performance score itself, so creating a set of nested classes just for this seems like a lot of effort.

Rather than deserializing, we can alternatively use JsonDocument and query the JSON string directly by chaining a couple of GetProperty calls together.

Use GetProperty to read JSON properties
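A sketch of that chain, using a hard-coded sample response in place of the real API output:

```csharp
using System;
using System.Text.Json;

// Hypothetical cut-down API response standing in for the real thing.
string json = @"{""lighthouseResult"":{""categories"":{""performance"":{""score"":0.93}}}}";

using (JsonDocument doc = JsonDocument.Parse(json))
{
    // Each GetProperty throws KeyNotFoundException if the property is missing.
    double score = doc.RootElement
        .GetProperty("lighthouseResult")
        .GetProperty("categories")
        .GetProperty("performance")
        .GetProperty("score")
        .GetDouble();

    Console.WriteLine(score); // 0.93
}
```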

Of course, to use GetProperty you need to be sure the property will always exist in your JSON, as otherwise an exception will be thrown. If you’re not sure about this, use TryGetProperty instead and check that you’ve successfully got an element before moving on…

Use TryGetProperty to read JSON properties
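The same lookup written defensively with TryGetProperty (again with a hypothetical sample response standing in for the API output):

```csharp
using System;
using System.Text.Json;

// Hypothetical response where "categories" is missing.
string json = @"{""lighthouseResult"":{}}";

using (JsonDocument doc = JsonDocument.Parse(json))
{
    // Each step only proceeds if the previous property was actually present.
    if (doc.RootElement.TryGetProperty("lighthouseResult", out JsonElement lighthouse) &&
        lighthouse.TryGetProperty("categories", out JsonElement categories) &&
        categories.TryGetProperty("performance", out JsonElement performance) &&
        performance.TryGetProperty("score", out JsonElement score))
    {
        Console.WriteLine(score.GetDouble());
    }
    else
    {
        Console.WriteLine("Score not found in response");
    }
}
```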

Google PageSpeed Insights API Client Library for .NET

Google also has a PageSpeed API Client Library for .NET, however I’ve not really looked into it much yet. Using this library over simple HTTP requests should give you a more strongly typed approach where you don’t have to worry about matching JSON property names etc.

Google’s WebP image format is approaching universal browser support, but what is WebP and why should you care?

When Safari 14 drops around September or October 2020 (expected based on previous releases), support for the WebP image format will be pretty much universal as can be seen from the screenshot from caniuse.com below. But what is WebP and why should you care about it?

Browser support for Webp

What is WebP and why should you care?

WebP is Google’s image format, which they launched in 2010 and which is now part of their ‘Make the Web Faster’ initiative. It claims to use a superior and more aggressive form of compression which results in smaller file sizes compared to JPG and PNG formats, with only minimal, often imperceptible, quality loss.

In terms of how much smaller WebP files can be, Google claims that WebP lossless images are 26% smaller than PNGs, while WebP lossy images are 25–34% smaller than comparable (quality-wise) JPEG images.

These figures are impressive, however by reducing the quality level when creating lossy WebP images even larger file size reductions can be achieved. When these images are viewed in isolation no obvious quality issues appear, particularly at higher quality settings such as 80 or 90%. If you compare a JPG with a lossy WebP at 80% quality side by side and you know what to look for (perhaps you’re a photographer or graphic designer) you’ll spot small differences, but for the vast majority of users on the web the two images will look essentially the same while one has a significantly smaller file size. Apart from using less of a user’s monthly mobile data cap (if capped), smaller file sizes matter because…

Web pages load faster

Smaller images mean faster loading pages. Faster loading pages mean a better user experience and thus an increased likelihood of site engagement and conversion. For example, the BBC previously reported that for every additional second of load time 10% of users left, while a recent Deloitte Ireland report on mobile data showed that even a 0.1 second speed increase can make real differences to how much users engage.

Deloitte Report

Faster loading pages can also mean higher SEO rankings on Google, who have confirmed speed is a ranking signal (albeit a minor one) for both desktop and mobile searches. This means that not only do faster pages result in more conversions once users are on a site, they may also mean more users visit in the first place.

Less bandwidth and CDN costs

Smaller images mean less cost to you in bandwidth terms. Most hosting providers allow a certain amount of bandwidth per month and most CDNs charge per GB of data transferred. Below are the current costs for Azure CDN.

Azure CDN costs

It doesn’t take much to reach a GB these days, especially if you’re a popular site with lots of images, perhaps an e-commerce or news site or one with user-generated content. In this case using WebP can result in significant savings by shaving at least around 20–30% off your image file sizes.

Manually creating and rendering WebP images with graceful fallback using HTML5 Picture tag

There are lots of online converters which will allow you to convert JPGs and PNGs to WebP format images. Most will require you to upload your images one by one, although some support bulk upload. Plenty of desktop software options exist too, including Photoshop (with a plugin), and if you use Visual Studio there is a new extension called WebP Toolkit which will enable you to create WebPs from within VS itself.

After you’ve got your .webp images you can render them using the HTML5 picture tag. The below snippet will show image.webp for supported browsers and image.jpg for those that support the picture tag but not WebP. Finally, for Internet Explorer and Opera Mini a legacy img tag is provided as a fallback, as they don’t support the picture tag.

WebP render with graceful fallback
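The snippet in question looks something like this (image.webp and image.jpg are placeholder file names):

```html
<picture>
  <!-- Browsers that understand <picture> pick the first type they support -->
  <source srcset="image.webp" type="image/webp">
  <source srcset="image.jpg" type="image/jpeg">
  <!-- IE and Opera Mini ignore <picture>/<source> and render the plain img -->
  <img src="image.jpg" alt="Description of the image">
</picture>
```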

Providing the .jpg fallback obviously increases (more than doubles) your storage needs, so keep an eye on your website’s analytics to see which browsers are actually being used to hit your site. With the release of Safari 14 in the next couple of months, the need to provide a JPG (or PNG) fallback will diminish quickly.

Generating WebP dynamically on the fly using CDNs or CMS plugins

Converting images manually and including them in your pages like the above is fine for many websites, but you may have user-generated content or may not want or be able to explicitly convert your images each time your content changes. In this case you could look into using a CDN such as Akamai or Cloudflare which will automatically convert and serve a WebP version of your image based on the user’s browser capabilities.

If you’re using a publishing platform like WordPress there are many extensions which can automatically serve WebP versions of images, such as EWWW Image Optimizer.