Front-end performance for web designers and front-end developers



Written on CSS Wizardry.

Table of Contents
  1. The basics
    1. Styles at the top, scripts at the bottom
    2. Make fewer requests
    3. Maximising parallelisation
  2. HTTP requests and DNS lookups
    1. DNS prefetching
    2. Further reading
  3. Resource prefetching
    1. Further reading
  4. CSS and performance
    1. Further reading
  5. Gzipping and minifying
  6. Optimising images
    1. Spriting
    2. Retina images
    3. Progressive JPGs
    4. Use no images at all
  7. Summary
  8. Further reading

It’s hard, if not impossible, to deny that performance is by far one of
the most critical aspects of any decent web project, be it a small portfolio
site, a mobile-first web app, right through to a full-scale ecommerce project.
Studies, articles and personal experience all tell us that fast is best.

Performance is not only hugely important, it is incredibly interesting, and
something I am getting more and more involved in at both work (I’m forever
pestering our Lead Performance Engineer) and in side projects and CSS Wizardry
(I’m forever pestering Andy Davies).

I’m going to share – in this massive article – a load of quick, simple and
downright intriguing bits of perf knowledge to act as a primer for web designers
and front-end developers alike; hopefully this article will serve as a decent
introduction for anyone wanting to start learning about perf, and making their
front-ends blazingly fast. These tips are all things you can implement by
yourself very easily. It just takes a bit of cunning and some basic knowledge
of how browsers work and you’re ready to game the system!

This huge post won’t have loads of confusing graphs and numbers to crunch, but
instead concerns itself with theory and first-hand performance techniques that I
have arrived at as a result of reading, monitoring, collaborating and tinkering
(I spend a lot of time glued to CSS Wizardry’s waterfall charts). I will also
link to other articles on similar topics to help reinforce any key points. Enjoy!

N.B. This article does require a small amount of basic performance knowledge
up-front, but anything covered that you aren’t familiar with should be just a
Google search away!



The basics

There are a few things all designers and front-end developers will likely
know about performance, things like making as few requests as possible,
optimising images, putting stylesheets in the <head>, putting JS before the
closing </body> tag, minifying JS and CSS and so on. These fundamentals will already get
you on your way to faster experiences for users, but there’s more… much more.

It is also very important to remember that – for all they give us headaches every
day of our working lives – browsers are very clever; they do a lot to optimise
performance for you, so a lot of perf knowledge combines knowing where the
browser is at work, and knowledge of how best to exploit that. A lot of perf
know-how is merely understanding, exploiting and manipulating what a browser
does for us already.

Styles at the top, scripts at the bottom

This is a really basic rule, and one that should be super easy to follow most of
the time, but why does it matter? Put very shortly:

  • CSS blocks rendering, so you need to deal with it right away (i.e. at the
    top of the document, in your <head>).
  • JS blocks downloads, so you need to deal with it last to ensure that it
    doesn’t hold up anything else on the page.

CSS blocks rendering because of a browser’s desire to render pages progressively;
they want to render things as they get to them, and in order. If styles are a
long way down the page the browser can’t render that CSS until it gets to it.
This is so that the browser can avoid redraws of styles if they alter something
that was previously rendered further up the document. A browser won’t render a
page until it has all the available style information, and if you put that style
information at the bottom of the document you’re making the browser wait,
blocking rendering.

So, you put your CSS at the top of the page so that the browser can start
rendering right away.
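As a minimal sketch of that layout (the file names here are placeholders, not
from any real project):

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Example page</title>
  <!-- CSS in the <head>: the browser has all style information up-front
       and can begin rendering progressively -->
  <link rel="stylesheet" href="/css/main.css">
</head>
<body>
  <p>Page content…</p>
  <!-- JS at the bottom: its download-blocking behaviour can’t hold up
       anything above it -->
  <script src="/js/main.js"></script>
</body>
</html>
```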

JavaScript blocks downloads for a number of reasons (this is the browser being
clever again) but firstly, we need to know how downloading assets in browsers
actually happens; simply put, a browser will download as many assets as it can
from a single domain in parallel. The more domains it is pulling from, the
more assets can be downloaded, in parallel, at once.

JavaScript interrupts this process, blocking parallel downloads from any and
all domains, because:

  • The script being called might alter the page, meaning the browser will have
    to deal with that before it can move on to anything else. In order to deal
    with that eventuality, it stops downloading anything else and focuses solely
    on the script.
  • Scripts usually need to be loaded in a certain order for them to work, for
    example, loading jQuery before you load a plugin. Browsers block parallel
    downloads with JavaScript so that it doesn’t start downloading jQuery and your
    plugin at the same time; it should be pretty obvious that if you were to start
    downloading both in parallel, your plugin would arrive before jQuery would.
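That ordering requirement might look like this in practice (the file paths are
hypothetical):

```html
<!-- Order matters: the library must have executed before the plugin
     that extends it starts to run -->
<script src="/js/jquery.js"></script>
<script src="/js/jquery.myplugin.js"></script>
```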

So, because browsers stop all other downloads whilst JavaScript is being fetched,
it is usually a good idea to put your JavaScript as late in the document as
possible. I’m sure you’ve all seen blank sections of pages where a third party
piece of JS is taking ages to load and blocking the fetching and rendering of
the rest of the page’s assets; this is JavaScript’s blocking in action.


Apparently, however, modern browsers get smarter still. I’m going to give you
an excerpt from an email from Andy Davies to
me, because he explains far better than I can:

Modern browsers will download JS in parallel and only rendering is blocked
until the script has been executed (it obviously has to be downloaded too).

Downloading of the script will often be done by the browser’s look ahead
pre-loader.

When a browser is blocked from rendering page e.g. waiting for CSS, or JS to
execute, the look ahead pre-parser scans the rest of the page looking for
resources it could download.

Some browsers e.g. Chrome, will prioritise the download of assets e.g. if
scripts and images are both waiting to be downloaded it will download the
script first.

Smart stuff!

So, to allow a page to begin rendering as fast as possible, put your styles at
the top. To prevent JS’ blocking affecting your rendering, put scripts at the
bottom.

Make fewer requests

The other really obvious and basic performance optimisation is simply
downloading less. Every asset a page requires is an extra HTTP request; the
browser has to go off and get every single asset required to render a page. Each
of these requests can incur DNS lookups, redirects, 404s etc. Every HTTP
request you make, whether it is for a stylesheet, an image, a web font, a JS
file, you name it, is a potentially very expensive operation. Minimising these
requests is one of the quickest performance optimisations you can make.

Going back to browsers and parallelisation; most browsers will only download a
handful of assets from each referenced domain at a time, and JS, remember, will
block these downloads anyway. Every HTTP request you make should be well
justified, and not taken lightly.

Maximising parallelisation

In order to get the browser to download more assets in parallel, you can serve
them from different domains. If a browser can only fetch, say, two assets at once
from a domain, then serving content from two domains means it can fetch four
assets at once; serving from three domains means six parallel downloads.

A lot of sites have static/asset domains; Twitter, for example, use
si0.twimg.com to serve static assets, and Facebook use fbstatic-a.akamaihd.net.

Using these static, asset domains, Twitter and Facebook can serve more assets in
parallel; assets from twitter.com and si0.twimg.com can be downloaded in
tandem. This is a really simple way to get more concurrent downloads happening
on your page, and even better when coupled with actual CDN technology that can
help decrease latency by serving assets from a more suitable physical location.
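A hypothetical sketch of the same idea, assuming a page served from
www.example.com with its heavier assets moved to a static.example.com asset
domain:

```html
<!-- Assets on a separate hostname download in parallel with assets
     from the page’s own domain -->
<link rel="stylesheet" href="//static.example.com/css/main.css">
<img src="//static.example.com/img/hero.jpg" alt="Hero image">
<script src="//static.example.com/js/main.js"></script>
```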

This is all well and good, but later we’ll discuss how serving from subdomains
can actually, in certain circumstances, be detrimental to performance.

So, these are our performance basics out of the way:

  • Put stylesheets at the top of a document
  • Put JavaScript at the bottom (where possible)
  • Make as few HTTP requests as possible
  • Serving assets from multiple domains can increase the number of assets a
    browser can download in parallel.

HTTP requests and DNS lookups

Every time you request an asset from any domain, out goes an HTTP request with
the relevant headers, the resource is reached, and a response is sent back. This
is a vast over-simplification of the process, but it’s about as much as you
really need to know. This is an HTTP request, and all assets you reference are
subject to this round trip. These requests are the main bottleneck when it comes
to front-end performance because, as we covered, browsers are limited by how
many of these requests can happen in parallel. This is why we often want to use
subdomains: to allow these requests to happen on several domains, allowing a
greater number of requests to happen at the same time.

A problem with this, however, is DNS lookup. Each time (from a cold cache) a
new domain is referenced, the HTTP request is subject to a time-consuming DNS
lookup (anywhere between 20 and 120 milliseconds) in which the outgoing request
looks up where the asset actually lives; the internet is tied together by IP
addresses which are referenced by hostnames which are managed by DNS.

If each new domain you reference has the upfront cost of a DNS lookup, you have
to be sure that it’s actually going to be worth it. If you are a small site
(like CSS Wizardry, for example) then serving assets from a subdomain will
likely not be worth it; the browser can probably fetch several under-parallelised
assets from one domain quicker than it can perform DNS lookups across multiple
domains and parallelise those.

If you have perhaps a dozen assets, you might want to consider serving them from
one subdomain; an extra DNS lookup is probably worth it in order to better
parallelise that amount of assets. If you have, say, 40 assets, it might be
worth sharding those assets across two subdomains; two extra DNS lookups will
be worth it in order to serve your site from a total of three domains.

DNS lookups are expensive, so you need to determine which is more suitable for
your site; the overhead of lookups or just serving everything from one domain.

It is important to remember that as soon as the HTML is requested from, say,
foo.com, that DNS lookup for that host has already happened, so subsequent
requests to anything on foo.com are not subject to DNS lookups.

DNS prefetching

If you, like me, want to have a Twitter widget on your site, and Analytics, and
maybe some web fonts, then you will have to link to some other domains which
means you’ll have to incur DNS lookups. My advice would always be not to use
any and every widget without properly considering its performance impact first,
but for any you do deem necessary, the following is useful…

Because these things are on other domains it does mean that, for example, your
web font CSS will download in parallel to your own CSS, which is a benefit in
a way, but scripts will still block (unless they’re async).
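As a quick aside, the async attribute mentioned above looks like this
(widget.foo.com being a made-up third-party host):

```html
<!-- async scripts download in parallel and execute as soon as they
     arrive, without blocking other downloads -->
<script async src="//widget.foo.com/widget.js"></script>
```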

The problem here, really, is the DNS lookups involved with third party domains.
Fortunately, there is a super quick and easy way to speed this process up: DNS
prefetching.

DNS prefetching does exactly what it says on the tin, and could not be simpler
to implement. If you need to request assets from, say, widget.foo.com, then
you can prefetch that hostname’s DNS by simply adding this early on in the
<head> of your page:
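The hint itself is a single link element, using the widget.foo.com example
host:

```html
<link rel="dns-prefetch" href="//widget.foo.com">
```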

That simple line will tell supportive browsers to start prefetching the DNS for
that domain a fraction before it’s actually needed. This means that the DNS
lookup process will already be underway by the time the browser hits the assets
that actually need it.



