Things that helped us pass our Core Web Vitals assessment

Kevin McCarthy
Apr 10, 2021 · 4 min read


Our Core Web Vitals as of today, Saturday 10th April

It’s so small and underwhelming on the page, but this result was the culmination of a lot of frustration and experimentation.

The above phrase makes me very, very happy.

Here are some things we did that helped us reach this goal of passing the Core Web Vitals assessment.

We prioritised the work

This seems obvious, but performance work is often not treated as a “feature” and so gets done alongside other projects rather than getting its own dedicated attention.

We recognised that this was coming, so in September 2020 we decided to devote two engineers for one cycle (six weeks) to making it happen. (Spoiler alert: we were not passing Core Web Vitals at the end of the six weeks!)

When we learned that you have to pass all three metrics (LCP, FID, and CLS) to have an impact, we devoted time again to getting our passing grade.

We captured our own Real User Monitoring (RUM) / field data

Previously, when we had done performance work, we used lab data from Lighthouse and eventually paid for a vendor (Calibre) to track our performance. Initially we thought this, along with PageSpeed Insights, would be enough.

We quickly discovered that lab data does not give an accurate picture of what is happening to real users: it is helpful for investigation, but it is not a reliable data source for passing CWV.

Experimenting with front-end performance (especially CLS and LCP) requires real user data to know if you’re making headway or not.

We spent a lot of time figuring out how to use the data we pulled and ended up with a dashboard in Data Studio that we report on weekly.
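For anyone curious what that collection looks like in practice, here’s a rough sketch of capturing field data with Google’s open-source web-vitals library. The /rum endpoint and the payload shape are placeholders for illustration, not our actual pipeline.

```typescript
// Minimal RUM collection sketch using Google's web-vitals library (v2 API).
// The /rum endpoint and payload shape are illustrative assumptions.
import { getCLS, getFID, getLCP, Metric } from 'web-vitals';

function sendToAnalytics(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,   // 'CLS' | 'FID' | 'LCP'
    value: metric.value, // milliseconds for LCP/FID, unitless for CLS
    id: metric.id,       // unique per page load, useful for aggregation
    page: location.pathname,
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!navigator.sendBeacon('/rum', body)) {
    fetch('/rum', { method: 'POST', body, keepalive: true });
  }
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);
```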

We held everyone accountable for at least maintaining our Core Web Vitals scores while adding new features

While we were working on solving our CLS Core Web Vitals issues, our LCP fell below the passing mark due to other features being delivered.

We did not yet have our data set up, so we were unable to identify these regressions in real time.

We discovered that it was not just about going in and fixing things, but also about ensuring new changes were not negatively impacting our scores.

Our LCP pass rate decreased from 75% to 72%

Since getting our own CWV data into a good state, we look at our scores daily and report them out to the wider business weekly.

Any negative changes were investigated, and we worked with the teams that introduced the issue to resolve it.
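Our daily numbers come from our own RUM data, but as an illustration of the kind of daily check this enables, here’s a rough sketch that queries the Chrome UX Report API for an origin’s mobile field data and flags any metric whose “good” share dips below 75%. The API key handling, origin, and alert threshold are assumptions for the example.

```typescript
// Sketch: query the Chrome UX Report API for an origin's mobile field data
// and flag any metric whose "good" share drops below a chosen threshold.
// The API key, origin, and 75% alert threshold are illustrative assumptions.
const CRUX_ENDPOINT =
  'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

async function checkOrigin(origin: string, apiKey: string): Promise<void> {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      origin,
      formFactor: 'PHONE', // we only focused on mobile
      metrics: [
        'largest_contentful_paint',
        'first_input_delay',
        'cumulative_layout_shift',
      ],
    }),
  });
  const { record } = await res.json();

  for (const [name, data] of Object.entries<any>(record.metrics)) {
    // The first histogram bin is the "good" bucket for each metric.
    const goodShare = data.histogram[0].density;
    const status = goodShare >= 0.75 ? 'passing' : 'NEEDS ATTENTION';
    console.log(`${name}: ${(goodShare * 100).toFixed(1)}% good (${status})`);
  }
}

checkOrigin('https://www.example.com', process.env.CRUX_API_KEY ?? '');
```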

We trained all engineers on how to spot Core Web Vitals issues

We ran a workshop to share what we’d learned about identifying Core Web Vitals issues. This was in the hope that we could deal with issues before deploying a change rather than fixing problems reactively.

As part of the exercise, the engineers worked in pairs to identify problem CLS elements on specific pages using some of the tools recommended by Google. Here’s our method for identifying CLS issues.
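The workshop leaned on the tools Google recommends, but to give a flavour of what spotting a problem element looks like, here’s a rough console sketch that logs each layout shift and the DOM nodes that caused it. It’s simplified: it skips the session-window grouping that real CLS scoring uses.

```typescript
// Sketch: log every layout shift and the DOM nodes that caused it,
// so problem elements can be spotted while browsing a page locally.
// (Simplified: real CLS scoring groups shifts into session windows.)
interface LayoutShiftEntry extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
  sources: Array<{ node?: Node }>;
}

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShiftEntry[]) {
    // Ignore shifts triggered by recent user input; they don't count for CLS.
    if (entry.hadRecentInput) continue;
    console.log(
      `Layout shift of ${entry.value.toFixed(4)} caused by:`,
      entry.sources.map((s) => s.node),
    );
  }
}).observe({ type: 'layout-shift', buffered: true });
```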

We only focused on mobile

Google has indicated it is prioritising the mobile experience for Core Web Vitals, so we did not even set up a dashboard for our desktop scores. We were already doing quite well on desktop anyway, and not having the distraction of two sets of figures narrowed our scope.

We educated the entire business on Core Web Vitals

Google has done a great job of documenting CWV on their web.dev/vitals page, including tons of great diagrams and GIFs. This one is my absolute favourite!

We presented these explanations in the simplest possible terms, with examples from our own website, at a number of all-hands meetings so everyone in the business became familiar with these terms and why they are important.

Example showing our CLS problems on a PDP (product detail page)

This was useful when explaining to other teams why we needed to delay a popup appearing, why a banner sliding in from the top was a bad idea, or why they needed to compress their images.
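To illustrate the banner case: the usual fix is to reserve the banner’s space in the initial HTML and fill it in place, rather than inserting it above existing content and pushing the page down. A rough sketch, with a hypothetical #promo-banner slot and /api/banner endpoint:

```typescript
// Sketch: avoid layout shift from a late-arriving banner by filling a slot
// whose height is already reserved in the initial HTML, instead of inserting
// a new element that pushes the page content down.
// The #promo-banner slot and /api/banner endpoint are illustrative assumptions.
async function showBanner(): Promise<void> {
  // Assumed markup rendered with the page:
  //   <div id="promo-banner" style="min-height: 64px"></div>
  const slot = document.getElementById('promo-banner');
  if (!slot) return;

  const res = await fetch('/api/banner'); // hypothetical endpoint
  const { html } = await res.json();
  slot.innerHTML = html; // fills the reserved space, so nothing below it moves
}

showBanner();
```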
