
The Importance of Modern Web Performance Testing

Marcin Zajkowski

22 Jun 2021 • 6 min read

Should performance testing be a feature in the project lifecycle? We say YES. Here's why.

Hello, you've stumbled into the old Cogblog archives.

We've switched our blogging focus to our new Innerworks content, where tech community members can share inspiring stories, content, and top tips. Because of this, old Cogworks blogs will soon be deleted from the site, so enjoy them while you can. A few of these posts might refer to images in the text, but those have already been deleted, sorry! Some of these subjects will return, some aren't relevant anymore, and some just don't fit the unbiased community initiative of Innerworks.

If you'd like to take on this subject yourself please submit a new blog!

Farewell Cogworks Blog 💚

In our shared pursuit to push the web to do more, we're running into a common problem: performance. Sites have more features than ever before. So much so, that many now struggle to achieve a high level of performance across a variety of network conditions and devices.

Why Does Performance Matter?

Performance issues vary. At best, they create minor delays that briefly annoy your users. At worst, they make your site completely inaccessible, leave it unresponsive to user input... or both.

We want users to interact with what we build, right? If it’s a blog, we want people (you) to read it. If it’s an e-commerce site, we want customers to buy things. Performance plays a major role in the success (or failure) of any online business.

Performance is no longer just a “nice-to-have” in the applications we build and use, and a fast site is no longer a luxury.

Performance doesn't just mean speed; it also covers resource consumption and user perception. It's a public picture of the brand, the company, and therefore the creator. In a world where time (and money) matter most, we don't want ourselves... or our clients to lose.

Depending on your project goals, there are a few areas where you may need to look at performance, such as:

- Retaining users.
- Improving conversions.
- User experience.
- “Savings”.

No matter what drives us or our project stakeholders to care more about performance, we still seem to underestimate the value of performance and performance testing.

Without testing (whether it's an automated process integrated with our CI/CD pipeline, e.g. within Azure DevOps, or a manual process carried out by testers), we can't even talk about performance!
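To make that concrete, here's a minimal sketch of an automated check that could run as a pipeline step and fail the build when a page responds too slowly. The URL and the 800 ms budget are hypothetical examples:

```typescript
// perf-smoke-test.ts — a minimal sketch of a performance gate for a CI/CD
// pipeline (e.g. an Azure DevOps step running `npx ts-node perf-smoke-test.ts`).
// The URL and the 800 ms budget are hypothetical examples.

const TARGET_URL = "https://example.com"; // replace with your site
const BUDGET_MS = 800;                    // hypothetical response-time budget

async function main(): Promise<void> {
  const start = performance.now();          // `performance` is a Node global (16+)
  const response = await fetch(TARGET_URL); // global `fetch` requires Node 18+
  await response.arrayBuffer();             // ensure the full body is downloaded
  const elapsed = performance.now() - start;

  console.log(`${TARGET_URL} responded in ${elapsed.toFixed(0)} ms (budget: ${BUDGET_MS} ms)`);

  // A non-zero exit code fails the pipeline step, surfacing the regression
  // one sprint *before* it reaches users.
  if (elapsed > BUDGET_MS) {
    console.error("Performance budget exceeded - failing the build.");
    process.exit(1);
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```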

Metrics are the key to understanding what causes issues with our software or user experience, and they show us whether or not we are using software to its full potential for a business.

Why are we not telling our customers…?

...with the next release, you’ll lose $450k per month because of the performance degradation of our newly delivered feature

...we've just caused your bounce rate to increase by 20%, congrats!

...we don’t know if our website will handle this traffic, let’s see…

Language and communication are vital within a project, but as these examples show, these conversations sometimes happen one sprint too late. The solution?

Why not make performance a feature?

What if we really care about it from the beginning?

Performance should be treated as a feature of an application and receive the same (or IMO much more) attention and care as all the other key features of the piece of software we’re building day by day.

The sooner we spend time attaching well-established, proven tools to gather metrics and run as many automated tests and reports as possible, the sooner we'll feel relieved and confident about what we deliver.
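Lighthouse is one example of such a well-established tool, and it exposes a Node API, so a performance score can be pulled into a build or a report. A minimal sketch, assuming the lighthouse and chrome-launcher npm packages are installed:

```typescript
// lighthouse-check.ts — a minimal sketch of gathering performance metrics
// with Lighthouse's Node API. Assumes `npm install lighthouse chrome-launcher`.
import lighthouse from "lighthouse";
import * as chromeLauncher from "chrome-launcher";

async function audit(url: string): Promise<void> {
  // Launch a headless Chrome instance for Lighthouse to drive.
  const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });

  const result = await lighthouse(url, {
    port: chrome.port,               // connect to the Chrome we just launched
    output: "json",
    onlyCategories: ["performance"], // only the performance category here
  });

  if (result) {
    // Lighthouse scores are 0..1; multiply for the familiar 0..100 scale.
    const score = (result.lhr.categories.performance.score ?? 0) * 100;
    console.log(`Performance score for ${url}: ${score}`);
  }

  await chrome.kill();
}

// Hypothetical example URL.
audit("https://example.com").catch(console.error);
```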

Metrics can only take us so far. We can't always know whether the next piece of code we merge into the develop branch will break our software or introduce a performance regression. However, popular approaches such as TDD (Test-Driven Development) and BDD (Behaviour-Driven Development) can go a long way. We're strongly in favor of PDD (Performance-Driven Development), but there is nothing to stop us from combining two or three of these methods.
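As an illustration (not a formal methodology), a performance-driven test might assert an execution-time budget alongside the usual correctness checks. Here's a sketch using Node's built-in test runner; the function under test and its 50 ms budget are hypothetical:

```typescript
// search.perf.test.ts — a sketch of a performance-driven unit test using
// Node's built-in test runner (Node 18+). The function under test and its
// 50 ms budget are hypothetical examples.
import { test } from "node:test";
import assert from "node:assert";

// A stand-in for some real piece of business logic.
function search(haystack: number[], needle: number): boolean {
  return haystack.includes(needle);
}

test("search stays within its performance budget", () => {
  const data = Array.from({ length: 1_000_000 }, (_, i) => i);

  const start = performance.now();
  const found = search(data, 999_999);
  const elapsed = performance.now() - start;

  assert.strictEqual(found, true); // correctness (the TDD part)
  assert.ok(elapsed < 50, `took ${elapsed.toFixed(1)} ms, budget is 50 ms`); // the PDD part
});
```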

Begin with metrics

If we can’t measure, we can’t improve.

In order to measure, we must know which data to pay attention to. In the case of performance, this means execution times, bottlenecks, long-running operations, the most “popular” operations, hot paths, load times, perceived (“feel”) times, user behaviors, TTFB (time to first byte), time to interactive, the amount of data consumed, the amount of memory allocated, and so on…
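Many of these numbers are already exposed by the browser. A minimal sketch reading TTFB and load times via the standard Navigation Timing API:

```typescript
// A sketch of reading some of these metrics in the browser via the standard
// Navigation Timing API (Level 2). Run after the page has finished loading
// so that loadEventEnd is populated.
window.addEventListener("load", () => {
  // setTimeout lets the load event finish so loadEventEnd gets a value.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];
    if (!nav) return;

    // Time to first byte: start of navigation until the first response byte.
    console.log(`TTFB: ${(nav.responseStart - nav.startTime).toFixed(0)} ms`);
    // DOM ready and full page load, relative to the start of navigation.
    console.log(`DOM ready: ${(nav.domContentLoadedEventEnd - nav.startTime).toFixed(0)} ms`);
    console.log(`Full load: ${(nav.loadEventEnd - nav.startTime).toFixed(0)} ms`);
  }, 0);
});
```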

The data is out there to help us understand and stop guessing! This is the first step towards deeper, more valuable technical involvement in the project build process.

It all sounds complicated... but as developers and technical people, we can be pragmatic by setting up a step-by-step process that makes performance simple. If we can measure the most problematic and “stressed” areas of an app from the start, we can be more efficient and avoid the extra costs of writing unnecessarily long insurance policies. If we know what we're measuring, we can generate a concise, project-specific document that is solution-focused.

Continue with metrics

It’s just as important to continuously measure performance as it is to address it at the start line. If we’re to treat performance as a feature, let’s work on it constantly. By breaking it down into “simple” steps, we can more easily treat performance as a vital cog in the project process. Throughout the project, we should:

- Build.
- Measure.
- Optimize.
- Monitor.

Simple, right? These four principles can be integrated into the typical project cycle, from development right through to production.
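To give the “Measure” and “Optimize” steps a concrete flavour, here's a sketch that compares collected metrics against an agreed performance budget and reports what needs attention next. All the metric names and thresholds are hypothetical:

```typescript
// budget-check.ts — a sketch of the "Measure" step feeding the "Optimize"
// step: compare collected metrics against an agreed budget and report what
// needs attention. All names and numbers are hypothetical examples.

type Metrics = Record<string, number>;

// Budgets the team agreed on up front (milliseconds).
const budget: Metrics = {
  ttfb: 200,
  timeToInteractive: 3000,
  fullLoad: 5000,
};

// In a real pipeline these would come from Lighthouse, RUM data, etc.
const measured: Metrics = {
  ttfb: 180,
  timeToInteractive: 3400,
  fullLoad: 4800,
};

const overBudget = Object.entries(budget).filter(
  ([name, limit]) => (measured[name] ?? 0) > limit
);

if (overBudget.length > 0) {
  // These are the candidates for the "Optimize" step.
  for (const [name, limit] of overBudget) {
    console.warn(`${name}: ${measured[name]} ms exceeds budget of ${limit} ms`);
  }
} else {
  console.log("All metrics within budget - nothing to optimize this cycle.");
}
```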

It's impossible to discover all of the problems or potential issues with our application by only paying attention to performance in the development phase.

Each stage has its own set of performance data which we should look at closely so that we can be ready to make improvements before the app is released into the wild! If something is not live yet... we can be ready. If it is live... we’re ready too because we know what performance metrics we need to look at.

Being pragmatic is key. Every project team should establish its own criteria for testing across these project stages. Testing is only worth implementing and executing if we know it will help... and not harm.

Unhelpfully, there are lots of testing myths to throw us off doing it the right way. Take codebases with “full test coverage”: we've all heard about them, but we rarely see them, because full coverage is extremely hard to accomplish! A lot of companies push this as a priority when really we should create our own criteria and best practices. The sooner we start doing this, the sooner it becomes a habit we won't be able to live without. The feeling that we are safe to "break stuff" is something we won't want to give up easily...

The key part of this cycle is understanding the process behind it. Monitoring the development environment is useless in most cases, but we shouldn't work with any application that isn't monitored in production, right?
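As one way of approaching that production monitoring, here's a sketch that uses the open-source web-vitals package to beacon real-user metrics to an analytics endpoint. The /analytics endpoint is a hypothetical example:

```typescript
// A sketch of real-user monitoring in production using the open-source
// `web-vitals` package (npm install web-vitals, v3+). The /analytics
// endpoint is a hypothetical example.
import { onCLS, onINP, onLCP } from "web-vitals";

function sendToAnalytics(metric: { name: string; value: number }): void {
  const body = JSON.stringify({ name: metric.name, value: metric.value });
  // sendBeacon survives page unloads; fall back to fetch if unavailable.
  if (navigator.sendBeacon) {
    navigator.sendBeacon("/analytics", body);
  } else {
    fetch("/analytics", { method: "POST", body, keepalive: true });
  }
}

// Report Core Web Vitals as real users experience them.
onCLS(sendToAnalytics);
onINP(sendToAnalytics);
onLCP(sendToAnalytics);
```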

When it comes to decisions, it's better not to act when we're already fighting fires, but while we still have time to react. That's what proper insurance should guarantee. We should feel safe, not preoccupied with the unhandled consequences of our actions...

Why are we not treating performance seriously?

Can we ask ourselves this question? I mean, we're all guilty of not doing it in every project, in every possible case, right? Mea culpa: it took us too long to understand the real value of it, and that's why we're writing this blog post. We hope it'll help open your eyes too.
