Software quality is a vital yet often overlooked aspect of software development; in 2020, the Consortium for Information & Software Quality (CISQ) reported that poor software quality cost US businesses around $2.08 trillion.
The good news is that by implementing a few software quality metrics, we can track our contribution to this alarming statistic and significantly reduce it.
As the great Peter Drucker once said, “If you can’t measure it, you can’t improve it”.
In this article, I will outline the fundamentals of software quality in various frameworks and, in doing so, identify key software quality metrics for you to consider using in your product.
By the end, you should understand the fundamentals of software quality metrics and how to use them to improve your products.
What is software quality?
Software quality is a broad umbrella term that encompasses all aspects of building and maintaining a working product. The ultimate goal of software quality is a final product with the fewest possible defects while still delivering the required functionality.
Software quality metrics are the tools that enable you to gauge the overall quality of your product.
To fully understand software quality, let’s break down the fundamental characteristics and dig into a few software quality metrics that you can track.
Understanding software quality metrics
There is a wide variety of software quality frameworks in the tech industry that utilize an even broader spectrum of software quality metrics.
Some frameworks emphasize macro-level metrics, such as counting defects across a code base and other broad quantitative approaches, but these are widely viewed as offering negligible benefits and can even harm engineer morale.
A majority of these frameworks share some common characteristics that address the fundamentals of software quality and software quality metrics. Three popular frameworks we will take a look at are:
- ISO/CISQ software quality metrics
- AWS Well-Architected Framework
- Google’s testing emphasis for quality
Industry-standard CISQ software quality metrics
ISO has a comprehensive model for software quality; however, I find it can be quite cumbersome to implement and maintain as it has thirteen separate characteristics covering software quality.
Fortunately, in March of 2021, CISQ updated its leaner ISO-adapted framework to outline the most vital characteristics of software quality. CISQ defines four primary characteristics for an industry standard of software quality:
- Reliability
- Performance Efficiency
- Security
- Maintainability
Let’s dig into the first three characteristics as they are generally shared among just about every popular framework.
Reliability
Software reliability, in a nutshell, asks two questions: will the product always function as intended, and what is the risk that it might fail?
Three software quality metrics that can be used to measure reliability are:
Mean Time to Failure (MTTF)
This is the average time a system runs before its next failure, normally measured in hours. An MTTF of 240 indicates that a failure is to be expected after roughly every 240 hours of operation.
Mean Time to Repair (MTTR)
Now that you have a failure, how long does it take to fix the problem? MTTR gives you the average time it takes the team to get the feature or product back to its normal functional state.
Mean Time Between Failure (MTBF)
This is one of the most widely used reliability metrics. MTBF is the calculated average time between failures: adding MTTF and MTTR together gives the total time it takes for a failure to occur plus the time it takes to resolve it.
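To make these three metrics concrete, here is a minimal Python sketch that derives MTTF, MTTR, and MTBF from a log of outages; the incident timestamps are hypothetical.

```python
from datetime import datetime

def reliability_metrics(incidents):
    """Compute MTTF, MTTR, and MTBF in hours from a chronological
    list of (failure_time, recovery_time) pairs."""
    # Repair time: how long each outage lasted (feeds MTTR)
    repair_hours = [(up - down).total_seconds() / 3600 for down, up in incidents]
    # Uptime: the gap between one recovery and the next failure (feeds MTTF)
    uptime_hours = [
        (incidents[i + 1][0] - incidents[i][1]).total_seconds() / 3600
        for i in range(len(incidents) - 1)
    ]
    mttf = sum(uptime_hours) / len(uptime_hours)
    mttr = sum(repair_hours) / len(repair_hours)
    return {"MTTF": mttf, "MTTR": mttr, "MTBF": mttf + mttr}

# Hypothetical outage log: (failure, recovery) pairs
outages = [
    (datetime(2021, 3, 1, 0, 0), datetime(2021, 3, 1, 2, 0)),
    (datetime(2021, 3, 11, 2, 0), datetime(2021, 3, 11, 6, 0)),
    (datetime(2021, 3, 21, 6, 0), datetime(2021, 3, 21, 9, 0)),
]
print(reliability_metrics(outages))  # MTTF 240.0, MTTR 3.0, MTBF 243.0
```

An MTTF of 240 here matches the earlier example: on average, a failure every 240 hours of uptime, taking three hours to repair.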
These three reliability metrics are great for a quick quantitative read on your overall reliability, especially in post-mortem situations, but it is better to be proactive with your quality metrics. The best thing you can do for reliability is quantify your testing methods, a cornerstone of software quality at large tech companies like Google; more on that in a minute.
Performance efficiency
Performance efficiency is how well your product responds and how long it takes to process functionality. This could mean how quickly features, data, or web pages load, or how quickly the product responds to user input.
Let’s discuss one key software quality metric for understanding performance efficiency.
Load testing (Soak Testing)
Testing that your product can take sustained traffic and requests is integral to better understanding the performance efficiency of your product. If you were to have a significant increase in concurrent users, would all features maintain their stability?
Soak testing is a type of load testing that, most importantly, can help measure specific page load times and the load capabilities of key functions. Quantifying these load times under certain usage loads will provide an outline of how efficient the performance of the product truly is as you scale.
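As a rough illustration, here is a Python sketch of a soak-style load test. `handle_request` is a hypothetical stand-in for a real call against your product (for instance, an HTTP request to a staging endpoint), and a real soak test would run sustained load for hours rather than seconds.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Hypothetical stand-in for a real request to the feature under test."""
    time.sleep(0.01)  # simulate 10 ms of server-side work
    return 200

def soak_test(request_fn, concurrency=20, total_requests=200):
    """Fire a sustained stream of concurrent requests and collect latencies."""
    latencies = []

    def timed_call():
        start = time.perf_counter()
        request_fn()
        latencies.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(total_requests):
            pool.submit(timed_call)

    # Mean and 95th-percentile latency under sustained load
    return {
        "mean_s": statistics.mean(latencies),
        "p95_s": sorted(latencies)[int(len(latencies) * 0.95)],
    }

print(soak_test(handle_request))
```

Tracking how the p95 latency drifts as the test runs longer, or as `concurrency` grows, is what reveals whether the product degrades under sustained load.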
Security
Security is obviously a key part of any software product and is essential to a high standard of software quality. It entails both the vulnerabilities in the product and how your team responds once a vulnerability is discovered.
Three key software quality metrics that encompass security are:
Mean time to resolution
This is similar to the reliability metric MTTR, although here we want to know how quickly you were able to resolve an issue after a security breach.
Total security breaches / Total security incidents
This simple ratio quantifies the overall security of your product: divide the number of attacks that adversely affected functionality (breaches) by the total number of times the product was maliciously attacked (incidents). The lower the ratio, the fewer attacks are getting through.
Security update adoption
Primarily used for mobile apps, this measures what percentage of users have installed a security update. The higher the adoption rate of security updates, the more secure your product should be.
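The last two security metrics boil down to simple ratios. Here is a minimal Python sketch; the quarterly figures are hypothetical.

```python
def breach_ratio(breaches, incidents):
    """Share of security incidents that became actual breaches,
    i.e. attacks that adversely affected functionality."""
    return breaches / incidents if incidents else 0.0

def update_adoption(users_updated, total_users):
    """Percentage of the user base that has installed the latest
    security update."""
    return 100 * users_updated / total_users

# Hypothetical figures for one quarter:
# 40 attacks observed, 2 got through; 8,500 of 10,000 users patched
print(breach_ratio(breaches=2, incidents=40))                    # 0.05
print(update_adoption(users_updated=8_500, total_users=10_000))  # 85.0
```

A breach ratio of 0.05 means one attack in twenty succeeds; trending both numbers quarter over quarter is more informative than any single snapshot.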
Amazon Web Services software quality pillars
The AWS Well-Architected Framework was created by Amazon for running workloads in the cloud, and Google and Microsoft have since published comparable frameworks for their own clouds. The AWS pillars are integral to creating scalable, cost-effective systems to support products and resource-intensive features.
AWS defines five pillars for their cloud software quality:
- operational excellence
- security
- reliability
- performance efficiency
- cost optimization
Fundamentally, architectural software quality places a significant emphasis on reliability, performance efficiency, and security. Since we have already discussed these characteristics, let’s go deeper into another vital category AWS has identified: cost optimization.
ROI as a software quality metric for cost optimization
Cost optimization is a characteristic of software quality that focuses on avoiding unnecessary costs and allocating funds to the appropriate functions.
The best and simplest cost optimization quality metric is the return on investment (ROI) of your software quality work. This ensures that the software quality of a product, or of specific features, is getting the best bang for your buck.
When considering building out a new feature or fixing an identified bug, always weigh the impact of the defect against the cost of fixing it. Improving software quality is an investment and should be subjected to a cost-benefit analysis to fully appreciate its impact.
For example, you may have a known bug in your product that affects a small number of users. The bug has a simple workaround and is not a huge pain point for the users. The bug is living somewhere in older legacy code that would need a massive overhaul. It may be worthwhile not fixing the issue at the moment as the ROI of asking an engineer to solve the issue is negligible.
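The legacy-bug example can be sketched as a quick calculation. The dollar figures below are hypothetical: a defect costing $200 a month in support burden against a $15,000 overhaul has clearly negative ROI over a year, while a $5,000-a-month defect with a $4,000 fix pays for itself many times over.

```python
def fix_roi(defect_cost_per_month, fix_cost, horizon_months=12):
    """Rough ROI of fixing a defect: value recovered over the horizon,
    relative to what the fix costs. All inputs are estimates."""
    benefit = defect_cost_per_month * horizon_months
    return (benefit - fix_cost) / fix_cost

# Low-impact bug buried in legacy code: cheap to live with, expensive to fix
print(fix_roi(defect_cost_per_month=200, fix_cost=15_000))  # negative, can wait

# High-impact bug with a cheap fix: obvious investment
print(fix_roi(defect_cost_per_month=5_000, fix_cost=4_000))  # 14.0
```

The point is not the precise numbers but the habit of comparing defect impact to fix cost before committing engineering time.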
It’s important to remember that providing software quality must balance with what is economically feasible for the business.
Google reliability testing: code coverage as a quality metric
Google has ingrained testing into the heart of the development process so that every team is responsible for the quality of its products; dedicated testers serve mainly as an external resource for building test automation.
Additionally, there is a significant emphasis on committing smaller, bite-sized changes that can be easily rolled back if failures do arise.
One vital software quality metric widely used at Google is code coverage. Code coverage is the percentage of code that is covered by automated tests. At Google, over 90 percent of the projects are covered with automated testing tools.
By measuring what percentage of your codebase is covered, you can better understand the risk that failures might occur. The closer a codebase is to 100 percent coverage, the more of its code is exercised by testing in one way or another.
It’s also important to note that you want quality testing, not just tests that pass bare-minimum test cases to bump your code coverage percentage.
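The arithmetic behind the metric is simple; in practice a tool such as coverage.py collects the line data for you. This sketch uses a hypothetical ten-line module.

```python
def coverage_percent(executed_lines, executable_lines):
    """Line coverage: the share of executable lines that the test
    suite actually ran. Both arguments are sets of line numbers."""
    covered = executed_lines & executable_lines
    return 100 * len(covered) / len(executable_lines)

# Hypothetical module: 10 executable lines, and the tests exercised
# all of them except the error-handling branch on line 7
executable = set(range(1, 11))
executed = executable - {7}
print(coverage_percent(executed, executable))  # 90.0
```

Note that the uncovered line here is the error branch: exactly the kind of code that passes a coverage check when skipped but fails in production, which is why the percentage alone is not enough.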
Google uses a four-step process for almost all testing:
- Testing by a dedicated internal testing team.
- Crowd testing, which can be crowdsourced or a Google-based group of testers.
- Dogfooders, Googlers who will use the product or feature in their daily work.
- Beta testers, a small group of end-users using a pre-release version of the product or feature.
At each of these steps, the Google team is attempting to de-risk the product and increase confidence that no major failures or defects remain. Note, too, that at every stage each group tests different parts of the product rather than doubling down on the same test flows.
Software Quality metrics for Agile Development
Let’s discuss how agile frameworks can consider software quality and use agile quality metrics throughout their product development life cycles.
Some might think, “we are agile, therefore we don’t have time to waste on writing down our test cases or quantifying metrics,” but documentation and software quality metrics are actually two of the most vital components of a high standard of software.
Here are three key software quality metrics that can be built into the agile framework at every stage:
Burndown Chart
Also known as an iterative residual trend chart, a burndown chart shows how many story points have been completed in each sprint and how many remain until the feature or product is completed.
This is a great visual tool to track the progression and quality of the work being completed over a longer time frame.
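The data behind a burndown chart is just a running subtraction of completed points from the total; the sprint figures in this sketch are hypothetical.

```python
def burndown(total_points, completed_per_sprint):
    """Remaining story points after each sprint: the series you would
    plot as a burndown chart."""
    remaining = [total_points]
    for done in completed_per_sprint:
        remaining.append(remaining[-1] - done)
    return remaining

# A 100-point feature worked down over four sprints
print(burndown(100, [20, 25, 15, 30]))  # [100, 80, 55, 40, 10]
```

Plotting this series against an ideal straight line from 100 to 0 is what makes stalls or scope creep visible at a glance.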
Average Agile Velocity
This measures the average of commits, Jira story points, tickets, or epics that a team is able to complete in a sprint or iteration of a feature.
Open/Close Rate
Open/close rates are measured by tracking how many production issues arise during a given time period and how quickly they are resolved.
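Both of these metrics reduce to simple arithmetic. The sprint and issue figures below are hypothetical.

```python
def average_velocity(points_per_sprint):
    """Average story points (or tickets, epics, etc.) completed per sprint."""
    return sum(points_per_sprint) / len(points_per_sprint)

def open_close_rate(opened, closed):
    """Issues closed per issue opened in a period; a value above 1.0
    means the backlog of production issues is shrinking."""
    return closed / opened if opened else float("inf")

print(average_velocity([20, 25, 15, 30]))    # 22.5
print(open_close_rate(opened=12, closed=9))  # 0.75
```

An open/close rate that sits below 1.0 for several periods in a row is an early warning that quality debt is accumulating faster than the team can pay it down.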
We outlined five fundamental characteristics of high-quality software and some important software quality metrics you can implement to help improve the quality of your product:
- Reliability: MTTF, MTTR, and MTBF
- Performance Efficiency: load (soak) testing
- Security: mean time to resolution, security breaches / security incidents, and security update adoption
- Cost Optimization: ROI
- Agile Development: burndown charts, average agile velocity, and open/close rates
By implementing just a few software quality metrics, you will better understand the health of your product and where you can actively improve its quality down the road.