
QE Score: Calculation Methodology

‘A concrete example to understand the logic, criteria, and thresholds used’

QE score - animated example of a calculation

QE Score: eligibility and applicability of the calculation

‘Assess with precision, even in complex situations’


Application eligibility

Only applications whose code is accessible (via Git, for example) can be evaluated with a QE Score. This excludes applications with proprietary code or "publisher" applications, as well as those offered solely in SaaS mode, since the team has no control over the code or the development practices.


Applicability of tools

Certain special cases may render a tool inapplicable. For example, an application with no web services cannot be subjected to API tests. An ‘inapplicable’ tool will have its points redistributed to the other types of test and will not penalise the QE Score.
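This redistribution can be sketched in a few lines. The function name and the pro-rata sharing rule below are assumptions for illustration, not the documented formula:

```python
# Illustrative sketch: when a tool is inapplicable, its points are set to
# zero and shared among the remaining tools, proportionally to their
# original weights, so the maximum score stays unchanged.

def redistribute_points(weights: dict[str, float],
                        inapplicable: set[str]) -> dict[str, float]:
    """Return new weights where inapplicable tools get 0 points and
    their points are shared pro rata among the applicable tools."""
    applicable_total = sum(w for t, w in weights.items() if t not in inapplicable)
    freed = sum(w for t, w in weights.items() if t in inapplicable)
    if applicable_total == 0:
        raise ValueError("No applicable tools left to receive the points")
    return {
        t: 0.0 if t in inapplicable
        else w + freed * w / applicable_total
        for t, w in weights.items()
    }

weights = {"unit": 40, "api": 20, "ui": 25, "perf": 15}
print(redistribute_points(weights, {"api"}))
# The 20 points for API tests are shared across unit/ui/perf pro rata;
# the total still sums to 100.
```

Because the redistribution is proportional, the relative importance of the remaining test types is preserved.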


Weighting of measurements

The weight attributed to each type of measurement can vary depending on the context. For example, security tests will carry more weight for a government application, while performance will be crucial for an e-commerce site.
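The effect of context-dependent weighting can be illustrated as a weighted average. The profile names and weight values below are assumptions chosen to mirror the examples in the text (security-heavy for government, performance-heavy for e-commerce):

```python
# Illustrative sketch: the same raw measurements scored under two
# context-specific weighting profiles. Weights per profile sum to 1.

PROFILES = {
    "government": {"security": 0.40, "functional": 0.30,
                   "performance": 0.15, "accessibility": 0.15},
    "e-commerce": {"security": 0.20, "functional": 0.30,
                   "performance": 0.40, "accessibility": 0.10},
}

def qe_score(measurements: dict[str, float], profile: str) -> float:
    """Weighted average of per-measurement scores (0-100) for a given context."""
    return sum(measurements[m] * w for m, w in PROFILES[profile].items())

results = {"security": 90, "functional": 80, "performance": 60, "accessibility": 70}
print(qe_score(results, "government"))  # security dominates: 79.5
print(qe_score(results, "e-commerce"))  # weak performance drags it down: 73.0
```

The same application thus receives a different score depending on which measurements matter most in its context.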

QE Score: Why revise the calculation?

‘Readjusting the QE Score: a key stage in continuous improvement’

The QE Score is not fixed: it evolves with practices, tools, and quality maturity, and needs to be reviewed regularly. Revising it too often, however, erodes reference points and makes the score unstable. Without a clear explanation, a revision can be perceived as a loss of objectivity or fairness, or even undermine the credibility of the indicator. It is therefore essential to accompany each change with transparent communication about the reasons, the new criteria, and the intended objectives. A review is recommended once a year, involving the project, quality, and product teams.

QE score - continuous improvement


1. Adapt weightings to project priorities


The QE Score weightings must be adjusted according to the organisation's specific priorities (security, performance, etc.).


2. Evolve thresholds as teams mature

 

As teams mature, it is legitimate to adjust the evaluation thresholds upwards.


3. Consider the tools actually used


New tools can be added, while others that have become obsolete or inappropriate can be removed.


4. Involve teams in a continuous improvement process


This strengthens the commitment of the teams and ensures constant progress towards quality objectives.
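Point 2 above, raising thresholds as teams mature, can be sketched as versioned rating bands. The years, bands, and minimum scores below are hypothetical values for illustration only:

```python
# Illustrative sketch: rating thresholds versioned by year, raised after
# the annual review as team maturity grows. Values are assumptions.

THRESHOLDS = {
    # year -> minimum score required for each rating band
    2024: {"A": 80, "B": 65, "C": 50},
    2025: {"A": 85, "B": 70, "C": 55},  # raised after the annual review
}

def rating(score: float, year: int) -> str:
    """Map a QE Score to a rating using the thresholds in force that year."""
    for band, minimum in sorted(THRESHOLDS[year].items(),
                                key=lambda kv: -kv[1]):
        if score >= minimum:
            return band
    return "D"

print(rating(82, 2024))  # "A" under the 2024 thresholds
print(rating(82, 2025))  # "B" once thresholds were raised
```

Keeping past threshold sets alongside the current one makes each revision explicit and easy to communicate, which supports the transparency the annual review calls for.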
