
QE Score: Fully Automated from End to End

'From collecting quality metrics to computing and displaying them on the dashboard'

QE Score - the process collects information from various types of tests and quality tools, such as SonarQube, Fortify, Jira, GitLab, Karate, API testing, Selenium, Playwright, and Gatling.

The QE Score is based on the automatic and factual collection of quality data, with no manual intervention. Here's how it works:

1. Quality Data Collection

Each time the code is scanned or automated tests are run, various quality-related elements are collected. These may include the scan date, execution time, and test results.
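As a sketch, each collected element can be represented as one structured record. The field names below are illustrative assumptions, not a fixed schema:

```python
from dataclasses import dataclass
from datetime import datetime

# One quality data point collected from a scan or test run.
# Field names are illustrative assumptions, not a fixed schema.
@dataclass
class QualityRecord:
    tool: str            # e.g. "SonarQube", "Karate", "Selenium"
    scan_date: datetime  # when the scan or test run took place
    duration_s: float    # execution time in seconds
    passed: int          # number of passing tests or checks
    failed: int          # number of failing tests or checks

    @property
    def pass_rate(self) -> float:
        """Share of passing checks, 0.0 when nothing ran."""
        total = self.passed + self.failed
        return self.passed / total if total else 0.0

record = QualityRecord("Karate", datetime(2024, 5, 1), 42.0, 95, 5)
print(record.pass_rate)  # 0.95
```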

2. Automation and Centralization

Data is retrieved through APIs from testing and quality tools, and sent to a tracking sheet (Excel, Google Sheets, etc.) or a database (BigQuery, etc.), where all information is aggregated.
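The retrieval step can be sketched as flattening a tool's API response into one row ready to append to a tracking sheet or a database table. The payload shape and field names here are assumptions for illustration; each tool's real API response differs:

```python
# Hedged sketch: flatten a hypothetical tool API payload into one flat
# row. The JSON shape below is an assumption, not any tool's real
# response format.
def to_row(tool: str, payload: dict) -> dict:
    return {
        "tool": tool,
        "scan_date": payload.get("date"),
        "duration_s": payload.get("duration"),
        "passed": payload.get("results", {}).get("passed", 0),
        "failed": payload.get("results", {}).get("failed", 0),
    }

payload = {"date": "2024-05-01", "duration": 42.0,
           "results": {"passed": 95, "failed": 5}}
row = to_row("Karate", payload)
```

The resulting row can then be appended to Excel, Google Sheets, or BigQuery through their respective client libraries, so every tool feeds the same aggregated store.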

3. QE Score Calculation

Points are assigned based on predefined rules and criteria, and are weighted according to the importance of each type of test.
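A minimal sketch of such a weighted calculation, assuming illustrative test categories and weights (each team defines its own rules and criteria):

```python
# Illustrative weights per test type; these are assumptions, not the
# article's official values. Each category contributes its pass rate
# (0..1), weighted by its importance.
WEIGHTS = {"unit": 0.2, "functional": 0.3, "security": 0.3, "performance": 0.2}

def qe_score(pass_rates: dict[str, float]) -> float:
    """Return a 0-100 score from per-category pass rates.

    Categories absent from pass_rates are excluded, and the remaining
    weights are re-normalized so the score stays on a 0-100 scale.
    """
    total_weight = sum(WEIGHTS[k] for k in pass_rates)
    weighted = sum(WEIGHTS[k] * v for k, v in pass_rates.items())
    return round(100 * weighted / total_weight, 1)

score = qe_score({"unit": 0.95, "functional": 0.8,
                  "security": 1.0, "performance": 0.9})
print(score)  # 91.0
```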

4. Visualization and Monitoring

The results are displayed on a dashboard, allowing teams to track quality over time and quickly identify areas for improvement.
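For display on a dashboard, the numeric score can be mapped to the A-to-E letter rating used for the QE Score in the illustrations; the thresholds below are assumptions for illustration only:

```python
# One possible mapping from a 0-100 score to the A-to-E rating shown
# on the dashboard. The thresholds are assumed values, not a standard.
def grade(score: float) -> str:
    for threshold, letter in [(90, "A"), (75, "B"), (60, "C"), (40, "D")]:
        if score >= threshold:
            return letter
    return "E"

print(grade(91.0))  # "A"
```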

QE Score: Data collection at the heart of the system

'Data collection and score calculation centralized under a single tool'

Data Collection is the heart of the system, centralizing all the collected information. It ensures the reliability of the data and allows real-time access. By offering the ability to customize the criteria, thresholds, and weighting of each data point, it becomes the engine for calculating the QE Score. However, it is recommended to use it solely for storing 'real-time' data and to archive older data in another storage tool (such as BigQuery).

QE Score - the data collection layer is a file or database that gathers and archives all quality data; it also serves as the basis for calculating the QE Score
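The recommended split between 'real-time' data and archived data can be sketched as a simple retention rule. The 90-day window and the row shape are assumed parameters, not values from the article:

```python
from datetime import date, timedelta

# Sketch of the recommended split: keep recent rows in the live
# collection tool, move older ones to an archive store such as
# BigQuery. The 90-day retention window is an assumed parameter.
def split_rows(rows: list[dict], today: date, keep_days: int = 90):
    cutoff = today - timedelta(days=keep_days)
    live = [r for r in rows if r["scan_date"] >= cutoff]
    archive = [r for r in rows if r["scan_date"] < cutoff]
    return live, archive

live, archive = split_rows(
    [{"scan_date": date(2024, 6, 1)}, {"scan_date": date(2024, 1, 1)}],
    today=date(2024, 6, 15),
)
```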

QE Score: Definition of a relevant data grid

'Structuring information for reliable analysis tailored to each context'

Before implementing the QE Score, it is essential to define a consistent grid on which it will apply. Depending on the organization (example below), the right hierarchical level must be found for calculating the score so that it provides an overall view of the developed application. The risk is choosing a 'service' grid (e.g., backend), which often corresponds to a 'team,' and thus evaluating the quality of that team's work rather than that of the application itself. The data sources used may come from:

  • Application level: information related to functional and performance automated tests, Jira tickets, etc.

  • Git repository level: information related to code quality and security, CI/CD pipelines, etc.

Once the QE Score is calculated for each application, it can be aggregated to higher levels (product, functional domain, business unit) in order to provide a coherent and objective comparison between different entities.
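A sketch of that aggregation step, here a simple mean of application-level scores. A weighted mean (by criticality, traffic, etc.) would be an equally valid design choice depending on context:

```python
# Aggregate application-level QE Scores up to a higher level (product,
# functional domain, business unit). The application names below are
# illustrative assumptions.
def aggregate(scores: dict[str, float]) -> float:
    """Simple mean of the per-application scores, rounded to 1 decimal."""
    return round(sum(scores.values()) / len(scores), 1)

product_score = aggregate({"web": 91.0, "mobile": 84.5, "back-office": 78.0})
print(product_score)  # 84.5
```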

QE Score - Example of different hierarchical levels to determine a relevant grid on which to apply the QE Score (the score itself is rated as a letter from A to E):

  • Product: a complete offering addressing a business need, which may include multiple applications. Example: e-commerce platform (includes website, mobile application, back-office). Contains multiple apps and services (e.g., e-commerce site + mobile app + ERP).

  • Application: a functional piece of software addressing part of the business need, with a front end and/or back end. Example: e-commerce web application (frontend + backend that handles orders). May have a front end (React, Angular...) and a backend (Java, ...).

  • Service: a technical unit performing a specific task, often on the backend. Example: payment service (backend service consumed by multiple applications). Generally an isolated backend (e.g., authentication service).