MOT testers are required to access their TQI (Test Quality Information) report and analyse it to meet and maintain the standards set by the DVSA in the UK.

This article will explain what’s involved in analysing the TQI report. 

What does a TQI Report include?

In the first instance, it looks at:

  • the number of tests completed by each tester at a site
  • the average vehicle age
  • the average test time 
  • the percentage of tests failed.

It then compares these stats to both the site average and the national average. Why is this useful? It means you can identify any trends or anomalies that may be forming. If you compare well to the national average, then all is good! But if you see marked differences between the national average and either a particular team member or your site as a whole, then you know action needs to be taken. 
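The comparison described above can be sketched in a few lines of code. This is a hypothetical illustration only: the metric names and the 25% threshold are assumptions for the example, not DVSA rules about what counts as a "marked difference".

```python
# Hypothetical sketch: flag a tester's stats that deviate markedly from
# the national average. The threshold is illustrative, not a DVSA figure.

def flag_anomalies(tester, national, threshold=0.25):
    """Return the metrics where the tester differs from the national
    average by more than the given relative threshold (25% here)."""
    flagged = {}
    for metric, value in tester.items():
        baseline = national[metric]
        if baseline and abs(value - baseline) / baseline > threshold:
            flagged[metric] = (value, baseline)
    return flagged

# Example figures (invented): average test time in minutes and failure rate
tester_stats = {"avg_test_minutes": 38, "failure_rate_pct": 52}
national_stats = {"avg_test_minutes": 55, "failure_rate_pct": 35}

print(flag_anomalies(tester_stats, national_stats))
```

Anything the function flags is a prompt for a conversation with the tester, not proof of a problem in itself.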

That could take the form of additional training or even temporary supervision of your team to support them in the areas they’re struggling with.

The report then breaks down further into percentage failures by MOT category. Namely:

  • body, chassis, structure
  • brakes
  • buses and coaches supplementary tests
  • identification of the vehicle
  • lamps, reflectors and electrical equipment
  • noise, emissions and leaks
  • road wheels
  • seat belt installation check
  • seat belts and supplementary restraint systems
  • speedometer and speed limiter
  • steering
  • suspension
  • tyres
  • visibility.

How often are TQI reports published?

The report is published monthly for individual testers to access and see how they stack up against the other testers in their area – for example, the average time to complete a test and failure rates in each category (brakes, visibility, steering, etc.). 

What should you be looking for when analysing your TQI Report?

To take the necessary actions mentioned before (additional training or supervision), you need to identify the areas where support is required.

For example, what reasons could there be for your tester's test times to be far from the national average? And in which direction? Is your tester taking too long? Are they completing tests far quicker than the average tester? 

Why do times matter? If a test time isn’t falling close to the national average, then it’s possible the tests are not being completed according to DVSA guidelines. 

Questions to consider if it’s too short are:

  • Has a thorough inspection been completed?
  • Is the tester’s workload too heavy, so they feel it’s necessary to rush?
  • Are all the vehicles serviced before they are MOT tested? 

At the other end of the scale is when tests take too long. If that’s your situation, then the questions to ask are:

  • Was the test interrupted by customers, or by other staff needing information or techs needing help? (Remember, that’s a BIG no-no with the DVSA) 
  • Is the vehicle unfamiliar to the tester? 
  • Did they leave the vehicle unattended or forget to complete the test for any reason? (another, BIG no-no with you know who!)

If a tester is regularly taking more than an hour without good cause, it’s also worth considering whether complacency has set in. It’s more commonplace than you might think – mostly at sites that haven’t been visited for a while, where bad practices have found their way in. Another point to bear in mind is the efficiency of the test centre. The DVSA booking system works in one-hour slots. Therefore, if one tester almost always takes, say, 1 hour 15 minutes to complete a test, over the course of a week the additional time that tester has a vehicle logged onto the system adds up to £274.85 per week of lost revenue.
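The shape of that lost-revenue calculation is simple: the extra minutes over the one-hour slot, multiplied by tests per week, priced at the site's labour rate. The test count and hourly rate below are assumed values for illustration – they are not the figures behind the article's £274.85.

```python
# Illustrative arithmetic only: the weekly test count and labour rate
# are assumptions, not figures from the DVSA or this article.

def weekly_lost_revenue(extra_minutes_per_test, tests_per_week, hourly_rate):
    """Extra bay time per week, priced at the site's labour rate."""
    extra_hours = (extra_minutes_per_test / 60) * tests_per_week
    return extra_hours * hourly_rate

# e.g. 15 minutes over the slot, 40 tests a week, £45/hour labour rate
loss = weekly_lost_revenue(15, 40, 45.0)
print(f"£{loss:.2f} per week")
```

Plug in your own site's figures to see what a consistently slow tester is costing you.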

Why might an MOT tester be far from the national average failure rates? 

If your report identifies one (or more) of your testers’ results are not in sync with the national average failure rates, you should ask the following questions:

  • Have the vehicles been serviced before the MOT?
  • Is the tester up to date with the current failure criteria as laid down in the current MOT Manual? 
  • Does the tester refer to the Manual regularly? It’s a “live” document and can change at any time; therefore, it’s crucial that they do.
  • Do they use PRS properly? 

How to interpret the TQI report

Your testers’ pass rates might be higher or lower than the site or national average. That does not automatically mean anything is wrong with their testing standard. At test centres with several MOT testers, you should also compare each tester against the site averages, especially if you’re a dealership or specialist service centre.

You and your testers should regularly review this data. Look into any unusual differences, investigate any issues and record the outcome.

Factors to take into account

When you review the CSV data, check the details against your appointment records to ensure they match. These checks should include the following:

  • the date and time of the test
  • the vehicle registration mark (VRM)
  • vehicle make and model
  • user ID control activity
  • test status and type
  • IP address
  • test duration
  • date and time a contingency test is recorded
  • user ID of the tester recording contingency test.
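The cross-check above can be partly automated. This is a minimal sketch, assuming the CSV export has columns named "vrm" and "test_date" and that your appointment records are available as a list of dictionaries – adjust the column names to match your actual export.

```python
# Hypothetical sketch of cross-checking TQI CSV rows against appointment
# records. Column names ("vrm", "test_date") are assumptions; match them
# to the headers in your actual CSV export.
import csv

def find_mismatches(tqi_csv_path, appointments):
    """Return TQI rows whose VRM/date pair has no matching appointment."""
    booked = {(a["vrm"], a["test_date"]) for a in appointments}
    mismatches = []
    with open(tqi_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if (row["vrm"], row["test_date"]) not in booked:
                mismatches.append(row)
    return mismatches
```

Any row the function returns is a test recorded on the system with no matching booking – exactly the kind of anomaly worth investigating and recording the outcome of.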

Having considered the items above – what next? 

As manager of your MOT station, identifying any anomalies and taking the necessary action is key. The information is there to be used and to help you! Make a checklist so you always:

  1. Get into the habit of using the TQI Report as part of the regular QC process.
  2. Always discuss findings with the tester so you can offer support where it’s needed.
  3. Make notes on the TQI Report before filing it in an MOT compliance folder.
  4. Create an action plan where necessary.
  5. Review the report EVERY month.

Have you seen our post on Test Logs? When combined with TQI report analysis, they can make your site’s QC checks more robust and informative. If you have any questions on MOT compliance, need some support or some MOT training, contact Karena by email or call 0800 1777 344.