Improving The ROI Of Your QA Program Through Audit Deviation
Submitted by Mary Kay Murphy, J.Lodge

April 24, 2017

 
The credibility of your Quality Assurance (QA) program is directly related to its value. If the agents being measured do not believe in the accuracy of the QA audits, you are not only wasting your money; the unwanted behaviors will not improve either. Over the past sixteen years, J.Lodge has had the opportunity to work closely with many industry giants, developing processes that allow us to produce phone, email, and chat audits that are consistently accurate despite the subjective nature of the measurement.
 
From calibration calls to training sessions, and everything in between, the one process that has the most impact on measurement consistency is the use of an Audit Deviation Report (ADR). It is the only process that provides a measurement over time rather than a single calibration call, internal quality audit, or side-by-side audit. While these types of evaluations provide a snapshot of an Analyst’s scoring accuracy, they do not eliminate the possibility that the Analyst will perform better on a known test versus their daily audit production.
 
ADRs compare the Analyst’s performance against the entire QA team with no separate evaluation necessary. This produces instant acceptance by the Analyst and a desire to learn how to achieve uniformity. Using ADRs avoids the accusation that you are trying to manage to a score; you are only managing to consistency.
 
The implementation of audit deviation analysis increased our scoring consistency by over 20% and reduced our audit disputes by over 30%.
 
What is Audit Deviation?
 
Most QA programs include a combination of objective and subjective behaviors. Objective behaviors confirm whether the Customer Service Representative (CSR) followed compliance regulations and/or company policies. Compliance metrics and company policies are typically black and white; the CSR either met the goal or they did not. For example: Did the CSR properly brand the company, or did the CSR follow the DNC (Do Not Call) policy? The answer is either yes or no.
 
It is the subjective behaviors (was the CSR knowledgeable, did they use proper etiquette, sound upbeat and positive, take charge of the issue, etc.) that QA programs struggle to score consistently. Several individuals may listen to the same call or read the same chat and come away with slightly different opinions of the behaviors that occurred on the contact. Even though the opinions may differ only slightly, the impact on scoring consistency can be much greater. ADRs paint a clear picture of these scoring differences.
 
Using the ADR quickly highlights inconsistencies in scoring among team members over a period of time, generally 30, 60, or 90 days. An ADR compiles all audits per Analyst, determines the average score for each behavior, and compares that average to the entire team's. The ADR identifies when an individual Analyst or group of Analysts is scoring behaviors outside the norm; for example, consistently rating a particular behavior more harshly or more leniently than other Analysts.
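 
To make the mechanics concrete, here is a minimal Python sketch of that computation. The record layout and field names are hypothetical illustrations, not J.Lodge's actual schema:

    from collections import defaultdict
    from statistics import mean

    # Hypothetical audit records: one per completed audit, each carrying
    # a 0-100 score per behavior. Field names are illustrative only.
    audits = [
        {"analyst": "Analyst 1", "scores": {"Behavior 1": 90, "Behavior 2": 85}},
        {"analyst": "Analyst 4", "scores": {"Behavior 1": 95, "Behavior 2": 92}},
        # ... every audit completed in the reporting window (30/60/90 days)
    ]

    def audit_deviation_report(audits):
        """Average each behavior per Analyst and compare to the team average."""
        per_analyst = defaultdict(lambda: defaultdict(list))
        team = defaultdict(list)
        for audit in audits:
            for behavior, score in audit["scores"].items():
                per_analyst[audit["analyst"]][behavior].append(score)
                team[behavior].append(score)
        team_avg = {b: mean(s) for b, s in team.items()}
        report = {
            analyst: {
                b: {"avg": mean(s), "deviation": mean(s) - team_avg[b]}
                for b, s in behaviors.items()
            }
            for analyst, behaviors in per_analyst.items()
        }
        return report, team_avg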
 
What do Deviation Reports capture that other QA metrics miss?
 
Unlike other quality metrics that inherently involve smaller sample sizes, ADRs assess the scoring trends over every quality evaluation that is completed in the production environment. Other quality checks such as spot checks (audit reviews) or calibrations have limited sampling. In the case of a calibration, the Analysts will likely know they are being tested.
 
With larger sample sizes, ADRs quickly illustrate clear trends that indicate how often an Analyst scores each behavior a particular way, such as Pass versus Fail or Good versus Fair. Outliers can be difficult to spot with small samples but are quickly identified using these reports. ADR trends are an actual depiction of how Analysts usually score behaviors in the production environment, during all time spent monitoring contacts and not just when they know they are likely being tested.
 
Why does J.Lodge use ADRs as part of its QA process?
 
J.Lodge uses ADRs to compare behavior scores at the Analyst level to make sure everyone has the same understanding of how the behaviors are to be measured. Highlighting the inconsistencies in our Analysts’ scores shows our Performance Coaches and Trainers where to focus individual coaching and/or group recursive training. This targeted approach allows management to coach on specific behaviors instead of a general or overall score, bringing efficiency to training by minimizing time spent on behaviors that are performing within the team average.
 
Coaching/Training is scheduled for Analysts classified as outliers within ADRs.  During these sessions, we are able to get a better understanding of the disconnect between the Analysts. The cause of the Audit Deviation can be a misinterpretation of how a behavior should be scored, unclear/missing documentation, or an opportunity identified by the Analyst.  In the end, we can ensure all training documentation is accurate and clearly understood. This targeted focus ensures everyone remains calibrated, safeguarding the accuracy of the data reported by J.Lodge.
 
Using a Deviation Report (What to Look For)
 
Deviations of 5% or less are considered accurate, unless an Analyst continually scores a behavior at 5% deviation while all other team members’ scores vary by only 1-2%. When analyzing an ADR, we look at the overall audit score average for each Analyst and for the team, as well as how each individual behavior is scored, highlighting any behavior average that strays from the overall team average. These are classified as outliers.
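 
A rough sketch of that classification rule, continuing the hypothetical report structure from the earlier example (a simple fixed threshold; the 1-2% team-spread caveat is omitted for brevity):

    def flag_outliers(report, threshold=5.0):
        """Flag Analyst/behavior pairs whose average deviates from the
        team average by more than the threshold (percentage points)."""
        outliers = []
        for analyst, behaviors in report.items():
            for behavior, stats in behaviors.items():
                if abs(stats["deviation"]) > threshold:
                    outliers.append((analyst, behavior, stats["deviation"]))
        return outliers

Sorting the result by the size of the deviation gives a ready-made worklist for the coaching sessions described later in this article.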
 
A good starting point in Audit Deviation analysis is to compare the individual Analyst’s overall score to the team’s overall score. Figure 1 shows us that Analyst 1’s overall average is well below the team average of 81.13%, while Analyst 4’s is well above it. Delving deeper into audits completed by these two Analysts will uncover which behaviors are being scored differently from the team.
 
[Figure 1: overall audit score averages for each Analyst compared to the team average of 81.13%]
Note: Overall Score is the average of all the behaviors audited.


Let’s compare all the behaviors for Analysts 1 and 4.

[Figure: behavior-by-behavior averages for Analysts 1 and 4 compared to the team average]
  • Analyst 1 scored Behaviors 1 and 2 at the team average but scored the remaining behaviors much lower than the team’s average. These would be the areas of focus for this Analyst.
 
  • At first glance, Analyst 4’s Audit Deviation appears to be only on Behaviors 3 and 5. However, notice that this Analyst also scores every behavior higher than the team average. This trend of higher scoring across all behaviors should be reviewed to ensure accurate auditing.
 
 
Audit Deviation analysis allows for an even more in-depth look than what we have uncovered above. Let’s look at what the illustration below tells us:

[Figure: behavior-level averages for every Analyst compared to the team average]
  • Analysts are scoring consistently for Behaviors 1 and 2. This indicates that the team is calibrated in these areas.
  • In addition to Analyst 1, we see that Analysts 2 and 3 are also scoring Behavior 4 outside the team average.
 
Coaching/Training for Improvement
 
Once the inconsistently scored behaviors, and the Analysts scoring them, are identified, the next step is to identify why there is a difference in scoring.
 
Audits are extracted from our QA database (MyQuality) for the Analysts that scored inconsistently with the team. These audits focus on the specific behaviors that deviated from the team average. Our management team performs internal quality reviews of those audits to determine if there is a training opportunity. Coaching may occur for specific Analysts to clarify areas of confusion, or team/small-group training sessions may be held to provide recursive instruction for that behavior.
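 
Continuing the hypothetical structures from the earlier sketches (MyQuality's actual query interface is not shown here), the extraction step might look like this:

    def audits_for_review(audits, outliers):
        """Pull the audits touching a flagged Analyst/behavior pair so
        managers can review just those evaluations."""
        flagged = {(analyst, behavior) for analyst, behavior, _ in outliers}
        return [
            (audit, behavior)
            for audit in audits
            for behavior in audit["scores"]
            if (audit["analyst"], behavior) in flagged
        ]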
 
J.Lodge’s Success through Audit Deviation Reporting
 
Through Audit Deviation analysis, J.Lodge consistently identifies trends and possible outliers within its audit data. In doing so, we are able to:
 
  • Consistently maintain 85%+ accuracy across all programs
  • Improve Management Time with fewer accuracy checks and enhanced efficiency in understanding coaching/training needs
  • Increase Client/Vendor Confidence
  • Improve Analyst Acceptance
 
About J.Lodge
 
J.Lodge is a recognized leader in the science of contact monitoring. Through comprehensive data analysis, J.Lodge not only provides world-class quality assurance services but offers our clients cost-saving solutions, often before the client is aware of the problem. For more information, please call Von Myers at J.Lodge, 844-614-7403.
 
 
 

    
