In my last post I mentioned starting a tracking program so we can report our data accuracy to our customers and potential customers. I am excited to share with you that our program is underway! There were quite a few details to take into consideration when designing our accuracy program. Luckily for us, we were able to draw on the knowledge we gained from working with Dr. Tom Redman in the past to help frame the questions and determine how best to define our accuracy program. Dr. Redman (“the Data Doc”) is a renowned data consultant who helps organizations craft data programs that put newfound ideas into action and build the organizational capability to execute those programs.
The first thing we did was determine how many schedules are changed within the RateAcuity database each month. This is the base number needed to calculate our accuracy percentage. Any time there is an update to a tariff or schedule in the RateAcuity database, we can process the update accurately or potentially introduce an error. Therefore, any schedule we review to determine whether a rate has changed has the potential to cause an error. As we set out to find the number of schedules we changed each month, we determined that we have a reliable count beginning in January 2020. Before that, we were not tracking changes internally in a way that allows us to determine this number. So, we decided our accuracy tracking program will run from January 2020 forward.
Next we needed to determine how to account for customer-reported errors. Any individual rate schedule contains multiple data points. If a schedule contains ten rate components and one rate is wrong, is that a 90% accuracy rate? Again, previous work with Tom Redman prepared us to answer this question. We measure our accuracy at a stricter level, meaning we look at a schedule as a whole. From a customer’s perspective, a schedule is either right or wrong. A customer doesn’t really care that 90% of the schedule was correct. He or she just wants it all to be correct! Therefore, we measure our accuracy based on a schedule being correct or incorrect rather than on individual rates being correct or incorrect. This is a harsher measurement than looking at each individual rate component. But we are always up for a challenge when it comes to data accuracy!
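To make the distinction concrete, here is a minimal sketch; the schedule contents and component names are hypothetical, not actual RateAcuity data:

```python
# A hypothetical schedule with ten rate components, one of them wrong.
schedule = [("energy_charge_tier_%d" % i, True) for i in range(9)]
schedule.append(("demand_charge", False))

# Component-level view: 9 of 10 components correct -> 90% accurate.
component_accuracy = sum(ok for _, ok in schedule) / len(schedule)

# Schedule-level view (the stricter measure described above): one wrong
# component makes the whole schedule count as wrong.
schedule_is_correct = all(ok for _, ok in schedule)

print(component_accuracy)    # 0.9
print(schedule_is_correct)   # False
```

Under the stricter schedule-level view, this schedule simply counts as one incorrect schedule, no matter how many of its components were right.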
Also, we only include customer-reported errors in our accuracy program stats. We know no one is perfect, and errors will occasionally be introduced into the database as we make changes. Our goal is to find and fix them before they are made available to clients. Finding an error internally through our quality control processes means we’re meeting our goal, and it does not count as an error in our accuracy program.
Since we are measuring accuracy for each month, in any given month an updated schedule is either right or wrong. If a schedule has more than one update in a month, it is still only counted once in the number of schedules changed for the month. Similarly, if a schedule has more than one rate error in a month it can only be counted as wrong once. Remember, it is either right or wrong. We are not considering how many rate components are in the schedule itself.
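These counting rules can be sketched as follows; the function name, schedule IDs, and figures are hypothetical, used only to illustrate that a schedule counts at most once per month as changed and at most once as wrong:

```python
# Hypothetical sketch of the monthly counting rules described above.
def monthly_accuracy(changed_schedule_ids, error_schedule_ids):
    """Return the month's accuracy percentage: the share of changed
    schedules that had no customer-reported error."""
    changed = set(changed_schedule_ids)        # multiple updates collapse to one
    wrong = set(error_schedule_ids) & changed  # a schedule is wrong at most once
    if not changed:
        return 100.0
    return 100.0 * (len(changed) - len(wrong)) / len(changed)

# Schedule "S1" was updated twice and had two reported rate errors;
# it still counts only once in each tally: 3 changed, 1 wrong.
changed = ["S1", "S1", "S2", "S3"]
errors = ["S1", "S1"]
print(round(monthly_accuracy(changed, errors), 1))  # 66.7
```

Using sets makes the right-or-wrong rule automatic: repeated updates or repeated errors on the same schedule in the same month simply collapse into a single entry.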
The last thing we considered is how to define an error. Clearly, anything that is a typo or a change that was made incorrectly is an error. But what month should the error be attributed to? The error was not necessarily made in the month in which it is found by a customer and reported to RateAcuity. We need the ability to determine when the error was made and apply it to that month’s accuracy measurement. To do that, we look at the effective date of the rate change that is in error and at when the schedule was edited to determine which month an error should be attributed to.

Another consideration is whether the customer-reported issue is a rate update that has not yet been applied to the RateAcuity database. If the update was available a few months ago and was not processed by the RateAcuity team, that is an error. But what if a customer is questioning an update because it is not yet in RateAcuity, and it is only a few days past the effective date? The RateAcuity team has determined that all updates should be processed within five working days of the effective date or the date the update becomes available to RateAcuity. If a customer questions an update that has not been processed but is within that five-day window, it will not count as an error.
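The five-working-day window could be checked along these lines. This is a sketch under stated assumptions: the function name is hypothetical, and it counts plain weekdays only, ignoring holidays for simplicity:

```python
from datetime import date, timedelta

def within_grace_period(effective_date: date, reported_date: date,
                        working_days: int = 5) -> bool:
    """Return True if reported_date falls within `working_days` business
    days (Mon-Fri; holidays ignored for simplicity) of effective_date."""
    deadline = effective_date
    remaining = working_days
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return reported_date <= deadline

# A customer asks about an unprocessed update three days after its
# effective date: still inside the window, so not counted as an error.
print(within_grace_period(date(2020, 6, 1), date(2020, 6, 4)))   # True

# Reported weeks later with the update still missing: counted as an error.
print(within_grace_period(date(2020, 6, 1), date(2020, 6, 19)))  # False
```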
Given this information, we can determine the number of schedules changed each month and the number of customer-reported errors each month. Those two numbers are then used to calculate the accuracy stat. Here are the numbers for the first half of 2020:
So far, this is all great news! RateAcuity has not had any customer-reported errors attributed to changes made in 2020. We would like to take a bow and pat ourselves on the back! But we know no one is perfect, and errors will be found. And when that happens, we take it pretty hard and beat ourselves up about it a bit. But then we move on and ask ourselves, “How did this happen?” and “What can we do to prevent it from happening again?” This is the root cause analysis part of our accuracy program. To me, this is the most important part of our process. As I have said, no one is perfect, and we all make mistakes. And I am OK with that. Once. We use those mistakes as learning experiences, implementing changes to prevent them from happening again and thereby making our product better. But if the same mistake happens again, the RateAcuity team, including myself, has not done its job properly.
We will share our accuracy statistics every month. While we strive to have all months at 100% accuracy, we know there will be bumps along the road in this journey. And we will share those as they occur so you can share in this journey with us. More on how we intend to share the journey to come.