I suspect that most experienced web analysts have done at least one data reconciliation project during the course of their tenure. For something so common, however, it rarely gets discussed.
Sure, it's not sexy like Angelina Jolie, but even Plain Jane likes a little attention now and then.
Data reconciliation is an important foundational activity because, when done well, it will inspire people to have confidence in the data that you share with them. Data quality will never be perfect, but it should be good enough for everyone to feel that they can make sound business decisions based on what's available.
Enough pep talk. If you're on the brink of your first data reconciliation project, here's what to do:
1) Identify your two data sources
The need for data reconciliation arises when you have two separate systems that provide similar sets of data. One of these sources - let's call it "Primary Source" - will necessarily be your standard web analytics application. The other one - let's call it "Secondary Source" - can be one of several things, namely:
- An upstream system, like campaigns (banner, search, email)
- A downstream system, like commerce or downloads or form submissions
- A parallel system, such as when you migrate from one web analytics tool to another. In this case I'd advise you to break your project into smaller chunks according to individual reports you care to reconcile.
2) Learn how your primary source gets collected
Read the documentation and talk to your internal tech team. Be clear on the scope of the data you're collecting - i.e., exactly which pages are tagged, or exactly which log files are processed. If you're using page tags, know whether the tag is placed at the top or the bottom of the page (this affects when the tag fires, which in turn affects how much data loss you see). Make note of any special filters, transformations or business logic used here.
3) Learn how your secondary source gets collected
If your secondary source is a parallel web analytics system, repeat the process you followed in step 2, above.
If it's an upstream system, you're stuck with whatever documentation and lore you can glean about how it works.
If it's a downstream system, you'll need to identify the group within your business that owns that system, then grill them on how they collect the data and how they transform it into the metric you're trying to reconcile. There's a lot of variability here, especially if your downstream system is homegrown, so be sure to do a thorough investigation. As in step 2, make note of any special filters, transformations or business logic used here.
4) Compare data sets
Pick a sensible date range and granularity level, then pull corresponding data from both sources. A good default would be daily totals for a month. If you're dealing with really high volume you may want to isolate a subset of your data based on some attribute that you can reliably pull from both sources, like a single URL (for downloads) or a single product (for commerce).
Now put your data sets side by side in Excel and calculate the delta. Compare the trends over time and see if you can explain the differences. Ask yourself, are you comfortable with the differences you see? If not, consider fine-tuning the way you pull data from your primary and/or secondary source in order to account for those differences.
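If you'd rather not do this by hand in Excel, the same comparison can be scripted so it's easy to re-run. Here's a minimal sketch in Python with pandas, assuming each source can export one row per day with a date and a metric total; the file names, column names and the 5% tolerance below are illustrative placeholders I've chosen, not anything either tool dictates.

```python
# Sketch of the side-by-side comparison from step 4.
# Assumes each source exports daily totals to CSV with "date" and "total" columns.
import pandas as pd

# Hypothetical exports from the two sources
primary = pd.read_csv("primary_daily.csv", parse_dates=["date"])      # e.g. your web analytics tool
secondary = pd.read_csv("secondary_daily.csv", parse_dates=["date"])  # e.g. the upstream/downstream system

# Put the two series side by side, keyed on date
merged = primary.merge(secondary, on="date", suffixes=("_primary", "_secondary"))

# Delta as a percentage of the primary source
merged["delta_pct"] = (
    (merged["total_secondary"] - merged["total_primary"])
    / merged["total_primary"] * 100
)

# Flag days that fall outside a chosen tolerance (5% here, purely as an example)
merged["outside_tolerance"] = merged["delta_pct"].abs() > 5

print(merged[["date", "total_primary", "total_secondary", "delta_pct", "outside_tolerance"]])
print(f"Mean absolute delta: {merged['delta_pct'].abs().mean():.1f}%")
```

If you've isolated a subset (a single URL or product), just filter both exports on that attribute before the merge so you're comparing like with like.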
Reality check: you're never going to get a perfect match. This is a good exercise, but know when to say when. Do not obsess!
5) Document and share your findings
This is the most important step. Write a report about what you've done and what you've found. Now go talk to people - give a verbal presentation of your findings to your web analytics colleagues and to the data stakeholders involved.
At this point you should be able to speak with confidence about the differences in the two data sources, and your goal should be to pass this confidence on to the people around you. Save your report for future reference, as newcomers are likely to ask the questions you've already answered.
6) Plan to revisit if necessary
If reconciliation is part of a tool migration, you are now done. Good work.
If your secondary source is an upstream or downstream system, plan a periodic audit to make sure your findings are still valid. If your systems are stable you can get away with doing this maybe once a year, but if you have any appreciable changes - like a major site redesign or a shopping cart overhaul - you may wish to do another quick round of reconciliation at that time.
Hey June,
Are you kidding me? This happens all the time! I went through a month of it with one client until we had sorted out all of the bugs.
I usually recommend a 5% threshold of tolerance, sometimes up to 10% if it's reasonably consistent. It all depends upon the metrics and the confidence necessary to make business decisions vs. the cost of sorting out the difference.
-Alex
Posted by: Alex | June 02, 2008 at 07:08 PM
One other thing to keep in mind is not to try to use the reconciliation to jump to a single "multiplier" that can be used to match data from one system to another. We found this out when running a log-based system and a tag-based system in parallel and doing this reconciliation. We were down to the point of test pages and line-by-line comparison of the page tag log file to the server log file. We could explain ~70% of the differences... but were flat out left scratching our heads on the others.
One thing we realized, though, was that, while the trends would be the same over time, the % delta would vary based on the number of pages being reported on. Our theory was that there were a lot of spiders - either unfiltered by our log-based system or cloaked by the spider owner - that would hit random pages on our site, but not the entire site. These would boost the "Visits to the entire site" quite a bit. If we looked at a single page, especially a page that was buried in the site somewhere, then fewer of these spiders would hit the page.
We let the systems run in parallel for six months and told people to ask us if they wanted us to do a reconciliation. Typically, that meant that we would trend data from both systems and make sure they were consistent. Then we'd respond with the list of factors that drove the difference, but didn't quantify them in each case.
I tend to shoot for <20%, actually. And that's primarily dealing with two tag-based systems -- one being a marketing automation platform that "includes web analytics" but is not a true web analytics tool.
Posted by: Tim Wilson | June 04, 2008 at 10:48 AM
Alex: Your comment makes it absolutely clear that we have a shared experience, here. :)
Tim: I appreciate your observation that the variation in delta depends on the number of pages in the sample set. I have seen that, too. That's exactly why I recommended, in step #4, isolating a subset of data based on a common attribute like URL. And yeah, spiders can be a major culprit - at the beginning of a reconciliation project, it's worth learning exactly how each system handles the filtering of spiders and other traffic.
Posted by: June Dershewitz | June 05, 2008 at 12:37 PM